query_id (string, 32 chars) | query (string, 5–614 chars) | positive_passages (list, 1–26 items) | negative_passages (list, 7–25 items) | subset (3 classes)
---|---|---|---|---
981b8ee24864cf71e9ad34c9967065ff | Integrating 3D structure into traffic scene understanding with RGB-D data | [
{
"docid": "5691ca09e609aea46b9fd5e7a83d165a",
"text": "View-based 3-D object retrieval and recognition has become popular in practice, e.g., in computer aided design. It is difficult to precisely estimate the distance between two objects represented by multiple views. Thus, current view-based 3-D object retrieval and recognition methods may not perform well. In this paper, we propose a hypergraph analysis approach to address this problem by avoiding the estimation of the distance between objects. In particular, we construct multiple hypergraphs for a set of 3-D objects based on their 2-D views. In these hypergraphs, each vertex is an object, and each edge is a cluster of views. Therefore, an edge connects multiple vertices. We define the weight of each edge based on the similarities between any two views within the cluster. Retrieval and recognition are performed based on the hypergraphs. Therefore, our method can explore the higher order relationship among objects and does not use the distance between objects. We conduct experiments on the National Taiwan University 3-D model dataset and the ETH 3-D object collection. Experimental results demonstrate the effectiveness of the proposed method by comparing with the state-of-the-art methods.",
"title": ""
}
] | [
{
"docid": "c460179cbdb40b9d89b3cc02276d54e1",
"text": "In recent years the sport of climbing has seen consistent increase in popularity. Climbing requires a complex skill set for successful and safe exercising. While elite climbers receive intensive expert coaching to refine this skill set, this progression approach is not viable for the amateur population. We have developed ClimbAX - a climbing performance analysis system that aims for replicating expert assessments and thus represents a first step towards an automatic coaching system for climbing enthusiasts. Through an accelerometer based wearable sensing platform, climber's movements are captured. An automatic analysis procedure detects climbing sessions and moves, which form the basis for subsequent performance assessment. The assessment parameters are derived from sports science literature and include: power, control, stability, speed. ClimbAX was evaluated in a large case study with 53 climbers under competition settings. We report a strong correlation between predicted scores and official competition results, which demonstrate the effectiveness of our automatic skill assessment system.",
"title": ""
},
{
"docid": "179e5b887f15b4ecf4ba92031a828316",
"text": "High efficiency power supply solutions for data centers are gaining more attention, in order to minimize the fast growing power demands of such loads, the 48V Voltage Regulator Module (VRM) for powering CPU is a promising solution replacing the legacy 12V VRM by which the bus distribution loss, cost and size can be dramatically minimized. In this paper, a two-stage 48V/12V/1.8V–250W VRM is proposed, the first stage is a high efficiency, high power density isolated — unregulated DC/DC converter (DCX) based on LLC resonant converter stepping the input voltage from 48V to 12V. The Matrix transformer concept was utilized for designing the high frequency transformer of the first stage, an enhanced termination loop for the synchronous rectifiers and a non-uniform winding structure is proposed resulting in significant increase in both power density and efficiency of the first stage converter. The second stage is a 4-phases buck converter stepping the voltage from 12V to 1.8V to the CPU. Since the CPU runs in the sleep mode most of the time a light load efficiency improvement method by changing the bus voltage from 12V to 6 V during light load operation is proposed showing more than 8% light load efficiency enhancement than fixed bus voltage. Experimental results demonstrate the high efficiency of the proposed solution reaching peak of 91% with a significant light load efficiency improvement.",
"title": ""
},
{
"docid": "31461de346fb454f296495287600a74f",
"text": "The working hypothesis of the paper is that motor images are endowed with the same properties as those of the (corresponding) motor representations, and therefore have the same functional relationship to the imagined or represented movement and the same causal role in the generation of this movement. The fact that the timing of simulated movements follows the same constraints as that of actually executed movements is consistent with this hypothesis. Accordingly, many neural mechanisms are activated during motor imagery, as revealed by a sharp increase in tendinous reflexes in the limb imagined to move, and by vegetative changes which correlate with the level of mental effort. At the cortical level, a specific pattern of activation, that closely resembles that of action execution, is observed in areas devoted to motor control. This activation might be the substrate for the effects of mental training. A hierarchical model of the organization of action is proposed: this model implies a short-term memory storage of a 'copy' of the various representational steps. These memories are erased when an action corresponding to the represented goal takes place. By contrast, if the action is incompletely or not executed, the whole system remains activated, and the content of the representation is rehearsed. This mechanism would be the substrate for conscious access to this content during motor imagery and mental training.",
"title": ""
},
{
"docid": "e054c2d3b52441eaf801e7d2dd54dce9",
"text": "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node’s involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman’s 1979 categorization. At a more substantive level, measures of centrality summarize a node’s involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities/cohesion to attributes of nodes or actors. The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the onedimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network. © 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "bd1fdbfcc0116dcdc5114065f32a883e",
"text": "Thousands of operations are annually guided with computer assisted surgery (CAS) technologies. As the use of these devices is rapidly increasing, the reliability of the devices becomes ever more critical. The problem of accuracy assessment of the devices has thus become relevant. During the past five years, over 200 hazardous situations have been documented in the MAUDE database during operations using these devices in the field of neurosurgery alone. Had the accuracy of these devices been periodically assessed pre-operatively, many of them might have been prevented. The technical accuracy of a commercial navigator enabling the use of both optical (OTS) and electromagnetic (EMTS) tracking systems was assessed in the hospital setting using accuracy assessment tools and methods developed by the authors of this paper. The technical accuracy was obtained by comparing the positions of the navigated tool tip with the phantom accuracy assessment points. Each assessment contained a total of 51 points and a region of surgical interest (ROSI) volume of 120x120x100 mm roughly mimicking the size of the human head. The error analysis provided a comprehensive understanding of the trend of accuracy of the surgical navigator modalities. This study showed that the technical accuracies of OTS and EMTS over the pre-determined ROSI were nearly equal. However, the placement of the particular modality hardware needs to be optimized for the surgical procedure. New applications of EMTS, which does not require rigid immobilization of the surgical area, are suggested.",
"title": ""
},
{
"docid": "48a45f03f31d8fc0daede6603f3b693a",
"text": "This paper presents GelClust, a new software that is designed for processing gel electrophoresis images and generating the corresponding phylogenetic trees. Unlike the most of commercial and non-commercial related softwares, we found that GelClust is very user-friendly and guides the user from image toward dendrogram through seven simple steps. Furthermore, the software, which is implemented in C# programming language under Windows operating system, is more accurate than similar software regarding image processing and is the only software able to detect and correct gel 'smile' effects completely automatically. These claims are supported with experiments.",
"title": ""
},
{
"docid": "e306d50838fc5e140a8c96cd95fd3ca2",
"text": "Customer Relationship Management (CRM) is a strategy that supports an organization’s decision-making process to retain long-term and profitable relationships with its customers. Effective CRM analyses require a detailed data warehouse model that can support various CRM analyses and deep understanding on CRM-related business questions. In this paper, we present a taxonomy of CRM analysis categories. Our CRM taxonomy includes CRM strategies, CRM category analyses, CRM business questions, their potential uses, and key performance indicators (KPIs) for those analysis types. Our CRM taxonomy can be used in selecting and evaluating a data schema for CRM analyses, CRM vendors, CRM strategies, and KPIs.",
"title": ""
},
{
"docid": "860e3c429e6ae709ce9cbc4b6cb148db",
"text": "This paper presents an approach for performance analysis of modern enterprise-class server applications. In our experience, performance bottlenecks in these applications differ qualitatively from bottlenecks in smaller, stand-alone systems. Small applications and benchmarks often suffer from CPU-intensive hot spots. In contrast, enterprise-class multi-tier applications often suffer from problems that manifest not as hot spots, but as idle time, indicating a lack of forward motion. Many factors can contribute to undesirable idle time, including locking problems, excessive system-level activities like garbage collection, various resource constraints, and problems driving load.\n We present the design and methodology for WAIT, a tool to diagnosis the root cause of idle time in server applications. Given lightweight samples of Java activity on a single tier, the tool can often pinpoint the primary bottleneck on a multi-tier system. The methodology centers on an informative abstraction of the states of idleness observed in a running program. This abstraction allows the tool to distinguish, for example, between hold-ups on a database machine, insufficient load, lock contention in application code, and a conventional bottleneck due to a hot method. To compute the abstraction, we present a simple expert system based on an extensible set of declarative rules.\n WAIT can be deployed on the fly, without modifying or even restarting the application. Many groups in IBM have applied the tool to diagnosis performance problems in commercial systems, and we present a number of examples as case studies.",
"title": ""
},
{
"docid": "a8a8656f2f7cdcab79662cb150c8effa",
"text": "As networks grow both in importance and size, there is an increasing need for effective security monitors such as Network Intrusion Detection System to prevent such illicit accesses. Intrusion Detection Systems technology is an effective approach in dealing with the problems of network security. In this paper, we present an intrusion detection model based on hybrid fuzzy logic and neural network. The key idea is to take advantage of different classification abilities of fuzzy logic and neural network for intrusion detection system. The new model has ability to recognize an attack, to differentiate one attack from another i.e. classifying attack, and the most important, to detect new attacks with high detection rate and low false negative. Training and testing data were obtained from the Defense Advanced Research Projects Agency (DARPA) intrusion detection evaluation data set.",
"title": ""
},
{
"docid": "920748fbdcaf91346a40e3bf5ae53d42",
"text": "This sketch presents an improved formalization of automatic caricature that extends a standard approach to account for the population variance of facial features. Caricature is generally considered a rendering that emphasizes the distinctive features of a particular face. A formalization of this idea, which we term “Exaggerating the Difference from the Mean” (EDFM), is widely accepted among caricaturists [Redman 1984] and was first implemented in a groundbreaking computer program by [Brennan 1985]. Brennan’s “Caricature generator” program produced caricatures by manually defining a polyline drawing with topology corresponding to a frontal, mean, face-shape drawing, and then displacing the vertices by a constant factor away from the mean shape. Many psychological studies have applied the “Caricature Generator” or EDFM idea to investigate caricaturerelated issues in face perception [Rhodes 1997].",
"title": ""
},
{
"docid": "e7eb22e4ac65696e3bb2a2611a28e809",
"text": "Cuckoo search (CS) is an efficient swarm-intelligence-based algorithm and significant developments have been made since its introduction in 2009. CS has many advantages due to its simplicity and efficiency in solving highly non-linear optimisation problems with real-world engineering applications. This paper provides a timely review of all the state-of-the-art developments in the last five years, including the discussions of theoretical background and research directions for future development of this powerful algorithm.",
"title": ""
},
{
"docid": "65cae0002bcff888d6514aa2d375da40",
"text": "We study the problem of finding efficiently computable non-degenerate multilinear maps from G1 to G2, where G1 and G2 are groups of the same prime order, and where computing discrete logarithms in G1 is hard. We present several applications to cryptography, explore directions for building such maps, and give some reasons to believe that finding examples with n > 2",
"title": ""
},
{
"docid": "a2fb1ee73713544852292721dce21611",
"text": "Large scale implementation of active RFID tag technology has been restricted by the need for battery replacement. Prolonging battery lifespan may potentially promote active RFID tags which offer obvious advantages over passive RFID systems. This paper explores some opportunities to simulate and develop a prototype RF energy harvester for 2.4 GHz band specifically designed for low power active RFID tag application. This system employs a rectenna architecture which is a receiving antenna attached to a rectifying circuit that efficiently converts RF energy to DC current. Initial ADS simulation results show that 2 V output voltage can be achieved using a 7 stage Cockroft-Walton rectifying circuitry with -4.881 dBm (0.325 mW) output power under -4 dBm (0.398 mW) input RF signal. These results lend support to the idea that RF energy harvesting is indeed promising.",
"title": ""
},
{
"docid": "08c97484fe3784e2f1fd42606b915f83",
"text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.",
"title": ""
},
{
"docid": "0e61015f3372ba177acdfcddbd0ffdfb",
"text": "INTRODUCTION\nThere are many challenges to the drug discovery process, including the complexity of the target, its interactions, and how these factors play a role in causing the disease. Traditionally, biophysics has been used for hit validation and chemical lead optimization. With its increased throughput and sensitivity, biophysics is now being applied earlier in this process to empower target characterization and hit finding. Areas covered: In this article, the authors provide an overview of how biophysics can be utilized to assess the quality of the reagents used in screening assays, to validate potential tool compounds, to test the integrity of screening assays, and to create follow-up strategies for compound characterization. They also briefly discuss the utilization of different biophysical methods in hit validation to help avoid the resource consuming pitfalls caused by the lack of hit overlap between biophysical methods. Expert opinion: The use of biophysics early on in the drug discovery process has proven crucial to identifying and characterizing targets of complex nature. It also has enabled the identification and classification of small molecules which interact in an allosteric or covalent manner with the target. By applying biophysics in this manner and at the early stages of this process, the chances of finding chemical leads with novel mechanisms of action are increased. In the future, focused screens with biophysics as a primary readout will become increasingly common.",
"title": ""
},
{
"docid": "51df36570be2707556a8958e16682612",
"text": "Through co-design of Augmented Reality (AR) based teaching material, this research aims to enhance collaborative learning experience in primary school education. It will introduce an interactive AR Book based on primary school textbook using tablets as the real time interface. The development of this AR Book employs co-design methods to involve children, teachers, educators and HCI experts from the early stages of the design process. Research insights from the co-design phase will be implemented in the AR Book design. The final outcome of the AR Book will be evaluated in the classroom to explore its effect on the collaborative experience of primary school students. The research aims to answer the question - Can Augmented Books be designed for primary school students in order to support collaboration? This main research question is divided into two sub-questions as follows - How can co-design methods be applied in designing Augmented Book with and for primary school children? And what is the effect of the proposed Augmented Book on primary school students' collaboration? This research will not only present a practical application of co-designing AR Book for and with primary school children, it will also clarify the benefit of AR for education in terms of collaborative experience.",
"title": ""
},
{
"docid": "d59e64c1865193db3aaecc202f688690",
"text": "Event-related desynchronization/synchronization patterns during right/left motor imagery (MI) are effective features for an electroencephalogram-based brain-computer interface (BCI). As MI tasks are subject-specific, selection of subject-specific discriminative frequency components play a vital role in distinguishing these patterns. This paper proposes a new discriminative filter bank (FB) common spatial pattern algorithm to extract subject-specific FB for MI classification. The proposed method enhances the classification accuracy in BCI competition III dataset IVa and competition IV dataset IIb. Compared to the performance offered by the existing FB-based method, the proposed algorithm offers error rate reductions of 17.42% and 8.9% for BCI competition datasets III and IV, respectively.",
"title": ""
},
{
"docid": "e748162d1e0de342983f7028156b3cf6",
"text": "Modeling cloth with fiber-level geometry can produce highly realistic details. However, rendering fiber-level cloth models not only has a high memory cost but it also has a high computation cost even for offline rendering applications. In this paper we present a real-time fiber-level cloth rendering method for current GPUs. Our method procedurally generates fiber-level geometric details on-the-fly using yarn-level control points for minimizing the data transfer to the GPU. We also reduce the rasterization operations by collectively representing the fibers near the center of each ply that form the yarn structure. Moreover, we employ a level-of-detail strategy to minimize or completely eliminate the generation of fiber-level geometry that would have little or no impact on the final rendered image. Furthermore, we introduce a simple self-shadow computation method that allows lighting with self-shadows using relatively low-resolution shadow maps. We also provide a simple distance-based ambient occlusion approximation as well as an ambient illumination precomputation approach, both of which account for fiber-level self-occlusion of yarn. Finally, we discuss how to use a physical-based shading model with our fiber-level cloth rendering method and how to handle cloth animations with temporal coherency. We demonstrate the effectiveness of our approach by comparing our simplified fiber geometry to procedurally generated references and display knitwear containing more than a hundred million individual fiber curves at real-time frame rates with shadows and ambient occlusion.",
"title": ""
},
{
"docid": "5028d250c60a70c0ed6954581ab6cfa7",
"text": "Social Commerce as a result of the advancement of Social Networking Sites and Web 2.0 is increasing as a new model of online shopping. With techniques to improve the website using AJAX, Adobe Flash, XML, and RSS, Social Media era has changed the internet user behavior to be more communicative and active in internet, they love to share information and recommendation among communities. Social commerce also changes the way people shopping through online. Social commerce will be the new way of online shopping nowadays. But the new challenge is business has to provide the interactive website yet interesting website for internet users, the website should give experience to satisfy their needs. This purpose of research is to analyze the website quality (System Quality, Information Quality, and System Quality) as well as interaction feature (communication feature) impact on social commerce website and customers purchase intention. Data from 134 customers of social commerce website were used to test the model. Multiple linear regression is used to calculate the statistic result while confirmatory factor analysis was also conducted to test the validity from each variable. The result shows that website quality and communication feature are important aspect for customer purchase intention while purchasing in social commerce website.",
"title": ""
},
{
"docid": "5fbd1f14c8f4e8dc82bc86ad8b27c115",
"text": "Computer-animated characters are common in popular culture and have begun to be used as experimental tools in social cognitive neurosciences. Here we investigated how appearance of these characters' influences perception of their actions. Subjects were presented with different characters animated either with motion data captured from human actors or by interpolating between poses (keyframes) designed by an animator, and were asked to categorize the motion as biological or artificial. The response bias towards 'biological', derived from the Signal Detection Theory, decreases with characters' anthropomorphism, while sensitivity is only affected by the simplest rendering style, point-light displays. fMRI showed that the response bias correlates positively with activity in the mentalizing network including left temporoparietal junction and anterior cingulate cortex, and negatively with regions sustaining motor resonance. The absence of significant effect of the characters on the brain activity suggests individual differences in the neural responses to unfamiliar artificial agents. While computer-animated characters are invaluable tools to investigate the neural bases of social cognition, further research is required to better understand how factors such as anthropomorphism affect their perception, in order to optimize their appearance for entertainment, research or therapeutic purposes.",
"title": ""
}
] | scidocsrr |
0e6d0376110dc8b335378bf8b498dfca | Measuring the Effect of Conversational Aspects on Machine Translation Quality | [
{
"docid": "355d040cf7dd706f08ef4ce33d53a333",
"text": "Conversational participants tend to immediately and unconsciously adapt to each other’s language styles: a speaker will even adjust the number of articles and other function words in their next utterance in response to the number in their partner’s immediately preceding utterance. This striking level of coordination is thought to have arisen as a way to achieve social goals, such as gaining approval or emphasizing difference in status. But has the adaptation mechanism become so deeply embedded in the language-generation process as to become a reflex? We argue that fictional dialogs offer a way to study this question, since authors create the conversations but don’t receive the social benefits (rather, the imagined characters do). Indeed, we find significant coordination across many families of function words in our large movie-script corpus. We also report suggestive preliminary findings on the effects of gender and other features; e.g., surprisingly, for articles, on average, characters adapt more to females than to males.",
"title": ""
},
{
"docid": "e8f431676ed0a85cb09a6462303a3ec7",
"text": "This paper describes Champollion, a lexicon-based sentence aligner designed for robust alignment of potential noisy parallel text. Champollion increases the robustness of the alignment by assigning greater weights to less frequent translated words. Experiments on a manually aligned Chinese – English parallel corpus show that Champollion achieves high precision and recall on noisy data. Champollion can be easily ported to new language pairs. It’s freely available to the public.",
"title": ""
}
] | [
{
"docid": "dcacbed90f45b76e9d40c427e16e89d6",
"text": "High torque density and low torque ripple are crucial for traction applications, which allow electrified powertrains to perform properly during start-up, acceleration, and cruising. High-quality anisotropic magnetic materials such as cold-rolled grain-oriented electrical steels can be used for achieving higher efficiency, torque density, and compactness in synchronous reluctance motors equipped with transverse laminated rotors. However, the rotor cylindrical geometry makes utilization of these materials with pole numbers higher than two more difficult. From a reduced torque ripple viewpoint, particular attention to the rotor slot pitch angle design can lead to improvements. This paper presents an innovative rotor lamination design and assembly using cold-rolled grain-oriented electrical steel to achieve higher torque density along with an algorithm for rotor slot pitch angle design for reduced torque ripple. The design methods and prototyping process are discussed, finite-element analyses and experimental examinations are carried out, and the results are compared to verify and validate the proposed methods.",
"title": ""
},
{
"docid": "991c5610152acf37b9a5e90b4f89bab8",
"text": "The BioTac® is a biomimetic tactile sensor for grip control and object characterization. It has three sensing modalities: thermal flux, microvibration and force. In this paper, we discuss feature extraction and interpretation of the force modality data. The data produced by this force sensing modality during sensor-object interaction are monotonic but non-linear. Algorithms and machine learning techniques were developed and validated for extracting the radius of curvature (ROC), point of application of force (PAF) and force vector (FV). These features have varying degrees of usefulness in extracting object properties using only cutaneous information; most robots can also provide the equivalent of proprioceptive sensing. For example, PAF and ROC is useful for extracting contact points for grasp and object shape as the finger depresses and moves along an object; magnitude of FV is useful in evaluating compliance from reaction forces when a finger is pushed into an object at a given velocity while direction is important for maintaining stable grip.",
"title": ""
},
{
"docid": "054b3f9068c92545e9c2c39e0728ad17",
"text": "Data Aggregation is an important topic and a suitable technique in reducing the energy consumption of sensors nodes in wireless sensor networks (WSN’s) for affording secure and efficient big data aggregation. The wireless sensor networks have been broadly applied, such as target tracking and environment remote monitoring. However, data can be easily compromised by a vast of attacks, such as data interception and tampering of data. Data integrity protection is proposed, gives an identity-based aggregate signature scheme for wireless sensor networks with a designated verifier. The aggregate signature scheme keeps data integrity, can reduce bandwidth and storage cost. Furthermore, the security of the scheme is effectively presented based on the computation of Diffie-Hellman random oracle model.",
"title": ""
},
{
"docid": "1389323613225897330d250e9349867b",
"text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .",
"title": ""
},
{
"docid": "b842d759b124e1da0240f977d95a8b9a",
"text": "In this paper we argue for a broader view of ontology patterns and therefore present different use-cases where drawbacks of the current declarative pattern languages can be seen. We also discuss usecases where a declarative pattern approach can replace procedural-coded ontology patterns. With previous work on an ontology pattern language in mind we argue for a general pattern language.",
"title": ""
},
{
"docid": "556dbae297d06aaaeb0fd78016bd573f",
"text": "This paper presents a learning and scoring framework based on neural networks for speaker verification. The framework employs an autoencoder as its primary structure while three factors are jointly considered in the objective function for speaker discrimination. The first one, relating to the sample reconstruction error, makes the structure essentially a generative model, which benefits to learn most salient and useful properties of the data. Functioning in the middlemost hidden layer, the other two attempt to ensure that utterances spoken by the same speaker are mapped into similar identity codes in the speaker discriminative subspace, where the dispersion of all identity codes are maximized to some extent so as to avoid the effect of over-concentration. Finally, the decision score of each utterance pair is simply computed by cosine similarity of their identity codes. Dealing with utterances represented by i-vectors, the results of experiments conducted on the male portion of the core task in the NIST 2010 Speaker Recognition Evaluation (SRE) significantly demonstrate the merits of our approach over the conventional PLDA method.",
"title": ""
},
{
"docid": "734ca5ac095cc8339056fede2a642909",
"text": "The value of depth-first search or \"bacltracking\" as a technique for solving problems is illustrated by two examples. An improved version of an algorithm for finding the strongly connected components of a directed graph and ar algorithm for finding the biconnected components of an undirect graph are presented. The space and time requirements of both algorithms are bounded by k1V + k2E dk for some constants kl, k2, and ka, where Vis the number of vertices and E is the number of edges of the graph being examined.",
"title": ""
},
{
"docid": "352bcf1c407568871880ad059053e1ec",
"text": "In this paper we present a novel system for sketching the motion of a character. The process begins by sketching a character to be animated. An animated motion is then created for the character by drawing a continuous sequence of lines, arcs, and loops. These are parsed and mapped to a parameterized set of output motions that further reflect the location and timing of the input sketch. The current system supports a repertoire of 18 different types of motions in 2D and a subset of these in 3D. The system is unique in its use of a cursive motion specification, its ability to allow for fast experimentation, and its ease of use for non-experts.",
"title": ""
},
{
"docid": "4019beb9fa6ec59b4b19c790fe8ff832",
"text": "R. Cropanzano, D. E. Rupp, and Z. S. Byrne (2003) found that emotional exhaustion (i.e., 1 dimension of burnout) negatively affects organizational citizenship behavior (OCB). The authors extended this research by investigating relationships among 3 dimensions of burnout (emotional exhaustion, depersonalization, and diminished personal accomplishment) and OCB. They also affirmed the mediating effect of job involvement on these relationships. Data were collected from 296 paired samples of service employees and their supervisors from 12 hotels and restaurants in Taiwan. Findings demonstrated that emotional exhaustion and diminished personal accomplishment were related negatively to OCB, whereas depersonalization had no independent effect on OCB. Job involvement mediated the relationships among emotional exhaustion, diminished personal accomplishment, and OCB.",
"title": ""
},
{
"docid": "ad401a35f367fabf31b35586bc1d10c4",
"text": "This paper describes a small-size buck-type dc–dc converter for cellular phones. Output power MOSFETs and control circuitry are monolithically integrated. The newly developed pulse frequency modulation control integrated circuit, mounted on a planar inductor within the converter package, has a low quiescent current below 10 μA and a small chip size of 1.4 mm × 1.1 mm in a 0.35-μm CMOS process. The converter achieves a maximum efficiency of 90% and a power density above 100 W/cm<formula formulatype=\"inline\"> <tex Notation=\"TeX\">$^3$</tex></formula>.",
"title": ""
},
{
"docid": "c3112126fa386710fb478dcfe978630e",
"text": "In recent years, distributed intelligent microelectromechanical systems (DiMEMSs) have appeared as a new form of distributed embedded systems. DiMEMSs contain thousands or millions of removable autonomous devices, which will collaborate with each other to achieve the final target of the whole system. Programming such systems is becoming an extremely difficult problem. The difficulty is due not only to their inherent nature of distributed collaboration, mobility, large scale, and limited resources of their devices (e.g., in terms of energy, memory, communication, and computation) but also to the requirements of real-time control and tolerance for uncertainties such as inaccurate actuation and unreliable communications. As a result, existing programming languages for traditional distributed and embedded systems are not suitable for DiMEMSs. In this article, we first introduce the origin and characteristics of DiMEMSs and then survey typical implementations of DiMEMSs and related research hotspots. Finally, we propose a real-time programming framework that can be used to design new real-time programming languages for DiMEMSs. The framework is composed of three layers: a real-time programming model layer, a compilation layer, and a runtime system layer. The design challenges and requirements of these layers are investigated. The framework is then discussed in further detail and suggestions for future research are given.",
"title": ""
},
{
"docid": "2f4a4c223c13c4a779ddb546b3e3518c",
"text": "Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.",
"title": ""
},
{
"docid": "74373dd009fc6285b8f43516d8e8bf2c",
"text": "Computational speech reconstruction algorithms have the ultimate aim of returning natural sounding speech to aphonic and dysphonic patients as well as those who can only whisper. In particular, individuals who have lost glottis function due to disease or surgery, retain the power of vocal tract modulation to some degree but they are unable to speak anything more than hoarse whispers without prosthetic aid. While whispering can be seen as a natural and secondary aspect of speech communications for most people, it becomes the primary mechanism of communications for those who have impaired voice production mechanisms, such as laryngectomees. In this paper, by considering the current limitations of speech reconstruction methods, a novel algorithm for converting whispers to normal speech is proposed and the efficiency of the algorithm is explored. The algorithm relies upon cascading mapping models and makes use of artificially generated whispers (called whisperised speech) to regenerate natural phonated speech from whispers. Using a training-based approach, the mapping models exploit whisperised speech to overcome frame to frame time alignment problems that are inherent in the speech reconstruction process. This algorithm effectively regenerates missing information in the conventional frameworks of phonated speech reconstruction, ∗Corresponding author Email address: [email protected] (Hamid R. Sharifzadeh) Preprint submitted to Journal of Computers & Electrical Engineering February 15, 2016 and is able to outperform the current state-of-the-art regeneration methods using both subjective and objective criteria.",
"title": ""
},
{
"docid": "bab429bf74fe4ce3f387a716964a867f",
"text": "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.",
"title": ""
},
{
"docid": "715fda02bad1633be9097cc0a0e68c8d",
"text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules. These new measures are computed considering uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.",
"title": ""
},
{
"docid": "7f82ff12310f74b17ba01cac60762a8c",
"text": "For worst case parameter mismatch, modest levels of unbalance are predicted through the use of minimum gate decoupling, dynamic load lines with high Q values, common source inductance or high yield screening. Each technique is evaluated in terms of current unbalance, transition energy, peak turn-off voltage and parasitic oscillations, as appropriate, for various pulse duty cycles and frequency ranges.",
"title": ""
},
{
"docid": "edcf1cb4d09e0da19c917eab9eab3b23",
"text": "The paper describes a computerized process of myocardial perfusion diagnosis from cardiac single proton emission computed tomography (SPECT) images using data mining and knowledge discovery approach. We use a six-step knowledge discovery process. A database consisting of 267 cleaned patient SPECT images (about 3000 2D images), accompanied by clinical information and physician interpretation was created first. Then, a new user-friendly algorithm for computerizing the diagnostic process was designed and implemented. SPECT images were processed to extract a set of features, and then explicit rules were generated, using inductive machine learning and heuristic approaches to mimic cardiologist's diagnosis. The system is able to provide a set of computer diagnoses for cardiac SPECT studies, and can be used as a diagnostic tool by a cardiologist. The achieved results are encouraging because of the high correctness of diagnoses.",
"title": ""
},
{
"docid": "6a2562987d10cdc499aca15da5526ebf",
"text": "The underwater images usually suffers from non-uniform lighting, low contrast, blur and diminished colors. In this paper, we proposed an image based preprocessing technique to enhance the quality of the underwater images. The proposed technique comprises a combination of four filters such as homomorphic filtering, wavelet denoising, bilateral filter and contrast equalization. These filters are applied sequentially on degraded underwater images. The literature survey reveals that image based preprocessing algorithms uses standard filter techniques with various combinations. For smoothing the image, the image based preprocessing algorithms uses the anisotropic filter. The main drawback of the anisotropic filter is that iterative in nature and computation time is high compared to bilateral filter. In the proposed technique, in addition to other three filters, we employ a bilateral filter for smoothing the image. The experimentation is carried out in two stages. In the first stage, we have conducted various experiments on captured images and estimated optimal parameters for bilateral filter. Similarly, optimal filter bank and optimal wavelet shrinkage function are estimated for wavelet denoising. In the second stage, we conducted the experiments using estimated optimal parameters, optimal filter bank and optimal wavelet shrinkage function for evaluating the proposed technique. We evaluated the technique using quantitative based criteria such as a gradient magnitude histogram and Peak Signal to Noise Ratio (PSNR). Further, the results are qualitatively evaluated based on edge detection results. The proposed technique enhances the quality of the underwater images and can be employed prior to apply computer vision techniques.",
"title": ""
},
{
"docid": "b09c438933e0c9300e19f035eb0e9305",
"text": "A Reverse Conducting IGBT (RC-IGBT) is a promising device to reduce a size and cost of the power module thanks to the integration of IGBT and FWD into a single chip. However, it is difficult to achieve well-balanced performance between IGBT and FWD. Indeed, the total inverter loss of the conventional RC-IGBT was not so small as the individual IGBT and FWD pair. To minimize the loss, the most important key is the improvement of reverse recovery characteristics of FWD. We carefully extracted five effective parameters to improve the FWD characteristics, and investigated the impact of these parameters by using simulation and experiments. Finally, optimizing these parameters, we succeeded in fabricating the second-generation 600V class RC-IGBT with a smaller FWD loss than the first-generation RC-IGBT.",
"title": ""
},
{
"docid": "3b05b099ee7e043c43270e92ba5290bd",
"text": "In connection with a study of various aspects of the modifiability of behavior in the dancing mouse a need for definite knowledge concerning the relation of strength of stimulus to rate of learning arose. It was for the purpose of obtaining this knowledge that we planned and executed the experiments which are now to be described. Our work was greatly facilitated by the advice and assistance of Doctor E. G. MARTIN, Professor G. W. PIERCE, and Professor A. E. KENNELLY, and we desire to express here both our indebtedness and our thanks for their generous services.",
"title": ""
}
] | scidocsrr |
d3bca3025b5f26f3428a448435e5eab1 | Upsampling range data in dynamic environments | [
{
"docid": "67e16f36bb6d83c5d6eae959a7223b77",
"text": "Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods three principles will be discussed. The first principle, “method noise”, specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according to which a denoising method must transform a white noise into a white noise. Contrarily to “method noise”, this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, the “statistical optimality”, is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will be first shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the “noise to noise” principle. Third, that among them NL-means is closest to statistical optimality. A particular attention will be paid to the application of the statistical optimality criterion for movie denoising methods. It will be pointed out that current movie denoising methods are motion compensated neighborhood filters. This amounts to say that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately the aperture problem makes it impossible to estimate ground true trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.",
"title": ""
}
] | [
{
"docid": "ce0b0543238a81c3f02c43e63a285605",
"text": "Hatebusters is a web application for actively reporting YouTube hate speech, aiming to establish an online community of volunteer citizens. Hatebusters searches YouTube for videos with potentially hateful comments, scores their comments with a classifier trained on human-annotated data and presents users those comments with the highest probability of being hate speech. It also employs gamification elements, such as achievements and leaderboards, to drive user engagement.",
"title": ""
},
{
"docid": "8dd540b33035904f63c67b57d4c97aa3",
"text": "Wireless local area networks (WLANs) based on the IEEE 802.11 standards are one of today’s fastest growing technologies in businesses, schools, and homes, for good reasons. As WLAN deployments increase, so does the challenge to provide these networks with security. Security risks can originate either due to technical lapse in the security mechanisms or due to defects in software implementations. Standard Bodies and researchers have mainly used UML state machines to address the implementation issues. In this paper we propose the use of GSE methodology to analyse the incompleteness and uncertainties in specifications. The IEEE 802.11i security protocol is used as an example to compare the effectiveness of the GSE and UML models. The GSE methodology was found to be more effective in identifying ambiguities in specifications and inconsistencies between the specification and the state machines. Resolving all issues, we represent the robust security network (RSN) proposed in the IEEE 802.11i standard using different GSE models.",
"title": ""
},
{
"docid": "e3b1e52066d20e7c92e936cdb72cc32b",
"text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.",
"title": ""
},
{
"docid": "756b25456494b3ece9b240ba3957f91c",
"text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.",
"title": ""
},
{
"docid": "2639f5d735abed38ed4f7ebf11072087",
"text": "The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes. We propose a quantization scheme that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware. We also co-design a training procedure to preserve end-to-end model accuracy post quantization. As a result, the proposed quantization scheme improves the tradeoff between accuracy and on-device latency. The improvements are significant even on MobileNets, a model family known for run-time efficiency, and are demonstrated in ImageNet classification and COCO detection on popular CPUs.",
"title": ""
},
{
"docid": "9f005054e640c2db97995c7540fe2034",
"text": "Attack detection is usually approached as a classification problem. However, standard classification tools often perform poorly, because an adaptive attacker can shape his attacks in response to the algorithm. This has led to the recent interest in developing methods for adversarial classification, but to the best of our knowledge, there have been a very few prior studies that take into account the attacker’s tradeoff between adapting to the classifier being used against him with his desire to maintain the efficacy of his attack. Including this effect is a key to derive solutions that perform well in practice. In this investigation, we model the interaction as a game between a defender who chooses a classifier to distinguish between attacks and normal behavior based on a set of observed features and an attacker who chooses his attack features (class 1 data). Normal behavior (class 0 data) is random and exogenous. The attacker’s objective balances the benefit from attacks and the cost of being detected while the defender’s objective balances the benefit of a correct attack detection and the cost of false alarm. We provide an efficient algorithm to compute all Nash equilibria and a compact characterization of the possible forms of a Nash equilibrium that reveals intuitive messages on how to perform classification in the presence of an attacker. We also explore qualitatively and quantitatively the impact of the non-attacker and underlying parameters on the equilibrium strategies.",
"title": ""
},
{
"docid": "f0d55892fb927c5c5324cfb7b8380bda",
"text": "The paper presents application of data mining methods for recognizing the most significant genes and gene sequences (treated as features) stored in a dataset of gene expression microarray. The investigations are performed for autism data. Few chosen methods of feature selection have been applied and their results integrated in the final outcome. In this way we find the contents of small set of the most important genes associated with autism. They have been applied in the classification procedure aimed on recognition of autism from reference group members. The results of numerical experiments concerning selection of the most important genes and classification of the cases on the basis of the selected genes will be discussed. The main contribution of the paper is in developing the fusion system of the results of many selection approaches into the final set, most closely associated with autism. We have also proposed special procedure of estimating the number of highest rank genes used in classification procedure. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b3bb84322c28a9d0493d9b8a626666e4",
"text": "Underwater images often suffer from color distortion and low contrast, because light is scattered and absorbed when traveling through water. Such images with different color tones can be shot in various lighting conditions, making restoration and enhancement difficult. We propose a depth estimation method for underwater scenes based on image blurriness and light absorption, which can be used in the image formation model (IFM) to restore and enhance underwater images. Previous IFM-based image restoration methods estimate scene depth based on the dark channel prior or the maximum intensity prior. These are frequently invalidated by the lighting conditions in underwater images, leading to poor restoration results. The proposed method estimates underwater scene depth more accurately. Experimental results on restoring real and synthesized underwater images demonstrate that the proposed method outperforms other IFM-based underwater image restoration methods.",
"title": ""
},
{
"docid": "37d4b01b77e548aa6226774be627471c",
"text": "A fully integrated 8-channel phased-array receiver at 24 GHz is demonstrated. Each channel achieves a gain of 43 dB, noise figure of 8 dB, and an IIP3 of -11dBm, consuming 29 mA of current from a 2.5 V supply. The 8-channel array has a beam-forming resolution of 22.5/spl deg/, a peak-to-null ratio of 20 dB (4-bits), a total array gain of 61 dB, and improves the signal-to-noise ratio by 9 dB.",
"title": ""
},
{
"docid": "0a2e59ab99b9666d8cf3fb31be9fa40c",
"text": "Behavioral targeting (BT) is a widely used technique for online advertising. It leverages information collected on an individual's web-browsing behavior, such as page views, search queries and ad clicks, to select the ads most relevant to user to display. With the proliferation of social networks, it is possible to relate the behavior of individuals and their social connections. Although the similarity among connected individuals are well established (i.e., homophily), it is still not clear whether and how we can leverage the activities of one's friends for behavioral targeting; whether forecasts derived from such social information are more accurate than standard behavioral targeting models. In this paper, we strive to answer these questions by evaluating the predictive power of social data across 60 consumer domains on a large online network of over 180 million users in a period of two and a half months. To our best knowledge, this is the most comprehensive study of social data in the context of behavioral targeting on such an unprecedented scale. Our analysis offers interesting insights into the value of social data for developing the next generation of targeting services.",
"title": ""
},
{
"docid": "461ee7b6a61a6d375a3ea268081f80f5",
"text": "In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as could computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine the several representative applications of big data, including enterprise management, Internet of Things, online social networks, medial applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big-picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.",
"title": ""
},
{
"docid": "e63a8b6595e1526a537b0881bc270542",
"text": "The CTD which stands for “Conductivity-Temperature-Depth” is one of the most used instruments for the oceanographic measurements. MEMS based CTD sensor components consist of a conductivity sensor (C), temperature sensor (T) and a piezo resistive pressure sensor (D). CTDs are found in every marine related institute and navy throughout the world as they are used to produce the salinity profile for the area of the ocean under investigation and are also used to determine different oceanic parameters. This research paper provides the design, fabrication and initial test results on a prototype CTD sensor.",
"title": ""
},
{
"docid": "9cc23cd9bfb3e422e2b4ace1fe816855",
"text": "Evaluating surgeon skill has predominantly been a subjective task. Development of objective methods for surgical skill assessment are of increased interest. Recently, with technological advances such as robotic-assisted minimally invasive surgery (RMIS), new opportunities for objective and automated assessment frameworks have arisen. In this paper, we applied machine learning methods to automatically evaluate performance of the surgeon in RMIS. Six important movement features were used in the evaluation including completion time, path length, depth perception, speed, smoothness and curvature. Different classification methods applied to discriminate expert and novice surgeons. We test our method on real surgical data for suturing task and compare the classification result with the ground truth data (obtained by manual labeling). The experimental results show that the proposed framework can classify surgical skill level with relatively high accuracy of 85.7%. This study demonstrates the ability of machine learning methods to automatically classify expert and novice surgeons using movement features for different RMIS tasks. Due to the simplicity and generalizability of the introduced classification method, it is easy to implement in existing trainers. .",
"title": ""
},
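The passage above classifies expert versus novice surgeons from six movement features. Below is a minimal sketch of such a pipeline with scikit-learn; the synthetic feature table, the choice of classifier (an RBF SVM), and the cross-validation setup are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: completion time, path length, depth perception, speed, smoothness, curvature.
n = 40
experts = rng.normal(loc=[60, 1.2, 0.8, 0.05, 0.9, 0.3], scale=0.1, size=(n, 6))
novices = rng.normal(loc=[95, 2.0, 1.5, 0.03, 0.6, 0.6], scale=0.1, size=(n, 6))
X = np.vstack([experts, novices])
y = np.array([1] * n + [0] * n)  # 1 = expert, 0 = novice

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```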
{
"docid": "ef706ea7a6dcd5b71602ea4c28eb9bd3",
"text": "\"Stealth Dicing (SD) \" was developed to solve such inherent problems of dicing process as debris contaminants and unnecessary thermal damage on work wafer. In SD, laser beam power of transmissible wavelength is absorbed only around focal point in the wafer by utilizing temperature dependence of absorption coefficient of the wafer. And these absorbed power forms modified layer in the wafer, which functions as the origin of separation in followed separation process. Since only the limited interior region of a wafer is processed by laser beam irradiation, damages and debris contaminants can be avoided in SD. Besides characteristics of devices will not be affected. Completely dry process of SD is another big advantage over other dicing methods.",
"title": ""
},
{
"docid": "ba3f3ca8a34e1ea6e54fe9dde673b51f",
"text": "This paper proposes a high-efficiency dual-band on-chip rectifying antenna (rectenna) at 35 and 94 GHz for wireless power transmission. The rectenna is designed in slotline (SL) and finite-width ground coplanar waveguide (FGCPW) transmission lines in a CMOS 0.13-μm process. The rectenna comprises a high gain linear tapered slot antenna (LTSA), an FGCPW to SL transition, a bandpass filter, and a full-wave rectifier. The LTSA achieves a VSWR=2 fractional bandwidth of 82% and 41%, and a gain of 7.4 and 6.5 dBi at the frequencies of 35 and 94 GHz. The measured power conversion efficiencies are 53% and 37% in free space at 35 and 94 GHz, while the incident radiation power density is 30 mW/cm2 . The fabricated rectenna occupies a compact size of 2.9 mm2.",
"title": ""
},
{
"docid": "4783e35e54d0c7f555015427cbdc011d",
"text": "The language of deaf and dumb which uses body parts to convey the message is known as sign language. Here, we are doing a study to convert speech into sign language used for conversation. In this area we have many developed method to recognize alphabets and numerals of ISL (Indian sign language). There are various approaches for recognition of ISL and we have done a comparative studies between them [1].",
"title": ""
},
{
"docid": "3d319572361f55dd4b91881dac2c9ace",
"text": "In this paper, a modular interleaved boost converter is first proposed by integrating a forward energy-delivering circuit with a voltage-doubler to achieve high step-up ratio and high efficiency for dc-microgrid applications. Then, steady-state analyses are made to show the merits of the proposed converter module. For closed-loop control design, the corresponding small-signal model is also derived. It is seen that, for higher power applications, more modules can be paralleled to increase the power rating and the dynamic performance. As an illustration, closed-loop control of a 450-W rating converter consisting of two paralleled modules with 24-V input and 200-V output is implemented for demonstration. Experimental results show that the modular high step-up boost converter can achieve an efficiency of 95.8% approximately.",
"title": ""
},
{
"docid": "5a62c276e7cce7c7a10109f3c3b1e401",
"text": "A miniature coplanar antenna on a perovskite substrate is analyzed and designed using short circuit technique. The overall dimensions are minimized to 0.09 λ × 0.09 λ. The antenna geometry, the design concept, as well as the simulated and the measured results are discussed in this paper.",
"title": ""
},
{
"docid": "d9aadb86785057ae5445dc894b1ef7a7",
"text": "This paper presents Circe, an environment for the analysis of natural language requirements. Circe is first presented in terms of its architecture, based on a transformational paradigm. Details are then given for the various transformation steps, including (i) a novel technique for parsing natural language requirements, and (ii) an expert system based on modular agents, embodying intensional knowledge about software systems in general. The result of all the transformations is a set of models for the requirements document, for the system described by the requirements, and for the requirements writing process. These models can be inspected, measured, and validated against a given set of criteria. Some of the features of the environment are shown by means of an example. Various stages of requirements analysis are covered, from initial sketches to pseudo-code and UML models.",
"title": ""
},
{
"docid": "b58c1e18a792974f57e9f676c1495826",
"text": "The influence of bilingualism on cognitive test performance in older adults has received limited attention in the neuropsychology literature. The aim of this study was to examine the impact of bilingualism on verbal fluency and repetition tests in older Hispanic bilinguals. Eighty-two right-handed participants (28 men and 54 women) with a mean age of 61.76 years (SD = 9.30; range = 50-84) and a mean educational level of 14.8 years (SD = 3.6; range 2-23) were selected. Forty-five of the participants were English monolinguals, 18 were Spanish monolinguals, and 19 were Spanish-English bilinguals. Verbal fluency was tested by electing a verbal description of a picture and by asking participants to generate words within phonemic and semantic categories. Repetition was tested using a sentence-repetition test. The bilinguals' test scores were compared to English monolinguals' and Spanish monolinguals' test scores. Results demonstrated equal performance of bilingual and monolingual participants in all tests except that of semantic verbal fluency. Bilinguals who learned English before age 12 performed significantly better on the English repetition test and produced a higher number of words in the description of a picture than the bilinguals who learned English after age 12. Variables such as task demands, language interference, linguistic mode, and level of bilingualism are addressed in the Discussion section.",
"title": ""
}
] | scidocsrr |
63d340f89dd18d1873c3bdaf4de2f732 | DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction | [
{
"docid": "3ca057959a24245764953a6aa1b2ed84",
"text": "Distant supervision for relation extraction is an efficient method to scale relation extraction to very large corpora which contains thousands of relations. However, the existing approaches have flaws on selecting valid instances and lack of background knowledge about the entities. In this paper, we propose a sentence-level attention model to select the valid instances, which makes full use of the supervision information from knowledge bases. And we extract entity descriptions from Freebase and Wikipedia pages to supplement background knowledge for our task. The background knowledge not only provides more information for predicting relations, but also brings better entity representations for the attention module. We conduct three experiments on a widely used dataset and the experimental results show that our approach outperforms all the baseline systems significantly.",
"title": ""
},
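The passage above uses a sentence-level attention module to weight the instances of an entity-pair bag before predicting the relation. Below is a minimal numpy sketch of that weighting step (score each sentence vector against a relation query vector, softmax the scores, take the weighted sum); the vector sizes and the dot-product scoring function are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def attention_bag_representation(sentence_vecs, relation_query):
    """Combine the sentence vectors of one entity-pair bag into a single vector.

    sentence_vecs: (num_sentences, dim) encodings of each sentence in the bag.
    relation_query: (dim,) query vector associated with the candidate relation.
    """
    scores = sentence_vecs @ relation_query                # one score per sentence
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                               # softmax over the bag
    return weights @ sentence_vecs, weights                # weighted sum + the weights

rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 8))        # 5 sentences mentioning the same entity pair
query = rng.normal(size=8)           # learned relation embedding (assumed)
bag_vec, alphas = attention_bag_representation(bag, query)
print("attention weights:", np.round(alphas, 3))
```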
{
"docid": "3388d2e88fdc2db9967da4ddb452d9f1",
"text": "Entity pair provide essential information for identifying relation type. Aiming at this characteristic, Position Feature is widely used in current relation classification systems to highlight the words close to them. However, semantic knowledge involved in entity pair has not been fully utilized. To overcome this issue, we propose an Entity-pair-based Attention Mechanism, which is specially designed for relation classification. Recently, attention mechanism significantly promotes the development of deep learning in NLP. Inspired by this, for specific instance(entity pair, sentence), the corresponding entity pair information is incorporated as prior knowledge to adaptively compute attention weights for generating sentence representation. Experimental results on SemEval-2010 Task 8 dataset show that our method outperforms most of the state-of-the-art models, without external linguistic features.",
"title": ""
},
{
"docid": "c1943f443b0e7be72091250b34262a8f",
"text": "We survey recent approaches to noise reduction in distant supervision learning for relation extraction. We group them according to the principles they are based on: at-least-one constraints, topic-based models, or pattern correlations. Besides describing them, we illustrate the fundamental differences and attempt to give an outlook to potentially fruitful further research. In addition, we identify related work in sentiment analysis which could profit from approaches to noise reduction.",
"title": ""
},
{
"docid": "9c44aba7a9802f1fe95fbeb712c23759",
"text": "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.",
"title": ""
}
] | [
{
"docid": "dea7d83ed497fc95f4948a5aa4787b18",
"text": "The distinguishing feature of the Fog Computing (FC) paradigm is that FC spreads communication and computing resources over the wireless access network, so as to provide resource augmentation to resource and energy-limited wireless (possibly mobile) devices. Since FC would lead to substantial reductions in energy consumption and access latency, it will play a key role in the realization of the Fog of Everything (FoE) paradigm. The core challenge of the resulting FoE paradigm is tomaterialize the seamless convergence of three distinct disciplines, namely, broadband mobile communication, cloud computing, and Internet of Everything (IoE). In this paper, we present a new IoE architecture for FC in order to implement the resulting FoE technological platform. Then, we elaborate the related Quality of Service (QoS) requirements to be satisfied by the underlying FoE technological platform. Furthermore, in order to corroborate the conclusion that advancements in the envisioned architecture description, we present: (i) the proposed energy-aware algorithm adopt Fog data center; and, (ii) the obtained numerical performance, for a real-world case study that shows that our approach saves energy consumption impressively in theFog data Center compared with the existing methods and could be of practical interest in the incoming Fog of Everything (FoE) realm.",
"title": ""
},
{
"docid": "a73df97081ec01929e06969c52775007",
"text": "Massive graphs arise naturally in a lot of applications, especially in communication networks like the internet. The size of these graphs makes it very hard or even impossible to store set of edges in the main memory. Thus, random access to the edges can't be realized, which makes most o ine algorithms unusable. This essay investigates e cient algorithms that read the edges only in a xed sequential order. Since even basic graph problems often need at least linear space in the number of vetices to be solved, the storage space bounds are relaxed compared to the classic streaming model, such that the bound is O(n · polylog n). The essay describes algorithms for approximations of the unweighted and weighted matching problem and gives a o(log1− n) lower bound for approximations of the diameter. Finally, some results for further graph problems are discussed.",
"title": ""
},
{
"docid": "6ae739344034410a570b12a57db426e3",
"text": "In recent times we tend to use a number of surveillance systems for monitoring the targeted area. This requires an enormous amount of storage space along with a lot of human power in order to implement and monitor the area under surveillance. This is supposed to be costly and not a reliable process. In this paper we propose an intelligent surveillance system that continuously monitors the targeted area and detects motion in each and every frame. If the system detects motion in the targeted area then a notification is automatically sent to the user by sms and the video starts getting recorded till the motion is stopped. Using this method the required memory space for storing the video is reduced since it doesn't store the entire video but stores the video only when a motion is detected. This is achieved by using real time video processing using open CV (computer vision / machine vision) technology and raspberry pi system.",
"title": ""
},
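The passage above triggers recording and an SMS alert when motion is detected in a frame. Below is a minimal OpenCV sketch of the frame-differencing step that would drive such a trigger; the camera index, threshold, and pixel-count cut-off are illustrative assumptions, and the recording/notification hooks are left as placeholders.

```python
import cv2

cap = cv2.VideoCapture(0)            # camera index is an assumption
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

MOTION_PIXELS = 5000                 # illustrative sensitivity cut-off

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)          # pixel-wise change between frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > MOTION_PIXELS:
        # Placeholders: start/continue video recording and send the SMS alert here.
        print("motion detected")
    prev_gray = gray

cap.release()
```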
{
"docid": "2c5ab4dddbb6aeae4542b42f57e54d72",
"text": "Online action detection is a challenging problem: a system needs to decide what action is happening at the current frame, based on previous frames only. Fortunately in real-life, human actions are not independent from one another: there are strong (long-term) dependencies between them. An online action detection method should be able to capture these dependencies, to enable a more accurate early detection. At first sight, an LSTM seems very suitable for this problem. It is able to model both short-term and long-term patterns. It takes its input one frame at the time, updates its internal state and gives as output the current class probabilities. In practice, however, the detection results obtained with LSTMs are still quite low. In this work, we start from the hypothesis that it may be too difficult for an LSTM to learn both the interpretation of the input and the temporal patterns at the same time. We propose a two-stream feedback network, where one stream processes the input and the other models the temporal relations. We show improved detection accuracy on an artificial toy dataset and on the Breakfast Dataset [21] and the TVSeries Dataset [7], reallife datasets with inherent temporal dependencies between the actions.",
"title": ""
},
{
"docid": "51e307584d6446ba2154676d02d2cc84",
"text": "This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio’s dual coding theory, Baddeley’s working memory model, Engelkamp’s multimodal theory, Sweller’s cognitive load theory, Mayer’s multimedia learning theory, and Nathan’s ANIMATE theory. The discussion emphasizes the interplay between traditional research studies and instructional applications of this research for increasing recall, reducing interference, minimizing cognitive load, and enhancing understanding. Tentative conclusions are that (a) there is general agreement among the different architectures, which differ in focus; (b) learners’ integration of multiple codes is underspecified in the models; (c) animated instruction is not required when mental simulations are sufficient; (d) actions must be meaningful to be successful; and (e) multimodal instruction is superior to targeting modality-specific individual differences.",
"title": ""
},
{
"docid": "de48b60276b27861d58aaaf501606d69",
"text": "Many environmental variables that are important for the development of chironomid larvae (such as water temperature, oxygen availability, and food quantity) are related to water depth, and a statistically strong relationship between chironomid distribution and water depth is therefore expected. This study focuses on the distribution of fossil chironomids in seven shallow lakes and one deep lake from the Plymouth Aquifer (Massachusetts, USA) and aims to assess the influence of water depth on chironomid assemblages within a lake. Multiple samples were taken per lake in order to study the distribution of fossil chironomid head capsules within a lake. Within each lake, the chironomid assemblages are diverse and the changes that are seen in the assemblages are strongly related to changes in water depth. Several thresholds (i.e., where species turnover abruptly changes) are identified in the assemblages, and most lakes show abrupt changes at about 1–2 and 5–7 m water depth. In the deep lake, changes also occur at 9.6 and 15 m depth. The distribution of many individual taxa is significantly correlated to water depth, and we show that the identification of different taxa within the genus Tanytarsus is important because different morphotypes show different responses to water depth. We conclude that the chironomid fauna is sensitive to changes in lake level, indicating that fossil chironomid assemblages can be used as a tool for quantitative reconstruction of lake level changes.",
"title": ""
},
{
"docid": "5495aeaa072a1f8f696298ebc7432045",
"text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are recently proposed optimized variant of DNNs. BNNs constraint network weight and/or neuron value to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitudes in efficiency improvements over software, without having to lock into a fixed ASIC solution.",
"title": ""
},
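The passage above exploits the fact that with ±1 weights and activations a dot product reduces to an XNOR plus a population count. Below is a minimal Python sketch of that bit-level identity (dot = n − 2·popcount(a XOR b)); the 64-element vector length and the bit-packing scheme are illustrative, not tied to any particular accelerator.

```python
import numpy as np

def binarize_to_bits(v):
    """Map a +/-1 vector to a packed integer: bit i is set when v[i] == +1."""
    bits = 0
    for i, x in enumerate(v):
        if x > 0:
            bits |= 1 << i
    return bits

def xnor_popcount_dot(bits_a, bits_b, n):
    # Matching bits contribute +1 and mismatching bits -1, so
    # dot = (#matches) - (#mismatches) = n - 2 * popcount(a XOR b).
    return n - 2 * bin(bits_a ^ bits_b).count("1")

rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=64)
b = rng.choice([-1, 1], size=64)
assert xnor_popcount_dot(binarize_to_bits(a), binarize_to_bits(b), len(a)) == int(a @ b)
print("binary dot product:", int(a @ b))
```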
{
"docid": "d7574e4d5fd3a395907db7a7d380652b",
"text": "In this paper, we analyze and evaluate word embeddings for representation of longer texts in the multi-label document classification scenario. The embeddings are used in three convolutional neural network topologies. The experiments are realized on the Czech ČTK and English Reuters-21578 standard corpora. We compare the results of word2vec static and trainable embeddings with randomly initialized word vectors. We conclude that initialization does not play an important role for classification. However, learning of word vectors is crucial to obtain good results.",
"title": ""
},
{
"docid": "fbd05f764470b94af30c7799e94ff0f0",
"text": "Agent-based modeling of human social behavior is an increasingly important research area. A key factor in human social interaction is our beliefs about others, a theory of mind. Whether we believe a message depends not only on its content but also on our model of the communicator. How we act depends not only on the immediate effect but also on how we believe others will react. In this paper, we discuss PsychSim, an implemented multiagent-based simulation tool for modeling interactions and influence. While typical approaches to such modeling have used first-order logic, PsychSim agents have their own decision-theoretic model of the world, including beliefs about its environment and recursive models of other agents. Using these quantitative models of uncertainty and preferences, we have translated existing psychological theories into a decision-theoretic semantics that allow the agents to reason about degrees of believability in a novel way. We discuss PsychSim’s underlying architecture and describe its application to a school violence scenario for illustration.",
"title": ""
},
{
"docid": "18824b0ce748e097c049440439116b77",
"text": "Before we try to specify how to give a semantic analysis of discourse, we must define what semantic analysis is and what kinds of semantic analysis can be distinguished. Such a definition will be as complex as the number of semantic theories in the various disciplines involved in the study of language: linguistics and grammar, the philosophy of language, logic, cognitive psychology, and sociology, each with several competing semantic theories. These theories will be different according to their object of analysis, their aims, and their methods. Yet, they will also have some common properties that allow us to call them semantic theories. In this chapter I first enumerate more or less intuitively a number of these common properties, then select some of them for further theoretical analysis, and finally apply the theoretical notions in actual semantic analyses of some discourse fragments. In the most general sense, semantics is a component theory within a larger semiotic theory about meaningful, symbolic, behavior. Hence we have not only a semantics of natural language utterances or acts, but also of nonverbal or paraverbal behavior, such as gestures, pictures and films, logical systems or computer languages, sign languages of the deaf, and perhaps social interaction in general. In this chapter we consider only the semantics of natural-language utterances, that is, discourses, and their component elements, such as words, phrases, clauses, sentences, paragraphs, and other identifiable discourse units. Other semiotic aspects of verbal and nonverbal communication are treated elsewhere in this Handbook. Probably the most general concept used to denote the specific object",
"title": ""
},
{
"docid": "921c7a6c3902434b250548e573816978",
"text": "Energy harvesting based on tethered kites makes use of the advantage, that these airborne wind energy systems are able to exploit higher wind speeds at higher altitudes. The setup, considered in this paper, is based on the pumping cycle, which generates energy by winching out at high tether forces, driving an electrical generator while flying crosswind and winching in at a stationary neutral position, thus leaving a net amount of generated energy. The economic operation of such airborne wind energy plants demands for a reliable control system allowing for a complete autonomous operation of cycles. This task involves the flight control of the kite as well as the operation of a winch for the tether. The focus of this paper is put on the flight control, which implements an accurate direction control towards target points allowing for eight-down pattern flights. In addition, efficient winch control strategies are provided. The paper summarises a simple comprehensible model with equations of motion in order to motivate the approach of the control system design. After an extended overview on the control system, the flight controller parts are discussed in detail. Subsequently, the winch strategies based on an optimisation scheme are presented. In order to demonstrate the real world functionality of the presented algorithms, flight data from a fully automated pumping-cycle operation of a small-scale prototype setup based on a 30 m2 kite and a 50 kW electrical motor/generator is given.",
"title": ""
},
{
"docid": "51ece87cfa463cd76c6fd60e2515c9f4",
"text": "In a 1998 speech before the California Science Center in Los Angeles, then US VicePresident Al Gore called for a global undertaking to build a multi-faceted computing system for education and research, which he termed “Digital Earth.” The vision was that of a system providing access to what is known about the planet and its inhabitants’ activities – currently and for any time in history – via responses to queries and exploratory tools. Furthermore, it would accommodate modeling extensions for predicting future conditions. Organized efforts towards realizing that vision have diminished significantly since 2001, but progress on key requisites has been made. As the 10 year anniversary of that influential speech approaches, we re-examine it from the perspective of a systematic software design process and find the envisioned system to be in many respects inclusive of concepts of distributed geolibraries and digital atlases. A preliminary definition for a particular digital earth system as: “a comprehensive, distributed geographic information and knowledge organization system,” is offered and discussed. We suggest that resumption of earlier design and focused research efforts can and should be undertaken, and may prove a worthwhile “Grand Challenge” for the GIScience community.",
"title": ""
},
{
"docid": "1b6e35187b561de95051f67c70025152",
"text": "Ž . The technology acceptance model TAM proposes that ease of use and usefulness predict applications usage. The current research investigated TAM for work-related tasks with the World Wide Web as the application. One hundred and sixty-three subjects responded to an e-mail survey about a Web site they access often in their jobs. The results support TAM. They also Ž . Ž . demonstrate that 1 ease of understanding and ease of finding predict ease of use, and that 2 information quality predicts usefulness for revisited sites. In effect, the investigation applies TAM to help Web researchers, developers, and managers understand antecedents to users’ decisions to revisit sites relevant to their jobs. q 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "d90b6c61369ff0458843241cd30437ba",
"text": "The unprecedented challenges of creating Biosphere 2, the world's first laboratory for biospherics, the study of global ecology and long-term closed ecological system dynamics, led to breakthrough developments in many fields, and a deeper understanding of the opportunities and difficulties of material closure. This paper will review accomplishments and challenges, citing some of the key research findings and publications that have resulted from the experiments in Biosphere 2. Engineering accomplishments included development of a technique for variable volume to deal with pressure differences between the facility and outside environment, developing methods of atmospheric leak detection and sealing, while achieving new standards of closure, with an annual atmospheric leakrate of less than 10%, or less than 300 ppm per day. This degree of closure permitted detailed tracking of carbon dioxide, oxygen, and trace gases such as nitrous oxide and ethylene over the seasonal variability of two years. Full closure also necessitated developing new approaches and technologies for complete air, water, and wastewater recycle and reuse within the facility. The development of a soil-based highly productive agricultural system was a first in closed ecological systems, and much was learned about managing a wide variety of crops using non-chemical means of pest and disease control. Closed ecological systems have different temporal biogeochemical cycling and ranges of atmospheric components because of their smaller reservoirs of air, water and soil, and higher concentration of biomass, and Biosphere 2 provided detailed examination and modeling of these accelerated cycles over a period of closure which measured in years. Medical research inside Biosphere 2 included the effects on humans of lowered oxygen: the discovery that human productivity can be maintained with good health with lowered atmospheric oxygen levels could lead to major economies on the design of space stations and planetary/lunar settlements. The improved health resulting from the calorie-restricted but nutrient dense Biosphere 2 diet was the first such scientifically controlled experiment with humans. The success of Biosphere 2 in creating a diversity of terrestrial and marine environments, from rainforest to coral reef, allowed detailed studies with comprehensive measurements such that the dynamics of these complex biomic systems are now better understood. The coral reef ecosystem, the largest artificial reef ever built, catalyzed methods of study now being applied to planetary coral reef systems. Restoration ecology advanced through the creation and study of the dynamics of adaptation and self-organization of the biomes in Biosphere 2. The international interest that Biosphere 2 generated has given new impetus to the public recognition of the sciences of biospheres (biospherics), biomes and closed ecological life systems. The facility, although no longer a materially-closed ecological system, is being used as an educational facility by Columbia University as an introduction to the study of the biosphere and complex system ecology and for carbon dioxide impacts utilizing the complex ecosystems created in Biosphere '. 
The many lessons learned from Biosphere 2 are being used by its key team of creators in their design and operation of a laboratory-sized closed ecological system, the Laboratory Biosphere, in operation as of March 2002, and for the design of a Mars on Earth(TM) prototype life support system for manned missions to Mars and Mars surface habitats. Biosphere 2 is an important foundation for future advances in biospherics and closed ecological system research.",
"title": ""
},
{
"docid": "ffd4fc3c7d63eab3cc8a7129f31afdea",
"text": "The growth of desktop 3-D printers is driving an interest in recycled 3-D printer filament to reduce costs of distributed production. Life cycle analysis studies were performed on the recycling of high density polyethylene into filament suitable for additive layer manufacturing with 3-D printers. The conventional centralized recycling system for high population density and low population density rural locations was compared to the proposed in home, distributed recycling system. This system would involve shredding and then producing filament with an open-source plastic extruder from postconsumer plastics and then printing the extruded filament into usable, value-added parts and products with 3-D printers such as the open-source self replicating rapid prototyper, or RepRap. The embodied energy and carbon dioxide emissions were calculated for high density polyethylene recycling using SimaPro 7.2 and the database EcoInvent v2.0. The results showed that distributed recycling uses less embodied energy than the best-case scenario used for centralized recycling. For centralized recycling in a low-density population case study involving substantial embodied energy use for transportation and collection these savings for distributed recycling were found to extend to over 80%. If the distributed process is applied to the U.S. high density polyethylene currently recycled, more than 100 million MJ of energy could be conserved per annum along with the concomitant significant reductions in greenhouse gas emissions. It is concluded that with the open-source 3-D printing network expanding rapidly the potential for widespread adoption of in-home recycling of post-consumer plastic represents a novel path to a future of distributed manufacturing appropriate for both the developed and developing world with lower environmental impacts than the current system.",
"title": ""
},
{
"docid": "fa07419129af7100fc0bf38746f084aa",
"text": "We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.",
"title": ""
},
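The passage above benchmarks sparse matrix-vector multiply (SpMV) across multicore platforms. Below is a minimal sketch of the baseline CSR (compressed sparse row) kernel that such optimizations start from; the array names follow the usual CSR convention and the small example matrix is illustrative.

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a matrix A stored in CSR form."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 3x3 example:  [[4, 0, 1],
#                [0, 3, 0],
#                [2, 0, 5]]
values = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 2.0, 3.0])
print(spmv_csr(values, col_idx, row_ptr, x))   # expected [ 7.  6. 17.]
```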
{
"docid": "8baa6af3ee08029f0a555e4f4db4e218",
"text": "We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. In contrast, our probabilistic models are trained directly from question/answer pairs using EM and our simplest model has a concave objective that guarantees convergence to a global optimum. An experimental evaluation on a set of 4th grade science questions demonstrates that our models improve semantic parser accuracy (35-70% error reduction) and efficiency (4-25x more sentences per second) relative to prior work despite using less human input. Our models also obtain competitive results on GEO880 without any datasetspecific engineering.",
"title": ""
},
{
"docid": "836815216224b278df229927d825e411",
"text": "Logistics demand forecasting is important for investment decision-making of infrastructure and strategy programming of the logistics industry. In this paper, a hybrid method which combines the Grey Model, artificial neural networks and other techniques in both learning and analyzing phases is proposed to improve the precision and reliability of forecasting. After establishing a learning model GNNM(1,8) for road logistics demand forecasting, we chose road freight volume as target value and other economic indicators, i.e. GDP, production value of primary industry, total industrial output value, outcomes of tertiary industry, retail sale of social consumer goods, disposable personal income, and total foreign trade value as the seven key influencing factors for logistics demand. Actual data sequences of the province of Zhejiang from years 1986 to 2008 were collected as training and test-proof samples. By comparing the forecasting results, it turns out that GNNM(1,8) is an appropriate forecasting method to yield higher accuracy and lower mean absolute percentage errors than other individual models for short-term logistics demand forecasting.",
"title": ""
},
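The passage above builds a GNNM(1,8) grey neural network for logistics demand forecasting. Below is a minimal sketch of the basic GM(1,1) grey model that such hybrids build on (accumulated series, least-squares fit of the development coefficient a and grey input b, then the exponential response); the freight series is made up for illustration, and the full GNNM(1,8) with its seven influencing factors and neural-network component is not reproduced here.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to series x0 and forecast `steps` values ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # means of consecutive accumulated values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]          # development coefficient, grey input

    def x1_hat(k):                                       # time response at step k = 0, 1, 2, ...
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    n = len(x0)
    preds = [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]
    return np.array(preds)

freight = [320, 345, 372, 401, 433, 468]                 # illustrative annual freight volumes
print(gm11_forecast(freight, steps=2))
```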
{
"docid": "16b8a948e76a04b1703646d5e6111afe",
"text": "Nanotechnology offers many potential benefits to cancer research through passive and active targeting, increased solubility/bioavailablility, and novel therapies. However, preclinical characterization of nanoparticles is complicated by the variety of materials, their unique surface properties, reactivity, and the task of tracking the individual components of multicomponent, multifunctional nanoparticle therapeutics in in vivo studies. There are also regulatory considerations and scale-up challenges that must be addressed. Despite these hurdles, cancer research has seen appreciable improvements in efficacy and quite a decrease in the toxicity of chemotherapeutics because of 'nanotech' formulations, and several engineered nanoparticle clinical trials are well underway. This article reviews some of the challenges and benefits of nanomedicine for cancer therapeutics and diagnostics.",
"title": ""
},
{
"docid": "a40c00b1dc4a8d795072e0a8cec09d7a",
"text": "Summary form only given. Most of current job scheduling systems for supercomputers and clusters provide batch queuing support. With the development of metacomputing and grid computing, users require resources managed by multiple local job schedulers. Advance reservations are becoming essential for job scheduling systems to be utilized within a large-scale computing environment with geographically distributed resources. COSY is a lightweight implementation of such a local job scheduler with support for both queue scheduling and advance reservations. COSY queue scheduling utilizes the FCFS algorithm with backfilling mechanisms and priority management. Advance reservations with COSY can provide effective QoS support for exact start time and latest completion time. Scheduling polices are defined to reject reservations with too short notice time so that there is no start time advantage to making a reservation over submitting to a queue. Further experimental results show that as a larger percentage of reservation requests are involved, a longer mandatory shortest notice time for advance reservations must be applied in order not to sacrifice queue scheduling efficiency.",
"title": ""
}
] | scidocsrr |
6b5140a6b1b2d1da1a1552aa0b4eeeb2 | Deep Q-learning From Demonstrations | [
{
"docid": "a3bce6c544a08e48a566a189f66d0131",
"text": "Model-free episodic reinforcement learning problems define the environment reward with functions that often provide only sparse information throughout the task. Consequently, agents are not given enough feedback about the fitness of their actions until the task ends with success or failure. Previous work addresses this problem with reward shaping. In this paper we introduce a novel approach to improve modelfree reinforcement learning agents’ performance with a three step approach. Specifically, we collect demonstration data, use the data to recover a linear function using inverse reinforcement learning and we use the recovered function for potential-based reward shaping. Our approach is model-free and scalable to high dimensional domains. To show the scalability of our approach we present two sets of experiments in a two dimensional Maze domain, and the 27 dimensional Mario AI domain. We compare the performance of our algorithm to previously introduced reinforcement learning from demonstration algorithms. Our experiments show that our approach outperforms the state-of-the-art in cumulative reward, learning rate and asymptotic performance.",
"title": ""
}
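The passage above recovers a reward function from demonstrations and uses it for potential-based reward shaping. Below is a minimal sketch of the shaping step itself, F(s, s') = γΦ(s') − Φ(s), added on top of the environment reward; the grid-world potential (negative Manhattan distance to a goal) is an illustrative stand-in for a potential derived from the IRL-recovered reward.

```python
GAMMA = 0.99
GOAL = (4, 4)

def potential(state):
    # Illustrative potential: closer to the goal = higher potential.
    # In the approach described above this would come from the recovered linear reward.
    x, y = state
    return -(abs(GOAL[0] - x) + abs(GOAL[1] - y))

def shaped_reward(env_reward, state, next_state):
    # Potential-based shaping; this form preserves the optimal policy (Ng et al., 1999).
    return env_reward + GAMMA * potential(next_state) - potential(state)

# Example transition: one step toward the goal under a sparse environment reward of 0.
print(shaped_reward(0.0, state=(1, 1), next_state=(2, 1)))   # positive shaping signal
```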
] | [
{
"docid": "8e6d17b6d7919d76cebbcefcc854573e",
"text": "Vincent Larivière École de bibliothéconomie et des sciences de l’information, Université de Montréal, C.P. 6128, Succ. CentreVille, Montréal, QC H3C 3J7, Canada, and Observatoire des Sciences et des Technologies (OST), Centre Interuniversitaire de Recherche sur la Science et la Technologie (CIRST), Université du Québec à Montréal, CP 8888, Succ. Centre-Ville, Montréal, QC H3C 3P8, Canada. E-mail: [email protected]",
"title": ""
},
{
"docid": "c1d5f28d264756303fded5faa65587a2",
"text": "English vocabulary learning and ubiquitous learning have separately received considerable attention in recent years. However, research on English vocabulary learning in ubiquitous learning contexts has been less studied. In this study, we develop a ubiquitous English vocabulary learning (UEVL) system to assist students in experiencing a systematic vocabulary learning process in which ubiquitous technology is used to develop the system, and video clips are used as the material. Afterward, the technology acceptance model and partial least squares approach are used to explore students’ perspectives on the UEVL system. The results indicate that (1) both the system characteristics and the material characteristics of the UEVL system positively and significantly influence the perspectives of all students on the system; (2) the active students are interested in perceived usefulness; (3) the passive students are interested in perceived ease of use. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0df26f2f40e052cde72048b7538548c3",
"text": "Keshif is an open-source, web-based data exploration environment that enables data analytics novices to create effective visual and interactive dashboards and explore relations with minimal learning time, and data analytics experts to explore tabular data in multiple perspectives rapidly with minimal setup time. In this paper, we present a high-level overview of the exploratory features and design characteristics of Keshif, as well as its API and a selection of its implementation specifics. We conclude with a discussion of its use as an open-source project.",
"title": ""
},
{
"docid": "b0a37782d653fa03843ecdc118a56034",
"text": "Non-frontal lip views contain useful information which can be used to enhance the performance of frontal view lipreading. However, the vast majority of recent lipreading works, including the deep learning approaches which significantly outperform traditional approaches, have focused on frontal mouth images. As a consequence, research on joint learning of visual features and speech classification from multiple views is limited. In this work, we present an end-to-end multi-view lipreading system based on Bidirectional Long-Short Memory (BLSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and performs visual speech classification from multiple views and also achieves state-of-the-art performance. The model consists of multiple identical streams, one for each view, which extract features directly from different poses of mouth images. The temporal dynamics in each stream/view are modelled by a BLSTM and the fusion of multiple streams/views takes place via another BLSTM. An absolute average improvement of 3% and 3.8% over the frontal view performance is reported on the OuluVS2 database when the best two (frontal and profile) and three views (frontal, profile, 45◦) are combined, respectively. The best three-view model results in a 10.5% absolute improvement over the current multi-view state-of-the-art performance on OuluVS2, without using external databases for training, achieving a maximum classification accuracy of 96.9%.",
"title": ""
},
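The passage above models each view's frame sequence with a BLSTM and fuses the streams with another BLSTM. Below is a minimal PyTorch sketch of a single-stream bidirectional LSTM classifier over a feature sequence; the feature size, hidden size, mean pooling over time, and class count are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class BLSTMClassifier(nn.Module):
    def __init__(self, feat_dim=64, hidden=128, num_classes=10):
        super().__init__()
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)   # forward + backward hidden states

    def forward(self, x):                # x: (batch, time, feat_dim)
        out, _ = self.blstm(x)           # (batch, time, 2 * hidden)
        pooled = out.mean(dim=1)         # average over time (illustrative pooling choice)
        return self.fc(pooled)

model = BLSTMClassifier()
frames = torch.randn(8, 29, 64)         # 8 clips, 29 frames of mouth features each
logits = model(frames)
print(logits.shape)                      # torch.Size([8, 10])
```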
{
"docid": "c02697087e8efd4c1ba9f9a26fa1115b",
"text": "OBJECTIVE\nTo estimate the current prevalence of limb loss in the United States and project the future prevalence to the year 2050.\n\n\nDESIGN\nEstimates were constructed using age-, sex-, and race-specific incidence rates for amputation combined with age-, sex-, and race-specific assumptions about mortality. Incidence rates were derived from the 1988 to 1999 Nationwide Inpatient Sample of the Healthcare Cost and Utilization Project, corrected for the likelihood of reamputation among those undergoing amputation for vascular disease. Incidence rates were assumed to remain constant over time and applied to historic mortality and population data along with the best available estimates of relative risk, future mortality, and future population projections. To investigate the sensitivity of our projections to increasing or decreasing incidence, we developed alternative sets of estimates of limb loss related to dysvascular conditions based on assumptions of a 10% or 25% increase or decrease in incidence of amputations for these conditions.\n\n\nSETTING\nCommunity, nonfederal, short-term hospitals in the United States.\n\n\nPARTICIPANTS\nPersons who were discharged from a hospital with a procedure code for upper-limb or lower-limb amputation or diagnosis code of traumatic amputation.\n\n\nINTERVENTIONS\nNot applicable.\n\n\nMAIN OUTCOME MEASURES\nPrevalence of limb loss by age, sex, race, etiology, and level in 2005 and projections to the year 2050.\n\n\nRESULTS\nIn the year 2005, 1.6 million persons were living with the loss of a limb. Of these subjects, 42% were nonwhite and 38% had an amputation secondary to dysvascular disease with a comorbid diagnosis of diabetes mellitus. It is projected that the number of people living with the loss of a limb will more than double by the year 2050 to 3.6 million. If incidence rates secondary to dysvascular disease can be reduced by 10%, this number would be lowered by 225,000.\n\n\nCONCLUSIONS\nOne in 190 Americans is currently living with the loss of a limb. Unchecked, this number may double by the year 2050.",
"title": ""
},
{
"docid": "031562142f7a2ffc64156f9d09865604",
"text": "The demand for video content is continuously increasing as video sharing on the Internet is becoming enormously popular recently. This demand, with its high bandwidth requirements, has a considerable impact on the load of the network infrastructure. As more users access videos from their mobile devices, the load on the current wireless infrastructure (which has limited capacity) will be even more significant. Based on observations from many local video sharing scenarios, in this paper, we study the tradeoffs of using Wi-Fi ad-hoc mode versus infrastructure mode for video streaming between adjacent devices. We thus show the potential of direct device-to-device communication as a way to reduce the load on the wireless infrastructure and to improve user experiences. Setting up experiments for WiFi devices connected in ad-hoc mode, we collect measurements for various video streaming scenarios and compare them to the case where the devices are connected through access points. The results show the improvements in latency, jitter and loss rate. More importantly, the results show that the performance in direct device-to-device streaming is much more stable in contrast to the access point case, where different factors affect the performance causing widely unpredictable qualities.",
"title": ""
},
{
"docid": "9bfba29f44c585df56062582d4e35ba5",
"text": "We address the problem of optimizing recommender systems for multiple relevance objectives that are not necessarily aligned. Specifically, given a recommender system that optimizes for one aspect of relevance, semantic matching (as defined by any notion of similarity between source and target of recommendation; usually trained on CTR), we want to enhance the system with additional relevance signals that will increase the utility of the recommender system, but that may simultaneously sacrifice the quality of the semantic match. The issue is that semantic matching is only one relevance aspect of the utility function that drives the recommender system, albeit a significant aspect. In talent recommendation systems, job posters want candidates who are a good match to the job posted, but also prefer those candidates to be open to new opportunities. Recommender systems that recommend discussion groups must ensure that the groups are relevant to the users' interests, but also need to favor active groups over inactive ones. We refer to these additional relevance signals (job-seeking intent and group activity) as extraneous features, and they account for aspects of the utility function that are not captured by the semantic match (i.e. post-CTR down-stream utilities that reflect engagement: time spent reading, sharing, commenting, etc). We want to include these extraneous features into the recommendations, but we want to do so while satisfying the following requirements: 1) we do not want to drastically sacrifice the quality of the semantic match, and 2) we want to quantify exactly how the semantic match would be affected as we control the different aspects of the utility function. In this paper, we present an approach that satisfies these requirements.\n We frame our approach as a general constrained optimization problem and suggest ways in which it can be solved efficiently by drawing from recent research on optimizing non-smooth rank metrics for information retrieval. Our approach features the following characteristics: 1) it is model and feature agnostic, 2) it does not require additional labeled training data to be collected, and 3) it can be easily incorporated into an existing model as an additional stage in the computation pipeline. We validate our approach in a revenue-generating recommender system that ranks billions of candidate recommendations on a daily basis and show that a significant improvement in the utility of the recommender system can be achieved with an acceptable and predictable degradation in the semantic match quality of the recommendations.",
"title": ""
},
{
"docid": "a3a29e4f0c25c5f1e09b590048a4a1c0",
"text": "We present DeepPicar, a low-cost deep neural network based autonomous car platform. DeepPicar is a small scale replication of a real self-driving car called DAVE-2 by NVIDIA. DAVE-2 uses a deep convolutional neural network (CNN), which takes images from a front-facing camera as input and produces car steering angles as output. DeepPicar uses the same network architecture—9 layers, 27 million connections and 250K parameters—and can drive itself in real-time using a web camera and a Raspberry Pi 3 quad-core platform. Using DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end deep learning based real-time control of autonomous vehicles. We also systematically compare other contemporary embedded computing platforms using the DeepPicar's CNN-based real-time control workload. We find that all tested platforms, including the Pi 3, are capable of supporting the CNN-based real-time control, from 20 Hz up to 100 Hz, depending on hardware platform. However, we find that shared resource contention remains an important issue that must be considered in applying CNN models on shared memory based embedded computing platforms; we observe up to 11.6X execution time increase in the CNN based control loop due to shared resource contention. To protect the CNN workload, we also evaluate state-of-the-art cache partitioning and memory bandwidth throttling techniques on the Pi 3. We find that cache partitioning is ineffective, while memory bandwidth throttling is an effective solution.",
"title": ""
},
{
"docid": "bfe76736623dfc3271be4856f5dc2eef",
"text": "Fact-related information contained in fictional narratives may induce substantial changes in readers’ real-world beliefs. Current models of persuasion through fiction assume that these effects occur because readers are psychologically transported into the fictional world of the narrative. Contrary to general dual-process models of persuasion, models of persuasion through fiction also imply that persuasive effects of fictional narratives are persistent and even increase over time (absolute sleeper effect). In an experiment designed to test this prediction, 81 participants read either a fictional story that contained true as well as false assertions about realworld topics or a control story. There were large short-term persuasive effects of false information, and these effects were even larger for a group with a two-week assessment delay. Belief certainty was weakened immediately after reading but returned to baseline level after two weeks, indicating that beliefs acquired by reading fictional narratives are integrated into realworld knowledge.",
"title": ""
},
{
"docid": "9688efb8845895d49029c07d397a336b",
"text": "Familial hypercholesterolaemia (FH) leads to elevated plasma levels of LDL-cholesterol and increased risk of premature atherosclerosis. Dietary treatment is recommended to all patients with FH in combination with lipid-lowering drug therapy. Little is known about how children with FH and their parents respond to dietary advice. The aim of the present study was to characterise the dietary habits in children with FH. A total of 112 children and young adults with FH and a non-FH group of children (n 36) were included. The children with FH had previously received dietary counselling. The FH subjects were grouped as: 12-14 years (FH (12-14)) and 18-28 years (FH (18-28)). Dietary data were collected by SmartDiet, a short self-instructing questionnaire on diet and lifestyle where the total score forms the basis for an overall assessment of the diet. Clinical and biochemical data were retrieved from medical records. The SmartDiet scores were significantly improved in the FH (12-14) subjects compared with the non-FH subjects (SmartDiet score of 31 v. 28, respectively). More FH (12-14) subjects compared with non-FH children consumed low-fat milk (64 v. 18 %, respectively), low-fat cheese (29 v. 3%, respectively), used margarine with highly unsaturated fat (74 v. 14 %, respectively). In all, 68 % of the FH (12-14) subjects and 55 % of the non-FH children had fish for dinner twice or more per week. The FH (18-28) subjects showed the same pattern in dietary choices as the FH (12-14) children. In contrast to the choices of low-fat dietary items, 50 % of the FH (12-14) subjects consumed sweet spreads or sweet drinks twice or more per week compared with only 21 % in the non-FH group. In conclusion, ordinary out-patient dietary counselling of children with FH seems to have a long-lasting effect, as the diet of children and young adults with FH consisted of more products that are favourable with regard to the fatty acid composition of the diet.",
"title": ""
},
{
"docid": "136278bd47962b54b644a77bbdaf77e3",
"text": "In this paper, we consider the grayscale template-matching problem, invariant to rotation, scale, translation, brightness and contrast, without previous operations that discard grayscale information, like detection of edges, detection of interest points or segmentation/binarization of the images. The obvious “brute force” solution performs a series of conventional template matchings between the image to analyze and the template query shape rotated by every angle, translated to every position and scaled by every factor (within some specified range of scale factors). Clearly, this takes too long and thus is not practical. We propose a technique that substantially accelerates this searching, while obtaining the same result as the original brute force algorithm. In some experiments, our algorithm was 400 times faster than the brute force algorithm. Our algorithm consists of three cascaded filters. These filters successively exclude pixels that have no chance of matching the template from further processing.",
"title": ""
},
{
"docid": "d6b87f5b6627f1a1ac5cc951c7fe0f28",
"text": "Despite a strong nonlinear behavior and a complex design, the interior permanent-magnet (IPM) machine is proposed as a good candidate among the PM machines owing to its interesting peculiarities, i.e., higher torque in flux-weakening operation, higher fault tolerance, and ability to adopt low-cost PMs. A second trend in designing PM machines concerns the adoption of fractional-slot (FS) nonoverlapped coil windings, which reduce the end winding length and consequently the Joule losses and the cost. Therefore, the adoption of an IPM machine with an FS winding aims to combine both advantages: high torque and efficiency in a wide operating region. However, the combination of an anisotropic rotor and an FS winding stator causes some problems. The interaction between the magnetomotive force harmonics due to the stator current and the rotor anisotropy causes a very high torque ripple. This paper illustrates a procedure in designing an IPM motor with the FS winding exhibiting a low torque ripple. The design strategy is based on two consecutive steps: at first, the winding is optimized by taking a multilayer structure, and then, the rotor geometry is optimized by adopting a nonsymmetric structure. As an example, a 12-slot 10-pole IPM machine is considered, achieving a torque ripple lower than 1.5% at full load.",
"title": ""
},
{
"docid": "ad091e4f66adb26d36abfc40377ee6ab",
"text": "This chapter provides a self-contained first introduction to description logics (DLs). The main concepts and features are explained with examples before syntax and semantics of the DL SROIQ are defined in detail. Additional sections review light-weight DL languages, discuss the relationship to the Web Ontology Language OWL and give pointers to further reading.",
"title": ""
},
{
"docid": "d38df66fe85b4d12093965e649a70fe1",
"text": "We describe the CoNLL-2002 shared task: language-independent named entity recognition. We give background information on the data sets and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.",
"title": ""
},
{
"docid": "f783860e569d9f179466977db544bd01",
"text": "In medical research, continuous variables are often converted into categorical variables by grouping values into two or more categories. We consider in detail issues pertaining to creating just two groups, a common approach in clinical research. We argue that the simplicity achieved is gained at a cost; dichotomization may create rather than avoid problems, notably a considerable loss of power and residual confounding. In addition, the use of a data-derived 'optimal' cutpoint leads to serious bias. We illustrate the impact of dichotomization of continuous predictor variables using as a detailed case study a randomized trial in primary biliary cirrhosis. Dichotomization of continuous data is unnecessary for statistical analysis and in particular should not be applied to explanatory variables in regression models.",
"title": ""
},
{
"docid": "b83a0341f2ead9c72eda4217e0f31ea2",
"text": "Time-series classification has attracted considerable research attention due to the various domains where time-series data are observed, ranging from medicine to econometrics. Traditionally, the focus of time-series classification has been on short time-series data composed of a few patterns exhibiting variabilities, while recently there have been attempts to focus on longer series composed of multiple local patrepeating with an arbitrary irregularity. The primary contribution of this paper relies on presenting a method which can detect local patterns in repetitive time-series via fitting local polynomial functions of a specified degree. We capture the repetitiveness degrees of time-series datasets via a new measure. Furthermore, our method approximates local polynomials in linear time and ensures an overall linear running time complexity. The coefficients of the polynomial functions are converted to symbolic words via equi-area discretizations of the coefficients' distributions. The symbolic polynomial words enable the detection of similar local patterns by assigning the same word to similar polynomials. Moreover, a histogram of the frequencies of the words is constructed from each time-series' bag of words. Each row of the histogram enables a new representation for the series and symbolizes the occurrence of local patterns and their frequencies. In an experimental comparison against state-of-the-art baselines on repetitive datasets, our method demonstrates significant improvements in terms of prediction accuracy.",
"title": ""
},
{
"docid": "baa3d41ba1970125301b0fdd9380a966",
"text": "This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International Society for Scientometrics and Informetrics (ISSI) conferences. Findings show that this weighted PageRank algorithm provides reliable results in measuring author impact.",
"title": ""
},
{
"docid": "c410b6cd3f343fc8b8c21e23e58013cd",
"text": "Virtualization is increasingly being used to address server management and administration issues like flexible resource allocation, service isolation and workload migration. In a virtualized environment, the virtual machine monitor (VMM) is the primary resource manager and is an attractive target for implementing system features like scheduling, caching, and monitoring. However, the lackof runtime information within the VMM about guest operating systems, sometimes called the semantic gap, is a significant obstacle to efficiently implementing some kinds of services.In this paper we explore techniques that can be used by a VMM to passively infer useful information about a guest operating system's unified buffer cache and virtual memory system. We have created a prototype implementation of these techniques inside the Xen VMM called Geiger and show that it can accurately infer when pages are inserted into and evicted from a system's buffer cache. We explore several nuances involved in passively implementing eviction detection that have not previously been addressed, such as the importance of tracking disk block liveness, the effect of file system journaling, and the importance of accounting for the unified caches found in modern operating systems.Using case studies we show that the information provided by Geiger enables a VMM to implement useful VMM-level services. We implement a novel working set size estimator which allows the VMM to make more informed memory allocation decisions. We also show that a VMM can be used to drastically improve the hit rate in remote storage caches by using eviction-based cache placement without modifying the application or operating system storage interface. Both case studies hint at a future where inference techniques enable a broad new class of VMM-level functionality.",
"title": ""
},
{
"docid": "2a56b6e6dcab0817e6ab4dfa8826fc49",
"text": "Considerable data and analysis support the detection of one or more supernovae (SNe) at a distance of about 50 pc, ∼2.6 million years ago. This is possibly related to the extinction event around that time and is a member of a series of explosions that formed the Local Bubble in the interstellar medium. We build on previous work, and propagate the muon flux from SN-initiated cosmic rays from the surface to the depths of the ocean. We find that the radiation dose from the muons will exceed the total present surface dose from all sources at depths up to 1 km and will persist for at least the lifetime of marine megafauna. It is reasonable to hypothesize that this increase in radiation load may have contributed to a newly documented marine megafaunal extinction at that time.",
"title": ""
},
{
"docid": "764a1d2571ed45dd56aea44efd4f5091",
"text": "BACKGROUND\nThere exists some ambiguity regarding the exact anatomical limits of the orbicularis retaining ligament, particularly its medial boundary in both the superior and inferior orbits. Precise understanding of this anatomy is necessary during periorbital rejuvenation.\n\n\nMETHODS\nSixteen fresh hemifacial cadaver dissections were performed in the anatomy laboratory to evaluate the anatomy of the orbicularis retaining ligament. Dissection was assisted by magnification with loupes and the operating microscope.\n\n\nRESULTS\nA ligamentous system was found that arises from the inferior and superior orbital rim that is truly periorbital. This ligament spans the entire circumference of the orbit from the medial to the lateral canthus. There exists a fusion line between the orbital septum and the orbicularis retaining ligament in the superior orbit, indistinguishable from the arcus marginalis of the inferior orbital rim. Laterally, the orbicularis retaining ligament contributes to the lateral canthal ligament, consistent with previous studies. No contribution to the medial canthus was identified in this study.\n\n\nCONCLUSIONS\nThe orbicularis retaining ligament is a true, circumferential \"periorbital\" structure. This ligament may serve two purposes: (1) to act as a fixation point for the orbicularis muscle of the upper and lower eyelids and (2) to protect the ocular globe. With techniques of periorbital injection with fillers and botulinum toxin becoming ever more popular, understanding the orbicularis retaining ligament's function as a partitioning membrane is mandatory for avoiding ocular complications. As a support structure, examples are shown of how manipulation of this ligament may benefit canthopexy, septal reset, and brow-lift procedures as described by Hoxworth.",
"title": ""
}
] | scidocsrr |
f9657119e4fdea6594c89addb1fd6be3 | On the wafer/pad friction of chemical-mechanical planarization (CMP) processes - Part I: modeling and analysis | [
{
"docid": "d1bd5406b31cec137860a73b203d6bef",
"text": "A chemical-mechanical planarization (CMP) model based on lubrication theory is developed which accounts for pad compressibility, pad porosity and means of slurry delivery. Slurry ®lm thickness and velocity distributions between the pad and the wafer are predicted using the model. Two regimes of CMP operation are described: the lubrication regime (for ,40±70 mm slurry ®lm thickness) and the contact regime (for thinner ®lms). These regimes are identi®ed for two different pads using experimental copper CMP data and the predictions of the model. The removal rate correlation based on lubrication and mass transport theory agrees well with our experimental data in the lubrication regime. q 2000 Elsevier Science S.A. All rights reserved.",
"title": ""
},
{
"docid": "e03795645ca53f6d4f903ff8ff227054",
"text": "This paper presents the experimental validation and some application examples of the proposed wafer/pad friction models for linear chemical-mechanical planarization (CMP) processes in the companion paper. An experimental setup of a linear CMP polisher is first presented and some polishing processes are then designed for validation of the wafer/pad friction modeling and analysis. The friction torques of both the polisher spindle and roller systems are used to monitor variations of the friction coefficient in situ . Verification of the friction model under various process parameters is presented. Effects of pad conditioning and the wafer film topography on wafer/pad friction are experimentally demonstrated. Finally, several application examples are presented showing the use of the roller motor current measurement for real-time process monitoring and control.",
"title": ""
}
] | [
{
"docid": "c5c64d7fcd9b4804f7533978026dcfbd",
"text": "This paper presents a new method to control multiple micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. We use the fact that all magnetic agents orient to the global input magnetic field to modulate the local attraction-repulsion forces between nearby agents. Here we study these controlled interaction magnetic forces for agents at a water-air interface and devise two controllers to regulate the inter-agent spacing and heading of the set, for motion in two dimensions. Simulation and experimental demonstrations show the feasibility of the idea and its potential for the completion of complex tasks using teams of microrobots. Average tracking error of less than 73 μm and 14° is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical disk-shape agents with nominal radius of 500 μm and thickness of 80 μm operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "5dfda76bf2065850492406fdf7cfed81",
"text": "We present a method for object categorization in real-world scenes. Following a common consensus in the field, we do not assume that a figure-ground segmentation is available prior to recognition. However, in contrast to most standard approaches for object class recognition, our approach automatically segments the object as a result of the categorization. This combination of recognition and segmentation into one process is made possible by our use of an Implicit Shape Model, which integrates both capabilities into a common probabilistic framework. This model can be thought of as a non-parametric approach which can easily handle configurations of large numbers of object parts. In addition to the recognition and segmentation result, it also generates a per-pixel confidence measure specifying the area that supports a hypothesis and how much it can be trusted. We use this confidence to derive a natural extension of the approach to handle multiple objects in a scene and resolve ambiguities between overlapping hypotheses with an MDL-based criterion. In addition, we present an extensive evaluation of our method on a standard dataset for car detection and compare its performance to existing methods from the literature. Our results show that the proposed method outperforms previously published methods while needing one order of magnitude less training examples. Finally, we present results for articulated objects, which show that the proposed method can categorize and segment unfamiliar objects in different articulations and with widely varying texture patterns, even under significant partial occlusion.",
"title": ""
},
{
"docid": "2282c06ea5e203b7e94095334bba05b9",
"text": "Exploring and surveying the world has been an important goal of humankind for thousands of years. Entering the 21st century, the Earth has almost been fully digitally mapped. Widespread deployment of GIS (Geographic Information Systems) technology and a tremendous increase of both satellite and street-level mapping over the last decade enables the public to view large portions of the world using computer applications such as Bing Maps or Google Earth.",
"title": ""
},
{
"docid": "61a2b0e51b27f46124a8042d59c0f022",
"text": "We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to \"real\" images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.",
"title": ""
},
{
"docid": "16fbebf500be1bf69027d3a35d85362b",
"text": "Mobile Edge Computing is an emerging technology that provides cloud and IT services within the close proximity of mobile subscribers. Traditional telecom network operators perform traffic control flow (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. Therefore, network operator has a great responsibility in serving mobile subscribers. Mobile Edge Computing platform reduces network latency by enabling computation and storage capacity at the edge network. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real time radio access network information. Mobile and Internet of Things devices perform computation offloading for compute intensive applications, such as image processing, mobile gaming, to leverage the Mobile Edge Computing services. In this paper, some of the promising real time Mobile Edge Computing application scenarios are discussed. Later on, a state-of-the-art research efforts on Mobile Edge Computing domain is presented. The paper also presents taxonomy of Mobile Edge Computing, describing key attributes. Finally, open research challenges in successful deployment of Mobile Edge Computing are identified and discussed.",
"title": ""
},
{
"docid": "e3524dfc6939238e9e2f49440c1090ea",
"text": "This paper presents modeling approaches for step-up grid-connected photovoltaic systems intended to provide analytical tools for control design. The first approach is based on a voltage source representation of the bulk capacitor interacting with the grid-connected inverter, which is a common model for large DC buses and closed-loop inverters. The second approach considers the inverter of a double-stage PV system as a Norton equivalent, which is widely accepted for open-loop inverters. In addition, the paper considers both ideal and realistic models for the DC/DC converter that interacts with the PV module, providing four mathematical models to cover a wide range of applications. The models are expressed in state space representation to simplify its use in analysis and control design, and also to be easily implemented in simulation software, e.g., Matlab. The PV system was analyzed to demonstrate the non-minimum phase condition for all the models, which is an important aspect to select the control technique. Moreover, the system observability and controllability were studied to define design criteria. Finally, the analytical results are illustrated by means of detailed simulations, and the paper results are validated in an experimental test bench.",
"title": ""
},
{
"docid": "33c453cec25a77e1bde4ecb353fc678b",
"text": "This article introduces the functional model of self-disclosure on social network sites by integrating a functional theory of self-disclosure and research on audience representations as situational cues for activating interpersonal goals. According to this model, people pursue strategic goals and disclose differently depending on social media affordances, and self-disclosure goals mediate between media affordances and disclosure intimacy. The results of the empirical study examining self-disclosure motivations and characteristics in Facebook status updates, wall posts, and private messaging lend support to this model and provide insights into the motivational drivers of self-disclosure on SNSs, helping to reconcile traditional views on self-disclosure and self-disclosing behaviors in new media contexts.",
"title": ""
},
{
"docid": "b113d45660629847afbd7faade1f3a71",
"text": "A wideband circularly polarized (CP) rectangular dielectric resonator antenna (DRA) is presented. An Archimedean spiral slot is used to excite the rectangular DRA for wideband CP radiation. The operating principle of the proposed antenna is based on using a broadband feeding structure to excite the DRA. A prototype of the proposed antenna is designed, fabricated, and measured. Good agreement between the simulated and measured results is attained, and a wide 3-dB axial-ratio (AR) bandwidth of 25.5% is achieved.",
"title": ""
},
{
"docid": "024e95f41a48e8409bd029c14e6acb3a",
"text": "This communication investigates the application of metamaterial absorber (MA) to waveguide slot antenna to reduce its radar cross section (RCS). A novel ultra-thin MA is presented, and its absorbing characteristics and mechanism are analyzed. The PEC ground plane of waveguide slot antenna is covered by this MA. As compared with the slot antenna with a PEC ground plane, the simulation and experiment results demonstrate that the monostatic and bistatic RCS of waveguide slot antenna are reduced significantly, and the performance of antenna is preserved simultaneously.",
"title": ""
},
{
"docid": "d9a9339672121fb6c3baeb51f11bfcd8",
"text": "The VISION (video indexing for searching over networks) digital video library system has been developed in our laboratory as a testbed for evaluating automatic and comprehensive mechanisms for video archive creation and content-based search, ®ltering and retrieval of video over local and wide area networks. In order to provide access to video footage within seconds of broadcast, we have developed a new pipelined digital video processing architecture which is capable of digitizing, processing, indexing and compressing video in real time on an inexpensive general purpose computer. These videos were automatically partitioned into short scenes using video, audio and closed-caption information. The resulting scenes are indexed based on their captions and stored in a multimedia database. A clientserver-based graphical user interface was developed to enable users to remotely search this archive and view selected video segments over networks of dierent bandwidths. Additionally, VISION classi®es the incoming videos with respect to a taxonomy of categories and will selectively send users videos which match their individual pro®les. # 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8466bed483a2774f7ccb44416364cf3f",
"text": "This paper proposes a semantics for incorporation that does not require the incorporated nominal to form a syntactic or morphological unit with the verb. Such a semantics is needed for languages like Hindi where semantic intuitions suggest the existence of incorporation but the evidence for syntactic fusion is not compelling. A lexical alternation between regular transitive and incorporating transitive verbs is proposed to derive the particular features of Hindi incorporation. The proposed semantics derives existential force without positing existential closure over the incorporated nominal. It also builds in modality into the meaning of the incorporating verb. This proposal is compared to two other recent proposals for the interpretation of incorporated arguments. The cross-linguistic implications of the analysis developed on the basis of Hindi are also discussed. 1. Identifying Incorporation The primary identification of the phenomenon known as noun incorporation is based on morphological and syntactic evidence about the shape and position of the nominal element involved. Consider the Inuit example in (1a) as well as the more familiar example of English compounding in (1b): 1a. Angunguaq eqalut-tur-p-u-q West Greenlandic -Inuit A-ABS salmon-eat-IND-[-tr]-3S Van Geenhoven (1998) “Angunguaq ate salmon.” b. Mary went apple-picking. The thematic object in (1a) occurs inside the verbal complex, and this affects transitivity. The verb has intransitive marking and the subject has absolutive case instead of the expected ergative. The nominal itself is a bare stem. There is no determiner, case marking, plurality or modification. In other words, an incorporated nominal is an N, not a DP or an NP. Similar comments apply to the English compound in (1b), though it should be noted that English does not have [V N+V] compounds. Though the reasons for this are not particularly well-understood at this time, my purpose in introducing English compounds here is for expository purposes only. A somewhat less obvious case of noun incorporation is attested in Niuean, discussed by Massam (2001). Niuean is an SVO language with obligatory V fronting. Massam notes that in addition to expect VSO order, there also exist sentences with VOS order in Niuean: 1 There can be external modifiers with (a limited set of) determiners, case marking etc. in what is known as the phenomenon of ‘doubling’.",
"title": ""
},
{
"docid": "ad004dd47449b977cd30f2454c5af77a",
"text": "Plants are a tremendous source for the discovery of new products of medicinal value for drug development. Today several distinct chemicals derived from plants are important drugs currently used in one or more countries in the world. Many of the drugs sold today are simple synthetic modifications or copies of the naturally obtained substances. The evolving commercial importance of secondary metabolites has in recent years resulted in a great interest in secondary metabolism, particularly in the possibility of altering the production of bioactive plant metabolites by means of tissue culture technology. Plant cell culture technologies were introduced at the end of the 1960’s as a possible tool for both studying and producing plant secondary metabolites. Different strategies, using an in vitro system, have been extensively studied to improve the production of plant chemicals. The focus of the present review is the application of tissue culture technology for the production of some important plant pharmaceuticals. Also, we describe the results of in vitro cultures and production of some important secondary metabolites obtained in our laboratory.",
"title": ""
},
{
"docid": "f21e55c7509124be8fabfb1d706d76aa",
"text": "CTCF and BORIS (CTCFL), two paralogous mammalian proteins sharing nearly identical DNA binding domains, are thought to function in a mutually exclusive manner in DNA binding and transcriptional regulation. Here we show that these two proteins co-occupy a specific subset of regulatory elements consisting of clustered CTCF binding motifs (termed 2xCTSes). BORIS occupancy at 2xCTSes is largely invariant in BORIS-positive cancer cells, with the genomic pattern recapitulating the germline-specific BORIS binding to chromatin. In contrast to the single-motif CTCF target sites (1xCTSes), the 2xCTS elements are preferentially found at active promoters and enhancers, both in cancer and germ cells. 2xCTSes are also enriched in genomic regions that escape histone to protamine replacement in human and mouse sperm. Depletion of the BORIS gene leads to altered transcription of a large number of genes and the differentiation of K562 cells, while the ectopic expression of this CTCF paralog leads to specific changes in transcription in MCF7 cells. We discover two functionally and structurally different classes of CTCF binding regions, 2xCTSes and 1xCTSes, revealed by their predisposition to bind BORIS. We propose that 2xCTSes play key roles in the transcriptional program of cancer and germ cells.",
"title": ""
},
{
"docid": "6e3e881cb1bb05101ad0f38e3f21e547",
"text": "Mechanical valves used for aortic valve replacement (AVR) continue to be associated with bleeding risks because of anticoagulation therapy, while bioprosthetic valves are at risk of structural valve deterioration requiring reoperation. This risk/benefit ratio of mechanical and bioprosthetic valves has led American and European guidelines on valvular heart disease to be consistent in recommending the use of mechanical prostheses in patients younger than 60 years of age. Despite these recommendations, the use of bioprosthetic valves has significantly increased over the last decades in all age groups. A systematic review of manuscripts applying propensity-matching or multivariable analysis to compare the usage of mechanical vs. bioprosthetic valves found either similar outcomes between the two types of valves or favourable outcomes with mechanical prostheses, particularly in younger patients. The risk/benefit ratio and choice of valves will be impacted by developments in valve designs, anticoagulation therapy, reducing the required international normalized ratio, and transcatheter and minimally invasive procedures. However, there is currently no evidence to support lowering the age threshold for implanting a bioprosthesis. Physicians in the Heart Team and patients should be cautious in pursuing more bioprosthetic valve use until its benefit is clearly proven in middle-aged patients.",
"title": ""
},
{
"docid": "5a4b73a1357809a547773fa8982172dd",
"text": "In this paper, we present a method for cup boundary detection from monocular colour fundus image to help quantify cup changes. The method is based on anatomical evidence such as vessel bends at cup boundary, considered relevant by glaucoma experts. Vessels are modeled and detected in a curvature space to better handle inter-image variations. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A reliable subset called r-bends is derived using a multi-stage strategy and a local splinetting is used to obtain the desired cup boundary. The method has been successfully tested on 133 images comprising 32 normal and 101 glaucomatous images against three glaucoma experts. The proposed method shows high sensitivity in cup to disk ratio-based glaucoma detection and local assessment of the detected cup boundary shows good consensus with the expert markings.",
"title": ""
},
{
"docid": "f3f70e5ba87399e9d44bda293a231399",
"text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.",
"title": ""
},
{
"docid": "0ce0db75982c205b581bc24060b9e2a4",
"text": "Maxim Gumin's WaveFunctionCollapse (WFC) algorithm is an example-driven image generation algorithm emerging from the craft practice of procedural content generation. In WFC, new images are generated in the style of given examples by ensuring every local window of the output occurs somewhere in the input. Operationally, WFC implements a non-backtracking, greedy search method. This paper examines WFC as an instance of constraint solving methods. We trace WFC's explosive influence on the technical artist community, explain its operation in terms of ideas from the constraint solving literature, and probe its strengths by means of a surrogate implementation using answer set programming.",
"title": ""
},
{
"docid": "16ee3eb990a49bdff840609ae79f26e3",
"text": "Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness.",
"title": ""
},
{
"docid": "2a717b823caaaa0187d25b04305f13ee",
"text": "BACKGROUND\nDo peripersonal space for acting on objects and interpersonal space for interacting with con-specifics share common mechanisms and reflect the social valence of stimuli? To answer this question, we investigated whether these spaces refer to a similar or different physical distance.\n\n\nMETHODOLOGY\nParticipants provided reachability-distance (for potential action) and comfort-distance (for social processing) judgments towards human and non-human virtual stimuli while standing still (passive) or walking toward stimuli (active).\n\n\nPRINCIPAL FINDINGS\nComfort-distance was larger than other conditions when participants were passive, but reachability and comfort distances were similar when participants were active. Both spaces were modulated by the social valence of stimuli (reduction with virtual females vs males, expansion with cylinder vs robot) and the gender of participants.\n\n\nCONCLUSIONS\nThese findings reveal that peripersonal reaching and interpersonal comfort spaces share a common motor nature and are sensitive, at different degrees, to social modulation. Therefore, social processing seems embodied and grounded in the body acting in space.",
"title": ""
},
{
"docid": "3a501184ca52dedde44e79d2c66e78df",
"text": "China’s New Silk Road initiative is a multistate commercial project as grandiose as it is ambitious. Comprised of an overland economic “belt” and a maritime transit component, it envisages the development of a trade network traversing numerous countries and continents. Major investments in infrastructure are to establish new commercial hubs along the route, linking regions together via railroads, ports, energy transit systems, and technology. A relatively novel concept introduced by China’s President Xi Jinping in 2013, several projects related to the New Silk Road initiative—also called “One Belt, One Road” (OBOR, or B&R)—are being planned, are under construction, or have been recently completed. The New Silk Road is a fluid concept in its formative stages: it encompasses a variety of projects and is all-inclusive in terms of countries welcomed to participate. For these reasons, it has been labeled an abstract or visionary project. However, those in the region can attest that the New Silk Road is a reality, backed by Chinese hard currency. Thus, while Washington continues to deliberate on an overarching policy toward Asia, Beijing is making inroads—literally and figuratively— across the region and beyond.",
"title": ""
}
] | scidocsrr |
fb3ec739ae67416aa9f0feacf4d301c9 | Computational Technique for an Efficient Classification of Protein Sequences With Distance-Based Sequence Encoding Algorithm | [
{
"docid": "c3525081c0f4eec01069dd4bd5ef12ab",
"text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.",
"title": ""
}
] | [
{
"docid": "d8042183e064ffba69b54246b17b9ff4",
"text": "Offshore software development is a new trend in the information technology (IT) outsourcing field, fueled by the globalization of IT and the improvement of telecommunication facilities. Countries such as India, Ireland, and Israel have established a significant presence in this market. In this article, we discuss how software processes affect offshore development projects. We use data from projects in India, and focus on three measures of project performance: effort, elapsed time, and software rework.",
"title": ""
},
{
"docid": "69d3c943755734903b9266ca2bd2fad1",
"text": "This paper describes experiments in Machine Learning for text classification using a new representation of text based on WordNet hypernyms. Six binary classification tasks of varying diff iculty are defined, and the Ripper system is used to produce discrimination rules for each task using the new hypernym density representation. Rules are also produced with the commonly used bag-of-words representation, incorporating no knowledge from WordNet. Experiments show that for some of the more diff icult tasks the hypernym density representation leads to significantly more accurate and more comprehensible rules.",
"title": ""
},
{
"docid": "a2cf369a67507d38ac1a645e84525497",
"text": "Development of a cystic mass on the nasal dorsum is a very rare complication of aesthetic rhinoplasty. Most reported cases are of mucous cyst and entrapment of the nasal mucosa in the subcutaneous space due to traumatic surgical technique has been suggested as a presumptive pathogenesis. Here, we report a case of dorsal nasal cyst that had a different pathogenesis for cyst formation. A 58-yr-old woman developed a large cystic mass on the nasal radix 30 yr after augmentation rhinoplasty with silicone material. The mass was removed via a direct open approach and the pathology findings revealed a foreign body inclusion cyst associated with silicone. Successful nasal reconstruction was performed with autologous cartilages. Discussion and a brief review of the literature will be focused on the pathophysiology of and treatment options for a postrhinoplasty dorsal cyst.",
"title": ""
},
{
"docid": "60ac1fa826816d39562104849fff8f46",
"text": "The increased attention to environmentalism in western societies has been accompanied by a rise in ecotourism, i.e. ecologically sensitive travel to remote areas to learn about ecosystems, as well as in cultural tourism, focusing on the people who are a part of ecosystems. Increasingly, the internet has partnered with ecotourism companies to provide information about destinations and facilitate travel arrangements. This study reviews the literature linking ecotourism and sustainable development, as well as prior research showing that cultures have been historically commodified in tourism advertising for developing countries destinations. We examine seven websites advertising ecotourism and cultural tourism and conclude that: (1) advertisements for natural and cultural spaces are not always consistent with the discourse of sustainability; and (2) earlier critiques of the commodification of culture in print advertising extend to internet advertising also.",
"title": ""
},
{
"docid": "46170fe683c78a767cb15c0ac3437e83",
"text": "Recently, efforts in the development of speech recognition systems and robots have come to fruition with an overflow of applications in our daily lives. However, we are still far from achieving natural interaction between humans and robots, given that robots do not take into account the emotional state of speakers. The objective of this research is to create an automatic emotion classifier integrated with a robot, such that the robot can understand the emotional state of a human user by analyzing the speech signals from the user. This becomes particularly relevant in the realm of using assistive robotics to tailor therapeutic techniques towards assisting children with autism spectrum disorder (ASD). Over the past two decades, the number of children being diagnosed with ASD has been rapidly increasing, yet the clinical and societal support have not been enough to cope with the needs. Therefore, finding alternative, affordable, and accessible means of therapy and assistance has become more of a concern. Improving audio-based emotion prediction for children with ASD will allow for the robotic system to properly assess the engagement level of the child and modify its responses to maximize the quality of interaction between the robot and the child and sustain an interactive learning environment.",
"title": ""
},
{
"docid": "3a58c1a2e4428c0b875e1202055e5b13",
"text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "918bf13ef0289eb9b78309c83e963b26",
"text": "For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.",
"title": ""
},
{
"docid": "640fd96e02d8aa69be488323f77b40ba",
"text": "Low Power Wide Area (LPWA) connectivity, a wireless wide area technology that is characterized for interconnecting devices with low bandwidth connectivity and focusing on range and power efficiency, is seen as one of the fastest-growing components of Internet-of-Things (IoT). The LPWA connectivity is used to serve a diverse range of vertical applications, including agriculture, consumer, industrial, logistic, smart building, smart city and utilities. 3GPP has defined the maiden Narrowband IoT (NB-IoT) specification in Release 13 (Rel-13) to accommodate the LPWA demand. Several major cellular operators, such as China Mobile, Deutsch Telekom and Vodafone, have announced their NB-IoT trials or commercial network in year 2017. In Telekom Malaysia, we have setup a NB-IoT trial network for End-to-End (E2E) integration study. Our experimental assessment showed that the battery lifetime target for NB-IoT devices as stated by 3GPP utilizing latest-to-date Commercial Off-The-Shelf (COTS) NB-IoT modules is yet to be realized. Finally, several recommendations on how to optimize the battery lifetime while designing firmware for NB-IoT device are also provided.",
"title": ""
},
{
"docid": "aa3c0d7d023e1f9795df048ee44d92ec",
"text": "Correspondence Institute of Computer Science, University of Tartu, Juhan Liivi 2, 50409 Tartu, Estonia Email: [email protected] Summary Blockchain platforms, such as Ethereum, allow a set of actors to maintain a ledger of transactions without relying on a central authority and to deploy scripts, called smart contracts, that are executedwhenever certain transactions occur. These features can be used as basic building blocks for executing collaborative business processes between mutually untrusting parties. However, implementing business processes using the low-level primitives provided by blockchain platforms is cumbersome and error-prone. In contrast, established business process management systems, such as those based on the standard Business Process Model and Notation (BPMN), provide convenient abstractions for rapid development of process-oriented applications. This article demonstrates how to combine the advantages of a business process management system with those of a blockchain platform. The article introduces a blockchain-based BPMN execution engine, namely Caterpillar. Like any BPMN execution engine, Caterpillar supports the creation of instances of a process model and allows users to monitor the state of process instances and to execute tasks thereof. The specificity of Caterpillar is that the state of each process instance is maintained on the (Ethereum) blockchain and the workflow routing is performed by smart contracts generated by a BPMN-to-Solidity compiler. The Caterpillar compiler supports a large array of BPMN constructs, including subprocesses, multi-instances activities and event handlers. The paper describes the architecture of Caterpillar, and the interfaces it provides to support the monitoring of process instances, the allocation and execution of work items, and the execution of service tasks.",
"title": ""
},
{
"docid": "8e082f030aa5c5372fe327d4291f1864",
"text": "The Internet of Things (IoT) describes the interconnection of objects (or Things) for various purposes including identification, communication, sensing, and data collection. “Things” in this context range from traditional computing devices like Personal Computers (PC) to general household objects embedded with capabilities for sensing and/or communication through the use of technologies such as Radio Frequency Identification (RFID). This conceptual paper, from a philosophical viewpoint, introduces an initial set of guiding principles also referred to in the paper as commandments that can be applied by all the stakeholders involved in the IoT during its introduction, deployment and thereafter. © 2011 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of [name organizer]",
"title": ""
},
{
"docid": "f376948c1b8952b0b19efad3c5ca0471",
"text": "This essay grew out of an examination of one-tailed significance testing. One-tailed tests were little advocated by the founders of modern statistics but are widely used and recommended nowadays in the biological, behavioral and social sciences. The high frequency of their use in ecology and animal behavior and their logical indefensibil-ity have been documented in a companion review paper. In the present one, we trace the roots of this problem and counter some attacks on significance testing in general. Roots include: the early but irrational dichotomization of the P scale and adoption of the 'significant/non-significant' terminology; the mistaken notion that a high P value is evidence favoring the null hypothesis over the alternative hypothesis; and confusion over the distinction between statistical and research hypotheses. Resultant widespread misuse and misinterpretation of significance tests have also led to other problems, such as unjustifiable demands that reporting of P values be disallowed or greatly reduced and that reporting of confidence intervals and standardized effect sizes be required in their place. Our analysis of these matters thus leads us to a recommendation that for standard types of significance assessment the paleoFisherian and Neyman-Pearsonian paradigms be replaced by a neoFisherian one. The essence of the latter is that a critical α (probability of type I error) is not specified, the terms 'significant' and 'non-significant' are abandoned, that high P values lead only to suspended judgments, and that the so-called \" three-valued logic \" of Cox, Kaiser, Tukey, Tryon and Harris is adopted explicitly. Confidence intervals and bands, power analyses, and severity curves remain useful adjuncts in particular situations. Analyses conducted under this paradigm we term neoFisherian significance assessments (NFSA). Their role is assessment of the existence, sign and magnitude of statistical effects. The common label of null hypothesis significance tests (NHST) is retained for paleoFisherian and Neyman-Pearsonian approaches and their hybrids. The original Neyman-Pearson framework has no utility outside quality control type applications. Some advocates of Bayesian, likelihood and information-theoretic approaches to model selection have argued that P values and NFSAs are of little or no value, but those arguments do not withstand critical review. Champions of Bayesian methods in particular continue to overstate their value and relevance. 312 Hurlbert & Lombardi • ANN. ZOOL. FeNNICI Vol. 46 \" … the object of statistical methods is the reduction of data. A quantity of data … is to be replaced by relatively few quantities which shall …",
"title": ""
},
{
"docid": "7d68eaf1d9916b0504ac13f5ff9ef980",
"text": "The success of Bitcoin largely relies on the perception of a fair underlying peer-to-peer protocol: blockchain. Fairness here essentially means that the reward (in bitcoins) given to any participant that helps maintain the consistency of the protocol by mining, is proportional to the computational power devoted by that participant to the mining task. Without such perception of fairness, honest miners might be disincentivized to maintain the protocol, leaving the space for dishonest miners to reach a majority and jeopardize the consistency of the entire system. We prove, in this paper, that blockchain is actually unfair, even in a distributed system of only two honest miners. In a realistic setting where message delivery is not instantaneous, the ratio between the (expected) number of blocks committed by two miners is at least exponential in the product of the message delay and the difference between the two miners’ hashrates. To obtain our result, we model the growth of blockchain, which may be of independent interest. We also apply our result to explain recent empirical observations and vulnerabilities.",
"title": ""
},
{
"docid": "01165a990d16000ac28b0796e462147a",
"text": "Esthesioneuroblastoma is a rare malignant tumor of sinonasal origin. These tumors typically present with unilateral nasal obstruction and epistaxis, and diagnosis is confirmed on biopsy. Over the past 15 years, significant advances have been made in endoscopic technology and techniques that have made this tumor amenable to expanded endonasal resection. There is growing evidence supporting the feasibility of safe and effective resection of esthesioneuroblastoma via an expanded endonasal approach. This article outlines a technique for endoscopic resection of esthesioneuroblastoma and reviews the current literature on esthesioneuroblastoma with emphasis on outcomes after endoscopic resection of these malignant tumors.",
"title": ""
},
{
"docid": "71bafd4946377eaabff813bffd5617d7",
"text": "Autumn-seeded winter cereals acquire tolerance to freezing temperatures and become vernalized by exposure to low temperature (LT). The level of accumulated LT tolerance depends on the cold acclimation rate and factors controlling timing of floral transition at the shoot apical meristem. In this study, genomic loci controlling the floral transition time were mapped in a winter wheat (T. aestivum L.) doubled haploid (DH) mapping population segregating for LT tolerance and rate of phenological development. The final leaf number (FLN), days to FLN, and days to anthesis were determined for 142 DH lines grown with and without vernalization in controlled environments. Analysis of trait data by composite interval mapping (CIM) identified 11 genomic regions that carried quantitative trait loci (QTLs) for the developmental traits studied. CIM analysis showed that the time for floral transition in both vernalized and non-vernalized plants was controlled by common QTL regions on chromosomes 1B, 2A, 2B, 6A and 7A. A QTL identified on chromosome 4A influenced floral transition time only in vernalized plants. Alleles of the LT-tolerant parent, Norstar, delayed floral transition at all QTLs except at the 2A locus. Some of the QTL alleles delaying floral transition also increased the length of vegetative growth and delayed flowering time. The genes underlying the QTLs identified in this study encode factors involved in regional adaptation of cold hardy winter wheat.",
"title": ""
},
{
"docid": "1865a404c970d191ed55e7509b21fb9e",
"text": "Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender. 1",
"title": ""
},
{
"docid": "7ad00ade30fad561b4caca2fb1326ed8",
"text": "Today, digital games are available on a variety of mobile devices, such as tablet devices, portable game consoles and smart phones. Not only that, the latest mixed reality technology on mobile devices allows mobile games to integrate the real world environment into gameplay. However, little has been done to test whether the surroundings of play influence gaming experience. In this paper, we describe two studies done to test the effect of surroundings on immersion. Study One uses mixed reality games to investigate whether the integration of the real world environment reduces engagement. Whereas Study Two explored the effects of manipulating the lighting level, and therefore reducing visibility, of the surroundings. We found that immersion is reduced in the conditions where visibility of the surroundings is high. We argue that higher awareness of the surroundings has a strong impact on gaming experience.",
"title": ""
},
{
"docid": "afe1be9e13ca6e2af2c5177809e7c893",
"text": "Scar evaluation and revision techniques are chief among the most important skills in the facial plastic and reconstructive surgeon’s armamentarium. Often minimized in importance, these techniques depend as much on a thorough understanding of facial anatomy and aesthetics, advanced principles of wound healing, and an appreciation of the overshadowing psychological trauma as they do on thorough technical analysis and execution [1,2]. Scar revision is unique in the spectrum of facial plastic and reconstructive surgery because the initial traumatic event and its immediate treatment usually cannot be controlled. Patients who are candidates for scar revision procedures often present after significant loss of regional tissue, injury that crosses anatomically distinct facial aesthetic units, wound closure by personnel less experienced in plastic surgical technique, and poor post injury wound management [3,4]. While no scar can be removed completely, plastic surgeons can often improve the appearance of a scar, making it less obvious through the injection or application of certain steroid medications or through surgical procedures known as scar revisions.There are many variables affect the severity of scarring, including the size and depth of the wound, blood supply to the area, the thickness and color of your skin, and the direction of the scar [5,6].",
"title": ""
},
{
"docid": "f284c6e32679d8413e366d2daf1d4613",
"text": "Summary form only given. Existing studies on ensemble classifiers typically take a static approach in assembling individual classifiers, in which all the important features are specified in advance. In this paper, we propose a new concept, dynamic ensemble, as an advanced classifier that could have dynamic component classifiers and have dynamic configurations. Toward this goal, we have substantially expanded the existing \"overproduce and choose\" paradigm for ensemble construction. A new algorithm called BAGA is proposed to explore this approach. Taking a set of decision tree component classifiers as input, BAGA generates a set of candidate ensembles using combined bagging and genetic algorithm techniques so that component classifiers are determined at execution time. Empirical studies have been carried out on variations of the BAGA algorithm, where the sizes of chosen classifiers, effects of bag size, voting function and evaluation functions on the dynamic ensemble construction, are investigated.",
"title": ""
},
{
"docid": "8e74a27a3edea7cf0e88317851bc15eb",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] | scidocsrr |
6964ce910279f7c1e3eaec5191d4cf7f | A Learning-based Neural Network Model for the Detection and Classification of SQL Injection Attacks | [
{
"docid": "5025766e66589289ccc31e60ca363842",
"text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.",
"title": ""
}
] | [
{
"docid": "b743159683f5cb99e7b5252dbc9ae74f",
"text": "When human agents come together to make decisions it is often the case that one human agent has more information than the other and this phenomenon is called information asymmetry and this distorts the market. Often if one human agent intends to manipulate a decision in its favor the human agent can signal wrong or right information. Alternatively, one human agent can screen for information to reduce the impact of asymmetric information on decisions. With the advent of artificial intelligence, signaling and screening have been made easier. This chapter studies the impact of artificial intelligence on the theory of asymmetric information. It is surmised that artificial intelligent agents reduce the degree of information asymmetry and thus the market where these agents are deployed become more efficient. It is also postulated that the more artificial intelligent agents there are deployed in the market the less is the volume of trades in the market. This is because for trade to happen the asymmetry of information on goods and services to be traded should exist.",
"title": ""
},
{
"docid": "4995bb31547a98adbe98c7a9f2bfa947",
"text": "This paper describes our proposed solutions designed for a STS core track within the SemEval 2016 English Semantic Textual Similarity (STS) task. Our method of similarity detection combines recursive autoencoders with a WordNet award-penalty system that accounts for semantic relatedness, and an SVM classifier, which produces the final score from similarity matrices. This solution is further supported by an ensemble classifier, combining an aligner with a bi-directional Gated Recurrent Neural Network and additional features, which then performs Linear Support Vector Regression to determine another set of scores.",
"title": ""
},
{
"docid": "49215cb8cb669aef5ea42dfb1e7d2e19",
"text": "Many people rely on Web-based tutorials to learn how to use complex software. Yet, it remains difficult for users to systematically explore the set of tutorials available online. We present Sifter, an interface for browsing, comparing and analyzing large collections of image manipulation tutorials based on their command-level structure. Sifter first applies supervised machine learning to identify the commands contained in a collection of 2500 Photoshop tutorials obtained from the Web. It then provides three different views of the tutorial collection based on the extracted command-level structure: (1) A Faceted Browser View allows users to organize, sort and filter the collection based on tutorial category, command names or on frequently used command subsequences, (2) a Tutorial View summarizes and indexes tutorials by the commands they contain, and (3) an Alignment View visualizes the commandlevel similarities and differences between a subset of tutorials. An informal evaluation (n=9) suggests that Sifter enables users to successfully perform a variety of browsing and analysis tasks that are difficult to complete with standard keyword search. We conclude with a meta-analysis of our Photoshop tutorial collection and present several implications for the design of image manipulation software. ACM Classification H5.2 [Information interfaces and presentation]: User Interfaces. Graphical user interfaces. Author",
"title": ""
},
{
"docid": "889dd22fcead3ce546e760bda8ef4980",
"text": "We explore unsupervised approaches to relation extraction between two named entities; for instance, the semantic bornIn relation between a person and location entity. Concretely, we propose a series of generative probabilistic models, broadly similar to topic models, each which generates a corpus of observed triples of entity mention pairs and the surface syntactic dependency path between them. The output of each model is a clustering of observed relation tuples and their associated textual expressions to underlying semantic relation types. Our proposed models exploit entity type constraints within a relation as well as features on the dependency path between entity mentions. We examine effectiveness of our approach via multiple evaluations and demonstrate 12% error reduction in precision over a state-of-the-art weakly supervised baseline.",
"title": ""
},
{
"docid": "ab47d6b0ae971a5cf0a24f1934fbee63",
"text": "Deep representations, in particular ones implemented by convolutional neural networks, have led to good progress on many learning problems. However, the learned representations are hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study deep image representations by inverting them with an up-convolutional neural network. Application of this method to a deep network trained on ImageNet provides numerous insights into the properties of the feature representation. Most strikingly, the colors and the rough contours of an input image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.",
"title": ""
},
{
"docid": "8a9603a10e5e02f6edfbd965ee11bbb9",
"text": "The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds such that birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker’s behaviors.",
"title": ""
},
{
"docid": "f176f95d0c597b4272abe907e385befc",
"text": "This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45% over pure connectivity analysis.",
"title": ""
},
{
"docid": "300e215e91bb49aef0fcb44c3084789e",
"text": "We present an extension to the Tacotron speech synthesis architecture that learns a latent embedding space of prosody, derived from a reference acoustic representation containing the desired prosody. We show that conditioning Tacotron on this learned embedding space results in synthesized audio that matches the prosody of the reference signal with fine time detail even when the reference and synthesis speakers are different. Additionally, we show that a reference prosody embedding can be used to synthesize text that is different from that of the reference utterance. We define several quantitative and subjective metrics for evaluating prosody transfer, and report results with accompanying audio samples from single-speaker and 44-speaker Tacotron models on a prosody transfer task.",
"title": ""
},
{
"docid": "370b1775eddfb6241078285872e1a009",
"text": "Methods for Named Entity Recognition and Disambiguation (NERD) perform NER and NED in two separate stages. Therefore, NED may be penalized with respect to precision by NER false positives, and suffers in recall from NER false negatives. Conversely, NED does not fully exploit information computed by NER such as types of mentions. This paper presents J-NERD, a new approach to perform NER and NED jointly, by means of a probabilistic graphical model that captures mention spans, mention types, and the mapping of mentions to entities in a knowledge base. We present experiments with different kinds of texts from the CoNLL’03, ACE’05, and ClueWeb’09-FACC1 corpora. J-NERD consistently outperforms state-of-the-art competitors in end-to-end NERD precision, recall, and F1.",
"title": ""
},
{
"docid": "02c00d998952d935ee694922953c78d1",
"text": "OBJECTIVE\nEffect of peppermint on exercise performance was previously investigated but equivocal findings exist. This study aimed to investigate the effects of peppermint ingestion on the physiological parameters and exercise performance after 5 min and 1 h.\n\n\nMATERIALS AND METHODS\nThirty healthy male university students were randomly divided into experimental (n=15) and control (n=15) groups. Maximum isometric grip force, vertical and long jumps, spirometric parameters, visual and audio reaction times, blood pressure, heart rate, and breath rate were recorded three times: before, five minutes, and one hour after single dose oral administration of peppermint essential oil (50 µl). Data were analyzed using repeated measures ANOVA.\n\n\nRESULTS\nOur results revealed significant improvement in all of the variables after oral administration of peppermint essential oil. Experimental group compared with control group showed an incremental and a significant increase in the grip force (36.1%), standing vertical jump (7.0%), and standing long jump (6.4%). Data obtained from the experimental group after five minutes exhibited a significant increase in the forced vital capacity in first second (FVC1)(35.1%), peak inspiratory flow rate (PIF) (66.4%), and peak expiratory flow rate (PEF) (65.1%), whereas after one hour, only PIF shown a significant increase as compare with the baseline and control group. At both times, visual and audio reaction times were significantly decreased. Physiological parameters were also significantly improved after five minutes. A considerable enhancement in the grip force, spiromery, and other parameters were the important findings of this study. Conclusion : An improvement in the spirometric measurements (FVC1, PEF, and PIF) might be due to the peppermint effects on the bronchial smooth muscle tonicity with or without affecting the lung surfactant. Yet, no scientific evidence exists regarding isometric force enhancement in this novel study.",
"title": ""
},
{
"docid": "620642c5437dc26cac546080c4465707",
"text": "One of the most distinctive linguistic characteristics of modern academic writing is its reliance on nominalized structures. These include nouns that have been morphologically derived from verbs (e.g., development, progression) as well as verbs that have been ‘converted’ to nouns (e.g., increase, use). Almost any sentence taken from an academic research article will illustrate the use of such structures. For example, consider the opening sentences from three education research articles; derived nominalizations are underlined and converted nouns given in italics: 1",
"title": ""
},
{
"docid": "162a4cab1ea0bd1e9b8980a57df7c2bf",
"text": "This paper investigates the design of power and spectrally efficient coded modulations based on amplitude phase shift keying (APSK) with application to broadband satellite communications. Emphasis is put on 64APSK constellations. The APSK modulation has merits for digital transmission over nonlinear satellite channels due to its power and spectral efficiency combined with its inherent robustness against nonlinear distortion. This scheme has been adopted in the DVB-S2 Standard for satellite digital video broadcasting. Assuming an ideal rectangular transmission pulse, for which no nonlinear inter-symbol interference is present and perfect pre-compensation of the nonlinearity takes place, we optimize the 64APSK constellation design by employing an optimization criterion based on the mutual information. This method generates an optimum constellation for each operating SNR point, that is, for each spectral efficiency. Two separate cases of interest are particularly examined: (i) the equiprobable case, where all constellation points are equiprobable and (ii) the non-equiprobable case, where the constellation points on each ring are assumed to be equiprobable but the a priory symbol probability associated per ring is assumed different for each ring. Following the mutual information-based optimization approach in each case, detailed simulation results are obtained for the optimal 64APSK constellation settings as well as the achievable shaping gain.",
"title": ""
},
{
"docid": "25822c79792325b86a90a477b6e988a1",
"text": "As the social networking sites get more popular, spammers target these sites to spread spam posts. Twitter is one of the most popular online social networking sites where users communicate and interact on various topics. Most of the current spam filtering methods in Twitter focus on detecting the spammers and blocking them. However, spammers can create a new account and start posting new spam tweets again. So there is a need for robust spam detection techniques to detect the spam at tweet level. These types of techniques can prevent the spam in real time. To detect the spam at tweet level, often features are defined, and appropriate machine learning algorithms are applied in the literature. Recently, deep learning methods are showing fruitful results on several natural language processing tasks. We want to use the potential benefits of these two types of methods for our problem. Toward this, we propose an ensemble approach for spam detection at tweet level. We develop various deep learning models based on convolutional neural networks (CNNs). Five CNNs and one feature-based model are used in the ensemble. Each CNN uses different word embeddings (Glove, Word2vec) to train the model. The feature-based model uses content-based, user-based, and n-gram features. Our approach combines both deep learning and traditional feature-based models using a multilayer neural network which acts as a meta-classifier. We evaluate our method on two data sets, one data set is balanced, and another one is imbalanced. The experimental results show that our proposed method outperforms the existing methods.",
"title": ""
},
{
"docid": "e30db40102a2d84a150c220250fa4d36",
"text": "A voltage reference circuit operating with all transistors biased in weak inversion, providing a mean reference voltage of 257.5 mV, has been fabricated in 0.18 m CMOS technology. The reference voltage can be approximated by the difference of transistor threshold voltages at room temperature. Accurate subthreshold design allows the circuit to work at room temperature with supply voltages down to 0.45 V and an average current consumption of 5.8 nA. Measurements performed over a set of 40 samples showed an average temperature coefficient of 165 ppm/ C with a standard deviation of 100 ppm/ C, in a temperature range from 0 to 125°C. The mean line sensitivity is ≈0.44%/V, for supply voltages ranging from 0.45 to 1.8 V. The power supply rejection ratio measured at 30 Hz and simulated at 10 MHz is lower than -40 dB and -12 dB, respectively. The active area of the circuit is ≈0.043mm2.",
"title": ""
},
{
"docid": "ce2f8135fe123e09b777bd147bec4bb3",
"text": "Supervised learning, e.g., classification, plays an important role in processing and organizing microblogging data. In microblogging, it is easy to mass vast quantities of unlabeled data, but would be costly to obtain labels, which are essential for supervised learning algorithms. In order to reduce the labeling cost, active learning is an effective way to select representative and informative instances to query for labels for improving the learned model. Different from traditional data in which the instances are assumed to be independent and identically distributed (i.i.d.), instances in microblogging are networked with each other. This presents both opportunities and challenges for applying active learning to microblogging data. Inspired by social correlation theories, we investigate whether social relations can help perform effective active learning on networked data. In this paper, we propose a novel Active learning framework for the classification of Networked Texts in microblogging (ActNeT). In particular, we study how to incorporate network information into text content modeling, and design strategies to select the most representative and informative instances from microblogging for labeling by taking advantage of social network structure. Experimental results on Twitter datasets show the benefit of incorporating network information in active learning and that the proposed framework outperforms existing state-of-the-art methods.",
"title": ""
},
{
"docid": "7b916833f0d611465e36b0b2792b2fa7",
"text": "A fully-integrated silicon-based 94-GHz direct-detection imaging receiver with on-chip Dicke switch and baseband circuitry is demonstrated. Fabricated in a 0.18-µm SiGe BiCMOS technology (fT/fMAX = 200 GHz), the receiver chip achieves a peak imager responsivity of 43 MV/W with a 3-dB bandwidth of 26 GHz. A balanced LNA topology with an embedded Dicke switch provides 30-dB gain and enables a temperature resolution of 0.3–0.4 K. The imager chip consumes 200 mW from a 1.8-V supply.",
"title": ""
},
{
"docid": "d6bbec8d1426cacba7f8388231f04add",
"text": "This paper presents a novel multiple-frequency resonant inverter for induction heating (IH) applications. By adopting a center tap transformer, the proposed resonant inverter can give load switching frequency as twice as the isolated-gate bipolar transistor (IGBT) switching frequency. The structure and the operation of the proposed topology are described in order to demonstrate how the output frequency of the proposed resonant inverter is as twice as the switching frequency of IGBTs. In addition to this, the IGBTs in the proposed topology work in zero-voltage switching during turn-on phase of the switches. The new topology is verified by the experimental results using a prototype for IH applications. Moreover, increased efficiency of the proposed inverter is verified by comparison with conventional designs.",
"title": ""
},
{
"docid": "62d1fc9ea1c6a5d1f64939eff3202dad",
"text": "This research applied both the traditional and the fuzzy control methods for mobile satellite antenna tracking system design. The antenna tracking and the stabilization loops were designed firstly according to the bandwidth and phase margin requirements. However, the performance would be degraded if the tracking loop gain is reduced due to parameter variation. On the other hand a PD type of fuzzy controller was also applied for tracking loop design. It can be seen that the system performance obtained by the fuzzy controller was better for low antenna tracking gain. Thus this research proposed an adaptive law by taking either traditional or fuzzy controllers for antenna tracking system depending on the tracking loop gain, then the tracking gain parameter variation effect can be reduced.",
"title": ""
},
{
"docid": "1f4c22a725fb5cb34bb1a087ba47987e",
"text": "This paper demonstrates key capabilities of Cognitive Database, a novel AI-enabled relational database system which uses an unsupervised neural network model to facilitate semantic queries over relational data. The neural network model, called word embedding, operates on an unstructured view of the database and builds a vector model that captures latent semantic context of database entities of different types. The vector model is then seamlessly integrated into the SQL infrastructure and exposed to the users via a new class of SQL-based analytics queries known as cognitive intelligence (CI) queries. The cognitive capabilities enable complex queries over multi-modal data such as semantic matching, inductive reasoning queries such as analogies, and predictive queries using entities not present in a database. We plan to demonstrate the end-to-end execution flow of the cognitive database using a Spark based prototype. Furthermore, we demonstrate the use of CI queries using a publicaly available enterprise financial dataset (with text and numeric values). A Jupyter Notebook python based implementation will also be presented.",
"title": ""
}
] | scidocsrr |
56b71e2392afb3cf4b51cffa7fa02509 | Battery management system in the Bayesian paradigm: Part I: SOC estimation | [
{
"docid": "69f36a0f043d8966dbcd7fc2607d61f8",
"text": "This paper presents a method for modeling and estimation of the state of charge (SOC) of lithium-ion (Li-Ion) batteries using neural networks (NNs) and the extended Kalman filter (EKF). The NN is trained offline using the data collected from the battery-charging process. This network finds the model needed in the state-space equations of the EKF, where the state variables are the battery terminal voltage at the previous sample and the SOC at the present sample. Furthermore, the covariance matrix for the process noise in the EKF is estimated adaptively. The proposed method is implemented on a Li-Ion battery to estimate online the actual SOC of the battery. Experimental results show a good estimation of the SOC and fast convergence of the EKF state variables.",
"title": ""
},
{
"docid": "560a19017dcc240d48bb879c3165b3e1",
"text": "Battery management systems in hybrid electric vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state of charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose a method, based on extended Kalman filtering (EKF), that is able to accomplish these goals on a lithium ion polymer battery pack. We expect that it will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. In order to use EKF to estimate the desired quantities, we first require a mathematical model that can accurately capture the dynamics of a cell. In this paper we “evolve” a suitable model from one that is very primitive to one that is more advanced and works well in practice. The final model includes terms that describe the dynamic contributions due to open-circuit voltage, ohmic loss, polarization time constants, electro-chemical hysteresis, and the effects of temperature. We also give a means, based on EKF, whereby the constant model parameters may be determined from cell test data. Results are presented that demonstrate it is possible to achieve root-mean-squared modeling error smaller than the level of quantization error expected in an implementation. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "f8ec5289b43504fcc96b9280ce7ce67d",
"text": "This study examined how scaffolds and student achievement levels influence inquiry and performance in a problem-based learning environment. The scaffolds were embedded within a hypermedia program that placed students at the center of a problem in which they were trying to become the youngest person to fly around the world in a balloon. One-hundred and eleven seventh grade students enrolled in a science and technology course worked in collaborative groups for a duration of 3 weeks to complete a project that included designing a balloon and a travel plan. Student groups used one of three problem-based, hypermedia programs: (1) a no scaffolding condition that did not provide access to scaffolds, (2) a scaffolding optional condition that provided access to scaffolds, but gave students the choice of whether or not to use them, and (3) a scaffolding required condition required students to complete all available scaffolds. Results revealed that students in the scaffolding optional and scaffolding required conditions performed significantly better than students in the no scaffolding condition on one of the two components of the group project. Results also showed that student achievement levels were significantly related to individual posttest scores; higherachieving students scored better on the posttest than lower-achieving students. In addition, analyses of group notebooks confirmed qualitative differences between students in the various conditions. Specifically, those in the scaffolding required condition produced more highly organized project notebooks containing a higher percentage of entries directly relevant to the problem. These findings suggest that scaffolds may enhance inquiry and performance, especially when students are required to access and",
"title": ""
},
{
"docid": "c88f5359fc6dc0cac2c0bd53cea989ee",
"text": "Automatic detection and monitoring of oil spills and illegal oil discharges is of fundamental importance in ensuring compliance with marine legislation and protection of the coastal environments, which are under considerable threat from intentional or accidental oil spills, uncontrolled sewage and wastewater discharged. In this paper the level set based image segmentation was evaluated for the real-time detection and tracking of oil spills from SAR imagery. The developed processing scheme consists of a preprocessing step, in which an advanced image simplification is taking place, followed by a geometric level set segmentation for the detection of the possible oil spills. Finally a classification was performed, for the separation of lookalikes, leading to oil spill extraction. Experimental results demonstrate that the level set segmentation is a robust tool for the detection of possible oil spills, copes well with abrupt shape deformations and splits and outperforms earlier efforts which were based on different types of threshold or edge detection techniques. The developed algorithm’s efficiency for real-time oil spill detection and monitoring was also tested.",
"title": ""
},
{
"docid": "edeefde21bbe1ace9a34a0ebe7bc6864",
"text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.",
"title": ""
},
{
"docid": "6b57c73406000ca0683b275c7e164c24",
"text": "In this letter, a novel compact and broadband integrated transition between a laminated waveguide and an air-filled rectangular waveguide operating in Ka band is proposed. A three-pole filter equivalent circuit model is employed to interpret the working mechanism and to predict the performance of the transition. A back-to-back prototype of the proposed transition is designed and fabricated for proving the concept. Good agreement of the measured and simulated results is obtained. The measured result shows that the insertion loss of better than 0.26 dB from 34.8 to 37.8 GHz can be achieved.",
"title": ""
},
{
"docid": "95a58a9fa31373296af2c41e47fa0884",
"text": "Force.com is the preeminent on-demand application development platform in use today, supporting some 55,000+ organizations. Individual enterprises and commercial software-as-a-service (SaaS) vendors trust the platform to deliver robust, reliable, Internet-scale applications. To meet the extreme demands of its large user population, Force.com's foundation is a metadatadriven software architecture that enables multitenant applications.\n The focus of this paper is multitenancy, a fundamental design approach that can dramatically improve SaaS application management. This paper defines multitenancy, explains its benefits, and demonstrates why metadata-driven architectures are the premier choice for implementing multitenancy.",
"title": ""
},
{
"docid": "c69e805751421b516e084498e7fc6f44",
"text": "We investigate two extremal problems for polynomials giving upper bounds for spherical codes and for polynomials giving lower bounds for spherical designs, respectively. We consider two basic properties of the solutions of these problems. Namely, we estimate from below the number of double zeros and find zero Gegenbauer coefficients of extremal polynomials. Our results allow us to search effectively for such solutions using a computer. The best polynomials we have obtained give substantial improvements in some cases on the previously known bounds for spherical codes and designs. Some examples are given in Section 6.",
"title": ""
},
{
"docid": "0f9ef379901c686df08dd0d1bb187e22",
"text": "This paper studies the minimum achievable source coding rate as a function of blocklength <i>n</i> and probability ϵ that the distortion exceeds a given level <i>d</i> . Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by <i>R</i>(<i>d</i>) + √<i>V</i>(<i>d</i>)/(<i>n</i>) <i>Q</i><sup>-1</sup>(ϵ), where <i>R</i>(<i>d</i>) is the rate-distortion function, <i>V</i>(<i>d</i>) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and <i>Q</i><sup>-1</sup>(·) is the inverse of the standard Gaussian complementary cumulative distribution function.",
"title": ""
},
{
"docid": "ed98eb7aa069c00e2be8a27ef889b623",
"text": "The class imbalance problem has been known to hinder the learning performance of classification algorithms. Various real-world classification tasks such as text categorization suffer from this phenomenon. We demonstrate that active learning is capable of solving the problem.",
"title": ""
},
{
"docid": "8af7826c809eb3941c2e394899ca83ef",
"text": "The development of interactive rehabilitation technologies which rely on wearable-sensing for upper body rehabilitation is attracting increasing research interest. This paper reviews related research with the aim: 1) To inventory and classify interactive wearable systems for movement and posture monitoring during upper body rehabilitation, regarding the sensing technology, system measurements and feedback conditions; 2) To gauge the wearability of the wearable systems; 3) To inventory the availability of clinical evidence supporting the effectiveness of related technologies. A systematic literature search was conducted in the following search engines: PubMed, ACM, Scopus and IEEE (January 2010–April 2016). Forty-five papers were included and discussed in a new cuboid taxonomy which consists of 3 dimensions: sensing technology, feedback modalities and system measurements. Wearable sensor systems were developed for persons in: 1) Neuro-rehabilitation: stroke (n = 21), spinal cord injury (n = 1), cerebral palsy (n = 2), Alzheimer (n = 1); 2) Musculoskeletal impairment: ligament rehabilitation (n = 1), arthritis (n = 1), frozen shoulder (n = 1), bones trauma (n = 1); 3) Others: chronic pulmonary obstructive disease (n = 1), chronic pain rehabilitation (n = 1) and other general rehabilitation (n = 14). Accelerometers and inertial measurement units (IMU) are the most frequently used technologies (84% of the papers). They are mostly used in multiple sensor configurations to measure upper limb kinematics and/or trunk posture. Sensors are placed mostly on the trunk, upper arm, the forearm, the wrist, and the finger. Typically sensors are attachable rather than embedded in wearable devices and garments; although studies that embed and integrate sensors are increasing in the last 4 years. 16 studies applied knowledge of result (KR) feedback, 14 studies applied knowledge of performance (KP) feedback and 15 studies applied both in various modalities. 16 studies have conducted their evaluation with patients and reported usability tests, while only three of them conducted clinical trials including one randomized clinical trial. This review has shown that wearable systems are used mostly for the monitoring and provision of feedback on posture and upper extremity movements in stroke rehabilitation. The results indicated that accelerometers and IMUs are the most frequently used sensors, in most cases attached to the body through ad hoc contraptions for the purpose of improving range of motion and movement performance during upper body rehabilitation. Systems featuring sensors embedded in wearable appliances or garments are only beginning to emerge. Similarly, clinical evaluations are scarce and are further needed to provide evidence on effectiveness and pave the path towards implementation in clinical settings.",
"title": ""
},
{
"docid": "c5dee985cbfd6c22beca6e2dad895efa",
"text": "Recently, convolutional neural networks (CNNs) have been used as a powerful tool to solve many problems of machine learning and computer vision. In this paper, we aim to provide insight on the property of convolutional neural networks, as well as a generic method to improve the performance of many CNN architectures. Specifically, we first examine existing CNN models and observe an intriguing property that the filters in the lower layers form pairs (i.e., filters with opposite phase). Inspired by our observation, we propose a novel, simple yet effective activation scheme called concatenated ReLU (CRelu) and theoretically analyze its reconstruction property in CNNs. We integrate CRelu into several state-of-the-art CNN architectures and demonstrate improvement in their recognition performance on CIFAR-10/100 and ImageNet datasets with fewer trainable parameters. Our results suggest that better understanding of the properties of CNNs can lead to significant performance improvement with a simple modification.",
"title": ""
},
{
"docid": "1569bcea0c166d9bf2526789514609c5",
"text": "In this paper, we present the developmert and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). T. DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25 +), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position. Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety. The potential contribution of the DSI is discussed -for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.",
"title": ""
},
{
"docid": "d76980f3a0b4e0dab21583b75ee16318",
"text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general. Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.",
"title": ""
},
{
"docid": "1b646a8a45b65799bbf2e71108f420e0",
"text": "Dynamic Time Warping (DTW) is a distance measure that compares two time series after optimally aligning them. DTW is being used for decades in thousands of academic and industrial projects despite the very expensive computational complexity, O(n2). These applications include data mining, image processing, signal processing, robotics and computer graphics among many others. In spite of all this research effort, there are many myths and misunderstanding about DTW in the literature, for example \"it is too slow to be useful\" or \"the warping window size does not matter much.\" In this tutorial, we correct these misunderstandings and we summarize the research efforts in optimizing both the efficiency and effectiveness of both the basic DTW algorithm, and of the higher-level algorithms that exploit DTW such as similarity search, clustering and classification. We will discuss variants of DTW such as constrained DTW, multidimensional DTW and asynchronous DTW, and optimization techniques such as lower bounding, early abandoning, run-length encoding, bounded approximation and hardware optimization. We will discuss a multitude of application areas including physiological monitoring, social media mining, activity recognition and animal sound processing. The optimization techniques are generalizable to other domains on various data types and problems.",
"title": ""
},
{
"docid": "38d1e06642f12138f8b0a90deeb96979",
"text": "Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit the abundance of patterns of code. In this article, we survey this work. We contrast programming languages against natural languages and discuss how these similarities and differences drive the design of probabilistic models. We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature. Then, we review how researchers have adapted these models to application areas and discuss cross-cutting and application-specific challenges and opportunities.",
"title": ""
},
{
"docid": "41c5dbb3e903c007ba4b8f37d40b06ef",
"text": "BACKGROUND\nMyocardial infarction (MI) can directly cause ischemic mitral regurgitation (IMR), which has been touted as an indicator of poor prognosis in acute and early phases after MI. However, in the chronic post-MI phase, prognostic implications of IMR presence and degree are poorly defined.\n\n\nMETHODS AND RESULTS\nWe analyzed 303 patients with previous (>16 days) Q-wave MI by ECG who underwent transthoracic echocardiography: 194 with IMR quantitatively assessed in routine practice and 109 without IMR matched for baseline age (71+/-11 versus 70+/-9 years, P=0.20), sex, and ejection fraction (EF, 33+/-14% versus 34+/-11%, P=0.14). In IMR patients, regurgitant volume (RVol) and effective regurgitant orifice (ERO) area were 36+/-24 mL/beat and 21+/-12 mm(2), respectively. After 5 years, total mortality and cardiac mortality for patients with IMR (62+/-5% and 50+/-6%, respectively) were higher than for those without IMR (39+/-6% and 30+/-5%, respectively) (both P<0.001). In multivariate analysis, independently of all baseline characteristics, particularly age and EF, the adjusted relative risks of total and cardiac mortality associated with the presence of IMR (1.88, P=0.003 and 1.83, P=0.014, respectively) and quantified degree of IMR defined by RVol >/=30 mL (2.05, P=0.002 and 2.01, P=0.009) and by ERO >/=20 mm(2) (2.23, P=0.003 and 2.38, P=0.004) were high.\n\n\nCONCLUSIONS\nIn the chronic phase after MI, IMR presence is associated with excess mortality independently of baseline characteristics and degree of ventricular dysfunction. The mortality risk is related directly to the degree of IMR as defined by ERO and RVol. Therefore, IMR detection and quantification provide major information for risk stratification and clinical decision making in the chronic post-MI phase.",
"title": ""
},
{
"docid": "65eb604a2d45f29923ba24976130adc1",
"text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F -measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.",
"title": ""
},
{
"docid": "29fa75e49d4179072ec25b8aab6b48e2",
"text": "We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituentand dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code. We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.",
"title": ""
},
{
"docid": "343ba137056cac30d0d37e17a425d53b",
"text": "This thesis explores fundamental improvements in unsupervised deep learning algorithms. Taking a theoretical perspective on the purpose of unsupervised learning, and choosing learnt approximate inference in a jointly learnt directed generative model as the approach, the main question is how existing implementations of this approach, in particular auto-encoders, could be improved by simultaneously rethinking the way they learn and the way they perform inference. In such network architectures, the availability of two opposing pathways, one for inference and one for generation, allows to exploit the symmetry between them and to let either provide feedback signals to the other. The signals can be used to determine helpful updates for the connection weights from only locally available information, removing the need for the conventional back-propagation path and mitigating the issues associated with it. Moreover, feedback loops can be added to the usual usual feed-forward network to improve inference itself. The reciprocal connectivity between regions in the brain’s neocortex provides inspiration for how the iterative revision and verification of proposed interpretations could result in a fair approximation to optimal Bayesian inference. While extracting and combining underlying ideas from research in deep learning and cortical functioning, this thesis walks through the concepts of generative models, approximate inference, local learning rules, target propagation, recirculation, lateral and biased competition, predictive coding, iterative and amortised inference, and other related topics, in an attempt to build up a complex of insights that could provide direction to future research in unsupervised deep learning methods.",
"title": ""
},
{
"docid": "d6ea13f26642dfcb28b63ff43a0b39e1",
"text": "This paper deals with the inter-turn short circuit fault analysis of Pulse Width Modulated (PWM) inverter fed three-phase Induction Motor (IM) using Finite Element Method (FEM). The short circuit in the stator winding of a 3-phase IM start with an inter-turn fault and if left undetected it progresses to a phase-phase fault or phase-ground fault. In main fed IM a popular technique known as Motor Current Signature Analysis (MCSA) is used to detect the inter-turn fault. But if the machine is fed from PWM inverter MCSA fails, due to high frequency inverter switching, the current spectrum will be rich in noise causing the fault detection difficult. An electromagnetic field analysis of inverter fed IM is carried out with 25% and 50% of stator winding inter-turn short circuit fault severity using FEM. The simulation is carried out on a 2.2kW IM using Ansys Maxwell Finite Element Analysis (FEA) tool. Comparisons are made on the various electromagnetic field parameters like flux lines distribution, flux density, radial air gap flux density between a healthy and faulty (25% & 50% severity) IM.",
"title": ""
},
{
"docid": "87c973e92ef3affcff4dac0d0183067c",
"text": "Drug-drug interaction (DDI) is a major cause of morbidity and mortality and a subject of intense scientific interest. Biomedical literature mining can aid DDI research by extracting evidence for large numbers of potential interactions from published literature and clinical databases. Though DDI is investigated in domains ranging in scale from intracellular biochemistry to human populations, literature mining has not been used to extract specific types of experimental evidence, which are reported differently for distinct experimental goals. We focus on pharmacokinetic evidence for DDI, essential for identifying causal mechanisms of putative interactions and as input for further pharmacological and pharmacoepidemiology investigations. We used manually curated corpora of PubMed abstracts and annotated sentences to evaluate the efficacy of literature mining on two tasks: first, identifying PubMed abstracts containing pharmacokinetic evidence of DDIs; second, extracting sentences containing such evidence from abstracts. We implemented a text mining pipeline and evaluated it using several linear classifiers and a variety of feature transforms. The most important textual features in the abstract and sentence classification tasks were analyzed. We also investigated the performance benefits of using features derived from PubMed metadata fields, various publicly available named entity recognizers, and pharmacokinetic dictionaries. Several classifiers performed very well in distinguishing relevant and irrelevant abstracts (reaching F1≈0.93, MCC≈0.74, iAUC≈0.99) and sentences (F1≈0.76, MCC≈0.65, iAUC≈0.83). We found that word bigram features were important for achieving optimal classifier performance and that features derived from Medical Subject Headings (MeSH) terms significantly improved abstract classification. We also found that some drug-related named entity recognition tools and dictionaries led to slight but significant improvements, especially in classification of evidence sentences. Based on our thorough analysis of classifiers and feature transforms and the high classification performance achieved, we demonstrate that literature mining can aid DDI discovery by supporting automatic extraction of specific types of experimental evidence.",
"title": ""
}
] | scidocsrr |
37a76d3b6c71ef173133d68ba0809244 | Printflatables: Printing Human-Scale, Functional and Dynamic Inflatable Objects | [
{
"docid": "bf83b9fef9b4558538b2207ba57b4779",
"text": "This paper presents preliminary results for the design, development and evaluation of a hand rehabilitation glove fabricated using soft robotic technology. Soft actuators comprised of elastomeric materials with integrated channels that function as pneumatic networks (PneuNets), are designed and geometrically analyzed to produce bending motions that can safely conform with the human finger motion. Bending curvature and force response of these actuators are investigated using geometrical analysis and a finite element model (FEM) prior to fabrication. The fabrication procedure of the chosen actuator is described followed by a series of experiments that mechanically characterize the actuators. The experimental data is compared to results obtained from FEM simulations showing good agreement. Finally, an open-palm glove design and the integration of the actuators to it are described, followed by a qualitative evaluation study.",
"title": ""
}
] | [
{
"docid": "f136e875f021ea3ea67a87c6d0b1e869",
"text": "Platelet-rich plasma (PRP) has been utilized for many years as a regenerative agent capable of inducing vascularization of various tissues using blood-derived growth factors. Despite this, drawbacks mostly related to the additional use of anti-coagulants found in PRP have been shown to inhibit the wound healing process. For these reasons, a novel platelet concentrate has recently been developed with no additives by utilizing lower centrifugation speeds. The purpose of this study was therefore to investigate osteoblast behavior of this novel therapy (injectable-platelet-rich fibrin; i-PRF, 100% natural with no additives) when compared to traditional PRP. Human primary osteoblasts were cultured with either i-PRF or PRP and compared to control tissue culture plastic. A live/dead assay, migration assay as well as a cell adhesion/proliferation assay were investigated. Furthermore, osteoblast differentiation was assessed by alkaline phosphatase (ALP), alizarin red and osteocalcin staining, as well as real-time PCR for genes encoding Runx2, ALP, collagen1 and osteocalcin. The results showed that all cells had high survival rates throughout the entire study period irrespective of culture-conditions. While PRP induced a significant 2-fold increase in osteoblast migration, i-PRF demonstrated a 3-fold increase in migration when compared to control tissue-culture plastic and PRP. While no differences were observed for cell attachment, i-PRF induced a significantly higher proliferation rate at three and five days when compared to PRP. Furthermore, i-PRF induced significantly greater ALP staining at 7 days and alizarin red staining at 14 days. A significant increase in mRNA levels of ALP, Runx2 and osteocalcin, as well as immunofluorescent staining of osteocalcin was also observed in the i-PRF group when compared to PRP. In conclusion, the results from the present study favored the use of the naturally-formulated i-PRF when compared to traditional PRP with anti-coagulants. Further investigation into the direct role of fibrin and leukocytes contained within i-PRF are therefore warranted to better elucidate their positive role in i-PRF on tissue wound healing.",
"title": ""
},
{
"docid": "2ce4d585edd54cede6172f74cf9ab8bb",
"text": "Enterprise resource planning (ERP) systems have been widely implemented by numerous firms throughout the industrial world. While success stories of ERP implementation abound due to its potential in resolving the problem of fragmented information, a substantial number of these implementations fail to meet the goals of the organization. Some are abandoned altogether and others contribute to the failure of an organization. This article seeks to identify the critical factors of ERP implementation and uses statistical analysis to further delineate the patterns of adoption of the various concepts. A cross-sectional mail survey was mailed to business executives who have experience in the implementation of ERP systems. The results of this study provide empirical evidence that the theoretical constructs of ERP implementation are followed at varying levels. It offers some fresh insights into the current practice of ERP implementation. In addition, this study fills the need for ERP implementation constructs that can be utilized for further study of this important topic.",
"title": ""
},
{
"docid": "64c1c37422037fc9156db42cdcdbe7fe",
"text": "[Context] It is an enigma that agile projects can succeed ‘without requirements’ when weak requirements engineering is a known cause for project failures. While agile development projects often manage well without extensive requirements test cases are commonly viewed as requirements and detailed requirements are documented as test cases. [Objective] We have investigated this agile practice of using test cases as requirements to understand how test cases can support the main requirements activities, and how this practice varies. [Method] We performed an iterative case study at three companies and collected data through 14 interviews and 2 focus groups. [Results] The use of test cases as requirements poses both benefits and challenges when eliciting, validating, verifying, and managing requirements, and when used as a documented agreement. We have identified five variants of the test-cases-as-requirements practice, namely de facto, behaviour-driven, story-test driven, stand-alone strict and stand-alone manual for which the application of the practice varies concerning the time frame of requirements documentation, the requirements format, the extent to which the test cases are a machine executable specification and the use of tools which provide specific support for the practice of using test cases as requirements. [Conclusions] The findings provide empirical insight into how agile development projects manage and communicate requirements. The identified variants of the practice of using test cases as requirements can be used to perform in-depth investigations into agile requirements engineering. Practitioners can use the provided recommendations as a guide in designing and improving their agile requirements practices based on project characteristics such as number of stakeholders and rate of change.",
"title": ""
},
{
"docid": "b169e0e76f26db1f08cd84524aa10a53",
"text": "A very lightweight, broad-band, dual polarized antenna array with 128 elements for the frequency range from 7 GHz to 18 GHz has been designed, manufactured and measured. The total gain at the center frequency was measured to be 20 dBi excluding feeding network losses.",
"title": ""
},
{
"docid": "9520b99708d905d3713867fac14c3814",
"text": "When people work together to analyze a data set, they need to organize their findings, hypotheses, and evidence, share that information with their collaborators, and coordinate activities amongst team members. Sharing externalizations (recorded information such as notes) could increase awareness and assist with team communication and coordination. However, we currently know little about how to provide tool support for this sort of sharing. We explore how linked common work (LCW) can be employed within a `collaborative thinking space', to facilitate synchronous collaborative sensemaking activities in Visual Analytics (VA). Collaborative thinking spaces provide an environment for analysts to record, organize, share and connect externalizations. Our tool, CLIP, extends earlier thinking spaces by integrating LCW features that reveal relationships between collaborators' findings. We conducted a user study comparing CLIP to a baseline version without LCW. Results demonstrated that LCW significantly improved analytic outcomes at a collaborative intelligence task. Groups using CLIP were also able to more effectively coordinate their work, and held more discussion of their findings and hypotheses. LCW enabled them to maintain awareness of each other's activities and findings and link those findings to their own work, preventing disruptive oral awareness notifications.",
"title": ""
},
{
"docid": "910a416dc736ec3566583c57123ac87c",
"text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose Husnu S. Narman [email protected] 1 Holcombe Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, USA 2 Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Zahir Raihan Rd, Dhaka, 1000, Bangladesh 3 School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments byconsidering systemperformancemetrics, such as drop rate, throughput, and utilization in IoT.",
"title": ""
},
{
"docid": "dac5cebcbc14b82f7b8df977bed0c9d8",
"text": "While blockchain services hold great promise to improve many different industries, there are significant cybersecurity concerns which must be addressed. In this paper, we investigate security considerations for an Ethereum blockchain hosting a distributed energy management application. We have simulated a microgrid with ten buildings in the northeast U.S., and results of the transaction distribution and electricity utilization are presented. We also present the effects on energy distribution when one or two smart meters have their identities corrupted. We then propose a new approach to digital identity management that would require smart meters to authenticate with the blockchain ledger and mitigate identity-spoofing attacks. Applications of this approach to defense against port scans and DDoS, attacks are also discussed.",
"title": ""
},
{
"docid": "e5bf05ae6700078dda83eca8d2f65cd4",
"text": "We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-theart results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. Finally, we show analyses that hints at a universal interlingua representation in our models and also show some interesting examples when mixing languages.",
"title": ""
},
{
"docid": "c1fecb605dcabbd411e3782c15fd6546",
"text": "Neuropathic pain is a debilitating form of chronic pain that affects 6.9-10% of the population. Health-related quality-of-life is impeded by neuropathic pain, which not only includes physical impairment, but the mental wellbeing of the patient is also hindered. A reduction in both physical and mental wellbeing bares economic costs that need to be accounted for. A variety of medications are in use for the treatment of neuropathic pain, such as calcium channel α2δ agonists, serotonin/noradrenaline reuptake inhibitors and tricyclic antidepressants. However, recent studies have indicated a lack of efficacy regarding the aforementioned medication. There is increasing clinical and pre-clinical evidence that can point to the use of ketamine, an “old” anaesthetic, in the management of neuropathic pain. Conversely, to see ketamine being used in neuropathic pain, there needs to be more conclusive evidence exploring the long-term effects of sub-anesthetic ketamine.",
"title": ""
},
{
"docid": "5b463701f83f7e6651260c8f55738146",
"text": "Heart disease diagnosis is a complex task which requires much experience and knowledge. Traditional way of predicting Heart disease is doctor’s examination or number of medical tests such as ECG, Stress Test, and Heart MRI etc. Nowadays, Health care industry contains huge amount of heath care data, which contains hidden information. This hidden information is useful for making effective decisions. Computer based information along with advanced Data mining techniques are used for appropriate results. Neural network is widely used tool for predicting Heart disease diagnosis. In this research paper, a Heart Disease Prediction system (HDPS) is developed using Neural network. The HDPS system predicts the likelihood of patient getting a Heart disease. For prediction, the system uses sex, blood pressure, cholesterol like 13 medical parameters. Here two more parameters are added i.e. obesity and smoking for better accuracy. From the results, it has been seen that neural network predict heart disease with nearly 100% accuracy.",
"title": ""
},
{
"docid": "a2f1a10c0e89f6d63f493c267759fb8f",
"text": "BACKGROUND\nPatient portals tied to provider electronic health record (EHR) systems are increasingly popular.\n\n\nPURPOSE\nTo systematically review the literature reporting the effect of patient portals on clinical care.\n\n\nDATA SOURCES\nPubMed and Web of Science searches from 1 January 1990 to 24 January 2013.\n\n\nSTUDY SELECTION\nHypothesis-testing or quantitative studies of patient portals tethered to a provider EHR that addressed patient outcomes, satisfaction, adherence, efficiency, utilization, attitudes, and patient characteristics, as well as qualitative studies of barriers or facilitators, were included.\n\n\nDATA EXTRACTION\nTwo reviewers independently extracted data and addressed discrepancies through consensus discussion.\n\n\nDATA SYNTHESIS\nFrom 6508 titles, 14 randomized, controlled trials; 21 observational, hypothesis-testing studies; 5 quantitative, descriptive studies; and 6 qualitative studies were included. Evidence is mixed about the effect of portals on patient outcomes and satisfaction, although they may be more effective when used with case management. The effect of portals on utilization and efficiency is unclear, although patient race and ethnicity, education level or literacy, and degree of comorbid conditions may influence use.\n\n\nLIMITATION\nLimited data for most outcomes and an absence of reporting on organizational and provider context and implementation processes.\n\n\nCONCLUSION\nEvidence that patient portals improve health outcomes, cost, or utilization is insufficient. Patient attitudes are generally positive, but more widespread use may require efforts to overcome racial, ethnic, and literacy barriers. Portals represent a new technology with benefits that are still unclear. Better understanding requires studies that include details about context, implementation factors, and cost.",
"title": ""
},
{
"docid": "1eef21abdf14dc430b333cac71d4fe07",
"text": "The authors have developed an adaptive matched filtering algorithm based upon an artificial neural network (ANN) for QRS detection. They use an ANN adaptive whitening filter to model the lower frequencies of the electrocardiogram (ECG) which are inherently nonlinear and nonstationary. The residual signal which contains mostly higher frequency QRS complex energy is then passed through a linear matched filter to detect the location of the QRS complex. The authors developed an algorithm to adaptively update the matched filter template from the detected QRS complex in the ECG signal itself so that the template can be customized to an individual subject. This ANN whitening filter is very effective at removing the time-varying, nonlinear noise characteristic of ECG signals. The detection rate for a very noisy patient record in the MIT/BIH arrhythmia database is 99.5% with this approach, which compares favorably to the 97.5% obtained using a linear adaptive whitening filter and the 96.5% achieved with a bandpass filtering method.<<ETX>>",
"title": ""
},
{
"docid": "a0d4089e55a0a392a2784ae50b6fa779",
"text": "Organizations place a great deal of emphasis on hiring individuals who are a good fit for the organization and the job. Among the many ways that individuals are screened for a job, the employment interview is particularly prevalent and nearly universally used (Macan, 2009; Huffcutt and Culbertson, 2011). This Research Topic is devoted to a construct that plays a critical role in our understanding of job interviews: impression management (IM). In the interview context, IM describes behaviors an individual uses to influence the impression that others have of them (Bozeman and Kacmar, 1997). For instance, a job applicant can flatter an interviewer to be seen as likable (i.e., ingratiation), play up their qualifications and abilities to be seen as competent (i.e., self-promotion), or utilize excuses or justifications to make up for a negative event or error (i.e., defensive IM; Ellis et al., 2002). IM has emerged as a central theme in the interview literature over the last several decades (for reviews, see Posthuma et al., 2002; Levashina et al., 2014). Despite some pioneering early work (e.g., Schlenker, 1980; Leary and Kowalski, 1990; Stevens and Kristof, 1995), there has been a resurgence of interest in the area over the last decade. While the literature to date has set up a solid foundational knowledge about interview IM, there are a number of emerging trends and directions. In the following, we lay out some critical areas of inquiry in interview IM, and highlight how the innovative set of papers in this Research Topic is illustrative of these new directions.",
"title": ""
},
{
"docid": "5fbb54e63158066198cdf59e1a8e9194",
"text": "In this paper, we present results of a study of the data rate fairness among nodes within a LoRaWAN cell. Since LoRa/LoRaWAN supports various data rates, we firstly derive the fairest ratios of deploying each data rate within a cell for a fair collision probability. LoRa/LoRaWan, like other frequency modulation based radio interfaces, exhibits the capture effect in which only the stronger signal of colliding signals will be extracted. This leads to unfairness, where far nodes or nodes experiencing higher attenuation are less likely to see their packets received correctly. Therefore, we secondly develop a transmission power control algorithm to balance the received signal powers from all nodes regardless of their distances from the gateway for a fair data extraction. Simulations show that our approach achieves higher fairness in data rate than the state-of-art in almost all network configurations.",
"title": ""
},
{
"docid": "0a16eb6bfb41a708e7a660cbf4c445af",
"text": "Data from 1,010 lactating lactating, predominately component-fed Holstein cattle from 25 predominately tie-stall dairy farms in southwest Ontario were used to identify objective thresholds for defining hyperketonemia in lactating dairy cattle based on negative impacts on cow health, milk production, or both. Serum samples obtained during wk 1 and 2 postpartum and analyzed for beta-hydroxybutyrate (BHBA) concentrations that were used in analysis. Data were time-ordered so that the serum samples were obtained at least 1 d before the disease or milk recording events. Serum BHBA cutpoints were constructed at 200 micromol/L intervals between 600 and 2,000 micromol/L. Critical cutpoints for the health analysis were determined based on the threshold having the greatest sum of sensitivity and specificity for predicting the disease occurrence. For the production outcomes, models for first test day milk yield, milk fat, and milk protein percentage were constructed including covariates of parity, precalving body condition score, season of calving, test day linear score, and the random effect of herd. Each cutpoint was tested in these models to determine the threshold with the greatest impact and least risk of a type 1 error. Serum BHBA concentrations at or above 1,200 micromol/L in the first week following calving were associated with increased risks of subsequent displaced abomasum [odds ratio (OR) = 2.60] and metritis (OR = 3.35), whereas the critical threshold of BHBA in wk 2 postpartum on the risk of abomasal displacement was >or=1,800 micromol/L (OR = 6.22). The best threshold for predicting subsequent risk of clinical ketosis from serum obtained during wk 1 and wk 2 postpartum was 1,400 micromol/L of BHBA (OR = 4.25 and 5.98, respectively). There was no association between clinical mastitis and elevated serum BHBA in wk 1 or 2 postpartum, and there was no association between wk 2 BHBA and risk of metritis. Greater serum BHBA measured during the first and second week postcalving were associated with less milk yield, greater milk fat percentage, and less milk protein percentage on the first Dairy Herd Improvement test day of lactation. Impacts on first Dairy Herd Improvement test milk yield began at BHBA >or=1,200 micromol/L for wk 1 samples and >or=1,400 micromol/L for wk 2 samples. The greatest impact on yield occurred at 1,400 micromol/L (-1.88 kg/d) and 2,000 micromol/L (-3.3 kg/d) for sera from the first and second week postcalving, respectively. Hyperketonemia can be defined at 1,400 micromol/L of BHBA and in the first 2 wk postpartum increases disease risk and results in substantial loss of milk yield in early lactation.",
"title": ""
},
{
"docid": "4c563b09a10ce0b444edb645ce411d42",
"text": "Privacy and security are two important but seemingly contradictory objectives in a pervasive computing environment (PCE). On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. On the other hand, users want to maintain the necessary privacy without being tracked down for wherever they are and whatever they are doing. In this paper, a novel privacy preserving authentication and access control scheme to secure the interactions between mobile users and services in PCEs is proposed. The proposed scheme seamlessly integrates two underlying cryptographic primitives, namely blind signature and hash chain, into a highly flexible and lightweight authentication and key establishment protocol. The scheme provides explicit mutual authentication between a user and a service while allowing the user to anonymously interact with the service. Differentiated service access control is also enabled in the proposed scheme by classifying mobile users into different service groups. The correctness of the proposed authentication and key establishment protocol is formally verified based on Burrows-Abadi-Needham logic",
"title": ""
},
{
"docid": "9a30008cc270ac7a0bb1a0f12dca6187",
"text": "Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains an unsolved challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. Overall, we can train on and embed graphs that are four orders of magnitude larger than typical GCN implementations. We show how GCN embeddings can be used to make high-quality recommendations in various settings at Pinterest, which has a massive underlying graph with 3 billion nodes representing pins and boards, and 17 billion edges. According to offline metrics, user studies, as well as A/B tests, our approach generates higher-quality recommendations than comparable deep learning based systems. To our knowledge, this is by far the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.",
"title": ""
},
{
"docid": "4b8f59d1b416d4869ae38dbca0eaca41",
"text": "This study investigates high frequency currency trading with neural networks trained via Recurrent Reinforcement Learning (RRL). We compare the performance of single layer networks with networks having a hidden layer, and examine the impact of the fixed system parameters on performance. In general, we conclude that the trading systems may be effective, but the performance varies widely for different currency markets and this variability cannot be explained by simple statistics of the markets. Also we find that the single layer network outperforms the two layer network in this application.",
"title": ""
},
{
"docid": "ec7b348a0fe38afa02989a22aa9dcac2",
"text": "We propose a general framework for learning from labeled and unlabeled data on a directed graph in which the structure of the graph including the directionality of the edges is considered. The time complexity of the algorithm derived from this framework is nearly linear due to recently developed numerical techniques. In the absence of labeled instances, this framework can be utilized as a spectral clustering method for directed graphs, which generalizes the spectral clustering approach for undirected graphs. We have applied our framework to real-world web classification problems and obtained encouraging results.",
"title": ""
}
] | scidocsrr |
085d8ef9f29229887533b78ad8a9273a | Pain catastrophizing and kinesiophobia: predictors of chronic low back pain. | [
{
"docid": "155411fe242dd4f3ab39649d20f5340f",
"text": "Two studies are presented that investigated 'fear of movement/(re)injury' in chronic musculoskeletal pain and its relation to behavioral performance. The 1st study examines the relation among fear of movement/(re)injury (as measured with the Dutch version of the Tampa Scale for Kinesiophobia (TSK-DV)) (Kori et al. 1990), biographical variables (age, pain duration, gender, use of supportive equipment, compensation status), pain-related variables (pain intensity, pain cognitions, pain coping) and affective distress (fear and depression) in a group of 103 chronic low back pain (CLBP) patients. In the 2nd study, motoric, psychophysiologic and self-report measures of fear are taken from 33 CLBP patients who are exposed to a single and relatively simple movement. Generally, findings demonstrated that the fear of movement/(re)injury is related to gender and compensation status, and more closely to measures of catastrophizing and depression, but in a much lesser degree to pain coping and pain intensity. Furthermore, subjects who report a high degree of fear of movement/(re)injury show more fear and escape/avoidance when exposed to a simple movement. The discussion focuses on the clinical relevance of the construct of fear of movement/(re)injury and research questions that remain to be answered.",
"title": ""
}
] | [
{
"docid": "f031d0db43b5f9d9d3068916ea975d75",
"text": "Difficulties in the social domain and motor anomalies have been widely investigated in Autism Spectrum Disorder (ASD). However, they have been generally considered as independent, and therefore tackled separately. Recent advances in neuroscience have hypothesized that the cortical motor system can play a role not only as a controller of elementary physical features of movement, but also in a complex domain as social cognition. Here, going beyond previous studies on ASD that described difficulties in the motor and in the social domain separately, we focus on the impact of motor mechanisms anomalies on social functioning. We consider behavioral, electrophysiological and neuroimaging findings supporting the idea that motor cognition is a critical \"intermediate phenotype\" for ASD. Motor cognition anomalies in ASD affect the processes of extraction, codification and subsequent translation of \"external\" social information into the motor system. Intriguingly, this alternative \"motor\" approach to the social domain difficulties in ASD may be promising to bridge the gap between recent experimental findings and clinical practice, potentially leading to refined preventive approaches and successful treatments.",
"title": ""
},
{
"docid": "70991373ae71f233b0facd2b5dd1a0d3",
"text": "Information communications technology systems are facing an increasing number of cyber security threats, the majority of which are originated by insiders. As insiders reside behind the enterprise-level security defence mechanisms and often have privileged access to the network, detecting and preventing insider threats is a complex and challenging problem. In fact, many schemes and systems have been proposed to address insider threats from different perspectives, such as intent, type of threat, or available audit data source. This survey attempts to line up these works together with only three most common types of insider namely traitor, masquerader, and unintentional perpetrator, while reviewing the countermeasures from a data analytics perspective. Uniquely, this survey takes into account the early stage threats which may lead to a malicious insider rising up. When direct and indirect threats are put on the same page, all the relevant works can be categorised as host, network, or contextual data-based according to audit data source and each work is reviewed for its capability against insider threats, how the information is extracted from the engaged data sources, and what the decision-making algorithm is. The works are also compared and contrasted. Finally, some issues are raised based on the observations from the reviewed works and new research gaps and challenges identified.",
"title": ""
},
{
"docid": "c630b600a0b03e9e3ede1c0132f80264",
"text": "68 AI MAGAZINE Adaptive graphical user interfaces (GUIs) automatically tailor the presentation of functionality to better fit an individual user’s tasks, usage patterns, and abilities. A familiar example of an adaptive interface is the Windows XP start menu, where a small set of applications from the “All Programs” submenu is replicated in the top level of the “Start” menu for easier access, saving users from navigating through multiple levels of the menu hierarchy (figure 1). The potential of adaptive interfaces to reduce visual search time, cognitive load, and motor movement is appealing, and when the adaptation is successful an adaptive interface can be faster and preferred in comparison to a nonadaptive counterpart (for example, Gajos et al. [2006], Greenberg and Witten [1985]). In practice, however, many challenges exist, and, thus far, evaluation results of adaptive interfaces have been mixed. For an adaptive interface to be successful, the benefits of correct adaptations must outweigh the costs, or usability side effects, of incorrect adaptations. Often, an adaptive mechanism designed to improve one aspect of the interaction, typically motor movement or visual search, inadvertently increases effort along another dimension, such as cognitive or perceptual load. The result is that many adaptive designs that were expected to confer a benefit along one of these dimensions have failed in practice. For example, a menu that tracks how frequently each item is used and adaptively reorders itself so that items appear in order from most to least frequently accessed should improve motor performance, but in reality this design can slow users down and reduce satisfaction because of the constantly changing layout (Mitchell and Schneiderman [1989]; for example, figure 2b). Commonly cited issues with adaptive interfaces include the lack of control the user has over the adaptive process and the difficulty that users may have in predicting what the system’s response will be to a user action (Höök 2000). User evaluation of adaptive GUIs is more complex than eval-",
"title": ""
},
{
"docid": "4facc72eb8270d12d0182c7a7833736f",
"text": "We construct a family of extremely simple bijections that yield Cayley’s famous formula for counting trees. The weight preserving properties of these bijections furnish a number of multivariate generating functions for weighted Cayley trees. Essentially the same idea is used to derive bijective proofs and q-analogues for the number of spanning trees of other graphs, including the complete bipartite and complete tripartite graphs. These bijections also allow the calculation of explicit formulas for the expected number of various statistics on Cayley trees.",
"title": ""
},
{
"docid": "47949e080b4f5643dde02eb1c5c2527f",
"text": "Extracting biomedical entities and their relations from text has important applications on biomedical research. Previous work primarily utilized feature-based pipeline models to process this task. Many efforts need to be made on feature engineering when feature-based models are employed. Moreover, pipeline models may suffer error propagation and are not able to utilize the interactions between subtasks. Therefore, we propose a neural joint model to extract biomedical entities as well as their relations simultaneously, and it can alleviate the problems above. Our model was evaluated on two tasks, i.e., the task of extracting adverse drug events between drug and disease entities, and the task of extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems in these tasks, our model improved the F1 scores of the first task by 5.1% in entity recognition and 8.0% in relation extraction, and that of the second task by 9.2% in relation extraction. The proposed model achieves competitive performances with less work on feature engineering. We demonstrate that the model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate the research on biomedical text mining.",
"title": ""
},
{
"docid": "1c126457ee6b61be69448ee00a64d557",
"text": "Class imbalance is a common problem in the case of real-world object detection and classification tasks. Data of some classes are abundant, making them an overrepresented majority, and data of other classes are scarce, making them an underrepresented minority. This imbalance makes it challenging for a classifier to appropriately learn the discriminating boundaries of the majority and minority classes. In this paper, we propose a cost-sensitive (CoSen) deep neural network, which can automatically learn robust feature representations for both the majority and minority classes. During training, our learning procedure jointly optimizes the class-dependent costs and the neural network parameters. The proposed approach is applicable to both binary and multiclass problems without any modification. Moreover, as opposed to data-level approaches, we do not alter the original data distribution, which results in a lower computational cost during the training process. We report the results of our experiments on six major image classification data sets and show that the proposed approach significantly outperforms the baseline algorithms. Comparisons with popular data sampling techniques and CoSen classifiers demonstrate the superior performance of our proposed method.",
"title": ""
},
{
"docid": "3a852aa880c564a85cc8741ce7427ced",
"text": "INTRODUCTION\nTumeric is a spice that comes from the root Curcuma longa, a member of the ginger family, Zingaberaceae. In Ayurveda (Indian traditional medicine), tumeric has been used for its medicinal properties for various indications and through different routes of administration, including topically, orally, and by inhalation. Curcuminoids are components of tumeric, which include mainly curcumin (diferuloyl methane), demethoxycurcumin, and bisdemethoxycurcmin.\n\n\nOBJECTIVES\nThe goal of this systematic review of the literature was to summarize the literature on the safety and anti-inflammatory activity of curcumin.\n\n\nMETHODS\nA search of the computerized database MEDLINE (1966 to January 2002), a manual search of bibliographies of papers identified through MEDLINE, and an Internet search using multiple search engines for references on this topic was conducted. The PDR for Herbal Medicines, and four textbooks on herbal medicine and their bibliographies were also searched.\n\n\nRESULTS\nA large number of studies on curcumin were identified. These included studies on the antioxidant, anti-inflammatory, antiviral, and antifungal properties of curcuminoids. Studies on the toxicity and anti-inflammatory properties of curcumin have included in vitro, animal, and human studies. A phase 1 human trial with 25 subjects using up to 8000 mg of curcumin per day for 3 months found no toxicity from curcumin. Five other human trials using 1125-2500 mg of curcumin per day have also found it to be safe. These human studies have found some evidence of anti-inflammatory activity of curcumin. The laboratory studies have identified a number of different molecules involved in inflammation that are inhibited by curcumin including phospholipase, lipooxygenase, cyclooxygenase 2, leukotrienes, thromboxane, prostaglandins, nitric oxide, collagenase, elastase, hyaluronidase, monocyte chemoattractant protein-1 (MCP-1), interferon-inducible protein, tumor necrosis factor (TNF), and interleukin-12 (IL-12).\n\n\nCONCLUSIONS\nCurcumin has been demonstrated to be safe in six human trials and has demonstrated anti-inflammatory activity. It may exert its anti-inflammatory activity by inhibition of a number of different molecules that play a role in inflammation.",
"title": ""
},
{
"docid": "4272b4a73ecd9d2b60e0c60de0469f17",
"text": "Suggesting that empirical work in the field of reading has advanced sufficiently to allow substantial agreed-upon results and conclusions, this literature review cuts through the detail of partially convergent, sometimes discrepant research findings to provide an integrated picture of how reading develops and how reading instruction should proceed. The focus of the review is prevention. Sketched is a picture of the conditions under which reading is most likely to develop easily--conditions that include stimulating preschool environments, excellent reading instruction, and the absence of any of a wide array of risk factors. It also provides recommendations for practice as well as recommendations for further research. After a preface and executive summary, chapters are (1) Introduction; (2) The Process of Learning to Read; (3) Who Has Reading Difficulties; (4) Predic:ors of Success and Failure in Reading; (5) Preventing Reading Difficulties before Kindergarten; (6) Instructional Strategies for Kindergarten and the Primary Grades; (7) Organizational Strategies for Kindergarten and the Primary Grades; (8) Helping Children with Reading Difficulties in Grades 1 to 3; (9) The Agents of Change; and (10) Recommendations for Practice and Research. Contains biographical sketches of the committee members and an index. Contains approximately 800 references.",
"title": ""
},
{
"docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d",
"text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.",
"title": ""
},
{
"docid": "d994b23ea551f23215232c0771e7d6b3",
"text": "It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice1. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).",
"title": ""
},
{
"docid": "9961f44d4ab7d0a344811186c9234f2c",
"text": "This paper discusses the trust related issues and arguments (evidence) Internet stores need to provide in order to increase consumer trust. Based on a model of trust from academic literature, in addition to a model of the customer service life cycle, the paper develops a framework that identifies key trust-related issues and organizes them into four categories: personal information, product quality and price, customer service, and store presence. It is further validated by comparing the issues it raises to issues identified in a review of academic studies, and to issues of concern identified in two consumer surveys. The framework is also applied to ten well-known web sites to demonstrate its applicability. The proposed framework will benefit both practitioners and researchers by identifying important issues regarding trust, which need to be accounted for in Internet stores. For practitioners, it provides a guide to the issues Internet stores need to address in their use of arguments. For researchers, it can be used as a foundation for future empirical studies investigating the effects of trust-related arguments on consumers’ trust in Internet stores.",
"title": ""
},
{
"docid": "9373cde066d8d898674a519206f1c38f",
"text": "This work proposes a novel deep network architecture to solve the camera ego-motion estimation problem. A motion estimation network generally learns features similar to optical flow (OF) fields starting from sequences of images. This OF can be described by a lower dimensional latent space. Previous research has shown how to find linear approximations of this space. We propose to use an autoencoder network to find a nonlinear representation of the OF manifold. In addition, we propose to learn the latent space jointly with the estimation task, so that the learned OF features become a more robust description of the OF input. We call this novel architecture latent space visual odometry (LS-VO). The experiments show that LS-VO achieves a considerable increase in performances with respect to baselines, while the number of parameters of the estimation network only slightly increases.",
"title": ""
},
{
"docid": "f6ad0d01cb66c1260c1074c4f35808c6",
"text": "BACKGROUND\nUnilateral spatial neglect causes difficulty attending to one side of space. Various rehabilitation interventions have been used but evidence of their benefit is lacking.\n\n\nOBJECTIVES\nTo assess whether cognitive rehabilitation improves functional independence, neglect (as measured using standardised assessments), destination on discharge, falls, balance, depression/anxiety and quality of life in stroke patients with neglect measured immediately post-intervention and at longer-term follow-up; and to determine which types of interventions are effective and whether cognitive rehabilitation is more effective than standard care or an attention control.\n\n\nSEARCH METHODS\nWe searched the Cochrane Stroke Group Trials Register (last searched June 2012), MEDLINE (1966 to June 2011), EMBASE (1980 to June 2011), CINAHL (1983 to June 2011), PsycINFO (1974 to June 2011), UK National Research Register (June 2011). We handsearched relevant journals (up to 1998), screened reference lists, and tracked citations using SCISEARCH.\n\n\nSELECTION CRITERIA\nWe included randomised controlled trials (RCTs) of cognitive rehabilitation specifically aimed at spatial neglect. We excluded studies of general stroke rehabilitation and studies with mixed participant groups, unless more than 75% of their sample were stroke patients or separate stroke data were available.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently selected studies, extracted data, and assessed study quality. For subgroup analyses, review authors independently categorised the approach underlying the cognitive intervention as either 'top-down' (interventions that encourage awareness of the disability and potential compensatory strategies) or 'bottom-up' (interventions directed at the impairment but not requiring awareness or behavioural change, e.g. wearing prisms or patches).\n\n\nMAIN RESULTS\nWe included 23 RCTs with 628 participants (adding 11 new RCTs involving 322 new participants for this update). Only 11 studies were assessed to have adequate allocation concealment, and only four studies to have a low risk of bias in all categories assessed. Most studies measured outcomes using standardised neglect assessments: 15 studies measured effect on activities of daily living (ADL) immediately after the end of the intervention period, but only six reported persisting effects on ADL. One study (30 participants) reported discharge destination and one study (eight participants) reported the number of falls.Eighteen of the 23 included RCTs compared cognitive rehabilitation with any control intervention (placebo, attention or no treatment). Meta-analyses demonstrated no statistically significant effect of cognitive rehabilitation, compared with control, for persisting effects on either ADL (five studies, 143 participants) or standardised neglect assessments (eight studies, 172 participants), or for immediate effects on ADL (10 studies, 343 participants). In contrast, we found a statistically significant effect in favour of cognitive rehabilitation compared with control, for immediate effects on standardised neglect assessments (16 studies, 437 participants, standardised mean difference (SMD) 0.35, 95% confidence interval (CI) 0.09 to 0.62). However, sensitivity analyses including only studies of high methodological quality removed evidence of a significant effect of cognitive rehabilitation.Additionally, five of the 23 included RCTs compared one cognitive rehabilitation intervention with another. 
These included three studies comparing a visual scanning intervention with another cognitive rehabilitation intervention, and two studies (three comparison groups) comparing a visual scanning intervention plus another cognitive rehabilitation intervention with a visual scanning intervention alone. Only two small studies reported a measure of functional disability and there was considerable heterogeneity within these subgroups (I² > 40%) when we pooled standardised neglect assessment data, limiting the ability to draw generalised conclusions.Subgroup analyses exploring the effect of having an attention control demonstrated some evidence of a statistically significant difference between those comparing rehabilitation with attention control and those with another control or no treatment group, for immediate effects on standardised neglect assessments (test for subgroup differences, P = 0.04).\n\n\nAUTHORS' CONCLUSIONS\nThe effectiveness of cognitive rehabilitation interventions for reducing the disabling effects of neglect and increasing independence remains unproven. As a consequence, no rehabilitation approach can be supported or refuted based on current evidence from RCTs. However, there is some very limited evidence that cognitive rehabilitation may have an immediate beneficial effect on tests of neglect. This emerging evidence justifies further clinical trials of cognitive rehabilitation for neglect. However, future studies need to have appropriate high quality methodological design and reporting, to examine persisting effects of treatment and to include an attention control comparator.",
"title": ""
},
{
"docid": "b883116f741733b3bbd3933fdc1b4542",
"text": "To address concerns of TREC-style relevance judgments, we explore two improvements. The first one seeks to make relevance judgments contextual, collecting in situ feedback of users in an interactive search session and embracing usefulness as the primary judgment criterion. The second one collects multidimensional assessments to complement relevance or usefulness judgments, with four distinct alternative aspects examined in this paper - novelty, understandability, reliability, and effort.\n We evaluate different types of judgments by correlating them with six user experience measures collected from a lab user study. Results show that switching from TREC-style relevance criteria to usefulness is fruitful, but in situ judgments do not exhibit clear benefits over the judgments collected without context. In contrast, combining relevance or usefulness with the four alternative judgments consistently improves the correlation with user experience measures, suggesting future IR systems should adopt multi-aspect search result judgments in development and evaluation.\n We further examine implicit feedback techniques for predicting these judgments. We find that click dwell time, a popular indicator of search result quality, is able to predict some but not all dimensions of the judgments. We enrich the current implicit feedback methods using post-click user interaction in a search session and achieve better prediction for all six dimensions of judgments.",
"title": ""
},
{
"docid": "6702bfca88f86e0c35a8b6195d0c971c",
"text": "A hierarchical scheme for clustering data is presented which applies to spaces with a high number of dimensions ( 3 D N > ). The data set is first reduced to a smaller set of partitions (multi-dimensional bins). Multiple clustering techniques are used, including spectral clustering; however, new techniques are also introduced based on the path length between partitions that are connected to one another. A Line-of-Sight algorithm is also developed for clustering. A test bank of 12 data sets with varying properties is used to expose the strengths and weaknesses of each technique. Finally, a robust clustering technique is discussed based on reaching a consensus among the multiple approaches, overcoming the weaknesses found individually.",
"title": ""
},
{
"docid": "edfc9cb39fe45a43aed78379bafa2dfc",
"text": "We propose a novel decomposition framework for the distributed optimization of general nonconvex sum-utility functions arising naturally in the system design of wireless multi-user interfering systems. Our main contributions are i) the development of the first class of (inexact) Jacobi best-response algorithms with provable convergence, where all the users simultaneously and iteratively solve a suitably convexified version of the original sum-utility optimization problem; ii) the derivation of a general dynamic pricing mechanism that provides a unified view of existing pricing schemes that are based, instead, on heuristics; and iii) a framework that can be easily particularized to well-known applications, giving rise to very efficient practical (Jacobi or Gauss-Seidel) algorithms that outperform existing ad hoc methods proposed for very specific problems. Interestingly, our framework contains as special cases well-known gradient algorithms for nonconvex sum-utility problems, and many block-coordinate descent schemes for convex functions.",
"title": ""
},
{
"docid": "b559579485358f7958eea8907c8b4b09",
"text": "Word embedding models learn a distributed vectorial representation for words, which can be used as the basis for (deep) learning models to solve a variety of natural language processing tasks. One of the main disadvantages of current word embedding models is that they learn a single representation for each word in a metric space, as a result of which they cannot appropriately model polysemous words. In this work, we develop a new word embedding model that can accurately represent such words by automatically learning multiple representations for each word, whilst remaining computationally efficient. Without any supervision, our model learns multiple, complementary embeddings that all capture different semantic structure. We demonstrate the potential merits of our model by training it on large text corpora, and evaluating it on word similarity tasks. Our proposed embedding model is competitive with the state of the art and can easily scale to large corpora due to its computational simplicity.",
"title": ""
},
{
"docid": "8700e170ba9c3e6c35008e2ccff48ef9",
"text": "Recently, Uber has emerged as a leader in the \"sharing economy\". Uber is a \"ride sharing\" service that matches willing drivers with customers looking for rides. However, unlike other open marketplaces (e.g., AirBnB), Uber is a black-box: they do not provide data about supply or demand, and prices are set dynamically by an opaque \"surge pricing\" algorithm. The lack of transparency has led to concerns about whether Uber artificially manipulate prices, and whether dynamic prices are fair to customers and drivers. In order to understand the impact of surge pricing on passengers and drivers, we present the first in-depth investigation of Uber. We gathered four weeks of data from Uber by emulating 43 copies of the Uber smartphone app and distributing them throughout downtown San Francisco (SF) and midtown Manhattan. Using our dataset, we are able to characterize the dynamics of Uber in SF and Manhattan, as well as identify key implementation details of Uber's surge price algorithm. Our observations about Uber's surge price algorithm raise important questions about the fairness and transparency of this system.",
"title": ""
},
{
"docid": "1272563e64ca327aba1be96f2e045c30",
"text": "Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient.",
"title": ""
},
{
"docid": "750e7bd1b23da324a0a51d0b589acbfb",
"text": "Various powerful people detection methods exist. Surprisingly, most approaches rely on static image features only despite the obvious potential of motion information for people detection. This paper systematically evaluates different features and classifiers in a sliding-window framework. First, our experiments indicate that incorporating motion information improves detection performance significantly. Second, the combination of multiple and complementary feature types can also help improve performance. And third, the choice of the classifier-feature combination and several implementation details are crucial to reach best performance. In contrast to many recent papers experimental results are reported for four different datasets rather than using a single one. Three of them are taken from the literature allowing for direct comparison. The fourth dataset is newly recorded using an onboard camera driving through urban environment. Consequently this dataset is more realistic and more challenging than any currently available dataset.",
"title": ""
}
] | scidocsrr |
61b9619b02f8c7f3c0d2b06f4e6b6413 | Linux kernel vulnerabilities: state-of-the-art defenses and open problems | [
{
"docid": "3724a800d0c802203835ef9f68a87836",
"text": "This paper presents SUD, a system for running existing Linux device drivers as untrusted user-space processes. Even if the device driver is controlled by a malicious adversary, it cannot compromise the rest of the system. One significant challenge of fully isolating a driver is to confine the actions of its hardware device. SUD relies on IOMMU hardware, PCI express bridges, and messagesignaled interrupts to confine hardware devices. SUD runs unmodified Linux device drivers, by emulating a Linux kernel environment in user-space. A prototype of SUD runs drivers for Gigabit Ethernet, 802.11 wireless, sound cards, USB host controllers, and USB devices, and it is easy to add a new device class. SUD achieves the same performance as an in-kernel driver on networking benchmarks, and can saturate a Gigabit Ethernet link. SUD incurs a CPU overhead comparable to existing runtime driver isolation techniques, while providing much stronger isolation guarantees for untrusted drivers. Finally, SUD requires minimal changes to the kernel—just two kernel modules comprising 4,000 lines of code—which may at last allow the adoption of these ideas in practice.",
"title": ""
},
{
"docid": "68bab5e0579a0cdbaf232850e0587e11",
"text": "This article presents a new mechanism that enables applications to run correctly when device drivers fail. Because device drivers are the principal failing component in most systems, reducing driver-induced failures greatly improves overall reliability. Earlier work has shown that an operating system can survive driver failures [Swift et al. 2005], but the applications that depend on them cannot. Thus, while operating system reliability was greatly improved, application reliability generally was not.To remedy this situation, we introduce a new operating system mechanism called a shadow driver. A shadow driver monitors device drivers and transparently recovers from driver failures. Moreover, it assumes the role of the failed driver during recovery. In this way, applications using the failed driver, as well as the kernel itself, continue to function as expected.We implemented shadow drivers for the Linux operating system and tested them on over a dozen device drivers. Our results show that applications and the OS can indeed survive the failure of a variety of device drivers. Moreover, shadow drivers impose minimal performance overhead. Lastly, they can be introduced with only modest changes to the OS kernel and with no changes at all to existing device drivers.",
"title": ""
}
] | [
{
"docid": "68f10e252faf7171cac8d5ba914fcba9",
"text": "Most languages have no formal writing system and at best a limited written record. However, textual data is critical to natural language processing and particularly important for the training of language models that would facilitate speech recognition of such languages. Bilingual phonetic dictionaries are often available in some form, since lexicon creation is a fundamental task of documentary linguistics. We investigate the use of such dictionaries to improve language models when textual training data is limited to as few as 1k sentences. The method involves learning cross-lingual word embeddings as a pretraining step in the training of monolingual language models. Results across a number of languages show that language models are improved by such pre-training.",
"title": ""
},
{
"docid": "45b17b6521e84c8536ad852969b21c1d",
"text": "Previous research on online media popularity prediction concluded that the rise in popularity of online videos maintains a conventional logarithmic distribution. However, recent studies have shown that a significant portion of online videos exhibit bursty/sudden rise in popularity, which cannot be accounted for by video domain features alone. In this paper, we propose a novel transfer learning framework that utilizes knowledge from social streams (e.g., Twitter) to grasp sudden popularity bursts in online content. We develop a transfer learning algorithm that can learn topics from social streams allowing us to model the social prominence of video content and improve popularity predictions in the video domain. Our transfer learning framework has the ability to scale with incoming stream of tweets, harnessing physical world event information in real-time. Using data comprising of 10.2 million tweets and 3.5 million YouTube videos, we show that social prominence of the video topic (context) is responsible for the sudden rise in its popularity where social trends have a ripple effect as they spread from the Twitter domain to the video domain. We envision that our cross-domain popularity prediction model will be substantially useful for various media applications that could not be previously solved by traditional multimedia techniques alone.",
"title": ""
},
{
"docid": "28b7905d804cef8e54dbdf4f63f6495d",
"text": "The recently introduced Galois/Counter Mode (GCM) of operation for block ciphers provides both encryption and message authentication, using universal hashing based on multiplication in a binary finite field. We analyze its security and performance, and show that it is the most efficient mode of operation for high speed packet networks, by using a realistic model of a network crypto module and empirical data from studies of Internet traffic in conjunction with software experiments and hardware designs. GCM has several useful features: it can accept IVs of arbitrary length, can act as a stand-alone message authentication code (MAC), and can be used as an incremental MAC. We show that GCM is secure in the standard model of concrete security, even when these features are used. We also consider several of its important system-security aspects.",
"title": ""
},
{
"docid": "a83b417c2be604427eacf33b1db91468",
"text": "We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. Comparable observations have rarely been reported and possibly represent genetic heterogeneity.",
"title": ""
},
{
"docid": "71759cdcf18dabecf1d002727eb9d8b8",
"text": "A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears. Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.",
"title": ""
},
{
"docid": "0cd5813a069c8955871784cd3e63aa83",
"text": "Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during the perception of self-motion. Unlike the nonlinear (superadditive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through subadditive linear summation of their inputs, consistent with recent computational theories. Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.",
"title": ""
},
{
"docid": "4dd0d34f6b67edee60f2e6fae5bd8dd9",
"text": "Virtual learning environments facilitate online learning, generating and storing large amounts of data during the learning/teaching process. This stored data enables extraction of valuable information using data mining. In this article, we present a systematic mapping, containing 42 papers, where data mining techniques are applied to predict students performance using Moodle data. Results show that decision trees are the most used classification approach. Furthermore, students interactions in forums are the main Moodle attribute analyzed by researchers.",
"title": ""
},
{
"docid": "03f98b18392bd178ea68ce19b13589fa",
"text": "Neural network techniques are widely used in network embedding, boosting the result of node classification, link prediction, visualization and other tasks in both aspects of efficiency and quality. All the state of art algorithms put effort on the neighborhood information and try to make full use of it. However, it is hard to recognize core periphery structures simply based on neighborhood. In this paper, we first discuss the influence brought by random-walk based sampling strategies to the embedding results. Theoretical and experimental evidences show that random-walk based sampling strategies fail to fully capture structural equivalence. We present a new method, SNS, that performs network embeddings using structural information (namely graphlets) to enhance its quality. SNS effectively utilizes both neighbor information and local-subgraphs similarity to learn node embeddings. This is the first framework that combines these two aspects as far as we know, positively merging two important areas in graph mining and machine learning. Moreover, we investigate what kinds of local-subgraph features matter the most on the node classification task, which enables us to further improve the embedding quality. Experiments show that our algorithm outperforms other unsupervised and semi-supervised neural network embedding algorithms on several real-world datasets.",
"title": ""
},
{
"docid": "4e46fb5c1abb3379519b04a84183b055",
"text": "Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.",
"title": ""
},
{
"docid": "2f17160c9f01aa779b1745a57e34e1aa",
"text": "OBJECTIVE\nTo report an ataxic variant of Alzheimer disease expressing a novel molecular phenotype.\n\n\nDESIGN\nDescription of a novel phenotype associated with a presenilin 1 mutation.\n\n\nSETTING\nThe subject was an outpatient who was diagnosed at the local referral center.\n\n\nPATIENT\nA 28-year-old man presented with psychiatric symptoms and cerebellar signs, followed by cognitive dysfunction. Severe beta-amyloid (Abeta) deposition was accompanied by neurofibrillary tangles and cell loss in the cerebral cortex and by Purkinje cell dendrite loss in the cerebellum. A presenilin 1 gene (PSEN1) S170F mutation was detected.\n\n\nMAIN OUTCOME MEASURES\nWe analyzed the processing of Abeta precursor protein in vitro as well as the Abeta species in brain tissue.\n\n\nRESULTS\nThe PSEN1 S170F mutation induced a 3-fold increase of both secreted Abeta(42) and Abeta(40) species and a 60% increase of secreted Abeta precursor protein in transfected cells. Soluble and insoluble fractions isolated from brain tissue showed a prevalence of N-terminally truncated Abeta species ending at both residues 40 and 42.\n\n\nCONCLUSION\nThese findings define a new Alzheimer disease molecular phenotype and support the concept that the phenotypic variability associated with PSEN1 mutations may be dictated by the Abeta aggregates' composition.",
"title": ""
},
{
"docid": "f5df06ebd22d4eac95287b38a5c3cc6b",
"text": "We discuss the use of a double exponentially tapered slot antenna (DETSA) fabricated on flexible liquid crystal polymer (LCP) as a candidate for ultrawideband (UWB) communications systems. The features of the antenna and the effect of the antenna on a transmitted pulse are investigated. Return loss and E and H plane radiation pattern measurements are presented in several frequencies covering the whole ultra wide band. The return loss remains below -10 dB and the shape of the radiation pattern remains fairly constant in the whole UWB range (3.1 to 10.6 GHz). The main lobe characteristic of the radiation pattern remains stable even when the antenna is significantly conformed. The major effect of the conformation is an increase in the cross polarization component amplitude. The system: transmitter DETSA-channel receiver DETSA is measured in frequency domain and shows that the antenna adds very little distortion on a transmitted pulse. The distortion remains small even when both transmitter and receiver antennas are folded, although it increases slightly.",
"title": ""
},
{
"docid": "27bcbde431c340db7544b58faa597fb7",
"text": "Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.",
"title": ""
},
{
"docid": "a583bbf2deac0bf99e2790c47598cddd",
"text": "We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel without interference of the global interpreter lock. As part of this project, we introduce BatchPPO, an efficient implementation of the proximal policy optimization algorithm. By open sourcing TensorFlow Agents, we hope to provide a flexible starting point for future projects that accelerates future research in the field.",
"title": ""
},
{
"docid": "6e63767a96f0d57ecfe98f55c89ae778",
"text": "We investigate the use of Deep Q-Learning to control a simulated car via reinforcement learning. We start by implementing the approach of [5] ourselves, and then experimenting with various possible alterations to improve performance on our selected task. In particular, we experiment with various reward functions to induce specific driving behavior, double Q-learning, gradient update rules, and other hyperparameters. We find we are successfully able to train an agent to control the simulated car in JavaScript Racer [3] in some respects. Our agent successfully learned the turning operation, progressively gaining the ability to navigate larger sections of the simulated raceway without crashing. In obstacle avoidance, however, our agent faced challenges which we suspect are due to insufficient training time.",
"title": ""
},
{
"docid": "c71d27d4e4e9c85e3f5016fa36d20a16",
"text": "We present, GEM, the first heterogeneous graph neural network approach for detecting malicious accounts at Alipay, one of the world's leading mobile cashless payment platform. Our approach, inspired from a connected subgraph approach, adaptively learns discriminative embeddings from heterogeneous account-device graphs based on two fundamental weaknesses of attackers, i.e. device aggregation and activity aggregation. For the heterogeneous graph consists of various types of nodes, we propose an attention mechanism to learn the importance of different types of nodes, while using the sum operator for modeling the aggregation patterns of nodes in each type. Experiments show that our approaches consistently perform promising results compared with competitive methods over time.",
"title": ""
},
{
"docid": "fa99f24d38858b5951c7af587194f4e3",
"text": "Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.",
"title": ""
},
{
"docid": "951d3f81129ecafa2d271d4398d9b3e6",
"text": "The content-based image retrieval methods are developed to help people find what they desire based on preferred images instead of linguistic information. This paper focuses on capturing the image features representing details of the collar designs, which is important for people to choose clothing. The quality of the feature extraction methods is important for the queries. This paper presents several new methods for the collar-design feature extraction. A prototype of clothing image retrieval system based on relevance feedback approach and optimum-path forest algorithm is also developed to improve the query results and allows users to find clothing image of more preferred design. A series of experiments are conducted to test the qualities of the feature extraction methods and validate the effectiveness and efficiency of the RF-OPF prototype from multiple aspects. The evaluation scores of initial query results are used to test the qualities of the feature extraction methods. The average scores of all RF steps, the average numbers of RF iterations taken before achieving desired results and the score transition of RF iterations are used to validate the effectiveness and efficiency of the proposed RF-OPF prototype.",
"title": ""
},
{
"docid": "37b60f30aba47a0c2bb3d31c848ee4bc",
"text": "This research analyzed the perception of Makassar’s teenagers toward Korean drama and music and their influences to them. Interviews and digital recorder were provided as instruments of the research to ten respondents who are members of Makassar Korean Lover Community. Then, in analyzing data the researchers used descriptive qualitative method that aimed to get deep information about Korean wave in Makassar. The Results of the study found that Makassar’s teenagers put enormous interest in Korean culture especially Korean drama and music. However, most respondents also realize that the presence of Korean culture has a great negative impact to them and their environments. Korean culture itself gives effect in several aspects such as the influence on behavior, Influence on the taste and Influence on the environment as well.",
"title": ""
},
{
"docid": "8b548e2c1922e6e105ab40b60fd7433c",
"text": "Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used to learn in a wide variety of applications, but their heavy computation demand has considerably limited their practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate high computational demand of an artificial neural network (ANN) which is restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating $1024\\times 1024$ network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301-billion connection-updates-per-second and about 193 times higher performance than a software solution running on general purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).",
"title": ""
},
{
"docid": "56e406924a967700fba3fe554b9a8484",
"text": "Wearable orthoses can function both as assistive devices, which allow the user to live independently, and as rehabilitation devices, which allow the user to regain use of an impaired limb. To be fully wearable, such devices must have intuitive controls, and to improve quality of life, the device should enable the user to perform Activities of Daily Living. In this context, we explore the feasibility of using electromyography (EMG) signals to control a wearable exotendon device to enable pick and place tasks. We use an easy to don, commodity forearm EMG band with 8 sensors to create an EMG pattern classification control for an exotendon device. With this control, we are able to detect a user's intent to open, and can thus enable extension and pick and place tasks. In experiments with stroke survivors, we explore the accuracy of this control in both non-functional and functional tasks. Our results support the feasibility of developing wearable devices with intuitive controls which provide a functional context for rehabilitation.",
"title": ""
}
] | scidocsrr |
a5fac85a85177ff57a7cc5e8506bf308 | Causal Discovery from Subsampled Time Series Data by Constraint Optimization | [
{
"docid": "17deb6c21da616a73a6daedf971765c3",
"text": "Recent approaches to causal discovery based on Boolean satisfiability solvers have opened new opportunities to consider search spaces for causal models with both feedback cycles and unmeasured confounders. However, the available methods have so far not been able to provide a principled account of how to handle conflicting constraints that arise from statistical variability. Here we present a new approach that preserves the versatility of Boolean constraint solving and attains a high accuracy despite the presence of statistical errors. We develop a new logical encoding of (in)dependence constraints that is both well suited for the domain and allows for faster solving. We represent this encoding in Answer Set Programming (ASP), and apply a state-of-theart ASP solver for the optimization task. Based on different theoretical motivations, we explore a variety of methods to handle statistical errors. Our approach currently scales to cyclic latent variable models with up to seven observed variables and outperforms the available constraintbased methods in accuracy.",
"title": ""
}
] | [
{
"docid": "e78e70d347fb76a79755442cabe1fbe0",
"text": "Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, unimodal priors — such as the multivariate Gaussian distribution — yet many realworld data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling.",
"title": ""
},
{
"docid": "c2558388fb20454fa6f4653b1e4ab676",
"text": "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.",
"title": ""
},
{
"docid": "18f739a605222415afdea4f725201fba",
"text": "I discuss open theoretical questions pertaining to the modified dynamics (MOND)–a proposed alternative to dark matter, which posits a breakdown of Newtonian dynamics in the limit of small accelerations. In particular, I point the reasons for thinking that MOND is an effective theory–perhaps, despite appearance, not even in conflict with GR. I then contrast the two interpretations of MOND as modified gravity and as modified inertia. I describe two mechanical models that are described by potential theories similar to (non-relativistic) MOND: a potential-flow model, and a membrane model. These might shed some light on a possible origin of MOND. The possible involvement of vacuum effects is also speculated on.",
"title": ""
},
{
"docid": "c197e1ab49287fc571f2a99a9501bf84",
"text": "X-rays are commonly performed imaging tests that use small amounts of radiation to produce pictures of the organs, tissues, and bones of the body. X-rays of the chest are used to detect abnormalities or diseases of the airways, blood vessels, bones, heart, and lungs. In this work we present a stochastic attention-based model that is capable of learning what regions within a chest X-ray scan should be visually explored in order to conclude that the scan contains a specific radiological abnormality. The proposed model is a recurrent neural network (RNN) that learns to sequentially sample the entire X-ray and focus only on informative areas that are likely to contain the relevant information. We report on experiments carried out with more than 100, 000 X-rays containing enlarged hearts or medical devices. The model has been trained using reinforcement learning methods to learn task-specific policies.",
"title": ""
},
{
"docid": "ed0d1e110347313285a6b478ff8875e3",
"text": "Data mining is an area of computer science with a huge prospective, which is the process of discovering or extracting information from large database or datasets. There are many different areas under Data Mining and one of them is Classification or the supervised learning. Classification also can be implemented through a number of different approaches or algorithms. We have conducted the comparison between three algorithms with help of WEKA (The Waikato Environment for Knowledge Analysis), which is an open source software. It contains different type's data mining algorithms. This paper explains discussion of Decision tree, Bayesian Network and K-Nearest Neighbor algorithms. Here, for comparing the result, we have used as parameters the correctly classified instances, incorrectly classified instances, time taken, kappa statistic, relative absolute error, and root relative squared error.",
"title": ""
},
{
"docid": "45c04c80a5e4c852c4e84ba66bd420dd",
"text": "This paper addresses empirically and theoretically a question derived from the chunking theory of memory (Chase & Simon, 1973a, 1973b): To what extent is skilled chess memory limited by the size of short-term memory (about seven chunks)? This question is addressed first with an experiment where subjects, ranking from class A players to grandmasters, are asked to recall up to five positions presented during 5 s each. Results show a decline of percentage of recall with additional boards, but also show that expert players recall more pieces than is predicted by the chunking theory in its original form. A second experiment shows that longer latencies between the presentation of boards facilitate recall. In a third experiment, a Chessmaster gradually increases the number of boards he can reproduce with higher than 70% average accuracy to nine, replacing as many as 160 pieces correctly. To account for the results of these experiments, a revision of the Chase-Simon theory is proposed. It is suggested that chess players, like experts in other recall tasks, use long-term memory retrieval structures (Chase & Ericsson, 1982) or templates in addition to chunks in short-term memory to store information rapidly.",
"title": ""
},
{
"docid": "de70b208289bad1bc410bcb7a76e56df",
"text": "Instant Messaging chat sessions are realtime text-based conversations which can be analyzed using dialogue-act models. We describe a statistical approach for modelling and detecting dialogue acts in Instant Messaging dialogue. This involved the collection of a small set of task-based dialogues and annotating them with a revised tag set. We then dealt with segmentation and synchronisation issues which do not arise in spoken dialogue. The model we developed combines naive Bayes and dialogue-act n-grams to obtain better than 80% accuracy in our tagging experiment.",
"title": ""
},
{
"docid": "530906b8827394b2dde40ae98d050b7b",
"text": "The aim of transfer learning is to improve prediction accuracy on a target task by exploiting the training examples for tasks that are related to the target one. Transfer learning has received more attention in recent years, because this technique is considered to be helpful in reducing the cost of labeling. In this paper, we propose a very simple approach to transfer learning: TrBagg, which is the extension of bagging. TrBagg is composed of two stages: Many weak classifiers are first generated as in standard bagging, and these classifiers are then filtered based on their usefulness for the target task. This simplicity makes it easy to work reasonably well without severe tuning of learning parameters. Further, our algorithm equips an algorithmic scheme to avoid negative transfer. We applied TrBagg to personalized tag prediction tasks for social bookmarks Our approach has several convenient characteristics for this task such as adaptation to multiple tasks with low computational cost.",
"title": ""
},
{
"docid": "06e50887ddec8b0e858173499ce2ee11",
"text": "Over the last few years, we've seen a plethora of Internet of Things (IoT) solutions, products, and services make their way into the industry's marketplace. All such solutions will capture large amounts of data pertaining to the environment as well as their users. The IoT's objective is to learn more and better serve system users. Some IoT solutions might store data locally on devices (\"things\"), whereas others might store it in the cloud. The real value of collecting data comes through data processing and aggregation on a large scale, where new knowledge can be extracted. However, such procedures can lead to user privacy issues. This article discusses some of the main challenges of privacy in the IoT as well as opportunities for research and innovation. The authors also introduce some of the ongoing research efforts that address IoT privacy issues.",
"title": ""
},
{
"docid": "b42b17131236abc1ee3066905025aa8c",
"text": "The planet Mars, while cold and arid today, once possessed a warm and wet climate, as evidenced by extensive fluvial features observable on its surface. It is believed that the warm climate of the primitive Mars was created by a strong greenhouse effect caused by a thick CO2 atmosphere. Mars lost its warm climate when most of the available volatile CO2 was fixed into the form of carbonate rock due to the action of cycling water. It is believed, however, that sufficient CO2 to form a 300 to 600 mb atmosphere may still exist in volatile form, either adsorbed into the regolith or frozen out at the south pole. This CO2 may be released by planetary warming, and as the CO2 atmosphere thickens, positive feedback is produced which can accelerate the warming trend. Thus it is conceivable, that by taking advantage of the positive feedback inherent in Mars' atmosphere/regolith CO2 system, that engineering efforts can produce drastic changes in climate and pressure on a planetary scale. In this paper we propose a mathematical model of the Martian CO2 system, and use it to produce analysis which clarifies the potential of positive feedback to accelerate planetary engineering efforts. It is shown that by taking advantage of the feedback, the requirements for planetary engineering can be reduced by about 2 orders of magnitude relative to previous estimates. We examine the potential of various schemes for producing the initial warming to drive the process, including the stationing of orbiting mirrors, the importation of natural volatiles with high greenhouse capacity from the outer solar system, and the production of artificial halocarbon greenhouse gases on the Martian surface through in-situ industry. If the orbital mirror scheme is adopted, mirrors with dimension on the order or 100 km radius are required to vaporize the CO2 in the south polar cap. If manufactured of solar sail like material, such mirrors would have a mass on the order of 200,000 tonnes. If manufactured in space out of asteroidal or Martian moon material, about 120 MWe-years of energy would be needed to produce the required aluminum. This amount of power can be provided by near-term multimegawatt nuclear power units, such as the 5 MWe modules now under consideration for NEP spacecraft. Orbital transfer of very massive bodies from the outer solar system can be accomplished using nuclear thermal rocket engines using the asteroid's volatile material as propellant. Using major planets for gravity assists, the rocket ∆V required to move an outer solar system asteroid onto a collision trajectory with Mars can be as little as 300 m/s. If the asteroid is made of NH3, specific impulses of about 400 s can be attained, and as little as 10% of the asteroid will be required for propellant. Four 5000 MWt NTR engines would require a 10 year burn time to push a 10 billion tonne asteroid through a ∆V of 300 m/s. About 4 such objects would be sufficient to greenhouse Mars. Greenhousing Mars via the manufacture of halocarbon gases on the planet's surface may well be the most practical option. Total surface power requirements to drive planetary warming using this method are calculated and found to be on the order of 1000 MWe, and the required times scale for climate and atmosphere modification is on the order of 50 years. It is concluded that a drastic modification of Martian conditions can be achieved using 21st century technology. The Mars so produced will closely resemble the conditions existing on the primitive Mars. 
Humans operating on the surface of such a Mars would require breathing gear, but pressure suits would be unnecessary. With outside atmospheric pressures raised, it will be possible to create large dwelling areas by means of very large inflatable structures. Average temperatures could be above the freezing point of water for significant regions during portions of the year, enabling the growth of plant life in the open. The spread of plants could produce enough oxygen to make Mars habitable for animals in several millennia. More rapid oxygenation would require engineering efforts supported by multi-terrawatt power sources. It is speculated that the desire to speed the terraforming of Mars will be a driver for developing such technologies, which in turn will define a leap in human power over nature as dramatic as that which accompanied the creation of post-Renaissance industrial civilization.",
"title": ""
},
{
"docid": "85908a576c13755e792d52d02947f8b3",
"text": "Quick Response Code has been widely used in the automatic identification fields. In order to adapting various sizes, a little dirty or damaged, and various lighting conditions of bar code image, this paper proposes a novel implementation of real-time Quick Response Code recognition using mobile, which is an efficient technology used for data transferring. An image processing system based on mobile is described to be able to binarize, locate, segment, and decode the QR Code. Our experimental results indicate that these algorithms are robust to real world scene image.",
"title": ""
},
{
"docid": "e9474d646b9da5e611475f4cdfdfc30e",
"text": "Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals.",
"title": ""
},
{
"docid": "0fcd04f5dccf595d2c08cff23168ee5e",
"text": "PubChem (http://pubchem.ncbi.nlm.nih.gov) is a public repository for biological properties of small molecules hosted by the US National Institutes of Health (NIH). PubChem BioAssay database currently contains biological test results for more than 700 000 compounds. The goal of PubChem is to make this information easily accessible to biomedical researchers. In this work, we present a set of web servers to facilitate and optimize the utility of biological activity information within PubChem. These web-based services provide tools for rapid data retrieval, integration and comparison of biological screening results, exploratory structure-activity analysis, and target selectivity examination. This article reviews these bioactivity analysis tools and discusses their uses. Most of the tools described in this work can be directly accessed at http://pubchem.ncbi.nlm.nih.gov/assay/. URLs for accessing other tools described in this work are specified individually.",
"title": ""
},
{
"docid": "b4e676d4d11039c5c5feb5e549eb364f",
"text": "Abst ract Qualit at ive case st udy met hodology provides t ools f or researchers t o st udy complex phenomena wit hin t heir cont ext s. When t he approach is applied correct ly, it becomes a valuable met hod f or healt h science research t o develop t heory, evaluat e programs, and develop int ervent ions. T he purpose of t his paper is t o guide t he novice researcher in ident if ying t he key element s f or designing and implement ing qualit at ive case st udy research project s. An overview of t he t ypes of case st udy designs is provided along wit h general recommendat ions f or writ ing t he research quest ions, developing proposit ions, det ermining t he “case” under st udy, binding t he case and a discussion of dat a sources and t riangulat ion. T o f acilit at e applicat ion of t hese principles, clear examples of research quest ions, st udy proposit ions and t he dif f erent t ypes of case st udy designs are provided Keywo rds Case St udy and Qualit at ive Met hod Publicat io n Dat e 12-1-2008 Creat ive Co mmo ns License Journal Home About T his Journal Aims & Scope Edit orial Board Policies Open Access",
"title": ""
},
{
"docid": "eb4cac4ac288bc65df70f906b674ceb5",
"text": "LPWAN (Low Power Wide Area Networks) technologies have been attracting attention continuously in IoT (Internet of Things). LoRaWAN is present on the market as a LPWAN technology and it has features such as low power consumption, low transceiver chip cost and wide coverage area. In the LoRaWAN, end devices must perform a join procedure for participating in the network. Attackers could exploit the join procedure because it has vulnerability in terms of security. Replay attack is a method of exploiting the vulnerability in the join procedure. In this paper, we propose a attack scenario and a countermeasure against replay attack that may occur in the join request transfer process.",
"title": ""
},
{
"docid": "6724af38a637d61ccc2a4ad8119c6e1a",
"text": "INTRODUCTION Pivotal to athletic performance is the ability to more maintain desired athletic performance levels during particularly critical periods of competition [1], such as during pressurised situations that typically evoke elevated levels of anxiety (e.g., penalty kicks) or when exposed to unexpected adversities (e.g., unfavourable umpire calls on crucial points) [2, 3]. These kinds of situations become markedly important when athletes, who are separated by marginal physical and technical differences, are engaged in closely contested matches, games, or races [4]. It is within these competitive conditions, in particular, that athletes’ responses define their degree of success (or lack thereof); responses that are largely dependent on athletes’ psychological attributes [5]. One of these attributes appears to be mental toughness (MT), which has often been classified as a critical success factor due to the role it plays in fostering adaptive responses to positively and negatively construed pressures, situations, and events [6 8]. However, as scholars have intensified",
"title": ""
},
{
"docid": "ff8c3ce63b340a682e99540313be7fe7",
"text": "Detecting and identifying any phishing websites in real-time, particularly for e-banking is really a complex and dynamic problem involving many factors and criteria. Because of the subjective considerations and the ambiguities involved in the detection, Fuzzy Data Mining (DM) Techniques can be an effective tool in assessing and identifying phishing websites for e-banking since it offers a more natural way of dealing with quality factors rather than exact values. In this paper, we present novel approach to overcome the ‘fuzziness’ in the e-banking phishing website assessment and propose an intelligent resilient and effective model for detecting e-banking phishing websites. The proposed model is based on Fuzzy logic (FL) combined with Data Mining algorithms to characterize the e-banking phishing website factors and to investigate its techniques by classifying there phishing types and defining six e-banking phishing website attack criteria’s with a layer structure. The proposed e-banking phishing website model showed the significance importance of the phishing website two criteria’s (URL & Domain Identity) and (Security & Encryption) in the final phishing detection rate result, taking into consideration its characteristic association and relationship with each others as showed from the fuzzy data mining classification and association rule algorithms. Our phishing model also showed the insignificant trivial influence of the (Page Style & Content) criteria along with (Social Human Factor) criteria in the phishing detection final rate result.",
"title": ""
},
{
"docid": "27c7afd468d969509eec2b2a3260a679",
"text": "The impact of predictive genetic testing on cancer care can be measured by the increased demand for and utilization of genetic services as well as in the progress made in reducing cancer risks in known mutation carriers. Nonetheless, differential access to and utilization of genetic counseling and cancer predisposition testing among underserved racial and ethnic minorities compared with the white population has led to growing health care disparities in clinical cancer genetics that are only beginning to be addressed. Furthermore, deficiencies in the utility of genetic testing in underserved populations as a result of limited testing experience and in the effectiveness of risk-reducing interventions compound access and knowledge-base disparities. The recent literature on racial/ethnic health care disparities is briefly reviewed, and is followed by a discussion of the current limitations of risk assessment and genetic testing outside of white populations. The importance of expanded testing in underserved populations is emphasized.",
"title": ""
},
{
"docid": "788bf97b435dfbe9d31373e21bc76716",
"text": "In this paper, we study the design and workspace of a 6–6 cable-suspended parallel robot. The workspace volume is characterized as the set of points where the centroid of the moving platform can reach with tensions in all suspension cables at a constant orientation. This paper attempts to tackle some aspects of optimal design of a 6DOF cable robot by addressing the variations of the workspace volume and the accuracy of the robot using different geometric configurations, different sizes and orientations of the moving platform. The global condition index is used as a performance index of a robot with respect to the force and velocity transmission over the whole workspace. The results are used for design analysis of the cable-robot for a specific motion of the moving platform. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "8255146164ff42f8755d8e74fd24cfa1",
"text": "We present a named-entity recognition (NER) system for parallel multilingual text. Our system handles three languages (i.e., English, French, and Spanish) and is tailored to the biomedical domain. For each language, we design a supervised knowledge-based CRF model with rich biomedical and general domain information. We use the sentence alignment of the parallel corpora, the word alignment generated by the GIZA++[8] tool, and Wikipedia-based word alignment in order to transfer system predictions made by individual language models to the remaining parallel languages. We re-train each individual language system using the transferred predictions and generate a final enriched NER model for each language. The enriched system performs better than the initial system based on the predictions transferred from the other language systems. Each language model benefits from the external knowledge extracted from biomedical and general domain resources.",
"title": ""
}
] | scidocsrr |
93bc35b87540a4c67cdb45624d821210 | The Riemann Zeros and Eigenvalue Asymptotics | [
{
"docid": "d15a2f27112c6bd8bfa2f9c01471c512",
"text": "Assuming a special version of the Montgomery-Odlyzko law on the pair correlation of zeros of the Riemann zeta function conjectured by Rudnick and Sarnak and assuming the Riemann Hypothesis, we prove new results on the prime number theorem, difference of consecutive primes, and the twin prime conjecture. 1. Introduction. Assuming the Riemann Hypothesis (RH), let us denote by 1=2 ig a nontrivial zero of a primitive L-function L
s;p attached to an irreducible cuspidal automorphic representation of GLm; m ^ 1, over Q. When m 1, this L-function is the Riemann zeta function z
s or the Dirichlet L-function L
s; c for a primitive character c. Rudnick and Sarnak [13] examined the n-level correlation for these zeros and made a far reaching conjecture which is called the Montgomery [9]-Odlyzko [11], [12] Law by Katz and Sarnak [6]. Rudnick and Sarnak also proved a case of their conjecture when a test function f has its Fourier transform b f supported in a restricted region. In this article, we will show that a version of the above conjecture for the pair correlation of zeros of the zeta function z
s implies interesting arithmetical results on prime distribution (Theorems 2, 3, and 4). These results can give us deep insight on possible ultimate bounds of these prime distribution problems. One can also see that the pair (and nlevel) correlation of zeros of zeta and L-functions is a powerful method in number theory. Our computation shows that the test function f and the support of its Fourier transform b f play a crucial role in the conjecture. To see the conjecture in Rudnick and Sarnak [13] in the case of the zeta function z
s and n 2, the pair correlation, we use a test function f
x; y which satisfies the following three conditions: (i) f
x; y f
y; x for any x; y 2 R, (ii) f
x t; y t f
x; y for any t 2 R, and (iii) f
x; y tends to 0 rapidly as j
x; yj ! 1 on the hyperplane x y 0. Arch. Math. 76 (2001) 41±50 0003-889X/01/010041-10 $ 3.50/0 Birkhäuser Verlag, Basel, 2001 Archiv der Mathematik Mathematics Subject Classification (1991): 11M26, 11N05, 11N75. 1) Supported in part by China NNSF Grant # 19701019. 2) Supported in part by USA NSF Grant # DMS 97-01225. Define the function W2
x; y 1ÿ sin p
xÿ y
p
xÿ y : Denote the Dirac function by d
x which satisfies R d
xdx 1 and defines a distribution f 7! f
0. We then define the pair correlation sum of zeros gj of the zeta function: R2
T; f ; h P g1;g2 distinct h g1 T ; g2 T f Lg1 2p ; Lg2 2p ; where T ^ 2, L log T, and h
x; y is a localized cutoff function which tends to zero rapidly when j
x; yj tends to infinity. The conjecture proposed by Rudnick and Sarnak [13] is that R2
T; f ; h 1 2p TL
",
"title": ""
}
] | [
{
"docid": "e4dba25d2528a507e4b494977fd69fc0",
"text": "The illegal distribution of a digital movie is a common and significant threat to the film industry. With the advent of high-speed broadband Internet access, a pirated copy of a digital video can now be easily distributed to a global audience. A possible means of limiting this type of digital theft is digital video watermarking whereby additional information, called a watermark, is embedded in the host video. This watermark can be extracted at the decoder and used to determine whether the video content is watermarked. This paper presents a review of the digital video watermarking techniques in which their applications, challenges, and important properties are discussed, and categorizes them based on the domain in which they embed the watermark. It then provides an overview of a few emerging innovative solutions using watermarks. Protecting a 3D video by watermarking is an emerging area of research. The relevant 3D video watermarking techniques in the literature are classified based on the image-based representations of a 3D video in stereoscopic, depth-image-based rendering, and multi-view video watermarking. We discuss each technique, and then present a survey of the literature. Finally, we provide a summary of this paper and propose some future research directions.",
"title": ""
},
{
"docid": "5e333f4620908dc643ceac8a07ff2a2d",
"text": "Convolutional Neural Networks (CNNs) have reached outstanding results in several complex visual recognition tasks, such as classification and scene parsing. CNNs are composed of multiple filtering layers that perform 2D convolutions over input images. The intrinsic parallelism in such a computation kernel makes it suitable to be effectively accelerated on parallel hardware. In this paper we propose a highly flexible and scalable architectural template for acceleration of CNNs on FPGA devices, based on the cooperation between a set of software cores and a parallel convolution engine that communicate via a tightly coupled L1 shared scratchpad. Our accelerator structure, tested on a Xilinx Zynq XC-Z7045 device, delivers peak performance up to 80 GMAC/s, corresponding to 100 MMAC/s for each DSP slice in the programmable fabric. Thanks to the flexible architecture, convolution operations can be scheduled in order to reduce input/output bandwidth down to 8 bytes per cycle without degrading the performance of the accelerator in most of the meaningful use-cases.",
"title": ""
},
{
"docid": "a4030b9aa31d4cc0a2341236d6f18b5a",
"text": "Generative adversarial networks (GANs) have achieved huge success in unsupervised learning. Most of GANs treat the discriminator as a classifier with the binary sigmoid cross entropy loss function. However, we find that the sigmoid cross entropy loss function will sometimes lead to the saturation problem in GANs learning. In this work, we propose to adopt the L2 loss function for the discriminator. The properties of the L2 loss function can improve the stabilization of GANs learning. With the usage of the L2 loss function, we propose the multi-class generative adversarial networks for the purpose of image generation with multiple classes. We evaluate the multi-class GANs on a handwritten Chinese characters dataset with 3740 classes. The experiments demonstrate that the multi-class GANs can generate elegant images on datasets with a large number of classes. Comparison experiments between the L2 loss function and the sigmoid cross entropy loss function are also conducted and the results demonstrate the stabilization of the L2 loss function.",
"title": ""
},
{
"docid": "c93a401b7ed3031ed6571bfbbf1078c8",
"text": "In this paper we propose a new footstep detection technique for data acquired using a triaxial geophone. The idea evolves from the investigation of geophone transduction principle. The technique exploits the randomness of neighbouring data vectors observed when the footstep is absent. We extend the same principle for triaxial signal denoising. Effectiveness of the proposed technique for transient detection and denoising are presented for real seismic data collected using a triaxial geophone.",
"title": ""
},
{
"docid": "f1559798e0338074f28ca4aaf953b6a1",
"text": "Example classifications (test set) [And09] Andriluka et al. Pictorial structures revisited: People detection and articulated pose estimation. In CVPR, 2009 [Eic09] Eichner et al. Articulated Human Pose Estimation and Search in (Almost) Unconstrained Still Images. In IJCV, 2012 [Sap10] Sapp et al. Cascaded models for articulated pose estimation. In ECCV, 2010 [Yan11] Yang and Ramanan. Articulated pose estimation with flexible mixturesof-parts. In CVPR, 2011. References Human Pose Estimation (HPE) Algorithm Input",
"title": ""
},
{
"docid": "6a74c2d26f5125237929031cf1ccf204",
"text": "Harnessing crowds can be a powerful mechanism for increasing innovation. However, current approaches to crowd innovation rely on large numbers of contributors generating ideas independently in an unstructured way. We introduce a new approach called distributed analogical idea generation, which aims to make idea generation more effective and less reliant on chance. Drawing from the literature in cognitive science on analogy and schema induction, our approach decomposes the creative process in a structured way amenable to using crowds. In three experiments we show that distributed analogical idea generation leads to better ideas than example-based approaches, and investigate the conditions under which crowds generate good schemas and ideas. Our results have implications for improving creativity and building systems for distributed crowd innovation.",
"title": ""
},
{
"docid": "bd38c3f62798ed1f0b1e2baa6462123c",
"text": "The key issue in image fusion is the process of defining evaluation indices for the output image and for multi-scale image data set. This paper attempted to develop a fusion model for plantar pressure distribution images, which is expected to contribute to feature points construction based on shoe-last surface generation and modification. First, the time series plantar pressure distribution image was preprocessed, including back removing and Laplacian of Gaussian (LoG) filter. Then, discrete wavelet transform and a multi-scale pixel conversion fusion operating using a parameter estimation optimized Gaussian mixture model (PEO-GMM) were performed. The output image was used in a fuzzy weighted evaluation system, that included the following evaluation indices: mean, standard deviation, entropy, average gradient, and spatial frequency; the difference with the reference image, including the root mean square error, signal to noise ratio (SNR), and the peak SNR; and the difference with source image including the cross entropy, joint entropy, mutual information, deviation index, correlation coefficient, and the degree of distortion. These parameters were used to evaluate the results of the comprehensive evaluation value for the synthesized image. The image reflected the fusion of plantar pressure distribution using the proposed method compared with other fusion methods, such as up-down, mean-mean, and max-min fusion. The experimental results showed that the proposed LoG filtering with PEO-GMM fusion operator outperformed other methods.",
"title": ""
},
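The abstract lists a number of standard no-reference and full-reference fusion quality indices. As a rough illustration (not the authors' implementation), the sketch below computes a few of them, entropy, average gradient, spatial frequency, and RMSE/PSNR against a reference, with NumPy; the 8-bit grey-scale assumption and demo images are mine.

```python
import numpy as np

def entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

def average_gradient(img):
    gx = np.diff(img.astype(float), axis=1)[:-1, :]
    gy = np.diff(img.astype(float), axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(img):
    img = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def rmse_psnr(img, ref):
    err = img.astype(float) - ref.astype(float)
    rmse = np.sqrt(np.mean(err ** 2))
    psnr = 20 * np.log10(255.0 / rmse) if rmse > 0 else float("inf")
    return rmse, psnr

# Demo on random 8-bit images standing in for the fused and reference images.
rng = np.random.default_rng(1)
fused = rng.integers(0, 256, size=(64, 64))
reference = rng.integers(0, 256, size=(64, 64))
print("entropy          ", entropy(fused))
print("average gradient ", average_gradient(fused))
print("spatial frequency", spatial_frequency(fused))
print("rmse, psnr       ", rmse_psnr(fused, reference))
```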
{
"docid": "2d0b170508ce03d649cf62ceef79a05a",
"text": "Gyroscope is one of the primary sensors for air vehicle navigation and controls. This paper investigates the noise characteristics of microelectromechanical systems (MEMS) gyroscope null drift and temperature compensation. This study mainly focuses on temperature as a long-term error source. An in-house-designed inertial measurement unit (IMU) is used to perform temperature effect testing in the study. The IMU is placed into a temperature control chamber. The chamber temperature is controlled to increase from 25 C to 80 C at approximately 0.8 degrees per minute. After that, the temperature is decreased to -40 C and then returns to 25 C. The null voltage measurements clearly demonstrate the rapidly changing short-term random drift and slowly changing long-term drift due to temperature variations. The characteristics of the short-term random drifts are analyzed and represented in probability density functions. A temperature calibration mechanism is established by using an artificial neural network to compensate the long-term drift. With the temperature calibration, the attitude computation problem due to gyro drifts can be improved significantly.",
"title": ""
},
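As an illustration of the long-term drift compensation step described here (not the authors' exact network or data), the sketch below fits a small feed-forward network that maps chamber temperature to the measured null drift and subtracts the prediction; the synthetic drift curve and network size are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic calibration data: a temperature sweep (-40 C to 80 C) with a smooth
# temperature-dependent null drift plus short-term random noise.
temperature = np.linspace(-40.0, 80.0, 2000)
true_drift = 0.02 * (temperature - 25.0) + 1e-4 * (temperature - 25.0) ** 2
gyro_null = true_drift + 0.05 * rng.standard_normal(temperature.size)

# Train a small ANN to model the long-term (temperature-dependent) drift.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(temperature.reshape(-1, 1), gyro_null)

# Compensation: subtract the predicted drift at the current temperature.
compensated = gyro_null - model.predict(temperature.reshape(-1, 1))
print("raw drift std        :", gyro_null.std())
print("compensated drift std:", compensated.std())
```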
{
"docid": "3c53d2589875a60b6c85cb8873a7c9a8",
"text": "presenting with bullous pemphigoid-like lesions. Dermatol Online J 2006; 12: 19. 3 Bhawan J, Milstone E, Malhotra R, et al. Scabies presenting as bullous pemphigoid-like eruption. J Am Acad Dermatol 1991; 24: 179–181. 4 Ostlere LS, Harris D, Rustin MH. Scabies associated with a bullous pemphigoid-like eruption. Br J Dermatol 1993; 128: 217–219. 5 Parodi A, Saino M, Rebora A. Bullous pemphigoid-like scabies. Clin Exp Dermatol 1993; 18: 293. 6 Slawsky LD, Maroon M, Tyler WB, et al. Association of scabies with a bullous pemphigoid-like eruption. J Am Acad Dermatol 1996; 34: 878–879. 7 Chen MC, Luo DQ. Bullous scabies failing to respond to glucocorticoids, immunoglobulin, and cyclophosphamide. Int J Dermatol 2014; 53: 265–266. 8 Nakamura E, Taniguchi H, Ohtaki N. A case of crusted scabies with a bullous pemphigoid-like eruption and nail involvement. J Dermatol 2006; 33: 196–201. 9 Galvany Rossell L, Salleras Redonnet M, Umbert Millet P. Bullous scabies responding to ivermectin therapy. Actas Dermosifiliogr 2010; 101: 81–84. 10 Gutte RM. Bullous scabies in an adult: a case report with review of literature. Indian Dermatol Online J 2013; 4: 311–313.",
"title": ""
},
{
"docid": "43100f1c6563b4af125c1c6040daa437",
"text": "Humans can naturally understand an image in depth with the aid of rich knowledge accumulated from daily lives or professions. For example, to achieve fine-grained image recognition (e.g., categorizing hundreds of subordinate categories of birds) usually requires a comprehensive visual concept organization including category labels and part-level attributes. In this work, we investigate how to unify rich professional knowledge with deep neural network architectures and propose a Knowledge-Embedded Representation Learning (KERL) framework for handling the problem of fine-grained image recognition. Specifically, we organize the rich visual concepts in the form of knowledge graph and employ a Gated Graph Neural Network to propagate node message through the graph for generating the knowledge representation. By introducing a novel gated mechanism, our KERL framework incorporates this knowledge representation into the discriminative image feature learning, i.e., implicitly associating the specific attributes with the feature maps. Compared with existing methods of fine-grained image classification, our KERL framework has several appealing properties: i) The embedded high-level knowledge enhances the feature representation, thus facilitating distinguishing the subtle differences among subordinate categories. ii) Our framework can learn feature maps with a meaningful configuration that the highlighted regions finely accord with the nodes (specific attributes) of the knowledge graph. Extensive experiments on the widely used CaltechUCSD bird dataset demonstrate the superiority of ∗Corresponding author is Liang Lin (Email: [email protected]). This work was supported by the National Natural Science Foundation of China under Grant 61622214, the Science and Technology Planning Project of Guangdong Province under Grant 2017B010116001, and Guangdong Natural Science Foundation Project for Research Teams under Grant 2017A030312006. head-pattern: masked Bohemian",
"title": ""
},
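The abstract describes a gated mechanism that injects a knowledge-graph embedding into the image feature maps. The fragment below is only a schematic of such a gate (random weights, NumPy), not the KERL implementation; the dimensions and the exact fusion rule are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(feature_map, knowledge_vec, W_g, W_k):
    """Blend a conv feature map (H, W, C) with a knowledge embedding (K,).

    gate = sigmoid(W_g [f; k]) decides, per location and channel, how much
    of the projected knowledge representation W_k k is mixed into f.
    """
    H, Wd, C = feature_map.shape
    k_proj = W_k @ knowledge_vec                      # (C,) projected knowledge
    fused = np.empty_like(feature_map)
    for i in range(H):
        for j in range(Wd):
            f = feature_map[i, j]                     # (C,)
            gate = sigmoid(W_g @ np.concatenate([f, knowledge_vec]))  # (C,)
            fused[i, j] = gate * f + (1.0 - gate) * k_proj
    return fused

rng = np.random.default_rng(0)
C, K = 8, 16
fmap = rng.standard_normal((4, 4, C))
kvec = rng.standard_normal(K)
W_g = rng.standard_normal((C, C + K)) * 0.1
W_k = rng.standard_normal((C, K)) * 0.1
print(gated_fusion(fmap, kvec, W_g, W_k).shape)       # (4, 4, 8)
```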
{
"docid": "8e878e5083d922d97f8d573c54cbb707",
"text": "Deep neural networks have become the stateof-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LMarchitecture is an effective structure that can be used on any ResNet-like networks. In particular, we demonstrate that LM-ResNet and LMResNeXt (i.e. the networks obtained by applying the LM-architecture on ResNet and ResNeXt respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networkSchool of Mathematical Sciences, Peking University, Beijing, China MGH/BWH Center for Clinical Data Science, Masschusetts General Hospital, Harvard Medical School Center for Data Science in Health and Medicine, Peking University Laboratory for Biomedical Image Analysis, Beijing Institute of Big Data Research Beijing International Center for Mathematical Research, Peking University Center for Data Science, Peking University. Correspondence to: Bin Dong <[email protected]>, Quanzheng Li <[email protected]>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s). s while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.",
"title": ""
},
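A toy illustration of the linear multi-step idea mentioned here: instead of the ResNet update x_{n+1} = x_n + f(x_n), a two-step scheme also keeps x_{n-1} and mixes it in with a coefficient. The scalar residual function and the fixed coefficient below are placeholders; the actual LM-ResNet uses trainable convolutional blocks and learns the mixing coefficient.

```python
import numpy as np

def resnet_step(x_n, f):
    # Forward-Euler-like residual update used by plain ResNet.
    return x_n + f(x_n)

def lm_step(x_n, x_prev, f, k=0.5):
    # Linear multi-step style update: mix the two most recent states.
    return (1.0 - k) * x_n + k * x_prev + f(x_n)

f = lambda x: 0.1 * np.tanh(x)          # stand-in for a residual block
x_prev, x_n = np.zeros(4), np.ones(4) * 0.5
for _ in range(5):
    x_prev, x_n = x_n, lm_step(x_n, x_prev, f)
print("state after 5 LM steps:", x_n)
```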
{
"docid": "4fea6fb309d496f9b4fd281c80a8eed7",
"text": "Network alignment is the problem of matching the nodes of two graphs, maximizing the similarity of the matched nodes and the edges between them. This problem is encountered in a wide array of applications---from biological networks to social networks to ontologies---where multiple networked data sources need to be integrated. Due to the difficulty of the task, an accurate alignment can rarely be found without human assistance. Thus, it is of great practical importance to develop network alignment algorithms that can optimally leverage experts who are able to provide the correct alignment for a small number of nodes. Yet, only a handful of existing works address this active network alignment setting.\n The majority of the existing active methods focus on absolute queries (\"are nodes a and b the same or not?\"), whereas we argue that it is generally easier for a human expert to answer relative queries (\"which node in the set b1,...,bn is the most similar to node a?\"). This paper introduces two novel relative-query strategies, TopMatchings and GibbsMatchings, which can be applied on top of any network alignment method that constructs and solves a bipartite matching problem. Our methods identify the most informative nodes to query by sampling the matchings of the bipartite graph associated to the network-alignment instance.\n We compare the proposed approaches to several commonly-used query strategies and perform experiments on both synthetic and real-world datasets. Our sampling-based strategies yield the highest overall performance, outperforming all the baseline methods by more than 15 percentage points in some cases. In terms of accuracy, TopMatchings and GibbsMatchings perform comparably. However, GibbsMatchings is significantly more scalable, but it also requires hyperparameter tuning for a temperature parameter.",
"title": ""
},
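To make the sampling idea concrete, here is a rough sketch (not TopMatchings or GibbsMatchings themselves): perturb the node-similarity matrix several times, solve the resulting bipartite assignment each time, and query first the node whose match varies the most across samples. The perturbation noise and the toy similarity matrix are made up for the demo.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from collections import Counter

def pick_query_node(similarity, n_samples=50, noise=0.1, seed=0):
    """Return the left-hand node whose matched partner is most uncertain."""
    rng = np.random.default_rng(seed)
    n = similarity.shape[0]
    votes = [Counter() for _ in range(n)]
    for _ in range(n_samples):
        perturbed = similarity + noise * rng.standard_normal(similarity.shape)
        rows, cols = linear_sum_assignment(-perturbed)   # maximize similarity
        for r, c in zip(rows, cols):
            votes[r][c] += 1
    # Low agreement across sampled matchings -> informative node to ask about.
    agreement = np.array([v.most_common(1)[0][1] / n_samples for v in votes])
    return int(np.argmin(agreement)), agreement

rng = np.random.default_rng(1)
sim = np.eye(6) + 0.3 * rng.random((6, 6))   # noisy near-identity similarity
node, agreement = pick_query_node(sim)
print("query node", node, "agreement per node:", np.round(agreement, 2))
```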
{
"docid": "78b61359d8668336b198af9ad59fe149",
"text": "This paper discusses a fuzzy cost-based failure modes, effects, and criticality analysis (FMECA) approach for wind turbines. Conventional FMECA methods use a crisp risk priority number (RPN) as a measure of criticality which suffers from the difficulty of quantifying the risk. One method of increasing wind turbine reliability is to install a condition monitoring system (CMS). The RPN can be reduced with the help of a CMS because faults can be detected at an incipient level, and preventive maintenance can be scheduled. However, the cost of installing a CMS cannot be ignored. The fuzzy cost-based FMECA method proposed in this paper takes into consideration the cost of a CMS and the benefits it brings and provides a method for determining whether it is financially profitable to install a CMS. The analysis is carried out in MATLAB® which provides functions for fuzzy logic operation and defuzzification.",
"title": ""
},
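As a back-of-the-envelope illustration of the cost-based reasoning in this abstract (not the paper's fuzzy inference system, which is built in MATLAB), the sketch below compares the expected annual failure cost of a component with and without condition monitoring; every figure is an assumption.

```python
def expected_failure_cost(failure_rate, detect_prob, minor_cost, major_cost):
    """Expected yearly cost: detected failures are fixed early (minor cost),
    undetected ones escalate to the major cost."""
    return failure_rate * (detect_prob * minor_cost +
                           (1.0 - detect_prob) * major_cost)

# Assumed figures for one wind-turbine subsystem (per year).
failure_rate = 0.2          # failures per year
minor_cost = 10_000.0       # early, planned repair
major_cost = 120_000.0      # catastrophic failure plus downtime
cms_cost = 8_000.0          # annualised cost of the condition monitoring system

without_cms = expected_failure_cost(failure_rate, 0.0, minor_cost, major_cost)
with_cms = expected_failure_cost(failure_rate, 0.9, minor_cost, major_cost) + cms_cost
print(f"expected cost without CMS: {without_cms:,.0f}")
print(f"expected cost with CMS   : {with_cms:,.0f}")
print("CMS financially justified:", with_cms < without_cms)
```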
{
"docid": "11bff8c8ed48fc53c841bafcaf2a04dd",
"text": "Co-Attentions are highly effective attention mechanisms for text matching applications. Co-Attention enables the learning of pairwise attentions, i.e., learning to attend based on computing word-level affinity scores between two documents. However, text matching problems can exist in either symmetrical or asymmetrical domains. For example, paraphrase identification is a symmetrical task while question-answer matching and entailment classification are considered asymmetrical domains. In this paper, we argue that Co-Attention models in asymmetrical domains require different treatment as opposed to symmetrical domains, i.e., a concept of word-level directionality should be incorporated while learning word-level similarity scores. Hence, the standard inner product in real space commonly adopted in co-attention is not suitable. This paper leverages attractive properties of the complex vector space and proposes a co-attention mechanism based on the complex-valued inner product (Hermitian products). Unlike the real dot product, the dot product in complex space is asymmetric because the first item is conjugated. Aside from modeling and encoding directionality, our proposed approach also enhances the representation learning process. Extensive experiments on five text matching benchmark datasets demonstrate the effectiveness of our approach.",
"title": ""
},
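A small numerical sketch of the asymmetric scoring the abstract argues for: with complex-valued word vectors, the Hermitian inner product conjugates the first argument, so the affinity of (a, b) and of (b, a) differ as complex numbers (real parts coincide, imaginary parts flip sign). The dimensions are assumptions, and how the complex score is reduced to a real attention weight is left to the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_embeddings(n_words, dim):
    # Toy complex-valued word vectors: real part + i * imaginary part.
    return rng.standard_normal((n_words, dim)) + 1j * rng.standard_normal((n_words, dim))

def hermitian_affinity(A, B):
    """Word-level affinity matrix using the Hermitian inner product.

    Entry (i, j) = sum_k conj(A[i, k]) * B[j, k]; swapping the arguments
    conjugates the result, so the scoring is direction-aware.
    """
    return np.conj(A) @ B.T

query = complex_embeddings(4, 8)     # e.g. question words
answer = complex_embeddings(5, 8)    # e.g. candidate answer words

S_qa = hermitian_affinity(query, answer)
S_aq = hermitian_affinity(answer, query)
print(np.allclose(S_qa, S_aq.T))            # False: not symmetric as a complex matrix
print(np.allclose(S_qa, np.conj(S_aq).T))   # True: swapping arguments conjugates
```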
{
"docid": "bb94ac9ac0c1e1f1155fc56b13bc103e",
"text": "In contrast to the Android application layer, Android’s application framework’s internals and their influence on the platform security and user privacy are still largely a black box for us. In this paper, we establish a static runtime model of the application framework in order to study its internals and provide the first high-level classification of the framework’s protected resources. We thereby uncover design patterns that differ highly from the runtime model at the application layer. We demonstrate the benefits of our insights for security-focused analysis of the framework by re-visiting the important use-case of mapping Android permissions to framework/SDK API methods. We, in particular, present a novel mapping based on our findings that significantly improves on prior results in this area that were established based on insufficient knowledge about the framework’s internals. Moreover, we introduce the concept of permission locality to show that although framework services follow the principle of separation of duty, the accompanying permission checks to guard sensitive operations violate it.",
"title": ""
},
{
"docid": "347c3929efc37dee3230189e576f14ab",
"text": "Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption. This work focuses on designing ABE schemes with fast decryption algorithms. We restrict our attention to expressive systems without systemwide bounds or limitations, such as placing a limit on the number of attributes used in a ciphertext or a private key. In this setting, we present the first key-policy ABE system where ciphertexts can be decrypted with a constant number of pairings. We show that GPSW ciphertexts can be decrypted with only 2 pairings by increasing the private key size by a factor of |Γ |, where Γ is the set of distinct attributes that appear in the private key. We then present a generalized construction that allows each system user to independently tune various efficiency tradeoffs to their liking on a spectrum where the extremes are GPSW on one end and our very fast scheme on the other. This tuning requires no changes to the public parameters or the encryption algorithm. Strategies for choosing an individualized user optimization plan are discussed. Finally, we discuss how these ideas can be translated into the ciphertext-policy ABE setting at a higher cost.",
"title": ""
},
{
"docid": "1468a09c57b2d83181de06236386d323",
"text": "This article provides an overview of the pathogenesis of type 2 diabetes mellitus. Discussion begins by describing normal glucose homeostasis and ingestion of a typical meal and then discusses glucose homeostasis in diabetes. Topics covered include insulin secretion in type 2 diabetes mellitus and insulin resistance, the site of insulin resistance, the interaction between insulin sensitivity and secretion, the role of adipocytes in the pathogenesis of type 2 diabetes, cellular mechanisms of insulin resistance including glucose transport and phosphorylation, glycogen and synthesis,glucose and oxidation, glycolysis, and insulin signaling.",
"title": ""
},
{
"docid": "834bc1349d6da53c277ddd7eba95dc6a",
"text": "Lymphedema is a common condition frequently seen in cancer patients who have had lymph node dissection +/- radiation treatment. Traditional management is mainly non-surgical and unsatisfactory. Surgical treatment has relied on excisional techniques in the past. Physiologic operations have more recently been devised to help improve this condition. Assessing patients and deciding which of the available operations to offer them can be challenging. MRI is an extremely useful tool in patient assessment and treatment planning. J. Surg. Oncol. 2017;115:18-22. © 2016 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "e49d1f0aa79a2913131010c9f4d88bcf",
"text": "Low power consumption is crucial for medical implant devices. A single-chip, very-low-power interface IC used in implantable pacemaker systems is presented. It contains amplifiers, filters, ADCs, battery management system, voltage multipliers, high voltage pulse generators, programmable logic and timing control. A few circuit techniques are proposed to achieve nanopower circuit operations within submicron CMOS process. Subthreshold transistor designs and switched-capacitor circuits are widely used. The 200 k transistor IC occupies 49 mm/sup 2/, is fabricated in a 0.5-/spl mu/m two-poly three-metal multi-V/sub t/ process, and consumes 8 /spl mu/W.",
"title": ""
},
{
"docid": "73f6ba4ad9559cd3c6f7a88223e4b556",
"text": "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation and we show how to use it to increase the accuracy and reduce over fitting on a target network. Smart augmentation works, by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that networks loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.",
"title": ""
}
] | scidocsrr |
6289f60d651706a549de7eaded26b56d | Modeling data entry rates for ASR and alternative input methods | [
{
"docid": "b876e62db8a45ab17d3a9d217e223eb7",
"text": "A study was conducted to evaluate user performance andsatisfaction in completion of a set of text creation tasks usingthree commercially available continuous speech recognition systems.The study also compared user performance on similar tasks usingkeyboard input. One part of the study (Initial Use) involved 24users who enrolled, received training and carried out practicetasks, and then completed a set of transcription and compositiontasks in a single session. In a parallel effort (Extended Use),four researchers used speech recognition to carry out real worktasks over 10 sessions with each of the three speech recognitionsoftware products. This paper presents results from the Initial Usephase of the study along with some preliminary results from theExtended Use phase. We present details of the kinds of usabilityand system design problems likely in current systems and severalcommon patterns of error correction that we found.",
"title": ""
}
] | [
{
"docid": "a4e5a60d9ce417ef74fc70580837cd55",
"text": "Emotional processes are important to survive. The Darwinian adaptive concept of stress refers to natural selection since evolved individuals have acquired effective strategies to adapt to the environment and to unavoidable changes. If demands are abrupt and intense, there might be insufficient time to successful responses. Usually, stress produces a cognitive or perceptual evaluation (emotional memory) which motivates to make a plan, to take a decision and to perform an action to face success‐ fully the demand. Between several kinds of stresses, there are psychosocial and emotional stresses with cultural, social and political influences. The cultural changes have modified the way in which individuals socially interact. Deficits in familiar relationships and social isolation alter physical and mental health in young students, producing reduction of their capacities of facing stressors in school. Adolescence is characterized by significant physiological, anatomical, and psychological changes in boys and girls, who become vulnerable to psychiatric disorders. In particular for young adult students, anxiety and depression symptoms could interfere in their academic performance. In this chapter, we reviewed approaches to the study of anxiety and depression symptoms related with the academic performance in adolescent and graduate students. Results from available published studies in academic journals are reviewed to discuss the importance to detect information about academic performance, which leads to discover in many cases the very commonly subdiagnosed psychiatric disorders in adolescents, that is, anxiety and depression. With the reviewed evidence of how anxiety and depression in young adult students may alter their main activity in life (studying and academic performance), we © 2015 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. discussed data in order to show a way in which professionals involved in schools could support students and stablish a routine of intervention in any case.",
"title": ""
},
{
"docid": "e2c4f9cfce1db6282fe3a23fd5d6f3a4",
"text": "In semi-structured case-oriented business processes, the sequence of process steps is determined by case workers based on available document content associated with a case. Transitions between process execution steps are therefore case specific and depend on independent judgment of case workers. In this paper, we propose an instance-specific probabilistic process model (PPM) whose transition probabilities are customized to the semi-structured business process instance it represents. An instance-specific PPM serves as a powerful representation to predict the likelihood of different outcomes. We also show that certain instance-specific PPMs can be transformed into a Markov chain under some non-restrictive assumptions. For instance-specific PPMs that contain parallel execution of tasks, we provide an algorithm to map them to an extended space Markov chain. This way existing Markov techniques can be leveraged to make predictions about the likelihood of executing future tasks. Predictions provided by our technique could generate early alerts for case workers about the likelihood of important or undesired outcomes in an executing case instance. We have implemented and validated our approach on a simulated automobile insurance claims handling semi-structured business process. Results indicate that an instance-specific PPM provides more accurate predictions than other methods such as conditional probability. We also show that as more document data become available, the prediction accuracy of an instance-specific PPM increases.",
"title": ""
},
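A minimal sketch of how a transition matrix like the one described here can be used to predict the likelihood of an outcome: treat the case as an absorbing Markov chain and propagate the current state distribution forward. The states and probabilities below are invented for illustration, not taken from the insurance-claims study.

```python
import numpy as np

# Toy claims-handling process: states 0..4, where 3 = "approved" and 4 = "rejected"
# are absorbing outcomes. Row i holds the transition probabilities out of state i.
P = np.array([
    [0.0, 0.7, 0.3, 0.0, 0.0],   # intake -> review / investigation
    [0.0, 0.0, 0.2, 0.6, 0.2],   # review -> investigation / approve / reject
    [0.0, 0.3, 0.0, 0.3, 0.4],   # investigation -> review / approve / reject
    [0.0, 0.0, 0.0, 1.0, 0.0],   # approved (absorbing)
    [0.0, 0.0, 0.0, 0.0, 1.0],   # rejected (absorbing)
])

def outcome_probabilities(P, start_state, horizon=50):
    dist = np.zeros(P.shape[0])
    dist[start_state] = 1.0
    for _ in range(horizon):         # long enough for the chain to absorb
        dist = dist @ P
    return dist

dist = outcome_probabilities(P, start_state=0)
print("P(approved) =", round(dist[3], 3), " P(rejected) =", round(dist[4], 3))
```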
{
"docid": "4cf77462459efa81f6ed856655ae7454",
"text": "Antibody response to the influenza immunization was investigated in 83 1st-semester healthy university freshmen. Elevated levels of loneliness throughout the semester and small social networks were independently associated with poorer antibody response to 1 component of the vaccine. Those with both high levels of loneliness and a small social network had the lowest antibody response. Loneliness was also associated with greater psychological stress and negative affect, less positive affect, poorer sleep efficiency and quality, and elevations in circulating levels of cortisol. However, only the stress data were consistent with mediation of the loneliness-antibody response relation. None of these variables were associated with social network size, and hence none were potential mediators of the relation between network size and immunization response.",
"title": ""
},
{
"docid": "cba5c85ee9a9c4f97f99c1fcb35d0623",
"text": "Virtualized Cloud platforms have become increasingly common and the number of online services hosted on these platforms is also increasing rapidly. A key problem faced by providers in managing these services is detecting the performance anomalies and adjusting resources accordingly. As online services generate a very large amount of monitored data in the form of time series, it becomes very difficult to process this complex data by traditional approaches. In this work, we present a novel distributed parallel approach for performance anomaly detection. We build upon Holt-Winters forecasting for automatic aberrant behavior detection in time series. First, we extend the technique to work with MapReduce paradigm. Next, we correlate the anomalous metrics with the target Service Level Objective (SLO) in order to locate the suspicious metrics. We implemented and evaluated our approach on a production Cloud encompassing IaaS and PaaS service models. Experimental results confirm that our approach is efficient and effective in capturing the metrics causing performance anomalies in large time series datasets.",
"title": ""
},
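A compact single-node sketch of the Holt-Winters-based aberrant-behaviour detection this abstract builds on (the paper distributes the computation with MapReduce and correlates anomalous metrics with the target SLO): additive level/trend/seasonal smoothing plus a confidence band on the one-step forecast. The smoothing constants, band width, and synthetic metric are assumptions.

```python
import numpy as np

def holt_winters_anomalies(y, period, alpha=0.3, beta=0.05, gamma=0.2, delta=3.0):
    """Return indices where y deviates from the Holt-Winters forecast band."""
    level = y[0]
    trend = 0.0
    season = list(y[:period] - np.mean(y[:period]))   # initial seasonal terms
    dev = [np.std(y[:period])] * period               # smoothed absolute deviation
    anomalies = []
    for t in range(period, len(y)):
        s = t % period
        forecast = level + trend + season[s]
        if abs(y[t] - forecast) > delta * dev[s]:
            anomalies.append(t)
        # Standard additive Holt-Winters updates.
        old_level = level
        level = alpha * (y[t] - season[s]) + (1 - alpha) * (level + trend)
        trend = beta * (level - old_level) + (1 - beta) * trend
        season[s] = gamma * (y[t] - level) + (1 - gamma) * season[s]
        dev[s] = gamma * abs(y[t] - forecast) + (1 - gamma) * dev[s]
    return anomalies

# Synthetic response-time metric: daily seasonality plus one injected spike.
rng = np.random.default_rng(0)
period = 24
t = np.arange(24 * 14)
y = 100 + 10 * np.sin(2 * np.pi * t / period) + rng.normal(0, 1.0, t.size)
y[200] += 40                                          # performance anomaly
print("anomalous samples:", holt_winters_anomalies(y, period))
```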
{
"docid": "92c6e4ec2497c467eaa31546e2e2be0e",
"text": "The subjective sense of future time plays an essential role in human motivation. Gradually, time left becomes a better predictor than chronological age for a range of cognitive, emotional, and motivational variables. Socioemotional selectivity theory maintains that constraints on time horizons shift motivational priorities in such a way that the regulation of emotional states becomes more important than other types of goals. This motivational shift occurs with age but also appears in other contexts (for example, geographical relocations, illnesses, and war) that limit subjective future time.",
"title": ""
},
{
"docid": "ea3ed48d47473940134027caea2679f9",
"text": "With rapid development of face recognition and detection techniques, the face has been frequently used as a biometric to find illegitimate access. It relates to a security issues of system directly, and hence, the face spoofing detection is an important issue. However, correctly classifying spoofing or genuine faces is challenging due to diverse environment conditions such as brightness and color of a face skin. Therefore we propose a novel approach to robustly find the spoofing faces using the highlight removal effect, which is based on the reflection information. Because spoofing face image is recaptured by a camera, it has additional light information. It means that spoofing image could have much more highlighted areas and abnormal reflection information. By extracting these differences, we are able to generate features for robust face spoofing detection. In addition, the spoofing face image and genuine face image have distinct textures because of surface material of medium. The skin and spoofing medium are expected to have different texture, and some genuine image characteristics are distorted such as color distribution. We achieve state-of-the-art performance by concatenating these features. It significantly outperforms especially for the error rate.",
"title": ""
},
{
"docid": "a1a4b028fba02904333140e6791709bb",
"text": "Cross-site scripting (also referred to as XSS) is a vulnerability that allows an attacker to send malicious code (usually in the form of JavaScript) to another user. XSS is one of the top 10 vulnerabilities on Web application. While a traditional cross-site scripting vulnerability exploits server-side codes, DOM-based XSS is a type of vulnerability which affects the script code being executed in the clients browser. DOM-based XSS vulnerabilities are much harder to be detected than classic XSS vulnerabilities because they reside on the script codes from Web sites. An automated scanner needs to be able to execute the script code without errors and to monitor the execution of this code to detect such vulnerabilities. In this paper, we introduce a distributed scanning tool for crawling modern Web applications on a large scale and detecting, validating DOMbased XSS vulnerabilities. Very few Web vulnerability scanners can really accomplish this.",
"title": ""
},
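Dynamic execution and monitoring, as the abstract notes, is the hard part; still, a first-pass heuristic that such a crawler might apply before spending an execution slot is a static scan of inline scripts for attacker-controlled sources appearing alongside dangerous sinks. The regular expressions below are an illustrative assumption, not the tool described in the paper.

```python
import re

SOURCES = r"(location\.(hash|search|href)|document\.(URL|referrer)|window\.name)"
SINKS = r"(innerHTML|outerHTML|document\.write|eval|setTimeout|insertAdjacentHTML)"

def suspicious_scripts(html):
    """Return inline scripts that mention both a DOM source and a DOM sink."""
    findings = []
    for script in re.findall(r"<script[^>]*>(.*?)</script>", html, re.S | re.I):
        if re.search(SOURCES, script) and re.search(SINKS, script):
            findings.append(script.strip())
    return findings

page = """
<html><body>
<script>
  var q = location.hash.substring(1);
  document.getElementById('out').innerHTML = q;   // potential DOM-based XSS
</script>
</body></html>
"""
for snippet in suspicious_scripts(page):
    print("needs dynamic confirmation:\n", snippet)
```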
{
"docid": "046245929e709ef2935c9413619ab3d7",
"text": "In recent years, there has been a growing intensity of competition in virtually all areas of business in both markets upstream for raw materials such as components, supplies, capital and technology and markets downstream for consumer goods and services. This paper examines the relationships among generic strategy, competitive advantage, and organizational performance. Firstly, the nature of generic strategies, competitive advantage, and organizational performance is examined. Secondly, the relationship between generic strategies and competitive advantage is analyzed. Finally, the implications of generic strategies, organizational performance, performance measures and competitive advantage are studied. This study focuses on: (i) the relationship of generic strategy and organisational performance in Australian manufacturing companies participating in the “Best Practice Program in Australia”, (ii) the relationship between generic strategies and competitive advantage, and (iii) the relationship among generic strategies, competitive advantage and organisational performance. 1999 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d8484cc7973882777f65a28fcdbb37be",
"text": "The reported power analysis attacks on hardware implementations of the MICKEY family of streams ciphers require a large number of power traces. The primary motivation of our work is to break an implementation of the cipher when only a limited number of power traces can be acquired by an adversary. In this paper, we propose a novel approach to mount a Template attack (TA) on MICKEY-128 2.0 stream cipher using Particle Swarm Optimization (PSO) generated initialization vectors (IVs). In addition, we report the results of power analysis against a MICKEY-128 2.0 implementation on a SASEBO-GII board to demonstrate our proposed attack strategy. The captured power traces were analyzed using Least Squares Support Vector Machine (LS-SVM) learning algorithm based binary classifiers to segregate the power traces into the respective Hamming distance (HD) classes. The outcomes of the experiments reveal that our proposed power analysis attack strategy requires a much lesser number of IVs compared to a standard Correlation Power Analysis (CPA) attack on MICKEY-128 2.0 during the key loading phase of the cipher.",
"title": ""
},
{
"docid": "2a1eea68ab90c34fbe90e8f6ac28059e",
"text": "This article discusses how to avoid biased questions in survey instruments, how to motivate people to complete instruments and how to evaluate instruments. In the context of survey evaluation, we discuss how to assess survey reliability i.e. how reproducible a survey's data is and survey validity i.e. how well a survey instrument measures what it sets out to measure.",
"title": ""
},
{
"docid": "2d845ef6552b77fb4dd0d784233aa734",
"text": "The timing of the origin of arthropods in relation to the Cambrian explosion is still controversial, as are the timing of other arthropod macroevolutionary events such as the colonization of land and the evolution of flight. Here we assess the power of a phylogenomic approach to shed light on these major events in the evolutionary history of life on earth. Analyzing a large phylogenomic dataset (122 taxa, 62 genes) with a Bayesian-relaxed molecular clock, we simultaneously reconstructed the phylogenetic relationships and the absolute times of divergences among the arthropods. Simulations were used to test whether our analysis could distinguish between alternative Cambrian explosion scenarios with increasing levels of autocorrelated rate variation. Our analyses support previous phylogenomic hypotheses and simulations indicate a Precambrian origin of the arthropods. Our results provide insights into the 3 independent colonizations of land by arthropods and suggest that evolution of insect wings happened much earlier than the fossil record indicates, with flight evolving during a period of increasing oxygen levels and impressively large forests. These and other findings provide a foundation for macroevolutionary and comparative genomic study of Arthropoda.",
"title": ""
},
{
"docid": "f90cb4fdf664e24ceeb3727eda3543b3",
"text": "The self-powering, long-lasting, and functional features of embedded wireless microsensors appeal to an ever-expanding application space in monitoring, control, and diagnosis for military, commercial, industrial, space, and biomedical applications. Extended operational life, however, is difficult to achieve when power-intensive functions like telemetry draw whatever little energy is available from energy-storage microdevices like thin-film lithium-ion batteries and/or microscale fuel cells. Harvesting ambient energy overcomes this deficit by continually replenishing the energy reservoir and indefinitely extending system lifetime. In this paper, a prototyped circuit that precharges, detects, and synchronizes to a variable voltage-constrained capacitor verifies experimentally that harvesting energy electrostatically from vibrations is possible. Experimental results show that, on average (excluding gate-drive and control losses), the system harvests 9.7 nJ/cycle by investing 1.7 nJ/cycle, yielding a net energy gain of approximately 8 nJ/cycle at an average of 1.6 ¿W (in typical applications) for every 200 pF variation. Projecting and including reasonable gate-drive and controller losses reduces the net energy gain to 6.9 nJ/cycle at 1.38 ¿W.",
"title": ""
},
{
"docid": "b76f10452e4a4b0d7408e6350b263022",
"text": "In this paper, a Y-Δ hybrid connection for a high-voltage induction motor is described. Low winding harmonic content is achieved by careful consideration of the interaction between the Y- and Δ-connected three-phase winding sets so that the magnetomotive force (MMF) in the air gap is close to sinusoid. Essentially, the two winding sets operate in a six-phase mode. This paper goes on to verify that the fundamental distribution coefficient for the stator MMF is enhanced compared to a standard three-phase winding set. The design method for converting a conventional double-layer lap winding in a high-voltage induction motor into a Y-Δ hybrid lap winding is described using standard winding theory as often applied to small- and medium-sized motors. The main parameters addressed when designing the winding are the conductor wire gauge, coil turns, and parallel winding branches in the Y and Δ connections. A winding design scheme for a 1250-kW 6-kV induction motor is put forward and experimentally validated; the results show that the efficiency can be raised effectively without increasing the cost.",
"title": ""
},
{
"docid": "78c6ca3a62314b1033470a03c90619be",
"text": "Metabolomics is the comprehensive study of small molecule metabolites in biological systems. By assaying and analyzing thousands of metabolites in biological samples, it provides a whole picture of metabolic status and biochemical events happening within an organism and has become an increasingly powerful tool in the disease research. In metabolomics, it is common to deal with large amounts of data generated by nuclear magnetic resonance (NMR) and/or mass spectrometry (MS). Moreover, based on different goals and designs of studies, it may be necessary to use a variety of data analysis methods or a combination of them in order to obtain an accurate and comprehensive result. In this review, we intend to provide an overview of computational and statistical methods that are commonly applied to analyze metabolomics data. The review is divided into five sections. The first two sections will introduce the background and the databases and resources available for metabolomics research. The third section will briefly describe the principles of the two main experimental methods that produce metabolomics data: MS and NMR, followed by the fourth section that describes the preprocessing of the data from these two approaches. In the fifth and the most important section, we will review four main types of analysis that can be performed on metabolomics data with examples in metabolomics. These are unsupervised learning methods, supervised learning methods, pathway analysis methods and analysis of time course metabolomics data. We conclude by providing a table summarizing the principles and tools that we discussed in this review.",
"title": ""
},
{
"docid": "9292d1a97913257cfd1e72645969a988",
"text": "A digital PLL employing an adaptive tracking technique and a novel frequency acquisition scheme achieves a wide tracking range and fast frequency acquisition. The test chip fabricated in a 0.13 mum CMOS process operates from 0.6 GHz to 2 GHz and achieves better than plusmn3200 ppm frequency tracking range when the reference clock is modulated with a 1 MHz sine wave.",
"title": ""
},
{
"docid": "c3473e7fe7b46628d384cbbe10bfe74c",
"text": "STUDY OBJECTIVE\nTo (1) examine the prevalence of abnormal genital findings in a large cohort of female children presenting with concerns of sexual abuse; and (2) explore how children use language when describing genital contact and genital anatomy.\n\n\nDESIGN\nIn this prospective study we documented medical histories and genital findings in all children who met inclusion criteria. Findings were categorized as normal, indeterminate, and diagnostic of trauma. Logistic regression analysis was used to determine the effects of key covariates on predicting diagnostic findings. Children older than 4 years of age were asked questions related to genital anatomy to assess their use of language.\n\n\nSETTING\nA regional, university-affiliated sexual abuse clinic.\n\n\nPARTICIPANTS\nFemale children (N = 1500) aged from birth to 17 years (inclusive) who received an anogenital examination with digital images.\n\n\nINTERVENTIONS AND MAIN OUTCOME MEASURES\nPhysical exam findings, medical history, and the child's use of language were recorded.\n\n\nRESULTS\nPhysical findings were determined in 99% (n = 1491) of patients. Diagnostic findings were present in 7% (99 of 1491). After adjusting for age, acuity, and type of sexual contact reported by the adult, the estimated odds of diagnostic findings were 12.5 times higher for children reporting genital penetration compared with those who reported only contact (95% confidence interval, 3.46-45.34). Finally, children used the word \"inside\" to describe contact other than penetration of the vaginal canal (ie, labial penetration).\n\n\nCONCLUSION\nA history of penetration by the child was the primary predictor of diagnostic findings. Interpretation of children's use of \"inside\" might explain the low prevalence of diagnostic findings and warrants further study.",
"title": ""
},
{
"docid": "3e94030eb03806d79c5e66aa90408fbb",
"text": "The sampling rate of the sensors in wireless sensor networks (WSNs) determines the rate of its energy consumption since most of the energy is used in sampling and transmission. To save the energy in WSNs and thus prolong the network lifetime, we present a novel approach based on the compressive sensing (CS) framework to monitor 1-D environmental information in WSNs. The proposed technique is based on CS theory to minimize the number of samples taken by sensor nodes. An innovative feature of our approach is a new random sampling scheme that considers the causality of sampling, hardware limitations and the trade-off between the randomization scheme and computational complexity. In addition, a sampling rate indicator (SRI) feedback scheme is proposed to enable the sensor to adjust its sampling rate to maintain an acceptable reconstruction performance while minimizing the number of samples. A significant reduction in the number of samples required to achieve acceptable reconstruction error is demonstrated using real data gathered by a WSN located in the Hessle Anchorage of the Humber Bridge.",
"title": ""
},
{
"docid": "c94c9913634f715049d90a55282908ca",
"text": "Indirect field oriented control for induction machine requires the knowledge of rotor time constant to estimate the rotor flux linkages. Here an online method for estimating the rotor time constant and stator resistance is presented. The problem is formulated as a nonlinear least-squares problem and a procedure is presented that guarantees the minimum is found in a finite number of steps. Experimental results are presented. Two different approaches to implementing the algorithm online are discussed. Simulations are also presented to show how the algorithm works online",
"title": ""
},
{
"docid": "670ade2a60809bd501b3d365d173f4ab",
"text": "Attack graph is a tool to analyze multi-stage, multi-host attack scenarios in a network. It is a complete graph where each attack scenario is depicted by an attack path which is essentially a series of exploits. Each exploit in the series satisfies the pre-conditions for subsequent exploits and makes a casual relationship among them. One of the intrinsic problem with the generation of such a full attack graph is its scalability. In this work, an approach based on planner has been proposed for time-efficient scalable representation of the attack graphs. A planner is a special purpose search algorithm from artificial intelligence domain, used for finding out solutions within a large state space without suffering state space explosion. A case study has also been presented and the proposed methodology is found to be efficient than some of the earlier reported works.",
"title": ""
}
] | scidocsrr |
a48a2385c64de73ec6837650edccc60c | Privacy Preserving Social Network Data Publication | [
{
"docid": "6fa6ce80c183cf9b36e56011490c0504",
"text": "Lipschitz extensions were recently proposed as a tool for designing node differentially private algorithms. However, efficiently computable Lipschitz extensions were known only for 1-dimensional functions (that is, functions that output a single real value). In this paper, we study efficiently computable Lipschitz extensions for multi-dimensional (that is, vector-valued) functions on graphs. We show that, unlike for 1-dimensional functions, Lipschitz extensions of higher-dimensional functions on graphs do not always exist, even with a non-unit stretch. We design Lipschitz extensions with small stretch for the sorted degree list and for the degree distribution of a graph. Crucially, our extensions are efficiently computable. We also develop new tools for employing Lipschitz extensions in the design of differentially private algorithms. Specifically, we generalize the exponential mechanism, a widely used tool in data privacy. The exponential mechanism is given a collection of score functions that map datasets to real values. It attempts to return the name of the function with nearly minimum value on the data set. Our generalized exponential mechanism provides better accuracy when the sensitivity of an optimal score function is much smaller than the maximum sensitivity of score functions. We use our Lipschitz extension and the generalized exponential mechanism to design a nodedifferentially private algorithm for releasing an approximation to the degree distribution of a graph. Our algorithm is much more accurate than algorithms from previous work. ∗Computer Science and Engineering Department, Pennsylvania State University. {asmith,sofya}@cse.psu.edu. Supported by NSF awards CDI-0941553 and IIS-1447700 and a Google Faculty Award. Part of this work was done while visiting Boston University’s Hariri Institute for Computation. 1 ar X iv :1 50 4. 07 91 2v 1 [ cs .C R ] 2 9 A pr 2 01 5",
"title": ""
}
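For readers unfamiliar with the building block being generalized here, below is a minimal sketch of the standard exponential mechanism for privately picking a low-score candidate under a privacy budget epsilon, with scores to be minimized and a common global sensitivity. The candidate scores are fabricated; the generalized mechanism in the paper additionally exploits the fact that an optimal score function may have much smaller sensitivity than the worst one.

```python
import numpy as np

def exponential_mechanism(scores, sensitivity, epsilon, rng=None):
    """Sample an index with probability proportional to exp(-eps*score/(2*sens)).

    Lower scores are better; `sensitivity` is the (common) global sensitivity
    of every score function.
    """
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    # Subtracting the minimum only improves numerical stability; it does not
    # change the normalized probabilities.
    logits = -epsilon * (scores - scores.min()) / (2.0 * sensitivity)
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Toy example: pick the candidate with (nearly) the smallest score, privately.
scores = [5.0, 1.0, 1.2, 9.0]
rng = np.random.default_rng(0)
picks = [exponential_mechanism(scores, sensitivity=1.0, epsilon=2.0, rng=rng)
         for _ in range(1000)]
print("selection frequencies:", np.bincount(picks, minlength=len(scores)) / 1000)
```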
] | [
{
"docid": "5c90f5a934a4d936257467a14a058925",
"text": "We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the prior distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex",
"title": ""
},
{
"docid": "19fe8c6452dd827ffdd6b4c6e28bc875",
"text": "Motivation for the investigation of position and waypoint controllers is the demand for Unattended Aerial Systems (UAS) capable of fulfilling e.g. surveillance tasks in contaminated or in inaccessible areas. Hence, this paper deals with the development of a 2D GPS-based position control system for 4 Rotor Helicopters able to keep positions above given destinations as well as to navigate between waypoints while minimizing trajectory errors. Additionally, the novel control system enables permanent full speed flight with reliable altitude keeping considering that the resulting lift is decreasing while changing pitch or roll angles for position control. In the following chapters the control procedure for position control and waypoint navigation is described. The dynamic behavior was simulated by means of Matlab/Simulink and results are shown. Further, the control strategies were implemented on a flight demonstrator for validation, experimental results are provided and a comparison is discussed.",
"title": ""
},
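To make the lift-compensation remark concrete: when the vehicle tilts to command a horizontal acceleration, the vertical component of thrust shrinks by cos(pitch)·cos(roll), so altitude hold has to scale the collective thrust accordingly. The sketch below is a generic PD position loop with that correction, using assumed gains, mass, and sign conventions, not the controller from this paper.

```python
import math

MASS = 1.2      # kg, assumed vehicle mass
G = 9.81
KP, KD = 1.5, 0.9          # assumed PD gains for horizontal position
MAX_TILT = math.radians(25)

def position_controller(pos_err, vel, yaw=0.0):
    """Map N/E position error (m) and velocity (m/s) to pitch/roll/thrust."""
    # Desired horizontal accelerations from a PD law.
    acc_n = KP * pos_err[0] - KD * vel[0]
    acc_e = KP * pos_err[1] - KD * vel[1]
    # Convert accelerations to tilt angles in the body frame (small-angle form).
    pitch = max(-MAX_TILT, min(MAX_TILT, (acc_n * math.cos(yaw) + acc_e * math.sin(yaw)) / G))
    roll = max(-MAX_TILT, min(MAX_TILT, (acc_e * math.cos(yaw) - acc_n * math.sin(yaw)) / G))
    # Altitude hold: compensate the loss of vertical thrust while tilted.
    thrust = MASS * G / (math.cos(pitch) * math.cos(roll))
    return pitch, roll, thrust

pitch, roll, thrust = position_controller(pos_err=(4.0, -2.0), vel=(0.5, 0.0))
print(f"pitch={math.degrees(pitch):.1f} deg, roll={math.degrees(roll):.1f} deg, "
      f"thrust={thrust:.2f} N (hover needs {MASS*G:.2f} N)")
```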
{
"docid": "ff93e77bb0e0b24a06780a05cc16123d",
"text": "Models in science may be used for various purposes: organizing data, synthesizing information, and making predictions. However, the value of model predictions is undermined by their uncertainty, which arises primarily from the fact that our models of complex natural systems are always open. Models can never fully specify the systems that they describe, and therefore their predictions are always subject to uncertainties that we cannot fully specify. Moreover, the attempt to make models capture the complexities of natural systems leads to a paradox: the more we strive for realism by incorporating as many as possible of the different processes and parameters that we believe to be operating in the system, the more difficult it is for us to know if our tests of the model are meaningful. A complex model may be more realistic, yet it is ironic that as we add more factors to a model, the certainty of its predictions may decrease even as our intuitive faith in the model increases. For this and other reasons, model output should not be viewed as an accurate prediction of the future state of the system. Short timeframe model output can and should be used to evaluate models and suggest avenues for future study. Model output can also generate “what if” scenarios that can help to evaluate alternative courses of action (or inaction), including worst-case and best-case outcomes. But scientists should eschew long-range deterministic predictions, which are likely to be erroneous and may damage the credibility of the communities that generate them.",
"title": ""
},
{
"docid": "d00cdbbe08a56952685118e68c0b9115",
"text": "s R esum es Canadian Undergraduate Mathematics Conference 1998 | Part 3 The Brachistochrone Problem Nils Johnson The University of British Columbia The brachistochrone problem is to nd the curve between two points down which a bead will slide in the shortest amount of time, neglecting friction and assuming conservation of energy. To solve the problem, an integral is derived that computes the amount of time it would take a bead to slide down a given curve y(x). This integral is minimized over all possible curves and yields the di erential equation y(1 + (y)) = k as a constraint for the minimizing function y(x). Solving this di erential equation shows that a cycloid (the path traced out by a point on the rim of a rolling wheel) is the solution to the brachistochrone problem. First proposed in 1696 by Johann Bernoulli, this problem is credited with having led to the development of the calculus of variations. The solution presented assumes knowledge of one-dimensional calculus and elementary di erential equations. The Theory of Error-Correcting Codes Dennis Hill University of Ottawa Coding theory is concerned with the transfer of data. There are two issues of fundamental importance. First, the data must be transferred accurately. But equally important is that the transfer be done in an e cient manner. It is the interplay of these two issues which is the core of the theory of error-correcting codes. Typically, the data is represented as a string of zeros and ones. Then a code consists of a set of such strings, each of the same length. The most fruitful approach to the subject is to consider the set f0; 1g as a two-element eld. We will then only",
"title": ""
},
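Since the abstract sketches the variational argument only in words, here is the derivation in compact form (standard textbook material; the constant k below absorbs the physical constants, which the abstract leaves implicit). It reproduces the quoted differential equation and the cycloid solution.

```latex
% Travel time along y(x), with speed v = \sqrt{2gy} from energy conservation (y measured downward):
\[
T[y] \;=\; \int_{x_1}^{x_2} \frac{\sqrt{1+(y')^2}}{\sqrt{2gy}}\,dx .
\]
% The integrand F(y, y') has no explicit x-dependence, so the Beltrami identity
% F - y' \partial F / \partial y' = const applies:
\[
\frac{\sqrt{1+(y')^2}}{\sqrt{2gy}} \;-\; \frac{(y')^2}{\sqrt{2gy}\,\sqrt{1+(y')^2}}
\;=\; \frac{1}{\sqrt{2gy}\,\sqrt{1+(y')^2}} \;=\; C ,
\]
% which rearranges to the equation quoted in the abstract,
\[
y\left(1+(y')^2\right) \;=\; \frac{1}{2gC^2} \;=\; k .
\]
% Its solution is the cycloid, in parametric form
\[
x(\theta) = \tfrac{k}{2}\,(\theta-\sin\theta), \qquad
y(\theta) = \tfrac{k}{2}\,(1-\cos\theta).
\]
```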
{
"docid": "476aa14f6b71af480e8ab4747849d7e3",
"text": "The present study explored the relationship between risky cybersecurity behaviours, attitudes towards cybersecurity in a business environment, Internet addiction, and impulsivity. 538 participants in part-time or full-time employment in the UK completed an online questionnaire, with responses from 515 being used in the data analysis. The survey included an attitude towards cybercrime and cybersecurity in business scale, a measure of impulsivity, Internet addiction and a 'risky' cybersecurity behaviours scale. The results demonstrated that Internet addiction was a significant predictor for risky cybersecurity behaviours. A positive attitude towards cybersecurity in business was negatively related to risky cybersecurity behaviours. Finally, the measure of impulsivity revealed that both attentional and motor impulsivity were both significant positive predictors of risky cybersecurity behaviours, with non-planning being a significant negative predictor. The results present a further step in understanding the individual differences that may govern good cybersecurity practices, highlighting the need to focus directly on more effective training and awareness mechanisms.",
"title": ""
},
{
"docid": "2526915745dda9026836347292f79d12",
"text": "I show that a functional representation of self-similarity (as the one occurring in fractals) is provided by squeezed coherent states. In this way, the dissipative model of brain is shown to account for the self-similarity in brain background activity suggested by power-law distributions of power spectral densities of electrocorticograms. I also briefly discuss the action-perception cycle in the dissipative model with reference to intentionality in terms of trajectories in the memory state space.",
"title": ""
},
{
"docid": "f095118c63d1531ebdbaec3565b0d91f",
"text": "BACKGROUND\nSystematic reviews are most helpful if they are up-to-date. We did a systematic review of strategies and methods describing when and how to update systematic reviews.\n\n\nOBJECTIVES\nTo identify, describe and assess strategies and methods addressing: 1) when to update systematic reviews and 2) how to update systematic reviews.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE (1966 to December 2005), PsycINFO, the Cochrane Methodology Register (Issue 1, 2006), and hand searched the 2005 Cochrane Colloquium proceedings.\n\n\nSELECTION CRITERIA\nWe included methodology reports, updated systematic reviews, commentaries, editorials, or other short reports describing the development, use, or comparison of strategies and methods for determining the need for updating or updating systematic reviews in healthcare.\n\n\nDATA COLLECTION AND ANALYSIS\nWe abstracted information from each included report using a 15-item questionnaire. The strategies and methods for updating systematic reviews were assessed and compared descriptively with respect to their usefulness, comprehensiveness, advantages, and disadvantages.\n\n\nMAIN RESULTS\nFour updating strategies, one technique, and two statistical methods were identified. Three strategies addressed steps for updating and one strategy presented a model for assessing the need to update. One technique discussed the use of the \"entry date\" field in bibliographic searching. Statistical methods were cumulative meta-analysis and predicting when meta-analyses are outdated.\n\n\nAUTHORS' CONCLUSIONS\nLittle research has been conducted on when and how to update systematic reviews and the feasibility and efficiency of the identified approaches is uncertain. These shortcomings should be addressed in future research.",
"title": ""
},
{
"docid": "940e3a77d9dbe1da2fb2f38ae768b71e",
"text": "Layer-by-layer deposition of materials to manufacture parts—better known as three-dimensional (3D) printing or additive manufacturing—has been flourishing as a fabrication process in the past several years and now can create complex geometries for use as models, assembly fixtures, and production molds. Increasing interest has focused on the use of this technology for direct manufacturing of production parts; however, it remains generally limited to single-material fabrication, which can limit the end-use functionality of the fabricated structures. The next generation of 3D printing will entail not only the integration of dissimilar materials but the embedding of active components in order to deliver functionality that was not possible previously. Examples could include arbitrarily shaped electronics with integrated microfluidic thermal management and intelligent prostheses custom-fit to the anatomy of a specific patient. We review the state of the art in multiprocess (or hybrid) 3D printing, in which complementary processes, both novel and traditional, are combined to advance the future of manufacturing.",
"title": ""
},
{
"docid": "9a3a73f35b27d751f237365cc34c8b28",
"text": "The development of brain metastases in patients with advanced stage melanoma is common, but the molecular mechanisms responsible for their development are poorly understood. Melanoma brain metastases cause significant morbidity and mortality and confer a poor prognosis; traditional therapies including whole brain radiation, stereotactic radiotherapy, or chemotherapy yield only modest increases in overall survival (OS) for these patients. While recently approved therapies have significantly improved OS in melanoma patients, only a small number of studies have investigated their efficacy in patients with brain metastases. Preliminary data suggest that some responses have been observed in intracranial lesions, which has sparked new clinical trials designed to evaluate the efficacy in melanoma patients with brain metastases. Simultaneously, recent advances in our understanding of the mechanisms of melanoma cell dissemination to the brain have revealed novel and potentially therapeutic targets. In this review, we provide an overview of newly discovered mechanisms of melanoma spread to the brain, discuss preclinical models that are being used to further our understanding of this deadly disease and provide an update of the current clinical trials for melanoma patients with brain metastases.",
"title": ""
},
{
"docid": "05127dab049ef7608932913f66db0990",
"text": "This paper presents a hybrid tele-manipulation system, comprising of a sensorized 3-D-printed soft robotic gripper and a soft fabric-based haptic glove that aim at improving grasping manipulation and providing sensing feedback to the operators. The flexible 3-D-printed soft robotic gripper broadens what a robotic gripper can do, especially for grasping tasks where delicate objects, such as glassware, are involved. It consists of four pneumatic finger actuators, casings with through hole for housing the actuators, and adjustable base. The grasping length and width can be configured easily to suit a variety of objects. The soft haptic glove is equipped with flex sensors and soft pneumatic haptic actuator, which enables the users to control the grasping, to determine whether the grasp is successful, and to identify the grasped object shape. The fabric-based soft pneumatic haptic actuator can simulate haptic perception by producing force feedback to the users. Both the soft pneumatic finger actuator and haptic actuator involve simple fabrication technique, namely 3-D-printed approach and fabric-based approach, respectively, which reduce fabrication complexity as compared to the steps involved in a traditional silicone-based approach. The sensorized soft robotic gripper is capable of picking up and holding a wide variety of objects in this study, ranging from lightweight delicate object weighing less than 50 g to objects weighing 1100 g. The soft haptic actuator can produce forces of up to 2.1 N, which is more than the minimum force of 1.5 N needed to stimulate haptic perception. The subjects are able to differentiate the two objects with significant shape differences in the pilot test. Compared to the existing soft grippers, this is the first soft sensorized 3-D-printed gripper, coupled with a soft fabric-based haptic glove that has the potential to improve the robotic grasping manipulation by introducing haptic feedback to the users.",
"title": ""
},
{
"docid": "a58769ca02b9409a983ac6d7ba69f0be",
"text": "In this paper, we describe an approach for the automatic medical annotation task of the 2008 CLEF cross-language image retrieval campaign (ImageCLEF). The data comprise 12076 fully annotated images according to the IRMA code. This work is focused on the process of feature extraction from images and hierarchical multi-label classification. To extract features from the images we used a technique called: local distribution of edges. With this techniques each image was described with 80 variables. The goal of the classification task was to classify an image according to the IRMA code. The IRMA code is organized hierarchically. Hence, as classifer we selected an extension of the predictive clustering trees (PCTs) that is able to handle this type of data. Further more, we constructed ensembles (Bagging and Random Forests) that use PCTs as base classifiers.",
"title": ""
},
{
"docid": "adddebf272a3b0fe510ea04ed7cc3837",
"text": "PURPOSE\nTo explore the association of angiographic nonperfusion in focal and diffuse recalcitrant diabetic macular edema (DME) in diabetic retinopathy (DR).\n\n\nDESIGN\nA retrospective, observational case series of patients with the diagnosis of recalcitrant DME for at least 2 years placed into 1 of 4 cohorts based on the degree of DR.\n\n\nMETHODS\nA total of 148 eyes of 76 patients met the inclusion criteria at 1 academic institution. Ultra-widefield fluorescein angiography (FA) images and spectral-domain optical coherence tomography (SD OCT) images were obtained on all patients. Ultra-widefield FA images were graded for quantity of nonperfusion, which was used to calculate ischemic index. Main outcome measures were mean ischemic index, mean change in central macular thickness (CMT), and mean number of macular photocoagulation treatments over the 2-year study period.\n\n\nRESULTS\nThe mean ischemic index was 47% (SD 25%; range 0%-99%). The mean ischemic index of eyes within Cohorts 1, 2, 3, and 4 was 0%, 34% (range 16%-51%), 53% (range 32%-89%), and 65% (range 47%-99%), respectively. The mean percentage decrease in CMT in Cohorts 1, 2, 3, and 4 were 25.2%, 19.1%, 11.6%, and 7.2%, respectively. The mean number of macular photocoagulation treatments in Cohorts 1, 2, 3, and 4 was 2.3, 4.8, 5.3, and 5.7, respectively.\n\n\nCONCLUSIONS\nEyes with larger areas of retinal nonperfusion and greater severity of DR were found to have the most recalcitrant DME, as evidenced by a greater number of macular photocoagulation treatments and less reduction in SD OCT CMT compared with eyes without retinal nonperfusion. Areas of untreated retinal nonperfusion may generate biochemical mediators that promote ischemia and recalcitrant DME.",
"title": ""
},
{
"docid": "d798bc49068356495074f92b3bfe7a4b",
"text": "This study presents an experimental evaluation of neural networks for nonlinear time-series forecasting. The e!ects of three main factors * input nodes, hidden nodes and sample size, are examined through a simulated computer experiment. Results show that neural networks are valuable tools for modeling and forecasting nonlinear time series while traditional linear methods are not as competent for this task. The number of input nodes is much more important than the number of hidden nodes in neural network model building for forecasting. Moreover, large sample is helpful to ease the over\"tting problem.",
"title": ""
},
{
"docid": "77f5216ede8babf4fb3b2bcbfc9a3152",
"text": "Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.",
"title": ""
},
{
"docid": "fbe1e6b899b1a2e9d53d25e3fa70bd86",
"text": "Previous empirical studies examining the relationship between IT capability and accountingbased measures of firm performance report mixed results. We argue that extant research (1) has relied on aggregate overall measures of the firm’s IT capability, ignoring the specific type and nature of IT capability; and (2) has not fully considered important contextual (environmental) conditions that influence the IT capability-firm performance relationship. Drawing on the resource-based view (RBV), we advance a contingency perspective and propose that IT capabilities’ impact on firm resources is contingent on the “fit” between the type of IT capability/resource a firm possesses and the demands of the environment (industry) in which it competes. Specifically, using publicly available rankings as proxies for two types of IT capabilities (internally-focused and externally-focused capabilities), we empirically examines the degree to which three industry characteristics (dynamism, munificence, and complexity) influence the impact of each type of IT capability on measures of financial performance. After controlling for prior performance, the findings provide general support for the posited contingency model of IT impact. The implications of these findings on practice and research are discussed.",
"title": ""
},
{
"docid": "28b796954834230a0e8218e24bab0d35",
"text": "Oral Squamous Cell Carcinoma (OSCC) is a common type of cancer of the oral epithelium. Despite their high impact on mortality, sufficient screening methods for early diagnosis of OSCC often lack accuracy and thus OSCCs are mostly diagnosed at a late stage. Early detection and accurate outline estimation of OSCCs would lead to a better curative outcome and a reduction in recurrence rates after surgical treatment. Confocal Laser Endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo cell structure analysis. Recent CLE studies showed great prospects for a reliable, real-time ultrastructural imaging of OSCC in situ. We present and evaluate a novel automatic approach for OSCC diagnosis using deep learning technologies on CLE images. The method is compared against textural feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7894 images) from patients diagnosed with OSCC were obtained from 4 specific locations in the oral cavity, including the OSCC lesion. The present approach is found to outperform the state of the art in CLE image recognition with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).",
"title": ""
},
{
"docid": "be48b00ee50c872d42ab95e193ac774b",
"text": "T profitability of remanufacturing systems for different cost, technology, and logistics structures has been extensively investigated in the literature. We provide an alternative and somewhat complementary approach that considers demand-related issues, such as the existence of green segments, original equipment manufacturer competition, and product life-cycle effects. The profitability of a remanufacturing system strongly depends on these issues as well as on their interactions. For a monopolist, we show that there exist thresholds on the remanufacturing cost savings, the green segment size, market growth rate, and consumer valuations for the remanufactured products, above which remanufacturing is profitable. More important, we show that under competition remanufacturing can become an effective marketing strategy, which allows the manufacturer to defend its market share via price discrimination.",
"title": ""
},
{
"docid": "37c35b782bb80d2324749fc71089c445",
"text": "Predicting the stock market is considered to be a very difficult task due to its non-linear and dynamic nature. Our proposed system is designed in such a way that even a layman can use it. It reduces the burden on the user. The user’s job is to give only the recent closing prices of a stock as input and the proposed Recommender system will instruct him when to buy and when to sell if it is profitable or not to buy share in case if it is not profitable to do trading. Using soft computing based techniques is considered to be more suitable for predicting trends in stock market where the data is chaotic and large in number. The soft computing based systems are capable of extracting relevant information from large sets of data by discovering hidden patterns in the data. Here regression trees are used for dimensionality reduction and clustering is done with the help of Self Organizing Maps (SOM). The proposed system is designed to assist stock market investors identify possible profit-making opportunities and also help in developing a better understanding on how to extract the relevant information from stock price data. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a84b5fa43c17eebd9cc3ddf2a0d2129e",
"text": "The lack of realistic and open benchmarking datasets for pedestrian visual-inertial odometry has made it hard to pinpoint differences in published methods. Existing datasets either lack a full six degree-of-freedom ground-truth or are limited to small spaces with optical tracking systems. We take advantage of advances in pure inertial navigation, and develop a set of versatile and challenging real-world computer vision benchmark sets for visual-inertial odometry. For this purpose, we have built a test rig equipped with an iPhone, a Google Pixel Android phone, and a Google Tango device. We provide a wide range of raw sensor data that is accessible on almost any modern-day smartphone together with a high-quality ground-truth track. We also compare resulting visual-inertial tracks from Google Tango, ARCore, and Apple ARKit with two recent methods published in academic forums. The data sets cover both indoor and outdoor cases, with stairs, escalators, elevators, office environments, a shopping mall, and metro station.",
"title": ""
},
{
"docid": "80477fdab96ae761dbbb7662b87e82a0",
"text": "This article provides minimum requirements for having confidence in the accuracy of EC50/IC50 estimates. Two definitions of EC50/IC50s are considered: relative and absolute. The relative EC50/IC50 is the parameter c in the 4-parameter logistic model and is the concentration corresponding to a response midway between the estimates of the lower and upper plateaus. The absolute EC50/IC50 is the response corresponding to the 50% control (the mean of the 0% and 100% assay controls). The guidelines first describe how to decide whether to use the relative EC50/IC50 or the absolute EC50/IC50. Assays for which there is no stable 100% control must use the relative EC50/IC50. Assays having a stable 100% control but for which there may be more than 5% error in the estimate of the 50% control mean should use the relative EC50/IC50. Assays that can be demonstrated to produce an accurate and stable 100% control and less than 5% error in the estimate of the 50% control mean may gain efficiency as well as accuracy by using the absolute EC50/IC50. Next, the guidelines provide rules for deciding when the EC50/IC50 estimates are reportable. The relative EC50/IC50 should only be used if there are at least two assay concentrations beyond the lower and upper bend points. The absolute EC50/IC50 should only be used if there are at least two assay concentrations whose predicted response is less than 50% and two whose predicted response is greater than 50%. A wide range of typical assay conditions are considered in the development of the guidelines.",
"title": ""
}
] | scidocsrr |
eacc5b915ce11792286986f305652163 | Fuzzy Filter Design for Nonlinear Systems in Finite-Frequency Domain | [
{
"docid": "239644f4ecd82758ca31810337a10fda",
"text": "This paper discusses a design of stable filters withH∞ disturbance attenuation of Takagi–Sugeno fuzzy systemswith immeasurable premise variables. When we consider the filter design of Takagi–Sugeno fuzzy systems, the selection of premise variables plays an important role. If the premise variable is the state of the system, then a fuzzy system describes a wide class of nonlinear systems. In this case, however, a filter design of fuzzy systems based on parallel distributed compensator idea is infeasible. To avoid such a difficulty, we consider the premise variables uncertainties. Then we consider a robust H∞ filtering problem for such an uncertain system. A solution of the problem is given in terms of linear matrix inequalities (LMIs). Some numerical examples are given to illustrate our theory. © 2008 Elsevier B.V. All rights reserved.",
"title": ""
}
] | [
{
"docid": "eaf16b3e9144426aed7edc092ad4a649",
"text": "In order to use a synchronous dynamic RAM (SDRAM) as the off-chip memory of an H.264/AVC encoder, this paper proposes an efficient SDRAM memory controller with an asynchronous bridge. With the proposed architecture, the SDRAM bandwidth is increased by making the operation frequency of an external SDRAM higher than that of the hardware accelerators of an H.264/AVC encoder. Experimental results show that the encoding speed is increased by 30.5% when the SDRAM clock frequency is increased from 100 MHz to 200 MHz while the H.264/AVC hardware accelerators operate at 100 MHz.",
"title": ""
},
{
"docid": "42127829aebaaaa4a4ac6c7e9417feaf",
"text": "The study was to compare treatment preference, efficacy, and tolerability of sildenafil citrate (sildenafil) and tadalafil for treating erectile dysfunction (ED) in Chinese men naοve to phosphodiesterase 5 (PDE5) inhibitor therapies. This multicenter, randomized, open-label, crossover study evaluated whether Chinese men with ED preferred 20-mg tadalafil or 100-mg sildenafil. After a 4 weeks baseline assessment, 383 eligible patients were randomized to sequential 20-mg tadalafil per 100-mg sildenafil or vice versa for 8 weeks respectively and then chose which treatment they preferred to take during the 8 weeks extension. Primary efficacy was measured by Question 1 of the PDE5 Inhibitor Treatment Preference Questionnaire (PITPQ). Secondary efficacy was analyzed by PITPQ Question 2, the International Index of Erectile Function (IIEF) erectile function (EF) domain, sexual encounter profile (SEP) Questions 2 and 3, and the Drug Attributes Questionnaire. Three hundred and fifty men (91%) completed the randomized treatment phase. Two hundred and forty-two per 350 (69.1%) patients preferred 20-mg tadalafil, and 108/350 (30.9%) preferred 100-mg sildenafil (P < 0.001) as their treatment in the 8 weeks extension. Ninety-two per 242 (38%) patients strongly preferred tadalafil and 37/108 (34.3%) strongly the preferred sildenafil. The SEP2 (penetration), SEP3 (successful intercourse), and IIEF-EF domain scores were improved in both tadalafil and sildenafil treatment groups. For patients who preferred tadalafil, getting an erection long after taking the medication was the most reported reason for tadalafil preference. The only treatment-emergent adverse event reported by > 2% of men was headache. After tadalafil and sildenafil treatments, more Chinese men with ED naοve to PDE5 inhibitor preferred tadalafil. Both sildenafil and tadalafil treatments were effective and safe.",
"title": ""
},
{
"docid": "5e952c10a30baffc511bb3ffe86cd4a8",
"text": "Chitin and its deacetylated derivative chitosan are natural polymers composed of randomly distributed -(1-4)linked D-glucosamine (deacetylated unit) and N-acetyl-D-glucosamine (acetylated unit). Chitin is insoluble in aqueous media while chitosan is soluble in acidic conditions due to the free protonable amino groups present in the D-glucosamine units. Due to their natural origin, both chitin and chitosan can not be defined as a unique chemical structure but as a family of polymers which present a high variability in their chemical and physical properties. This variability is related not only to the origin of the samples but also to their method of preparation. Chitin and chitosan are used in fields as different as food, biomedicine and agriculture, among others. The success of chitin and chitosan in each of these specific applications is directly related to deep research into their physicochemical properties. In recent years, several reviews covering different aspects of the applications of chitin and chitosan have been published. However, these reviews have not taken into account the key role of the physicochemical properties of chitin and chitosan in their possible applications. The aim of this review is to highlight the relationship between the physicochemical properties of the polymers and their behaviour. A functional characterization of chitin and chitosan regarding some biological properties and some specific applications (drug delivery, tissue engineering, functional food, food preservative, biocatalyst immobilization, wastewater treatment, molecular imprinting and metal nanocomposites) is presented. The molecular mechanism of the biological properties such as biocompatibility, mucoadhesion, permeation enhancing effect, anticholesterolemic, and antimicrobial has been up-",
"title": ""
},
{
"docid": "d258a14fc9e64ba612f2c8ea77f85d08",
"text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.",
"title": ""
},
{
"docid": "90df69e590373e757523f4c92a841d5c",
"text": "A new impedance-based stability criterion was proposed for a grid-tied inverter system based on a Norton equivalent circuit of the inverter [18]. As an extension of the work in [18], this paper shows that using a Thévenin representation of the inverter can lead to the same criterion in [18]. Further, this paper shows that the criterion proposed by Middlebrook can still be used for the inverter systems. The link between the criterion in [18] and the original criterion is the inverse Nyquist stability criterion. The criterion in [18] is easier to be used. Because the current feedback controller and the phase-locked loop of the inverter introduce poles at the origin and right-half plane to the output impedance of the inverter. These poles do not appear in the minor loop gain defined in [18] but in the minor loop gain defined by Middlebrook. Experimental systems are used to verify the proposed analysis.",
"title": ""
},
{
"docid": "93e6194dc3d8922edb672ac12333ea82",
"text": "Sensors including RFID tags have been widely deployed for measuring environmental parameters such as temperature, humidity, oxygen concentration, monitoring the location and velocity of moving objects, tracking tagged objects, and many others. To support effective, efficient, and near real-time phenomena probing and objects monitoring, streaming sensor data have to be gracefully managed in an event processing manner. Different from the traditional events, sensor events come with temporal or spatio-temporal constraints and can be non-spontaneous. Meanwhile, like general event streams, sensor event streams can be generated with very high volumes and rates. Primitive sensor events need to be filtered, aggregated and correlated to generate more semantically rich complex events to facilitate the requirements of up-streaming applications. Motivated by such challenges, many new methods have been proposed in the past to support event processing in sensor event streams. In this chapter, we survey state-of-the-art research on event processing in sensor networks, and provide a broad overview of major topics in Springer Science+Business Media New York 2013 © Managing and Mining Sensor Data, DOI 10.1007/978-1-4614-6309-2_4, C.C. Aggarwal (ed.), 77 78 MANAGING AND MINING SENSOR DATA complex RFID event processing, including event specification languages, event detection models, event processing methods and their optimizations. Additionally, we have presented an open discussion on advanced issues such as processing uncertain and out-of-order sensor events.",
"title": ""
},
{
"docid": "26e79793addc4750dcacc0408764d1e1",
"text": "It has been shown that integration of acoustic and visual information especially in noisy conditions yields improved speech recognition results. This raises the question of how to weight the two modalities in different noise conditions. Throughout this paper we develop a weighting process adaptive to various background noise situations. In the presented recognition system, audio and video data are combined following a Separate Integration (SI) architecture. A hybrid Artificial Neural Network/Hidden Markov Model (ANN/HMM) system is used for the experiments. The neural networks were in all cases trained on clean data. Firstly, we evaluate the performance of different weighting schemes in a manually controlled recognition task with different types of noise. Next, we compare different criteria to estimate the reliability of the audio stream. Based on this, a mapping between the measurements and the free parameter of the fusion process is derived and its applicability is demonstrated. Finally, the possibilities and limitations of adaptive weighting are compared and discussed.",
"title": ""
},
{
"docid": "2d4cb6980cf8716699bdffca6cfed274",
"text": "Advances in laser technology have progressed so rapidly during the past decade that successful treatment of many cutaneous concerns and congenital defects, including vascular and pigmented lesions, tattoos, scars and unwanted haircan be achieved. The demand for laser surgery has increased as a result of the relative ease with low incidence of adverse postoperative sequelae. In this review, the currently available laser systems with cutaneous applications are outlined to identify the various types of dermatologic lasers available, to list their clinical indications and to understand the possible side effects.",
"title": ""
},
{
"docid": "2b310a05b6a0c0fae45a2e15f8d52101",
"text": "Cyber threats and the field of computer cyber defense are gaining more and more an increased importance in our lives. Starting from our regular personal computers and ending with thin clients such as netbooks or smartphones we find ourselves bombarded with constant malware attacks. In this paper we will present a new and novel way in which we can detect these kind of attacks by using elements of modern game theory. We will present the effects and benefits of game theory and we will talk about a defense exercise model that can be used to train cyber response specialists.",
"title": ""
},
{
"docid": "09085fc15308a96cd9441bb0e23e6c1a",
"text": "Convolutional neural networks (CNNs) are able to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision. While excellent performance was achieved for image classification when large amounts of labeled visual data are available, their success for unsupervised tasks such as image retrieval has been moderate so far.Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision with application to matching and instance-level retrieval. To that effect, we propose a new family of patch representations, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision and is significantly faster to train. To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows to simultaneously study descriptor performance for patch and image retrieval.",
"title": ""
},
{
"docid": "5394df4e1d6f52a608bfdab8731da088",
"text": "For over a decade, researchers have devoted much effort to construct theoretical models, such as the Technology Acceptance Model (TAM) and the Expectation Confirmation Model (ECM) for explaining and predicting user behavior in IS acceptance and continuance. Another model, the Cognitive Model (COG), was proposed for continuance behavior; it combines some of the variables used in both TAM and ECM. This study applied the technique of structured equation modeling with multiple group analysis to compare the TAM, ECM, and COG models. Results indicate that TAM, ECM, and COG have quite different assumptions about the underlying constructs that dictate user behavior and thus have different explanatory powers. The six constructs in the three models were synthesized to propose a new Technology Continuance Theory (TCT). A major contribution of TCT is that it combines two central constructs: attitude and satisfaction into one continuance model, and has applicability for users at different stages of the adoption life cycle, i.e., initial, short-term and long-term users. The TCT represents a substantial improvement over the TAM, ECM and COG models in terms of both breadth of applicability and explanatory power.",
"title": ""
},
{
"docid": "e4b54824b2528b66e28e82ad7d496b36",
"text": "Objective: In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. Methods: The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc). Results: Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA medical center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Conclusion: Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients’ heterogeneity. Significance: The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.",
"title": ""
},
{
"docid": "7a87ffc98d8bab1ff0c80b9e8510a17d",
"text": "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.",
"title": ""
},
{
"docid": "a39091796e8f679f246baa8dce08f213",
"text": "Resource scheduling in cloud is a challenging job and the scheduling of appropriate resources to cloud workloads depends on the QoS requirements of cloud applications. In cloud environment, heterogeneity, uncertainty and dispersion of resources encounters problems of allocation of resources, which cannot be addressed with existing resource allocation policies. Researchers still face troubles to select the efficient and appropriate resource scheduling algorithm for a specific workload from the existing literature of resource scheduling algorithms. This research depicts a broad methodical literature analysis of resource management in the area of cloud in general and cloud resource scheduling in specific. In this survey, standard methodical literature analysis technique is used based on a complete collection of 110 research papers out of large collection of 1206 research papers published in 19 foremost workshops, symposiums and conferences and 11 prominent journals. The current status of resource scheduling in cloud computing is distributed into various categories. Methodical analysis of resource scheduling in cloud computing is presented, resource scheduling algorithms and management, its types and benefits with tools, resource scheduling aspects and resource distribution policies are described. The literature concerning to thirteen types of resource scheduling algorithms has also been stated. Further, eight types of resource distribution policies are described. Methodical analysis of this research work will help researchers to find the important characteristics of resource scheduling algorithms and also will help to select most suitable algorithm for scheduling a specific workload. Future research directions have also been suggested in this research work.",
"title": ""
},
{
"docid": "048d54f4997bfea726f69cf7f030543d",
"text": "In this article, we have reviewed the state of the art of IPT systems and have explored the suitability of the technology to wirelessly charge battery powered vehicles. the review shows that the IPT technology has merits for stationary charging (when the vehicle is parked), opportunity charging (when the vehicle is stopped for a short period of time, for example, at a bus stop), and dynamic charging (when the vehicle is moving along a dedicated lane equipped with an IPT system). Dynamic wireless charging holds promise to partially or completely eliminate the overnight charging through a compact network of dynamic chargers installed on the roads that would keep the vehicle batteries charged at all times, consequently reducing the range anxiety and increasing the reliability of EVs. Dynamic charging can help lower the price of EVs by reducing the size of the battery pack. Indeed, if the recharging energy is readily available, the batteries do not have to support the whole driving range but only supply power when the IPT system is not available. Depending on the power capability, the use of dynamic charging may increase driving range and reduce the size of the battery pack.",
"title": ""
},
{
"docid": "ffb1610fddb36fa4db5fa3c3dc1e5fad",
"text": "The complex methodology of investigations was applied to study a movement structure on bench press. We have checked the usefulness of multimodular measuring system (SMART-E, BTS company, Italy) and a special device for tracking the position of barbell (pantograph). Software Smart Analyser was used to create a database allowing chosen parameters to be compared. The results from different measuring devices are very similar, therefore the replacement of many devices by one multimodular system is reasonable. In our study, the effect of increased barbell load on the values of muscles activity and bar kinematics during the flat bench press movement was clearly visible. The greater the weight of a barbell, the greater the myoactivity of shoulder muscles and vertical velocity of the bar. It was also confirmed the presence of the so-called sticking point (period) during the concentric phase of the bench press. In this study, the initial velocity of the barbell decreased (v(min)) not only under submaximal and maximal loads (90 and 100% of the one repetition maximum; 1-RM), but also under slightly lighter weights (70 and 80% of 1-RM).",
"title": ""
},
{
"docid": "51e2f490072820230d71f648d70babcb",
"text": "Classification and regression trees are becoming increasingly popular for partitioning data and identifying local structure in small and large datasets. Classification trees include those models in which the dependent variable (the predicted variable) is categorical. Regression trees include those in which it is continuous. This paper discusses pitfalls in the use of these methods and highlights where they are especially suitable. Paper presented at the 1992 Sun Valley, ID, Sawtooth/SYSTAT Joint Software Conference.",
"title": ""
},
{
"docid": "8bae8e7937f4c9a492a7030c62d7d9f4",
"text": "Although there is considerable interest in the advance bookings model as a forecasting method in the hotel industry, there has been little research analyzing the use of an advance booking curve in forecasting hotel reservations. The mainstream of advance booking models reviewed in the literature uses only the bookings-on-hand data on a certain day and ignores the previous booking data. This empirical study analyzes the entire booking data set for one year provided by the Hotel ICON in Hong Kong, and identifies the trends and patterns in the data. The analysis demonstrates the use of an advance booking curve in forecasting hotel reservations at property level.",
"title": ""
},
{
"docid": "717dd8e3c699d6cc22ba483002ab0a6f",
"text": "Our analysis of many real-world event based applications has revealed that existing Complex Event Processing technology (CEP), while effective for efficient pattern matching on event stream, is limited in its capability of reacting in realtime to opportunities and risks detected or environmental changes. We are the first to tackle this problem by providing active rule support embedded directly within the CEP engine, henceforth called Active Complex Event Processing technology, or short, Active CEP. We design the Active CEP model and associated rule language that allows rules to be triggered by CEP system state changes and correctly executed during the continuous query process. Moreover we design an Active CEP infrastructure, that integrates the active rule component into the CEP kernel, allowing finegrained and optimized rule processing. We demonstrate the power of Active CEP by applying it to the development of a collaborative project with UMass Medical School, which detects potential threads of infection and reminds healthcare workers to perform hygiene precautions in real-time. 1. BACKGROUND AND MOTIVATION Complex patterns of events often capture exceptions, threats or opportunities occurring across application space and time. Complex Event Processing (CEP) technology has thus increasingly gained popularity for efficiently detecting such event patterns in real-time. For example CEP has been employed by diverse applications ranging from healthcare systems , financial analysis , real-time business intelligence to RFID based surveillance. However, existing CEP technologies [3, 7, 2, 5], while effective for pattern matching, are limited in their capability of supporting active rules. We motivate the need for such capability based on our experience with the development of a real-world hospital infection control system, called HygieneReminder, or short HyReminder. Application: HyReminder. According to the U.S. Centers for Disease Control and Prevention [8], healthcareassociated infections hit 1.7 million people a year in the Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Articles from this volume were presented at The 36th International Conference on Very Large Data Bases, September 13-17, 2010, Singapore. Proceedings of the VLDB Endowment, Vol. 3, No. 2 Copyright 2010 VLDB Endowment 2150-8097/10/09... $ 10.00. United States, causing an estimated 99,000 deaths. HyReminder is a collaborated project between WPI and University of Massachusetts Medical School (UMMS) that uses advanced CEP technologies to solve this long-standing public health problem. HyReminder system aims to continuously track healthcare workers (HCW) for hygiene compliance (for example cleansing hands before entering a H1N1 patient’s room), and remind the HCW at the appropriate moments to perform hygiene precautions thus preventing spread of infections. CEP technologies are adopted to efficiently monitor event patterns, such as the sequence that a HCW left a patient room (this behavior is measured by a sensor reading and modeled as “exit” event), did not sanitize his hands (referred as “!sanitize”, where ! 
represents negation), and then entered another patient’s room (referred as “enter”). Such a sequence of behaviors, i.e. SEQ(exit,!sanitize,enter), would be deemed as a violation of hand hygiene regulations. Besides detecting complex events, the HyReminder system requires the ability to specify logic rules reminding HCWs to perform the respective appropriate hygiene upon detection of an imminent hand hygiene violation or an actual observed violation. A condensed version of example logic rules derived from HyReminder and modeled using CEP semantics is depicted in Figure 1. In the figure, the edge marked “Q1.1” expresses the logic that “if query Q1.1 is satisfied for a HCW, then change his hygiene status to warning and change his badge light to yellow”. This logic rule in fact specifies how the system should react to the observed change, here meaning the risk being detected by the continuous pattern matching query Q1.1, during the long running query process. The system’s streaming environment requires that such reactions be executed in a timely fashion. An additional complication arises in that the HCW status changed by this logic rule must be used as a condition by other continuous queries at run time, like Q2.1 and Q2.2. We can see that active rules and continuous queries over streaming data are tightly-coupled: continuous queries are monitoring the world while active rules are changing the world, both in real-time. Yet contrary to traditional databases, data is not persistently stored in a DSMS, but rather streamed through the system in fluctuating arrival rate. Thus processing active rules in CEP systems requires precise synchronization between queries and rules and careful consideration of latency and resource utilization. Limitations of Existing CEP Technology. In summary, the following active functionalities are needed by many event stream applications, but not supported by the existing",
"title": ""
}
] | scidocsrr |
8984257b3fea005a6bee6049c2375f5f | A Critical Review of Online Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries | [
{
"docid": "1f700c0c55b050db7c760f0c10eab947",
"text": "Cathy O’Neil’s Weapons of Math Destruction is a timely reminder of the power and perils of predictive algorithms and model-driven decision processes. The book deals in some depth with eight case studies of the abuses she associates with WMDs: “weapons of math destruction.” The cases include the havoc wrought by value-added models used to evaluate teacher performance and by the college ranking system introduced by U.S. News and World Report; the collateral damage of online advertising and models devised to track and monetize “eyeballs”; the abuses associated with the recidivism models used in judicial decisions; the inequities perpetrated by the use of personality tests in hiring decisions; the burdens placed on low-wage workers by algorithm-driven attempts to maximize labor efficiency; the injustices written into models that evaluate creditworthiness; the inequities produced by insurance companies’ risk models; and the potential assault on the democratic process by the use of big data in political campaigns. As this summary suggests, O’Neil had plenty of examples to choose from when she wrote the book, but since the publication of Weapons of Math Destruction, two more problems associated with model-driven decision procedures have surfaced, making O’Neil’s work even more essential reading. The first—the role played by fake news, much of it circulated on Facebook, in the 2016 election—has led to congressional investigations. The second—the failure of algorithm-governed oversight to recognize and delete gruesome posts on the Facebook Live streaming service—has caused CEO Mark Zuckerberg to announce the addition of 3,000 human screeners to the Facebook staff. While O’Neil’s book may seem too polemical to some readers and too cautious to others, it speaks forcefully to the cultural moment we share. O’Neil weaves the story of her own credentials and work experience into her analysis, because, as she explains, her training as a mathematician and her experience in finance shaped the way she now understands the world. O’Neil earned a PhD in mathematics from Harvard; taught at Barnard College, where her research area was algebraic number theory; and worked for the hedge fund D. E. Shaw, which uses mathematical analysis to guide investment decisions. When the financial crisis of 2008 revealed that even the most sophisticated models were incapable of anticipating risks associated with “black swans”—events whose rarity make them nearly impossible to predict—O’Neil left the world of corporate finance to join the RiskMetrics Group, where she helped market risk models to financial institutions eager to rehabilitate their image. Ultimately, she became disillusioned with the financial industry’s refusal to take seriously the limitations of risk management models and left RiskMetrics. She rebranded herself a “data scientist” and took a job at Intent Media, where she helped design algorithms that would make big data useful for all kinds of applications. All the while, as O’Neil describes it, she “worried about the separation between technical models and real people, and about the moral repercussions of that separation” (page 48). O’Neil eventually left Intent Media to devote her energies to inWeapons of Math Destruction",
"title": ""
}
] | [
{
"docid": "08e8629cf29da3532007c5cf5c57d8bb",
"text": "Social networks are growing in number and size, with hundreds of millions of user accounts among them. One added benefit of these networks is that they allow users to encode more information about their relationships than just stating who they know. In this work, we are particularly interested in trust relationships, and how they can be used in designing interfaces. In this paper, we present FilmTrust, a website that uses trust in web-based social networks to create predictive movie recommendations. Using the FilmTrust system as a foundation, we show that these recommendations are more accurate than other techniques when the user’s opinions about a film are divergent from the average. We discuss this technique both as an application of social network analysis, as well as how it suggests other analyses that can be performed to help improve collaborative filtering algorithms of all types.",
"title": ""
},
{
"docid": "7a8faa4e8ecef8e28aa2203f0aa9d888",
"text": "In today’s global marketplace, individual firms do not compete as independent entities rather as an integral part of a supply chain. This paper proposes a fuzzy mathematical programming model for supply chain planning which considers supply, demand and process uncertainties. The model has been formulated as a fuzzy mixed-integer linear programming model where data are ill-known andmodelled by triangular fuzzy numbers. The fuzzy model provides the decision maker with alternative decision plans for different degrees of satisfaction. This proposal is tested by using data from a real automobile supply chain. © 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "ff029b2b9799ab1de433a3264d28d711",
"text": "This paper introduces and summarises the findings of a new shared task at the intersection of Natural Language Processing and Computer Vision: the generation of image descriptions in a target language, given an image and/or one or more descriptions in a different (source) language. This challenge was organised along with the Conference on Machine Translation (WMT16), and called for system submissions for two task variants: (i) a translation task, in which a source language image description needs to be translated to a target language, (optionally) with additional cues from the corresponding image, and (ii) a description generation task, in which a target language description needs to be generated for an image, (optionally) with additional cues from source language descriptions of the same image. In this first edition of the shared task, 16 systems were submitted for the translation task and seven for the image description task, from a total of 10 teams.",
"title": ""
},
{
"docid": "011d0fa5eac3128d5127a66741689df7",
"text": "Tweets often contain a large proportion of abbreviations, alternative spellings, novel words and other non-canonical language. These features are problematic for standard language analysis tools and it can be desirable to convert them to canonical form. We propose a novel text normalization model based on learning edit operations from labeled data while incorporating features induced from unlabeled data via character-level neural text embeddings. The text embeddings are generated using an Simple Recurrent Network. We find that enriching the feature set with text embeddings substantially lowers word error rates on an English tweet normalization dataset. Our model improves on stateof-the-art with little training data and without any lexical resources.",
"title": ""
},
{
"docid": "68fb48f456383db1865c635e64333d8a",
"text": "Documenting underwater archaeological sites is an extremely challenging problem. Sites covering large areas are particularly daunting for traditional techniques. In this paper, we present a novel approach to this problem using both an autonomous underwater vehicle (AUV) and a diver-controlled stereo imaging platform to document the submerged Bronze Age city at Pavlopetri, Greece. The result is a three-dimensional (3D) reconstruction covering 26,600 m2 at a resolution of 2 mm/pixel, the largest-scale underwater optical 3D map, at such a resolution, in the world to date. We discuss the advances necessary to achieve this result, including i) an approach to color correct large numbers of images at varying altitudes and over varying bottom types; ii) a large-scale bundle adjustment framework that is capable of handling upward of 400,000 stereo images; and iii) a novel approach to the registration and rapid documentation of an underwater excavations area that can quickly produce maps of site change. We present visual and quantitative comparisons to the authors’ previous underwater mapping approaches. C © 2016 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "b121ba0b5d24e0d53f85d04415b8c41d",
"text": "Until now, most systems for Internet of Things (IoT) management, have been designed in a Cloud-centric manner, getting benefits from the unified platform that the Cloud offers. However, a Cloud-centric infrastructure mainly achieves static sensor and data streaming systems, which do not support the direct configuration management of IoT components. To address this issue, a virtualization of IoT components (Virtual Resources) is introduced at the edge of the IoT network. This research also introduces permission-based Blockchain protocols to handle the provisioning of Virtual Resources directly onto edge devices. The architecture presented by this research focuses on the use of Virtual Resources and Blockchain protocols as management tools to distribute configuration tasks towards the edge of the IoT network. Results from lab experiments demonstrate the successful deployment and communication performance (response time in milliseconds) of Virtual Resources on two edge platforms, Raspberry Pi and Edison board. This work also provides performance evaluations of two permission-based blockchain protocol approaches. The first blockchain approach is a Blockchain as a Service (BaaS) in the Cloud, Bluemix. The second blockchain approach is a private cluster hosted in a Fog network, Multichain.",
"title": ""
},
{
"docid": "3149dd6f03208af01333dbe2c045c0c6",
"text": "Debates about human nature often revolve around what is built in. However, the hallmark of human nature is how much of a person's identity is not built in; rather, it is humans' great capacity to adapt, change, and grow. This nature versus nurture debate matters-not only to students of human nature-but to everyone. It matters whether people believe that their core qualities are fixed by nature (an entity theory, or fixed mindset) or whether they believe that their qualities can be developed (an incremental theory, or growth mindset). In this article, I show that an emphasis on growth not only increases intellectual achievement but can also advance conflict resolution between long-standing adversaries, decrease even chronic aggression, foster cross-race relations, and enhance willpower. I close by returning to human nature and considering how it is best conceptualized and studied.",
"title": ""
},
{
"docid": "19ebb5c0cdf90bf5aef36ad4b9f621a1",
"text": "There has been a dramatic increase in the number and complexity of new ventilation modes over the last 30 years. The impetus for this has been the desire to improve the safety, efficiency, and synchrony of ventilator-patient interaction. Unfortunately, the proliferation of names for ventilation modes has made understanding mode capabilities problematic. New modes are generally based on increasingly sophisticated closed-loop control systems or targeting schemes. We describe the 6 basic targeting schemes used in commercially available ventilators today: set-point, dual, servo, adaptive, optimal, and intelligent. These control systems are designed to serve the 3 primary goals of mechanical ventilation: safety, comfort, and liberation. The basic operations of these schemes may be understood by clinicians without any engineering background, and they provide the basis for understanding the wide variety of ventilation modes and their relative advantages for improving patient-ventilator synchrony. Conversely, their descriptions may provide engineers with a means to better communicate to end users.",
"title": ""
},
{
"docid": "5eeb17964742e1bf1e517afcb1963b02",
"text": "Global navigation satellite system reflectometry is a multistatic radar using navigation signals as signals of opportunity. It provides wide-swath and improved spatiotemporal sampling over current space-borne missions. The lack of experimental datasets from space covering signals from multiple constellations (GPS, GLONASS, Galileo, and Beidou) at dual-band (L1 and L2) and dual-polarization (right- and left-hand circular polarization), over the ocean, land, and cryosphere remains a bottleneck to further develop these techniques. 3Cat-2 is a 6-unit (3 × 2 elementary blocks of 10 × 10 × 10 cm3) CubeSat mission designed and implemented at the Universitat Politècnica de Catalunya-BarcelonaTech to explore fundamental issues toward an improvement in the understanding of the bistatic scattering properties of different targets. Since geolocalization of the specific reflection points is determined by the geometry only, a moderate pointing accuracy is only required to correct the antenna pattern in scatterometry measurements. This paper describes the mission analysis and the current status of the assembly, integration, and verification activities of both the engineering model and the flight model performed at Universitat Politècnica de Catalunya NanoSatLab premises. 3Cat-2 launch is foreseen for the second quarter of 2016 into a Sun-Synchronous orbit of 510-km height.",
"title": ""
},
{
"docid": "eb271acef996a9ba0f84a50b5055953b",
"text": "Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles will result in significant facial appearance changes. It remains a challenging problem to match makeup and non-makeup face images. This paper proposes a learning from generation approach for makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects from makeup, we first generate non-makeup images from makeup ones, and then use the synthesized nonmakeup images for further verification. Two adversarial networks in BLAN are integrated in an end-to-end deep network, with the one on pixel level for reconstructing appealing facial images and the other on feature level for preserving identity information. These two networks jointly reduce the sensing gap between makeup and non-makeup images. Moreover, we make the generator well constrained by incorporating multiple perceptual losses. Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup",
"title": ""
},
{
"docid": "1785135fa0a35fd59a6181ec5886ddc1",
"text": "We aimed to describe the surgical technique and clinical outcomes of paraspinal-approach reduction and fixation (PARF) in a group of patients with Denis type B thoracolumbar burst fracture (TLBF) with neurological deficiencies. A total of 62 patients with Denis B TLBF with neurological deficiencies were included in this study between January 2009 and December 2011. Clinical evaluations including the Frankel scale, pain visual analog scale (VAS) and radiological assessment (CT scans for fragment reduction and X-ray for the Cobb angle, adjacent superior and inferior intervertebral disc height, and vertebral canal diameter) were performed preoperatively and at 3 days, 6 months, and 1 and 2 years postoperatively. All patients underwent successful PARF, and were followed-up for at least 2 years. Average surgical time, blood loss and incision length were recorded. The sagittal vertebral canal diameter was significantly enlarged. The canal stenosis index was also improved. Kyphosis was corrected and remained at 8.6±1.4o (P>0.05) 1 year postoperatively. Adjacent disc heights remained constant. Average Frankel grades were significantly improved at the end of follow-up. All 62 patients were neurologically assessed. Pain scores decreased at 6 months postoperatively, compared to before surgery (P<0.05). PARF provided excellent reduction for traumatic segmental kyphosis, and resulted in significant spinal canal clearance, which restored and maintained the vertebral body height of patients with Denis B TLBF with neurological deficits.",
"title": ""
},
{
"docid": "2dc2e201bee0f963355d10572ad71955",
"text": "This paper presents Dynamoth, a dynamic, scalable, channel-based pub/sub middleware targeted at large scale, distributed and latency constrained systems. Our approach provides a software layer that balances the load generated by a high number of publishers, subscribers and messages across multiple, standard pub/sub servers that can be deployed in the Cloud. In order to optimize Cloud infrastructure usage, pub/sub servers can be added or removed as needed. Balancing takes into account the live characteristics of each channel and is done in an hierarchical manner across channels (macro) as well as within individual channels (micro) to maintain acceptable performance and low latencies despite highly varying conditions. Load monitoring is performed in an unintrusive way, and rebalancing employs a lazy approach in order to minimize its temporal impact on performance while ensuring successful and timely delivery of all messages. Extensive real-world experiments that illustrate the practicality of the approach within a massively multiplayer game setting are presented. Results indicate that with a given number of servers, Dynamoth was able to handle 60% more simultaneous clients than the consistent hashing approach, and that it was properly able to deal with highly varying conditions in the context of large workloads.",
"title": ""
},
{
"docid": "23ed8f887128cb1cd6ea2f386c099a43",
"text": "The capability to overcome terrain irregularities or obstacles, named terrainability, is mostly dependant on the suspension mechanism of the rover and its control. For a given wheeled robot, the terrainability can be improved by using a sophisticated control, and is somewhat related to minimizing wheel slip. The proposed control method, named torque control, improves the rover terrainability by taking into account the whole mechanical structure. The rover model is based on the Newton-Euler equations and knowing the complete state of the mechanical structures allows us to compute the force distribution in the structure, and especially between the wheels and the ground. Thus, a set of torques maximizing the traction can be used to drive the rover. The torque control algorithm is presented in this paper, as well as tests showing its impact and improvement in terms of terrainability. Using the CRAB rover platform, we show that the torque control not only increases the climbing performance but also limits odometric errors and reduces the overall power consumption.",
"title": ""
},
{
"docid": "134578862a01dc4729999e9076362ee0",
"text": "PURPOSE\nBasal-like breast cancer is associated with high grade, poor prognosis, and younger patient age. Clinically, a triple-negative phenotype definition [estrogen receptor, progesterone receptor, and human epidermal growth factor receptor (HER)-2, all negative] is commonly used to identify such cases. EGFR and cytokeratin 5/6 are readily available positive markers of basal-like breast cancer applicable to standard pathology specimens. This study directly compares the prognostic significance between three- and five-biomarker surrogate panels to define intrinsic breast cancer subtypes, using a large clinically annotated series of breast tumors.\n\n\nEXPERIMENTAL DESIGN\nFour thousand forty-six invasive breast cancers were assembled into tissue microarrays. All had staging, pathology, treatment, and outcome information; median follow-up was 12.5 years. Cox regression analyses and likelihood ratio tests compared the prognostic significance for breast cancer death-specific survival (BCSS) of the two immunohistochemical panels.\n\n\nRESULTS\nAmong 3,744 interpretable cases, 17% were basal using the triple-negative definition (10-year BCSS, 6 7%) and 9% were basal using the five-marker method (10-year BCSS, 62%). Likelihood ratio tests of multivariable Cox models including standard clinical variables show that the five-marker panel is significantly more prognostic than the three-marker panel. The poor prognosis of triple-negative phenotype is conferred almost entirely by those tumors positive for basal markers. Among triple-negative patients treated with adjuvant anthracycline-based chemotherapy, the additional positive basal markers identified a cohort of patients with significantly worse outcome.\n\n\nCONCLUSIONS\nThe expanded surrogate immunopanel of estrogen receptor, progesterone receptor, human HER-2, EGFR, and cytokeratin 5/6 provides a more specific definition of basal-like breast cancer that better predicts breast cancer survival.",
"title": ""
},
{
"docid": "4b69831f2736ae08049be81e05dd4046",
"text": "One of the most important aspects in playing the piano is using the appropriate fingers to facilitate movement and transitions. The fingering arrangement depends to a ce rtain extent on the size of the musician’s hand. We hav e developed an automatic fingering system that, given a sequence of pitches, suggests which fingers should be used. The output can be personalized to agree with t he limitations of the user’s hand. We also consider this system to be the base of a more complex future system: a score reduction system that will reduce orchestra scor e to piano scores. This paper describes: • “Vertical cost” model: the stretch induced by a given hand position. • “Horizontal cost” model: transition between two hand positions. • A system that computes low-cost fingering for a given piece of music. • A machine learning technique used to learn the appropriate parameters in the models.",
"title": ""
},
{
"docid": "65385cdaac98022605efd2fd82bb211b",
"text": "As electric vehicles (EVs) take a greater share in the personal automobile market, their penetration may bring higher peak demand at the distribution level. This may cause potential transformer overloads, feeder congestions, and undue circuit faults. This paper focuses on the impact of charging EVs on a residential distribution circuit. Different EV penetration levels, EV types, and charging profiles are considered. In order to minimize the impact of charging EVs on a distribution circuit, a demand response strategy is proposed in the context of a smart distribution network. In the proposed DR strategy, consumers will have their own choices to determine which load to control and when. Consumer comfort indices are introduced to measure the impact of demand response on consumers' lifestyle. The proposed indices can provide electric utilities a better estimation of the customer acceptance of a DR program, and the capability of a distribution circuit to accommodate EV penetration.",
"title": ""
},
{
"docid": "952d97cc8302a6a1ab584ae32bfb64ee",
"text": "1 Background and Objective of the Survey Compared with conventional centralized systems, blockchain technologies used for transactions of value records, such as bitcoins, structurally have the characteristics that (i) enable the creation of a system that substantially ensures no downtime (ii) make falsification extremely hard, and (iii) realize inexpensive system. Blockchain technologies are expected to be utilized in diverse fields including IoT. Japanese companies just started technology verification independently, and there is a risk that the initiative might be taken by foreign companies in blockchain technologies, which are highly likely to serve as the next-generation platform for all industrial fields in the future. From such point of view, this survey was conducted for the purpose of comparing and analyzing details of numbers of blockchains and advantages/challenges therein; ascertaining promising fields in which the technology should be utilized; ascertaining the impact of the technology on society and the economy; and developing policy guidelines for encouraging industries to utilize the technology in the future. This report compiles the results of interviews with domestic and overseas companies involving blockchain technology and experts. The content of this report is mostly based on data as of the end of February 2016. As specifications of blockchains and the status of services being provided change by the minute, it is recommended to check the latest conditions when intending to utilize any related technologies in business, etc. Terms and abbreviations used in this report are defined as follows. Terms Explanations BTC Abbreviation used as a currency unit of bitcoins FinTech A coined term combining Finance and Technology; Technologies and initiatives to create new services and businesses by utilizing ICT in the financial business Virtual currency / Cryptocurrency Bitcoins or other information whose value is recognized only on the Internet Exchange Services to exchange virtual currency, such as bitcoins, with another virtual currency or with legal currency, such as Japanese yen or US dollars; Some exchange offers services for contracts for difference, such as foreign exchange margin transactions (FX transactions) Consensus A series of procedures from approving a transaction as an official one and mutually confirming said results by using the following consensus algorithm Consensus algorithm Algorithm in general for mutually approving a distributed ledger using Proof of Work and Proof of Stake, etc. Token Virtual currency unique to blockchains; Virtual currency used for paying fees for asset management, etc. on blockchains is referred to …",
"title": ""
},
{
"docid": "69e86a1f6f4d7f1039a3448e06df3725",
"text": "In this paper, a low profile LLC resonant converter with two planar transformers is proposed for a slim SMPS (Switching Mode Power Supply). Design procedures and voltage gain characteristics on the proposed planar transformer and converter are described in detail. Two planar transformers applied to LLC resonant converter are connected in series at primary and in parallel by the center-tap winding at secondary. Based on the theoretical analysis and simulation results of the voltage gain characteristics, a 300W LLC resonant converter for LED TV power module is designed and tested.",
"title": ""
},
{
"docid": "f9d4b66f395ec6660da8cb22b96c436c",
"text": "The purpose of the study was to measure objectively the home use of the reciprocating gait orthosis (RGO) and the electrically augmented (hybrid) RGO. It was hypothesised that RGO use would increase following provision of functional electrical stimulation (FES). Five adult subjects participated in the study with spinal cord lesions ranging from C2 (incomplete) to T6. Selection criteria included active RGO use and suitability for electrical stimulation. Home RGO use was measured for up to 18 months by determining the mean number of steps taken per week. During this time patients were supplied with the hybrid system. Three alternatives for the measurement of steps taken were investigated: a commercial digital pedometer, a magnetically actuated counter and a heel contact switch linked to an electronic counter. The latter was found to be the most reliable system and was used for all measurements. Additional information on RGO use was acquired using three patient diaries administered throughout the study and before and after the provision of the hybrid system. Testing of the original hypothesis was complicated by problems in finding a reliable measurement tool and difficulties with data collection. However, the results showed that overall use of the RGO, whether with or without stimulation, is low. Statistical analysis of the step counter results was not realistic. No statistically significant change in RGO use was found between the patient diaries. The study suggests that the addition of electrical stimulation does not increase RGO use. The study highlights the problem of objectively measuring orthotic use in the home.",
"title": ""
}
] | scidocsrr |
8ebdc8fee8a3c35cd03cb1a3c1bae8d1 | Novel Cellular Active Array Antenna System at Base Station for Beyond 4G | [
{
"docid": "cac379c00a4146acd06c446358c3e95a",
"text": "In this work, a new base station antenna is proposed. Two separate frequency bands with separate radiating elements are used in each band. The frequency band separation ratio is about 1.3:1. These elements are arranged with different spacing (wider spacing for the lower frequency band, and narrower spacing for the higher frequency band). Isolation between bands inherently exists in this approach. This avoids the grating lobe effect, and mitigates the beam narrowing (dispersion) seen with fixed element spacing covering the whole wide bandwidth. A new low-profile cross dipole is designed, which is integrated in the array with an EBG/AMC structure for reducing the size of low band elements and decreasing coupling at high band.",
"title": ""
},
{
"docid": "0dd462fa371d270a63e7ad88b070d8a2",
"text": "Currently, many operators worldwide are deploying Long Term Evolution (LTE) to provide much faster access with lower latency and higher efficiency than its predecessors 3G and 3.5G. Meanwhile, the service rollout of LTE-Advanced, which is an evolution of LTE and a “true 4G” mobile broadband, is being underway to further enhance LTE performance. However, the anticipated challenges of the next decade (2020s) are so tremendous and diverse that there is a vastly increased need for a new generation mobile communications system with even further enhanced capabilities and new functionalities, namely a fifth generation (5G) system. Envisioning the development of a 5G system by 2020, at DOCOMO we started studies on future radio access as early as 2010, just after the launch of LTE service. The aim at that time was to anticipate the future user needs and the requirements of 10 years later (2020s) in order to identify the right concept and radio access technologies for the next generation system. The identified 5G concept consists of an efficient integration of existing spectrum bands for current cellular mobile and future new spectrum bands including higher frequency bands, e.g., millimeter wave, with a set of spectrum specific and spectrum agnostic technologies. Since a few years ago, we have been conducting several proof-of-concept activities and investigations on our 5G concept and its key technologies, including the development of a 5G real-time simulator, experimental trials of a wide range of frequency bands and technologies and channel measurements for higher frequency bands. In this paper, we introduce an overview of our views on the requirements, concept and promising technologies for 5G radio access, in addition to our ongoing activities for paving the way toward the realization of 5G by 2020. key words: next generation mobile communications system, 5G, 4G, LTE, LTE-advanced",
"title": ""
}
] | [
{
"docid": "6660bcfd564726421d9eaaa696549454",
"text": "When building intelligent spaces, the knowledge representation for encapsulating rooms, users, groups, roles, and other information is a fundamental design question. We present a semantic network as such a representation, and demonstrate its utility as a basis for ongoing work.",
"title": ""
},
{
"docid": "d13ecf582ac820cdb8ea6353c44c535f",
"text": "We have previously shown that, while the intrinsic quality of the oocyte is the main factor affecting blastocyst yield during bovine embryo development in vitro, the main factor affecting the quality of the blastocyst is the postfertilization culture conditions. Therefore, any improvement in the quality of blastocysts produced in vitro is likely to derive from the modification of the postfertilization culture conditions. The objective of this study was to examine the effect of the presence or absence of serum and the concentration of BSA during the period of embryo culture in vitro on 1) cleavage rate, 2) the kinetics of embryo development, 3) blastocyst yield, and 4) blastocyst quality, as assessed by cryotolerance and gene expression patterns. The quantification of all gene transcripts was carried out by real-time quantitative reverse transcription-polymerase chain reaction. Bovine blastocysts from four sources were used: 1) in vitro culture in synthetic oviduct fluid (SOF) supplemented with 3 mg/ml BSA and 10% fetal calf serum (FCS), 2) in vitro culture in SOF + 3 mg/ml BSA in the absence of serum, 3) in vitro culture in SOF + 16 mg/ml BSA in the absence of serum, and 4) in vivo blastocysts. There was no difference in overall blastocyst yield at Day 9 between the groups. However, significantly more blastocysts were present by Day 6 in the presence of 10% serum (20.0%) compared with 3 mg/ml BSA (4.6%, P < 0.001) or 16 mg/ml BSA (11.6%, P < 0.01). By Day 7, however, this difference had disappeared. Following vitrification, there was no difference in survival between blastocysts produced in the presence of 16 mg/ml BSA or those produced in the presence of 10% FCS; the survival of both groups was significantly lower than the in vivo controls at all time points and in terms of hatching rate. In contrast, survival of blastocysts produced in SOF + 3 mg/ml BSA in the absence of serum was intermediate, with no difference remaining at 72 h when compared with in vivo embryos. Differences in relative mRNA abundance among the two groups of blastocysts analyzed were found for genes related to apoptosis (Bax), oxidative stress (MnSOD, CuZnSOD, and SOX), communication through gap junctions (Cx31 and Cx43), maternal recognition of pregnancy (IFN-tau), and differentiation and implantation (LIF and LR-beta). The presence of serum during the culture period resulted in a significant increase in the level of expression of MnSOD, SOX, Bax, LIF, and LR-beta. The level of expression of Cx31 and Cu/ZnSOD also tended to be increased, although the difference was not significant. In contrast, the level of expression of Cx43 and IFN-tau was decreased in the presence of serum. In conclusion, using a combination of measures of developmental competence (cleavage and blastocyst rates) and qualitative measures such as cryotolerance and relative mRNA abundance to give a more complete picture of the consequences of modifying medium composition on the embryo, we have shown that conditions of postfertilization culture, in particular, the presence of serum in the medium, can affect the speed of embryo development and the quality of the resulting blastocysts. The reduced cryotolerance of blastocysts generated in the presence of serum is accompanied by deviations in the relative abundance of developmentally important gene transcripts. 
Omission of serum during the postfertilization culture period can significantly improve the cryotolerance of the blastocysts to a level intermediate between serum-generated blastocysts and those derived in vivo. The challenge now is to try and bridge this gap.",
"title": ""
},
{
"docid": "ba96f2099e6e44ad14b85bfc2b49ddff",
"text": "In this paper, an improved multimodel optimal quadratic control structure for variable speed, pitch regulated wind turbines (operating at high wind speeds) is proposed in order to integrate high levels of wind power to actively provide a primary reserve for frequency control. On the basis of the nonlinear model of the studied plant, and taking into account the wind speed fluctuations, and the electrical power variation, a multimodel linear description is derived for the wind turbine, and is used for the synthesis of an optimal control law involving a state feedback, an integral action and an output reference model. This new control structure allows a rapid transition of the wind turbine generated power between different desired set values. This electrical power tracking is ensured with a high-performance behavior for all other state variables: turbine and generator rotational speeds and mechanical shaft torque; and smooth and adequate evolution of the control variables.",
"title": ""
},
{
"docid": "0aab0c0fa6a1b0f283478b390dece614",
"text": "Hydrokinetic turbines can provide a source of electricity for remote areas located near a river or stream. The objective of this paper is to describe the design, simulation, build, and testing of a novel hydrokinetic turbine. The main components of the system are a permanent magnet synchronous generator (PMSG), a machined H-Darrieus rotor, an embedded controls system, and a cataraft. The design and construction of this device was conducted at the Oregon Institute of Technology in Wilsonville, Oregon.",
"title": ""
},
{
"docid": "a1d0bf0d28bbe3dd568e7e01bc9d59c3",
"text": "A novel coupling technique for circularly polarized annular-ring patch antenna is developed and discussed. The circular polarization (CP) radiation of the annular-ring patch antenna is achieved by a simple microstrip feed line through the coupling of a fan-shaped patch on the same plane of the antenna. Proper positioning of the coupling fan-shaped patch excites two orthogonal resonant modes with 90 phase difference, and a pure circular polarization is obtained. The dielectric material is a cylindrical block of ceramic with a permittivity of 25 and that reduces the size of the antenna. The prototype has been designed and fabricated and found to have an impedance bandwidth of 2.3% and a 3 dB axial-ratio bandwidth of about 0.6% at the center frequency of 2700 MHz. The characteristics of the proposed antenna have been by simulation software HFSS and experiment. The measured and simulated results are in good agreement.",
"title": ""
},
{
"docid": "6b718717d5ecef343a8f8033803a55e6",
"text": "BACKGROUND\nMedication and adverse drug event (ADE) information extracted from electronic health record (EHR) notes can be a rich resource for drug safety surveillance. Existing observational studies have mainly relied on structured EHR data to obtain ADE information; however, ADEs are often buried in the EHR narratives and not recorded in structured data.\n\n\nOBJECTIVE\nTo unlock ADE-related information from EHR narratives, there is a need to extract relevant entities and identify relations among them. In this study, we focus on relation identification. This study aimed to evaluate natural language processing and machine learning approaches using the expert-annotated medical entities and relations in the context of drug safety surveillance, and investigate how different learning approaches perform under different configurations.\n\n\nMETHODS\nWe have manually annotated 791 EHR notes with 9 named entities (eg, medication, indication, severity, and ADEs) and 7 different types of relations (eg, medication-dosage, medication-ADE, and severity-ADE). Then, we explored 3 supervised machine learning systems for relation identification: (1) a support vector machines (SVM) system, (2) an end-to-end deep neural network system, and (3) a supervised descriptive rule induction baseline system. For the neural network system, we exploited the state-of-the-art recurrent neural network (RNN) and attention models. We report the performance by macro-averaged precision, recall, and F1-score across the relation types.\n\n\nRESULTS\nOur results show that the SVM model achieved the best average F1-score of 89.1% on test data, outperforming the long short-term memory (LSTM) model with attention (F1-score of 65.72%) as well as the rule induction baseline system (F1-score of 7.47%) by a large margin. The bidirectional LSTM model with attention achieved the best performance among different RNN models. With the inclusion of additional features in the LSTM model, its performance can be boosted to an average F1-score of 77.35%.\n\n\nCONCLUSIONS\nIt shows that classical learning models (SVM) remains advantageous over deep learning models (RNN variants) for clinical relation identification, especially for long-distance intersentential relations. However, RNNs demonstrate a great potential of significant improvement if more training data become available. Our work is an important step toward mining EHRs to improve the efficacy of drug safety surveillance. Most importantly, the annotated data used in this study will be made publicly available, which will further promote drug safety research in the community.",
"title": ""
},
{
"docid": "afbd0ecad829246ed7d6e1ebcebf5815",
"text": "Battery thermal management system (BTMS) is essential for electric-vehicle (EV) and hybrid-vehicle (HV) battery packs to operate effectively in all climates. Lithium-ion (Li-ion) batteries offer many advantages to the EV such as high power and high specific energy. However, temperature affects their performance, safety, and productive life. This paper is about the design and evaluation of a BTMS based on the Peltier effect heat pumps. The discharge efficiency of a 60-Ah prismatic Li-ion pouch cell was measured under different rates and different ambient temperature values. The obtained results were used to design a solid-state BTMS based on Peltier thermoelectric coolers (TECs). The proposed BTMS is then modeled and evaluated at constant current discharge in the laboratory. In addition, The BTMS was installed in an EV that was driven in the US06 cycle. The thermal response and the energy consumption of the proposed BTMS were satisfactory.",
"title": ""
},
{
"docid": "6f5ada16b55afc21f7291f7764ec85ee",
"text": "Breast cancer is often treated with radiotherapy (RT), with two opposing tangential fields. When indicated, supraclavicular lymph nodes have to be irradiated, and a third anterior field is applied. The junction region has the potential to be over or underdosed. To overcome this problem, many techniques have been proposed. A literature review of 3 Dimensional Conformal RT (3D CRT) and older 3-field techniques was carried out. Intensity Modulated RT (IMRT) techniques are also briefly discussed. Techniques are categorized, few characteristic examples are presented and a comparison is attempted. Three-field techniques can be divided in monoisocentric and two-isocentric. Two-isocentric techniques can be further divided in full field and half field techniques. Monoisocentric techniques show certain great advantages over two-isocentric techniques. However, they are not always applicable and they require extra caution as they are characterized by high dose gradient in the junction region. IMRT has been proved to give better dosimetric results. Three-field matching is a complicated procedure, with potential of over or undredosage in the junction region. Many techniques have been proposed, each with advantages and disadvantages. Among them, monoisocentric techniques, when carefully applied, are the ideal choice, provided IMRT facility is not available. Otherwise, a two-isocentric half beam technique is recommended.",
"title": ""
},
{
"docid": "601ffeb412bac0baa6fdb6da7a4a9a42",
"text": "CLCWeb: Comparative Literature and Culture, the peer-reviewed, full-text, and open-access learned journal in the humanities and social sciences, publishes new scholarship following tenets of the discipline of comparative literature and the field of cultural studies designated as \"comparative cultural studies.\" Publications in the journal are indexed in the Annual Bibliography of English Language and Literature (Chadwyck-Healey), the Arts and Humanities Citation Index (Thomson Reuters ISI), the Humanities Index (Wilson), Humanities International Complete (EBSCO), the International Bibliography of the Modern Language Association of America, and Scopus (Elsevier). The journal is affiliated with the Purdue University Press monograph series of Books in Comparative Cultural Studies. Contact: <[email protected]>",
"title": ""
},
{
"docid": "36fef38de53386e071ee2a1996aa733f",
"text": "Knowledge embedding, which projects triples in a given knowledge base to d-dimensional vectors, has attracted considerable research efforts recently. Most existing approaches treat the given knowledge base as a set of triplets, each of whose representation is then learned separately. However, as a fact, triples are connected and depend on each other. In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates knowledge base as a directed graph, and learns representations for any vertices or edges by leveraging the graph’s structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each reflects properties of knowledge from different perspectives. We also design an attention mechanism to learn representative power of different vertices or edges. To validate our method, we conduct several experiments on two tasks. Experimental results suggest that our method outperforms several state-of-art knowledge embedding models.",
"title": ""
},
{
"docid": "8589ec481e78d14fbeb3e6e4205eee50",
"text": "This paper presents a novel ensemble classifier generation technique RotBoost, which is constructed by combining Rotation Forest and AdaBoost. The experiments conducted with 36 real-world data sets available from the UCI repository, among which a classification tree is adopted as the base learning algorithm, demonstrate that RotBoost can generate ensemble classifiers with significantly lower prediction error than either Rotation Forest or AdaBoost more often than the reverse. Meanwhile, RotBoost is found to perform much better than Bagging and MultiBoost. Through employing the bias and variance decompositions of error to gain more insight of the considered classification methods, RotBoost is seen to simultaneously reduce the bias and variance terms of a single tree and the decrement achieved by it is much greater than that done by the other ensemble methods, which leads RotBoost to perform best among the considered classification procedures. Furthermore, RotBoost has a potential advantage over AdaBoost of suiting parallel execution. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fd70fff204201c33ed3d901c48560980",
"text": "I n the early 1960s, the average American adult male weighed 168 pounds. Today, he weighs nearly 180 pounds. Over the same time period, the average female adult weight rose from 143 pounds to over 155 pounds (U.S. Department of Health and Human Services, 1977, 1996). In the early 1970s, 14 percent of the population was classified as medically obese. Today, obesity rates are two times higher (Centers for Disease Control, 2003). Weights have been rising in the United States throughout the twentieth century, but the rise in obesity since 1980 is fundamentally different from past changes. For most of the twentieth century, weights were below levels recommended for maximum longevity (Fogel, 1994), and the increase in weight represented an increase in health, not a decrease. Today, Americans are fatter than medical science recommends, and weights are still increasing. While many other countries have experienced significant increases in obesity, no other developed country is quite as heavy as the United States. What explains this growth in obesity? Why is obesity higher in the United States than in any other developed country? The available evidence suggests that calories expended have not changed significantly since 1980, while calories consumed have risen markedly. But these facts just push the puzzle back a step: why has there been an increase in calories consumed? We propose a theory based on the division of labor in food preparation. In the 1960s, the bulk of food preparation was done by families that cooked their own food and ate it at home. Since then, there has been a revolution in the mass preparation of food that is roughly comparable to the mass",
"title": ""
},
{
"docid": "3cf174505ecd647930d762327fc7feb6",
"text": "The purpose of the present study was to examine the relationship between workplace friendship and social loafing effect among employees in Certified Public Accounting (CPA) firms. Previous studies showed that workplace friendship has both positive and negative effects, meaning that there is an inconsistent relationship between workplace friendship and social loafing. The present study investigated the correlation between workplace friendship and social loafing effect among employees from CPA firms in Taiwan. The study results revealed that there was a negative relationship between workplace friendship and social loafing effect among CPA employees. In other words, the better the workplace friendship, the lower the social loafing effect. An individual would not put less effort in work when there was a low social loafing effect.",
"title": ""
},
{
"docid": "b4d5bfc26bac32e1e1db063c3696540a",
"text": "Symmetric positive semidefinite (SPSD) matrix approximation is an important problem with applications in kernel methods. However, existing SPSD matrix approximation methods such as the Nyström method only have weak error bounds. In this paper we conduct in-depth studies of an SPSD matrix approximation model and establish strong relative-error bounds. We call it the prototype model for it has more efficient and effective extensions, and some of its extensions have high scalability. Though the prototype model itself is not suitable for large-scale data, it is still useful to study its properties, on which the analysis of its extensions relies. This paper offers novel theoretical analysis, efficient algorithms, and a highly accurate extension. First, we establish a lower error bound for the prototype model, and we improve the error bound of an existing column selection algorithm to match the lower bound. In this way, we obtain the first optimal column selection algorithm for the prototype model. We also prove that the prototype model is exact under certain conditions. Second, we develop a simple column selection algorithm with a provable error bound. Third, we propose a socalled spectral shifting model to make the approximation more accurate when the spectrum of the matrix decays slowly, and the improvement is theoretically quantified. The spectral shifting method can also be applied to improve other SPSD matrix approximation models.",
"title": ""
},
{
"docid": "0a143c2d4af3cc726964a90927556399",
"text": "Humans prefer to interact with each other using speech. Since this is the most natural mode of communication, the humans also want to interact with machines using speech only. So, automatic speech recognition has gained a lot of popularity. Different approaches for speech recognition exists like Hidden Markov Model (HMM), Dynamic Time Warping (DTW), Vector Quantization (VQ), etc. This paper uses Neural Network (NN) along with Mel Frequency Cepstrum Coefficients (MFCC) for speech recognition. Mel Frequency Cepstrum Coefiicients (MFCC) has been used for the feature extraction of speech. This gives the feature of the waveform. For pattern matching FeedForward Neural Network with Back propagation algorithm has been applied. The paper analyzes the various training algorithms present for training the Neural Network and uses train scg for the experiment. The work has been done on MATLAB and experimental results show that system is able to recognize words at sufficiently high accuracy.",
"title": ""
},
{
"docid": "c2553e6256ef130fbd5bc0029bb5e7b7",
"text": "Using Blockchain seems a promising approach for Business Process Reengineering (BPR) to alleviate trust issues among stakeholders, by providing decentralization, transparency, traceability, and immutability of information along with its business logic. However, little work seems to be available on utilizing Blockchain for supporting BPR in a systematic and rational way, potentially leading to disappointments and even doubts on the utility of Blockchain. In this paper, as ongoing research, we outline Fides - a framework for exploiting Blockchain towards enhancing the trustworthiness for BPR. Fides supports diagnosing trust issues with AS-IS business processes, exploring TO-BE business process alternatives using Blockchain, and selecting among the alternatives. A business process of a retail chain for a food supply chain is used throughout the paper to illustrate Fides concepts.",
"title": ""
},
{
"docid": "562ec4c39f0d059fbb9159ecdecd0358",
"text": "In this paper, we propose the factorized hidden layer FHL approach to adapt the deep neural network DNN acoustic models for automatic speech recognition ASR. FHL aims at modeling speaker dependent SD hidden layers by representing an SD affine transformation as a linear combination of bases. The combination weights are low-dimensional speaker parameters that can be initialized using speaker representations like i-vectors and then reliably refined in an unsupervised adaptation fashion. Therefore, our method provides an efficient way to perform both adaptive training and test-time adaptation. Experimental results have shown that the FHL adaptation improves the ASR performance significantly, compared to the standard DNN models, as well as other state-of-the-art DNN adaptation approaches, such as training with the speaker-normalized CMLLR features, speaker-aware training using i-vector and learning hidden unit contributions LHUC. For Aurora 4, FHL achieves 3.8% and 2.3% absolute improvements over the standard DNNs trained on the LDA + STC and CMLLR features, respectively. It also achieves 1.7% absolute performance improvement over a system that combines the i-vector adaptive training with LHUC adaptation. For the AMI dataset, FHL achieved 1.4% and 1.9% absolute improvements over the sequence-trained CMLLR baseline systems, for the IHM and SDM tasks, respectively.",
"title": ""
},
{
"docid": "8d9cae70a7334afcd558c0fa850d551a",
"text": "A popular approach to solving large probabilistic systems relies on aggregating states based on a measure of similarity. Many approaches in the literature are heuristic. A number of recent methods rely instead on metrics based on the notion of bisimulation, or behavioral equivalence between states (Givan et al., 2003; Ferns et al., 2004). An integral component of such metrics is the Kantorovich metric between probability distributions. However, while this metric enables many satisfying theoretical properties, it is costly to compute in practice. In this paper, we use techniques from network optimization and statistical sampling to overcome this problem. We obtain in this manner a variety of distance functions for MDP state aggregation that differ in the tradeoff between time and space complexity, as well as the quality of the aggregation. We provide an empirical evaluation of these tradeoffs.",
"title": ""
},
{
"docid": "e22564e88d82b91e266b0a118bd2ec91",
"text": "Non-lethal dose of 70% ethanol extract of the Nerium oleander dry leaves (1000 mg/kg body weight) was subcutaneously injected into male and female mice once a week for 9 weeks (total 10 doses). One day after the last injection, final body weight gain (relative percentage to the initial body weight) had a tendency, in both males and females, towards depression suggesting a metabolic insult at other sites than those involved in myocardial function. Multiple exposure of the mice to the specified dose failed to express a significant influence on blood parameters (WBC, RBC, Hb, HCT, PLT) as well as myocardium. On the other hand, a lethal dose (4000 mg/kg body weight) was capable of inducing progressive changes in myocardial electrical activity ending up in cardiac arrest. The electrocardiogram abnormalities could be brought about by the expected Na+, K(+)-ATPase inhibition by the cardiac glycosides (cardenolides) content of the lethal dose.",
"title": ""
},
{
"docid": "3b64e99ea608819fc4bf06a6850a5aff",
"text": "Cloud computing is one of the most useful technology that is been widely used all over the world. It generally provides on demand IT services and products. Virtualization plays a major role in cloud computing as it provides a virtual storage and computing services to the cloud clients which is only possible through virtualization. Cloud computing is a new business computing paradigm that is based on the concepts of virtualization, multi-tenancy, and shared infrastructure. This paper discusses about cloud computing, how virtualization is done in cloud computing, virtualization basic architecture, its advantages and effects [1].",
"title": ""
}
] | scidocsrr |
52c1d35a8fd58fe024f3b5b19174c2ce | Blockchain And Its Applications | [
{
"docid": "469c17aa0db2c70394f081a9a7c09be5",
"text": "The potential of blockchain technology has received attention in the area of FinTech — the combination of finance and technology. Blockchain technology was first introduced as the technology behind the Bitcoin decentralized virtual currency, but there is the expectation that its characteristics of accurate and irreversible data transfer in a decentralized P2P network could make other applications possible. Although a precise definition of blockchain technology has not yet been given, it is important to consider how to classify different blockchain systems in order to better understand their potential and limitations. The goal of this paper is to add to the discussion on blockchain technology by proposing a classification based on two dimensions external to the system: (1) existence of an authority (without an authority and under an authority) and (2) incentive to participate in the blockchain (market-based and non-market-based). The combination of these elements results in four types of blockchains. We define these dimensions and describe the characteristics of the blockchain systems belonging to each classification.",
"title": ""
},
{
"docid": "4deea3312fe396f81919b07462551682",
"text": "The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish a M2M electricity market in the context of the chemical industry. The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. The producers publish exchange offers of energy (in kWh) for currency (in USD) in a data stream. The consumer reads the offers, analyses them and attempts to satisfy its energy demand at a minimum cost. When an offer is accepted it is executed as an atomic exchange (multiple simultaneous transactions). Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to the Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. Producer 2 • Issue energy • Post purchase offers (as atomic transactions) Consumer • Look through the posted offers • Choose cheapest and satisfy its own demand Blockchain Stream Published offers are visible here Offer sent",
"title": ""
}
] | [
{
"docid": "98d998eae1fa7a00b73dcff0251f0bbd",
"text": "Imagery texts are usually organized as a hierarchy of several visual elements, i.e. characters, words, text lines and text blocks. Among these elements, character is the most basic one for various languages such as Western, Chinese, Japanese, mathematical expression and etc. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast of location annotated characters, which are expensive to obtain. Actually, the existing real text datasets are mostly annotated in word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either in tight quadrangles or the more loose bounding boxes, for character detector training. When applied in scene text detection, we are thus able to train a robust character detector by exploiting word annotations in the rich large-scale real scene text datasets, e.g. ICDAR15 [19] and COCO-text [39]. The character detector acts as a key role in the pipeline of our text detection engine. It achieves the state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline by various scenarios, including deformed text detection and math expression recognition.",
"title": ""
},
{
"docid": "d6ca38ccad91c0c2c51ba3dd5be454b2",
"text": "Dirty data is a serious problem for businesses leading to incorrect decision making, inefficient daily operations, and ultimately wasting both time and money. Dirty data often arises when domain constraints and business rules, meant to preserve data consistency and accuracy, are enforced incompletely or not at all in application code. In this work, we propose a new data-driven tool that can be used within an organization’s data quality management process to suggest possible rules, and to identify conformant and non-conformant records. Data quality rules are known to be contextual, so we focus on the discovery of context-dependent rules. Specifically, we search for conditional functional dependencies (CFDs), that is, functional dependencies that hold only over a portion of the data. The output of our tool is a set of functional dependencies together with the context in which they hold (for example, a rule that states for CS graduate courses, the course number and term functionally determines the room and instructor). Since the input to our tool will likely be a dirty database, we also search for CFDs that almost hold. We return these rules together with the non-conformant records (as these are potentially dirty records). We present effective algorithms for discovering CFDs and dirty values in a data instance. Our discovery algorithm searches for minimal CFDs among the data values and prunes redundant candidates. No universal objective measures of data quality or data quality rules are known. Hence, to avoid returning an unnecessarily large number of CFDs and only those that are most interesting, we evaluate a set of interest metrics and present comparative results using real datasets. We also present an experimental study showing the scalability of our techniques.",
"title": ""
},
{
"docid": "d65376ed544623a927a868b35394409e",
"text": "The balance compensating techniques for asymmetric Marchand balun are presented in this letter. The amplitude and phase difference are characterized explicitly by S21 and S31, from which the factors responsible for the balance compensating are determined. Finally, two asymmetric Marchand baluns, which have normal and enhanced balance compensation, respectively, are designed and fabricated in a 0.18 μm CMOS technology for demonstration. The simulation and measurement results show that the proposed balance compensating techniques are valid in a very wide frequency range up to millimeter-wave (MMW) band.",
"title": ""
},
{
"docid": "99c29c6cacb623a857817c412d6d9515",
"text": "Considering the rapid growth of China’s elderly rural population, establishing both an adequate and a financially sustainable rural pension system is a major challenge. Focusing on financial sustainability, this article defines this concept of financial sustainability before constructing sound actuarial models for China’s rural pension system. Based on these models and statistical data, the analysis finds that the rural pension funding gap should rise from 97.80 billion Yuan in 2014 to 3062.31 billion Yuan in 2049, which represents an annual growth rate of 10.34%. This implies that, as it stands, the rural pension system in China is not financially sustainable. Finally, the article explains how this problem could be fixed through policy recommendations based on recent international experiences.",
"title": ""
},
{
"docid": "b8fa649e8b5a60a05aad257a0a364b51",
"text": "This work intends to build a Game Mechanics Ontology based on the mechanics category presented in BoardGameGeek.com vis à vis the formal concepts from the MDA framework. The 51 concepts presented in BoardGameGeek (BGG) as game mechanics are analyzed and arranged in a systemic way in order to build a domain sub-ontology in which the root concept is the mechanics as defined in MDA. The relations between the terms were built from its available descriptions as well as from the authors’ previous experiences. Our purpose is to show that a set of terms commonly accepted by players can lead us to better understand how players perceive the games components that are closer to the designer. The ontology proposed in this paper is not exhaustive. The intent of this work is to supply a tool to game designers, scholars, and others that see game artifacts as study objects or are interested in creating games. However, although it can be used as a starting point for games construction or study, the proposed Game Mechanics Ontology should be seen as the seed of a domain ontology encompassing game mechanics in general.",
"title": ""
},
{
"docid": "117c66505964344d9c350a4e57a4a936",
"text": "Sorting is a key kernel in numerous big data application including database operations, graphs and text analytics. Due to low control overhead, parallel bitonic sorting networks are usually employed for hardware implementations to accelerate sorting. Although a typical implementation of merge sort network can lead to low latency and small memory usage, it suffers from low throughput due to the lack of parallelism in the final stage. We analyze a pipelined merge sort network, showing its theoretical limits in terms of latency, memory and, throughput. To increase the throughput, we propose a merge sort based hybrid design where the final few stages in the merge sort network are replaced with “folded” bitonic merge networks. In these “folded” networks, all the interconnection patterns are realized by streaming permutation networks (SPN). We present a theoretical analysis to quantify latency, memory and throughput of our proposed design. Performance evaluations are performed by experiments on Xilinx Virtex-7 FPGA with post place-androute results. We demonstrate that our implementation achieves a throughput close to 10 GBps, outperforming state-of-the-art implementation of sorting on the same hardware by 1.2x, while preserving lower latency and higher memory efficiency.",
"title": ""
},
{
"docid": "28fa91e4476522f895a6874ebc967cfa",
"text": "The lifetime of micro electro–thermo–mechanical actuators with complex electro–thermo–mechanical coupling mechanisms can be decreased significantly due to unexpected failure events. Even more serious is the fact that various failures are tightly coupled due to micro-size and multi-physics effects. Interrelation between performance and potential failures should be established to predict reliability of actuators and improve their design. Thus, a multiphysics modeling approach is proposed to evaluate such interactive effects of failure mechanisms on actuators, where potential failures are pre-analyzed via FMMEA (Failure Modes, Mechanisms, and Effects Analysis) tool for guiding the electro–thermo–mechanical-reliability modeling process. Peak values of temperature, thermal stresses/strains and tip deflection are estimated as indicators for various failure modes and factors (e.g. residual stresses, thermal fatigue, electrical overstress, plastic deformation and parameter variations). Compared with analytical solutions and experimental data, the obtained simulation results were found suitable for coupled performance and reliability analysis of micro actuators and assessment of their design.",
"title": ""
},
{
"docid": "e502cdbbbf557c8365b0d4b69745e225",
"text": "This half-day hands-on studio will teach how to design and develop effective interfaces for head mounted and wrist worn wearable computers through the application of user-centered design principles. Attendees will learn gain the knowledge and tools needed to rapidly develop prototype applications, and also complete a hands-on design task. They will also learn good design guidelines for wearable systems and how to apply those guidelines. A variety of tools will be used that do not require any hardware or software experience, many of which are free and/or open source. Attendees will also be provided with material that they can use to continue their learning after the studio is over.",
"title": ""
},
{
"docid": "7e004a7b6a39ff29176dd19a07c15448",
"text": "All humans will become presbyopic as part of the aging process where the eye losses the ability to focus at different depths. Progressive additive lenses (PALs) allow a person to focus on objects located at near versus far by combing lenses of different strengths within the same spectacle. However, it is unknown why some patients easily adapt to wearing these lenses while others struggle and complain of vertigo, swim, and nausea as well as experience difficulties with balance. Sixteen presbyopes (nine who adapted to PALs and seven who had tried but could not adapt) participated in this study. This research investigated vergence dynamics and its adaptation using a short-term motor learning experiment to asses the ability to adapt. Vergence dynamics were on average faster and the ability to change vergence dynamics was also greater for presbyopes who adapted to progressive lenses compared to those who could not. Data suggest that vergence dynamics and its adaptation may be used to predict which patients will easily adapt to progressive lenses and discern those who will have difficulty.",
"title": ""
},
{
"docid": "6f9afe3cbf5cc675c6b4e96ee2ccfa76",
"text": "As more firms begin to collect (and seek value from) richer customer-level datasets, a focus on the emerging concept of customer-base analysis is becoming increasingly common and critical. Such analyses include forward-looking projections ranging from aggregate-level sales trajectories to individual-level conditional expectations (which, in turn, can be used to derive estimates of customer lifetime value). We provide an overview of a class of parsimonious models (called probability models) that are well-suited to meet these rising challenges. We first present a taxonomy that captures some of the key distinctions across different kinds of business settings and customer relationships, and identify some of the unique modeling and measurement issues that arise across them. We then provide deeper coverage of these modeling issues, first for noncontractual settings (i.e., situations in which customer “death” is unobservable), then contractual ones (i.e., situations in which customer “death” can be observed). We review recent literature in these areas, highlighting substantive insights that arise from the research as well as the methods used to capture them. We focus on practical applications that use appropriately chosen data summaries (such as recency and frequency) and rely on commonly available software packages (such as Microsoft Excel). n 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "213313382d4e5d24a065d551012887ed",
"text": "The authors present full wave simulations and experimental results of propagation of electromagnetic waves in shallow seawaters. Transmitter and receiver antennas are ten-turns loops placed on the seabed. Some propagation frameworks are presented and simulated. Finally, simulation results are compared with experimental ones.",
"title": ""
},
{
"docid": "b02dcd4d78f87d8ac53414f0afd8604b",
"text": "This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector.",
"title": ""
},
{
"docid": "caab00ae6fcae59258ad4e45f787db64",
"text": "Traditional bullying has received considerable research but the emerging phenomenon of cyber-bullying much less so. Our study aims to investigate environmental and psychological factors associated with traditional and cyber-bullying. In a school-based 2-year prospective survey, information was collected on 1,344 children aged 10 including bullying behavior/experience, depression, anxiety, coping strategies, self-esteem, and psychopathology. Parents reported demographic data, general health, and attention-deficit hyperactivity disorder (ADHD) symptoms. These were investigated in relation to traditional and cyber-bullying perpetration and victimization at age 12. Male gender and depressive symptoms were associated with all types of bullying behavior and experience. Living with a single parent was associated with perpetration of traditional bullying while higher ADHD symptoms were associated with victimization from this. Lower academic achievement and lower self esteem were associated with cyber-bullying perpetration and victimization, and anxiety symptoms with cyber-bullying perpetration. After adjustment, previous bullying perpetration was associated with victimization from cyber-bullying but not other outcomes. Cyber-bullying has differences in predictors from traditional bullying and intervention programmes need to take these into consideration.",
"title": ""
},
{
"docid": "e5aed574fbe4560a794cf8b77fb84192",
"text": "Warping is one of the basic image processing techniques. Directly applying existing monocular image warping techniques to stereoscopic images is problematic as it often introduces vertical disparities and damages the original disparity distribution. In this paper, we show that these problems can be solved by appropriately warping both the disparity map and the two images of a stereoscopic image. We accordingly develop a technique for extending existing image warping algorithms to stereoscopic images. This technique divides stereoscopic image warping into three steps. Our method first applies the user-specified warping to one of the two images. Our method then computes the target disparity map according to the user specified warping. The target disparity map is optimized to preserve the perceived 3D shape of image content after image warping. Our method finally warps the other image using a spatially-varying warping method guided by the target disparity map. Our experiments show that our technique enables existing warping methods to be effectively applied to stereoscopic images, ranging from parametric global warping to non-parametric spatially-varying warping.",
"title": ""
},
{
"docid": "22bb6af742b845dea702453b6b14ef3a",
"text": "Errors are prevalent in data sequences, such as GPS trajectories or sensor readings. Existing methods on cleaning sequential data employ a constraint on value changing speeds and perform constraint-based repairing. While such speed constraints are effective in identifying large spike errors, the small errors that do not significantly deviate from the truth and indeed satisfy the speed constraints can hardly be identified and repaired. To handle such small errors, in this paper, we propose a statistical based cleaning method. Rather than declaring a broad constraint of max/min speeds, we model the probability distribution of speed changes. The repairing problem is thus to maximize the likelihood of the sequence w.r.t. the probability of speed changes. We formalize the likelihood-based cleaning problem, show its NP-hardness, devise exact algorithms, and propose several approximate/heuristic methods to trade off effectiveness for efficiency. Experiments on real data sets (in various applications) demonstrate the superiority of our proposal.",
"title": ""
},
{
"docid": "cc8a4744f05d5f46feacaff27b91a86c",
"text": "In the recent past, several sampling-based algorithms have been proposed to compute trajectories that are collision-free and dynamically-feasible. However, the outputs of such algorithms are notoriously jagged. In this paper, by focusing on robots with car-like dynamics, we present a fast and simple heuristic algorithm, named Convex Elastic Smoothing (CES) algorithm, for trajectory smoothing and speed optimization. The CES algorithm is inspired by earlier work on elastic band planning and iteratively performs shape and speed optimization. The key feature of the algorithm is that both optimization problems can be solved via convex programming, making CES particularly fast. A range of numerical experiments show that the CES algorithm returns high-quality solutions in a matter of a few hundreds of milliseconds and hence appears amenable to a real-time implementation.",
"title": ""
},
{
"docid": "f44d3512cd8658f824b0ba0ea5a69e4a",
"text": "Customer retention is a major issue for various service-based organizations particularly telecom industry, wherein predictive models for observing the behavior of customers are one of the great instruments in customer retention process and inferring the future behavior of the customers. However, the performances of predictive models are greatly affected when the real-world data set is highly imbalanced. A data set is called imbalanced if the samples size from one class is very much smaller or larger than the other classes. The most commonly used technique is over/under sampling for handling the class-imbalance problem (CIP) in various domains. In this paper, we survey six well-known sampling techniques and compare the performances of these key techniques, i.e., mega-trend diffusion function (MTDF), synthetic minority oversampling technique, adaptive synthetic sampling approach, couples top-N reverse k-nearest neighbor, majority weighted minority oversampling technique, and immune centroids oversampling technique. Moreover, this paper also reveals the evaluation of four rules-generation algorithms (the learning from example module, version 2 (LEM2), covering, exhaustive, and genetic algorithms) using publicly available data sets. The empirical results demonstrate that the overall predictive performance of MTDF and rules-generation based on genetic algorithms performed the best as compared with the rest of the evaluated oversampling methods and rule-generation algorithms.",
"title": ""
},
{
"docid": "3e9de22ac9f81cf3233950a0d72ef15a",
"text": "Increasing of head rise (HR) and decreasing of head loss (HL), simultaneously, are important purpose in the design of different types of fans. Therefore, multi-objective optimization process is more applicable for the design of such turbo machines. In the present study, multi-objective optimization of Forward-Curved (FC) blades centrifugal fans is performed at three steps. At the first step, Head rise (HR) and the Head loss (HL) in a set of FC centrifugal fan is numerically investigated using commercial software NUMECA. Two meta-models based on the evolved group method of data handling (GMDH) type neural networks are obtained, at the second step, for modeling of HR and HL with respect to geometrical design variables. Finally, using obtained polynomial neural networks, multi-objective genetic algorithms are used for Pareto based optimization of FC centrifugal fans considering two conflicting objectives, HR and HL. It is shown that some interesting and important relationships as useful optimal design principles involved in the performance of FC fans can be discovered by Pareto based multi-objective optimization of the obtained polynomial meta-models representing their HR and HL characteristics. Such important optimal principles would not have been obtained without the use of both GMDH type neural network modeling and the Pareto optimization approach.",
"title": ""
},
{
"docid": "bddf8420c2dd67dd5be10556088bf653",
"text": "The Hadoop Distributed File System (HDFS) is a distributed storage system that stores large-scale data sets reliably and streams those data sets to applications at high bandwidth. HDFS provides high performance, reliability and availability by replicating data, typically three copies of every data. The data in HDFS changes in popularity over time. To get better performance and higher disk utilization, the replication policy of HDFS should be elastic and adapt to data popularity. In this paper, we describe ERMS, an elastic replication management system for HDFS. ERMS provides an active/standby storage model for HDFS. It utilizes a complex event processing engine to distinguish real-time data types, and then dynamically increases extra replicas for hot data, cleans up these extra replicas when the data cool down, and uses erasure codes for cold data. ERMS also introduces a replica placement strategy for the extra replicas of hot data and erasure coding parities. The experiments show that ERMS effectively improves the reliability and performance of HDFS and reduce storage overhead.",
"title": ""
},
{
"docid": "40beda0d1e99f4cc5a15a3f7f6438ede",
"text": "One of the major challenges with electric shipboard power systems (SPS) is preserving the survivability of the system under fault situations. Some minor faults in SPS can result in catastrophic consequences. Therefore, it is essential to investigate available fault management techniques for SPS applications that can enhance SPS robustness and reliability. Many recent studies in this area take different approaches to address fault tolerance in SPSs. This paper provides an overview of the concepts and methodologies that are utilized to deal with faults in the electric SPS. First, a taxonomy of the types of faults and their sources in SPS is presented; then, the methods that are used to detect, identify, isolate, and manage faults are reviewed. Furthermore, common techniques for designing a fault management system in SPS are analyzed and compared. This paper also highlights several possible future research directions.",
"title": ""
}
] | scidocsrr |
1089ad0b6e4711d848b904c08ad9bc56 | THE FAILURE OF E-GOVERNMENT IN DEVELOPING COUNTRIES: A LITERATURE REVIEW | [
{
"docid": "310aa30e2dd2b71c09780f7984a3663c",
"text": "E-governance is more than just a government website on the Internet. The strategic objective of e-governance is to support and simplify governance for all parties; government, citizens and businesses. The use of ICTs can connect all three parties and support processes and activities. In other words, in e-governance electronic means support and stimulate good governance. Therefore, the objectives of e-governance are similar to the objectives of good governance. Good governance can be seen as an exercise of economic, political, and administrative authority to better manage affairs of a country at all levels. It is not difficult for people in developed countries to imagine a situation in which all interaction with government can be done through one counter 24 hours a day, 7 days a week, without waiting in lines. However to achieve this same level of efficiency and flexibility for developing countries is going to be difficult. The experience in developed countries shows that this is possible if governments are willing to decentralize responsibilities and processes, and if they start to use electronic means. This paper is going to examine the legal and infrastructure issues related to e-governance from the perspective of developing countries. Particularly it will examine how far the developing countries have been successful in providing a legal framework.",
"title": ""
}
] | [
{
"docid": "70242cb6aee415682c03da6bfd033845",
"text": "This paper presents a class of linear predictors for nonlinear controlled dynamical systems. The basic idea is to lift (or embed) the nonlinear dynamics into a higher dimensional space where its evolution is approximately linear. In an uncontrolled setting, this procedure amounts to numerical approximations of the Koopman operator associated to the nonlinear dynamics. In this work, we extend the Koopman operator to controlled dynamical systems and apply the Extended Dynamic Mode Decomposition (EDMD) to compute a finite-dimensional approximation of the operator in such a way that this approximation has the form of a linear controlled dynamical system. In numerical examples, the linear predictors obtained in this way exhibit a performance superior to existing linear predictors such as those based on local linearization or the so called Carleman linearization. Importantly, the procedure to construct these linear predictors is completely data-driven and extremely simple – it boils down to a nonlinear transformation of the data (the lifting) and a linear least squares problem in the lifted space that can be readily solved for large data sets. These linear predictors can be readily used to design controllers for the nonlinear dynamical system using linear controller design methodologies. We focus in particular on model predictive control (MPC) and show that MPC controllers designed in this way enjoy computational complexity of the underlying optimization problem comparable to that of MPC for a linear dynamical system with the same number of control inputs and the same dimension of the state-space. Importantly, linear inequality constraints on the state and control inputs as well as nonlinear constraints on the state can be imposed in a linear fashion in the proposed MPC scheme. Similarly, cost functions nonlinear in the state variable can be handled in a linear fashion. We treat both the full-state measurement case and the input-output case, as well as systems with disturbances / noise. Numerical examples (including a high-dimensional nonlinear PDE control) demonstrate the approach with the source code available online2.",
"title": ""
},
{
"docid": "ced13f6c3e904f5bd833e2f2621ae5e2",
"text": "A growing amount of research focuses on learning in group settings and more specifically on learning in computersupported collaborative learning (CSCL) settings. Studies on western students indicate that online collaboration enhances student learning achievement; however, few empirical studies have examined student satisfaction, performance, and knowledge construction through online collaboration from a cross-cultural perspective. This study examines satisfaction, performance, and knowledge construction via online group discussions of students in two different cultural contexts. Students were both first-year university students majoring in educational sciences at a Flemish university and a Chinese university. Differences and similarities of the two groups of students with regard to satisfaction, learning process, and achievement were analyzed.",
"title": ""
},
{
"docid": "3a0da20211697fbcce3493aff795556c",
"text": "OBJECTIVES\nWe studied whether park size, number of features in the park, and distance to a park from participants' homes were related to a park being used for physical activity.\n\n\nMETHODS\nWe collected observational data on 28 specific features from 33 parks. Adult residents in surrounding areas (n=380) completed 7-day physical activity logs that included the location of their activities. We used logistic regression to examine the relative importance of park size, features, and distance to participants' homes in predicting whether a park was used for physical activity, with control for perceived neighborhood safety and aesthetics.\n\n\nRESULTS\nParks with more features were more likely to be used for physical activity; size and distance were not significant predictors. Park facilities were more important than were park amenities. Of the park facilities, trails had the strongest relationship with park use for physical activity.\n\n\nCONCLUSIONS\nSpecific park features may have significant implications for park-based physical activity. Future research should explore these factors in diverse neighborhoods and diverse parks among both younger and older populations.",
"title": ""
},
{
"docid": "102ed07783d46a8ebadcad4b30ccb3c8",
"text": "Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.",
"title": ""
},
{
"docid": "99206cfadd7aeb90f4cebaa1edebc0e1",
"text": "An energy-efficient gait planning (EEGP) and control system is established for biped robots with three-mass inverted pendulum mode (3MIPM), which utilizes both vertical body motion (VBM) and allowable zero-moment-point (ZMP) region (AZR). Given a distance to be traveled, we newly designed an online gait synthesis algorithm to construct a complete walking cycle, i.e., a starting step, multiple cyclic steps, and a stopping step, in which: 1) ZMP was fully manipulated within AZR; and 2) vertical body movement was allowed to relieve knee bending. Moreover, gait parameter optimization is effectively performed to determine the optimal set of gait parameters, i.e., average body height and amplitude of VBM, number of steps, and average walking speed, which minimizes energy consumption of actuation motors for leg joints under practical constraints, i.e., geometrical constraints, friction force limit, and yawing moment limit. Various simulations were conducted to identify the effectiveness of the proposed method and verify energy-saving performance for various ZMP regions. Our control system was implemented and tested on the humanoid robot DARwIn-OP.",
"title": ""
},
{
"docid": "9fc2d92c42400a45cb7bf6c998dc9236",
"text": "This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.",
"title": ""
},
{
"docid": "c1ba049befffa94e358555056df15cc2",
"text": "People design what they say specifically for their conversational partners, and they adapt to their partners over the course of a conversation. A comparison of keyboard conversations involving a simulated computer partner (as in a natural language interface) with those involving a human partner (as in teleconferencing) yielded striking differences and some equally striking similarities. For instance, there were significantly fewer acknowledgments in human/computer dialogue than in human/human. However, regardless of the conversational partner, people expected connectedness across conversational turns. In addition, the style of a partner's response shaped what people subsequently typed. These results suggest some issues that need to be addressed before a natural language computer interface will be able to hold up its end of a conversation.",
"title": ""
},
{
"docid": "277bdeccc25baa31ba222ff80a341ef2",
"text": "Teaching by examples and cases is widely used to promote learning, but it varies widely in its effectiveness. The authors test an adaptation to case-based learning that facilitates abstracting problemsolving schemas from examples and using them to solve further problems: analogical encoding, or learning by drawing a comparison across examples. In 3 studies, the authors examined schema abstraction and transfer among novices learning negotiation strategies. Experiment 1 showed a benefit for analogical learning relative to no case study. Experiment 2 showed a marked advantage for comparing two cases over studying the 2 cases separately. Experiment 3 showed that increasing the degree of comparison support increased the rate of transfer in a face-to-face dynamic negotiation exercise.",
"title": ""
},
{
"docid": "a0c6b1817a08d1be63dff9664852a6b4",
"text": "Despite years of HCI research on digital technology in museums, it is still unclear how different interactions impact on visitors'. A comparative evaluation of smart replicas, phone app and smart cards looked at the personal preferences, behavioural change, and the appeal of mobiles in museums. 76 participants used all three interaction modes and gave their opinions in a questionnaire; participants interaction was also observed. The results show the phone is the most disliked interaction mode while tangible interaction (smart card and replica combined) is the most liked. Preference for the phone favour mobility to the detriment of engagement with the exhibition. Different behaviours when interacting with the phone or the tangibles where observed. The personal visiting style appeared to be only marginally affected by the device. Visitors also expect museums to provide the phones against the current trend of developing apps in a \"bring your own device\" approach.",
"title": ""
},
{
"docid": "d9df98fbd7281b67347df0f2643323fa",
"text": "Predefined categories can be assigned to the natural language text using for text classification. It is a “bag-of-word” representation, previous documents have a word with values, it represents how frequently this word appears in the document or not. But large documents may face many problems because they have irrelevant or abundant information is there. This paper explores the effect of other types of values, which express the distribution of a word in the document. These values are called distributional features. All features are calculated by tfidf style equation and these features are combined with machine learning techniques. Term frequency is one of the major factor for distributional features it holds weighted item set. When the need is to minimize a certain score function, discovering rare data correlations is more interesting than mining frequent ones. This paper tackles the issue of discovering rare and weighted item sets, i.e., the infrequent weighted item set mining problem. The classifier which gives the more accurate result is selected for categorization. Experiments show that the distributional features are useful for text categorization.",
"title": ""
},
{
"docid": "46f646c82f30eae98142c83045176353",
"text": "In this article, the authors present a psychodynamically oriented psychotherapy approach for posttraumatic stress disorder (PTSD) related to childhood abuse. This neurobiologically informed, phase-oriented treatment approach, which has been developed in Germany during the past 20 years, takes into account the broad comorbidity and the large degree of ego-function impairment typically found in these patients. Based on a psychodynamic relationship orientation, this treatment integrates a variety of trauma-specific imaginative and resource-oriented techniques. The approach places major emphasis on the prevention of vicarious traumatization. The authors are presently planning to test the approach in a randomized controlled trial aimed at strengthening the evidence base for psychodynamic psychotherapy in PTSD.",
"title": ""
},
{
"docid": "87c793be992e5d25c8422011bd52be12",
"text": "A major challenge in real-world feature matching problems is to tolerate the numerous outliers arising in typical visual tasks. Variations in object appearance, shape, and structure within the same object class make it harder to distinguish inliers from outliers due to clutters. In this paper, we propose a max-pooling approach to graph matching, which is not only resilient to deformations but also remarkably tolerant to outliers. The proposed algorithm evaluates each candidate match using its most promising neighbors, and gradually propagates the corresponding scores to update the neighbors. As final output, it assigns a reliable score to each match together with its supporting neighbors, thus providing contextual information for further verification. We demonstrate the robustness and utility of our method with synthetic and real image experiments.",
"title": ""
},
{
"docid": "d7108ba99aaa9231d926a52617baa712",
"text": "In this paper, an ultra-compact single-chip solar energy harvesting IC using on-chip solar cell for biomedical implant applications is presented. By employing an on-chip charge pump with parallel connected photodiodes, a 3.5 <inline-formula> <tex-math notation=\"LaTeX\">$\\times$</tex-math></inline-formula> efficiency improvement can be achieved when compared with the conventional stacked photodiode approach to boost the harvested voltage while preserving a single-chip solution. A photodiode-assisted dual startup circuit (PDSC) is also proposed to improve the area efficiency and increase the startup speed by 77%. By employing an auxiliary charge pump (AQP) using zero threshold voltage (ZVT) devices in parallel with the main charge pump, a low startup voltage of 0.25 V is obtained while minimizing the reversion loss. A <inline-formula> <tex-math notation=\"LaTeX\">$4\\, {\\mathbf{V}}_{\\mathbf{in}}$</tex-math></inline-formula> gate drive voltage is utilized to reduce the conduction loss. Systematic charge pump and solar cell area optimization is also introduced to improve the energy harvesting efficiency. The proposed system is implemented in a standard 0.18- <inline-formula> <tex-math notation=\"LaTeX\">$\\mu\\text{m}$</tex-math></inline-formula> CMOS technology and occupies an active area of 1.54 <inline-formula> <tex-math notation=\"LaTeX\">$\\text{mm}^{2}$</tex-math></inline-formula>. Measurement results show that the on-chip charge pump can achieve a maximum efficiency of 67%. With an incident power of 1.22 <inline-formula> <tex-math notation=\"LaTeX\">$\\text{mW/cm}^{2}$</tex-math></inline-formula> from a halogen light source, the proposed energy harvesting IC can deliver an output power of 1.65 <inline-formula> <tex-math notation=\"LaTeX\">$\\mu\\text{W}$</tex-math></inline-formula> at 64% charge pump efficiency. The chip prototype is also verified using <italic>in-vitro</italic> experiment.",
"title": ""
},
{
"docid": "e3eae34f1ad48264f5b5913a65bf1247",
"text": "Double spending and blockchain forks are two main issues that the Bitcoin crypto-system is confronted with. The former refers to an adversary's ability to use the very same coin more than once while the latter reflects the occurrence of transient inconsistencies in the history of the blockchain distributed data structure. We present a new approach to tackle these issues: it consists in adding some local synchronization constraints on Bitcoin's validation operations, and in making these constraints independent from the native blockchain protocol. Synchronization constraints are handled by nodes which are randomly and dynamically chosen in the Bitcoin system. We show that with such an approach, content of the blockchain is consistent with all validated transactions and blocks which guarantees the absence of both double-spending attacks and blockchain forks.",
"title": ""
},
{
"docid": "fb2287cb1c41441049288335f10fd473",
"text": "One of the important types of information on the Web is the opinions expressed in the user generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidences and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly",
"title": ""
},
{
"docid": "d40aa76e76c44da4c6237f654dcdab45",
"text": "The flipped classroom pedagogy has achieved significant mention in academic circles in recent years. \"Flipping\" involves the reinvention of a traditional course so that students engage with learning materials via recorded lectures and interactive exercises prior to attending class and then use class time for more interactive activities. Proper implementation of a flipped classroom is difficult to gauge, but combines successful techniques for distance education with constructivist learning theory in the classroom. While flipped classrooms are not a novel concept, technological advances and increased comfort with distance learning have made the tools to produce and consume course materials more pervasive. Flipped classroom experiments have had both positive and less-positive results and are generally measured by a significant improvement in learning outcomes. This study, however, analyzes the opinions of students in a flipped sophomore-level information technology course by using a combination of surveys and reflective statements. The author demonstrates that at the outset students are new - and somewhat receptive - to the concept of the flipped classroom. By the conclusion of the course satisfaction with the pedagogy is significant. Finally, student feedback is provided in an effort to inform instructors in the development of their own flipped classrooms.",
"title": ""
},
{
"docid": "5838d6a17e2223c6421da33d5985edd1",
"text": "In this article, I provide commentary on the Rudd et al. (2009) article advocating thorough informed consent with suicidal clients. I examine the Rudd et al. recommendations in light of their previous empirical-research and clinical-practice articles on suicidality, and from the perspective of clinical practice with suicidal clients in university counseling center settings. I conclude that thorough informed consent is a clinical intervention that is still in preliminary stages of development, necessitating empirical research and clinical training before actual implementation as an ethical clinical intervention. (PsycINFO Database Record (c) 2010 APA, all rights reserved).",
"title": ""
},
{
"docid": "a4cfe72cae5bdaed110299d652e60a6f",
"text": "Hoffa's (infrapatellar) fat pad (HFP) is one of the knee fat pads interposed between the joint capsule and the synovium. Located posterior to patellar tendon and anterior to the capsule, the HFP is richly innervated and, therefore, one of the sources of anterior knee pain. Repetitive local microtraumas, impingement, and surgery causing local bleeding and inflammation are the most frequent causes of HFP pain and can lead to a variety of arthrofibrotic lesions. In addition, the HFP may be secondarily involved to menisci and ligaments disorders, injuries of the patellar tendon and synovial disorders. Patients with oedema or abnormalities of the HFP on magnetic resonance imaging (MRI) are often symptomatic; however, these changes can also be seen in asymptomatic patients. Radiologists should be cautious in emphasising abnormalities of HFP since they do not always cause pain and/or difficulty in walking and, therefore, do not require therapy. Teaching Points • Hoffa's fat pad (HFP) is richly innervated and, therefore, a source of anterior knee pain. • HFP disorders are related to traumas, involvement from adjacent disorders and masses. • Patients with abnormalities of the HFP on MRI are often but not always symptomatic. • Radiologists should be cautious in emphasising abnormalities of HFP.",
"title": ""
},
{
"docid": "4ae82b3362756b0efed84596076ea6fb",
"text": "Smart grids equipped with bi-directional communication flow are expected to provide more sophisticated consumption monitoring and energy trading. However, the issues related to the security and privacy of consumption and trading data present serious challenges. In this paper we address the problem of providing transaction security in decentralized smart grid energy trading without reliance on trusted third parties. We have implemented a proof-of-concept for decentralized energy trading system using blockchain technology, multi-signatures, and anonymous encrypted messaging streams, enabling peers to anonymously negotiate energy prices and securely perform trading transactions. We conducted case studies to perform security analysis and performance evaluation within the context of the elicited security and privacy requirements.",
"title": ""
}
] | scidocsrr |
19e7b796871086d407576d1f0ef80d83 | Bidirectional Single-Stage Grid-Connected Inverter for a Battery Energy Storage System | [
{
"docid": "f1e9c9106dd3cdd7b568d5513b39ac7a",
"text": "This paper presents a novel zero-voltage switching (ZVS) approach to a grid-connected single-stage flyback inverter. The soft-switching of the primary switch is achieved by allowing negative current from the grid side through bidirectional switches placed on the secondary side of the transformer. Basically, the negative current discharges the metal-oxide-semiconductor field-effect transistor's output capacitor, thereby allowing turn on of the primary switch under zero voltage. To optimize the amount of reactive current required to achieve ZVS, a variable-frequency control scheme is implemented over the line cycle. In addition, the bidirectional switches on the secondary side of the transformer have ZVS during the turn- on times. Therefore, the switching losses of the bidirectional switches are negligible. A 250-W prototype has been implemented to validate the proposed scheme. Experimental results confirm the feasibility and superior performance of the converter compared with the conventional flyback inverter.",
"title": ""
},
{
"docid": "5042532d025cd5bdb21893a2c2e9f9b4",
"text": "This paper presents an energy sharing state-of-charge (SOC) balancing control scheme based on a distributed battery energy storage system architecture where the cell balancing system and the dc bus voltage regulation system are combined into a single system. The battery cells are decoupled from one another by connecting each cell with a small lower power dc-dc power converter. The small power converters are utilized to achieve both SOC balancing between the battery cells and dc bus voltage regulation at the same time. The battery cells' SOC imbalance issue is addressed from the root by using the energy sharing concept to automatically adjust the discharge/charge rate of each cell while maintaining a regulated dc bus voltage. Consequently, there is no need to transfer the excess energy between the cells for SOC balancing. The theoretical basis and experimental prototype results are provided to illustrate and validate the proposed energy sharing controller.",
"title": ""
}
] | [
{
"docid": "9dd6d9f5643c4884e981676230f3ee66",
"text": "A rank-r matrix X ∈ Rm×n can be written as a product UV >, where U ∈ Rm×r and V ∈ Rn×r. One could exploit this observation in optimization: e.g., consider the minimization of a convex function f(X) over rank-r matrices, where the scaffold of rank-r matrices is modeled via the factorization in U and V variables. Such heuristic has been widely used before for specific problem instances, where the solution sought is (approximately) low-rank. Though such parameterization reduces the number of variables and is more efficient in computational speed and memory requirement (of particular interest is the case r min{m,n}), it comes at a cost: f(UV >) becomes a non-convex function w.r.t. U and V . In this paper, we study such parameterization in optimization of generic convex f and focus on first-order, gradient descent algorithmic solutions. We propose an algorithm we call the Bi-Factored Gradient Descent (BFGD) algorithm, an efficient first-order method that operates on the U, V factors. We show that when f is smooth, BFGD has local sublinear convergence, and linear convergence when f is both smooth and strongly convex. Moreover, for several key applications, we provide simple and efficient initialization schemes that provide approximate solutions good enough for the above convergence results to hold.",
"title": ""
},
{
"docid": "d5e573802d6519a8da402f2e66064372",
"text": "Targeted cyberattacks play an increasingly significant role in disrupting the online social and economic model, not to mention the threat they pose to nation-states. A variety of components and techniques come together to bring about such attacks.",
"title": ""
},
{
"docid": "074d9b68f1604129bcfdf0bb30bbd365",
"text": "This paper describes a methodology for semi-supervised learning of dialogue acts using the similarity between sentences. We suppose that the dialogue sentences with the same dialogue act are more similar in terms of semantic and syntactic information. However, previous work on sentence similarity mainly modeled a sentence as bag-of-words and then compared different groups of words using corpus-based or knowledge-based measurements of word semantic similarity. Novelly, we present a vector-space sentence representation, composed of word embeddings, that is, the related word distributed representations, and these word embeddings are organised in a sentence syntactic structure. Given the vectors of the dialogue sentences, a distance measurement can be well-defined to compute the similarity between them. Finally, a seeded k-means clustering algorithm is implemented to classify the dialogue sentences into several categories corresponding to particular dialogue acts. This constitutes the semi-supervised nature of the approach, which aims to ameliorate the reliance of the availability of annotated corpora. Experiments with Switchboard Dialog Act corpus show that classification accuracy is improved by 14%, compared to the state-of-art methods based on Support Vector Machine.",
"title": ""
},
{
"docid": "e1958dc823feee7f88ab5bf256655bee",
"text": "We describe an approach for testing a software system for possible securi ty flaws. Traditionally, security testing is done using penetration analysis and formal methods. Based on the observation that most security flaws are triggered due to a flawed interaction with the envi ronment, we view the security testing problem as the problem of testing for the fault-tolerance prop erties of a software system. We consider each environment perturbation as a fault and the resulting security ompromise a failure in the toleration of such faults. Our approach is based on the well known techn ique of fault-injection. Environment faults are injected into the system under test and system beha vior observed. The failure to tolerate faults is an indicator of a potential security flaw in the syst em. An Environment-Application Interaction (EAI) fault model is proposed. EAI allows us to decide what f aults to inject. Based on EAI, we present a security-flaw classification scheme. This scheme was used to classif y 142 security flaws in a vulnerability database. This classification revealed that 91% of the security flaws in the database are covered by the EAI model.",
"title": ""
},
{
"docid": "b5af51c869fa4863dfa581b0fb8cc20a",
"text": "This paper describes progress toward a prototype implementation of a tool which aims to improve literacy in deaf high school and college students who are native (or near native) signers of American Sign Language (ASL). We envision a system that will take a piece of text written by a deaf student, analyze that text for grammatical errors, and engage that student in a tutorial dialogue, enabling the student to generate appropriate corrections to the text. A strong focus of this work is to develop a system which adapts this process to the knowledge level and learning strengths of the user and which has the flexibility to engage in multi-modal, multilingual tutorial instruction utilizing both English and the native language of the user.",
"title": ""
},
{
"docid": "7f6e966f3f924e18cb3be0ae618309e6",
"text": "designed shapes incorporating typedesign tradition, the rules related to visual appearance, and the design ideas of a skilled character designer. The typographic design process is structured and systematic: letterforms are visually related in weight, contrast, space, alignment, and style. To create a new typeface family, type designers generally start by designing a few key characters—such as o, h, p, and v— incorporating the most important structure elements such as vertical stems, round parts, diagonal bars, arches, and serifs (see Figure 1). They can then use the design features embedded into these structure elements (stem width, behavior of curved parts, contrast between thick and thin shape parts, and so on) to design the font’s remaining characters. Today’s industrial font description standards such as Adobe Type 1 or TrueType represent typographic characters by their shape outlines, because of the simplicity of digitizing the contours of well-designed, large-size master characters. However, outline characters only implicitly incorporate the designer’s intentions. Because their structure elements aren’t explicit, creating aesthetically appealing derived designs requiring coherent changes in character width, weight (boldness), and contrast is difficult. Outline characters aren’t suitable for optical scaling, which requires relatively fatter letter shapes at small sizes. Existing approaches for creating derived designs from outline fonts require either specifying constraints to maintain the coherence of structure elements across different characters or creating multiple master designs for the interpolation of derived designs. We present a new approach for describing and synthesizing typographic character shapes. Instead of describing characters by their outlines, we conceive each character as an assembly of structure elements (stems, bars, serifs, round parts, and arches) implemented by one or several shape components. We define the shape components by typeface-category-dependent global parameters such as the serif and junction types, by global font-dependent metrics such as the location of reference lines and the width of stems and curved parts, and by group and local parameters. (See the sidebar “Previous Work” for background information on the field of parameterizable fonts.)",
"title": ""
},
{
"docid": "b527ade4819e314a723789de58280724",
"text": "Securing collaborative filtering systems from malicious attack has become an important issue with increasing popularity of recommender Systems. Since recommender systems are entirely based on the input provided by the users or customers, they tend to become highly vulnerable to outside attacks. Prior research has shown that attacks can significantly affect the robustness of the systems. To prevent such attacks, researchers proposed several unsupervised detection mechanisms. While these approaches produce satisfactory results in detecting some well studied attacks, they are not suitable for all types of attacks studied recently. In this paper, we show that the unsupervised clustering can be used effectively for attack detection by computing detection attributes modeled on basic descriptive statistics. We performed extensive experiments and discussed different approaches regarding their performances. Our experimental results showed that attribute-based unsupervised clustering algorithm can detect spam users with a high degree of accuracy and fewer misclassified genuine users regardless of attack strategies.",
"title": ""
},
{
"docid": "73e4fed83bf8b1f473768ce15d6a6a86",
"text": "Improving science, technology, engineering, and mathematics (STEM) education, especially for traditionally disadvantaged groups, is widely recognized as pivotal to the U.S.'s long-term economic growth and security. In this article, we review and discuss current research on STEM education in the U.S., drawing on recent research in sociology and related fields. The reviewed literature shows that different social factors affect the two major components of STEM education attainment: (1) attainment of education in general, and (2) attainment of STEM education relative to non-STEM education conditional on educational attainment. Cognitive and social psychological characteristics matter for both major components, as do structural influences at the neighborhood, school, and broader cultural levels. However, while commonly used measures of socioeconomic status (SES) predict the attainment of general education, social psychological factors are more important influences on participation and achievement in STEM versus non-STEM education. Domestically, disparities by family SES, race, and gender persist in STEM education. Internationally, American students lag behind those in some countries with less economic resources. Explanations for group disparities within the U.S. and the mediocre international ranking of US student performance require more research, a task that is best accomplished through interdisciplinary approaches.",
"title": ""
},
{
"docid": "7c5f2c92cb3d239674f105a618de99e0",
"text": "We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy channel Brill and Moore model to spelling error correction. We also consider elementary combination techniques for our models such as language model weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.",
"title": ""
},
{
"docid": "4a5abe07b93938e7549df068967731fc",
"text": "A novel compact dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The proposed miniaturization method consist in transforming the electrical filled square dipoles into vertical folded square loops. The surface of the radiating element is reduced to 0.23λ0∗0.23λ0, where λ0 is the wavelength at the lowest operation frequency for a standing wave ratio (SWR) <2.5, which corresponds to a reduction factor of 48%. The antenna has been prototyped using 3D printing technology. The measured input impedance bandwidth is 51.2% from 1.7 GHz to 2.9 GHz with a Standing wave ratio (SWR) <2.",
"title": ""
},
{
"docid": "1331dc5705d4b416054341519126f32f",
"text": "There is a large tradition of work in moral psychology that explores the capacity for moral judgment by focusing on the basic capacity to distinguish moral violations (e.g. hitting another person) from conventional violations (e.g. playing with your food). However, only recently have there been attempts to characterize the cognitive mechanisms underlying moral judgment (e.g. Cognition 57 (1995) 1; Ethics 103 (1993) 337). Recent evidence indicates that affect plays a crucial role in mediating the capacity to draw the moral/conventional distinction. However, the prevailing account of the role of affect in moral judgment is problematic. This paper argues that the capacity to draw the moral/conventional distinction depends on both a body of information about which actions are prohibited (a Normative Theory) and an affective mechanism. This account leads to the prediction that other normative prohibitions that are connected to an affective mechanism might be treated as non-conventional. An experiment is presented that indicates that \"disgust\" violations (e.g. spitting at the table), are distinguished from conventional violations along the same dimensions as moral violations.",
"title": ""
},
{
"docid": "ad5b787fd972c202a69edc98a8fbc7ba",
"text": "BACKGROUND\nIntimate partner violence (IPV) is a major public health problem with serious consequences for women's physical, mental, sexual and reproductive health. Reproductive health outcomes such as unwanted and terminated pregnancies, fetal loss or child loss during infancy, non-use of family planning methods, and high fertility are increasingly recognized. However, little is known about the role of community influences on women's experience of IPV and its effect on terminated pregnancy, given the increased awareness of IPV being a product of social context. This study sought to examine the role of community-level norms and characteristics in the association between IPV and terminated pregnancy in Nigeria.\n\n\nMETHODS\nMultilevel logistic regression analyses were performed on nationally-representative cross-sectional data including 19,226 women aged 15-49 years in Nigeria. Data were collected by a stratified two-stage sampling technique, with 888 primary sampling units (PSUs) selected in the first sampling stage, and 7,864 households selected through probability sampling in the second sampling stage.\n\n\nRESULTS\nWomen who had experienced physical IPV, sexual IPV, and any IPV were more likely to have terminated a pregnancy compared to women who had not experienced these IPV types.IPV types were significantly associated with factors reflecting relationship control, relationship inequalities, and socio-demographic characteristics. Characteristics of the women aggregated at the community level (mean education, justifying wife beating, mean age at first marriage, and contraceptive use) were significantly associated with IPV types and terminated pregnancy.\n\n\nCONCLUSION\nFindings indicate the role of community influence in the association between IPV-exposure and terminated pregnancy, and stress the need for screening women seeking abortions for a history of abuse.",
"title": ""
},
{
"docid": "20718ae394b5f47387499e5f3360a888",
"text": "Crowding, the inability to recognize objects in clutter, sets a fundamental limit on conscious visual perception and object recognition throughout most of the visual field. Despite how widespread and essential it is to object recognition, reading and visually guided action, a solid operational definition of what crowding is has only recently become clear. The goal of this review is to provide a broad-based synthesis of the most recent findings in this area, to define what crowding is and is not, and to set the stage for future work that will extend our understanding of crowding well beyond low-level vision. Here we define six diagnostic criteria for what counts as crowding, and further describe factors that both escape and break crowding. All of these lead to the conclusion that crowding occurs at multiple stages in the visual hierarchy.",
"title": ""
},
{
"docid": "e5ce1ddd50a728fab41043324938a554",
"text": "B-trees are used by many file systems to represent files and directories. They provide guaranteed logarithmic time key-search, insert, and remove. File systems like WAFL and ZFS use shadowing, or copy-on-write, to implement snapshots, crash recovery, write-batching, and RAID. Serious difficulties arise when trying to use b-trees and shadowing in a single system.\n This article is about a set of b-tree algorithms that respects shadowing, achieves good concurrency, and implements cloning (writeable snapshots). Our cloning algorithm is efficient and allows the creation of a large number of clones.\n We believe that using our b-trees would allow shadowing file systems to better scale their on-disk data structures.",
"title": ""
},
{
"docid": "54234eef5d56951e408d2a163dfd27f8",
"text": "In many applications of wireless sensor networks (WSNs), node location is required to locate the monitored event once occurs. Mobility-assisted localization has emerged as an efficient technique for node localization. It works on optimizing a path planning of a location-aware mobile node, called mobile anchor (MA). The task of the MA is to traverse the area of interest (network) in a way that minimizes the localization error while maximizing the number of successful localized nodes. For simplicity, many path planning models assume that the MA has a sufficient source of energy and time, and the network area is obstacle-free. However, in many real-life applications such assumptions are rare. When the network area includes many obstacles, which need to be avoided, and the MA itself has a limited movement distance that cannot be exceeded, a dynamic movement approach is needed. In this paper, we propose two novel dynamic movement techniques that offer obstacle-avoidance path planning for mobility-assisted localization in WSNs. The movement planning is designed in a real-time using two swarm intelligence based algorithms, namely grey wolf optimizer and whale optimization algorithm. Both of our proposed models, grey wolf optimizer-based path planning and whale optimization algorithm-based path planning, provide superior outcomes in comparison to other existing works in several metrics including both localization ratio and localization error rate.",
"title": ""
},
{
"docid": "a488509590cd496669bdcc3ce8cc5fe5",
"text": "Ghrelin is an endogenous ligand for the growth hormone secretagogue receptor and a well-characterized food intake regulatory peptide. Hypothalamic ghrelin-, neuropeptide Y (NPY)-, and orexin-containing neurons form a feeding regulatory circuit. Orexins and NPY are also implicated in sleep-wake regulation. Sleep responses and motor activity after central administration of 0.2, 1, or 5 microg ghrelin in free-feeding rats as well as in feeding-restricted rats (1 microg dose) were determined. Food and water intake and behavioral responses after the light onset injection of saline or 1 microg ghrelin were also recorded. Light onset injection of ghrelin suppressed non-rapid-eye-movement sleep (NREMS) and rapid-eye-movement sleep (REMS) for 2 h. In the first hour, ghrelin induced increases in behavioral activity including feeding, exploring, and grooming and stimulated food and water intake. Ghrelin administration at dark onset also elicited NREMS and REMS suppression in hours 1 and 2, but the effect was not as marked as that, which occurred in the light period. In hours 3-12, a secondary NREMS increase was observed after some doses of ghrelin. In the feeding-restricted rats, ghrelin suppressed NREMS in hours 1 and 2 and REMS in hours 3-12. Data are consistent with the notion that ghrelin has a role in the integration of feeding, metabolism, and sleep regulation.",
"title": ""
},
{
"docid": "7b27d8b8f05833888b9edacf9ace0a18",
"text": "This paper reports results from a study on the adoption of an information visualization system by administrative data analysts. Despite the fact that the system was neither fully integrated with their current software tools nor with their existing data analysis practices, analysts identified a number of key benefits that visualization systems provide to their work. These benefits for the most part occurred when analysts went beyond their habitual and well-mastered data analysis routines and engaged in creative discovery processes. We analyze the conditions under which these benefits arose, to inform the design of visualization systems that can better assist the work of administrative data analysts.",
"title": ""
},
{
"docid": "8a7f4cde54d120aab50c9d4f45e67a43",
"text": "The purpose of this study was to assess the perceived discomfort of patrol officers related to equipment and vehicle design and whether there were discomfort differences between day and night shifts. A total of 16 participants were recruited (10 males, 6 females) from a local police force to participate for one full day shift and one full night shift. A series of questionnaires were administered to acquire information regarding comfort with specific car features and occupational gear, body part discomfort and health and lifestyle. The discomfort questionnaires were administered three times during each shift to monitor discomfort progression within a shift. Although there were no significant discomfort differences reported between the day and night shifts, perceived discomfort was identified for specific equipment, vehicle design and vehicle configuration, within each 12-h shift.",
"title": ""
},
{
"docid": "6150e19bffad5629c6d5cb7439663b13",
"text": "We present NeuroLinear, a system for extracting oblique decision rules from neural networks that have been trained for classiication of patterns. Each condition of an oblique decision rule corresponds to a partition of the attribute space by a hyperplane that is not necessarily axis-parallel. Allowing a set of such hyperplanes to form the boundaries of the decision regions leads to a signiicant reduction in the number of rules generated while maintaining the accuracy rates of the networks. We describe the components of NeuroLinear in detail by way of two examples using artiicial datasets. Our experimental results on real-world datasets show that the system is eeective in extracting compact and comprehensible rules with high predictive accuracy from neural networks.",
"title": ""
},
{
"docid": "ab0c80a10d26607134828c6b350089aa",
"text": "Parkinson's disease (PD) is a neurodegenerative disorder with symptoms that progressively worsen with age. Pathologically, PD is characterized by the aggregation of α-synuclein in cells of the substantia nigra in the brain and loss of dopaminergic neurons. This pathology is associated with impaired movement and reduced cognitive function. The etiology of PD can be attributed to a combination of environmental and genetic factors. A popular animal model, the nematode roundworm Caenorhabditis elegans, has been frequently used to study the role of genetic and environmental factors in the molecular pathology and behavioral phenotypes associated with PD. The current review summarizes cellular markers and behavioral phenotypes in transgenic and toxin-induced PD models of C. elegans.",
"title": ""
}
] | scidocsrr |
841fc2f45374901757ef197cf666e2e9 | Perceived learning environment and students ’ emotional experiences : A multilevel analysis of mathematics classrooms * | [
{
"docid": "e47276a0b7139e31266d032bb3a0cbfc",
"text": "We assessed math anxiety in 6ththrough 12th-grade children (N = 564) as part of a comprehensive longitudinal investigation of children's beliefs, attitudes, and values concerning mathematics. Confirmatory factor analyses provided evidence for two components of math anxiety, a negative affective reactions component and a cognitive component. The affective component of math anxiety related more strongly and negatively than did the worry component to children's ability perceptions, performance perceptions, and math performance. The worry component related more strongly and positively than did the affective component to the importance that children attach to math and their reported actual effort in math. Girls reported stronger negative affective reactions to math than did boys. Ninth-grade students reported experiencing the most worry about math and sixth graders the least.",
"title": ""
},
{
"docid": "db422d1fcb99b941a43e524f5f2897c2",
"text": "AN INDIVIDUAL CORRELATION is a correlation in which the statistical object or thing described is indivisible. The correlation between color and illiteracy for persons in the United States, shown later in Table I, is an individual correlation, because the kind of thing described is an indivisible unit, a person. In an individual correlation the variables are descriptive properties of individuals, such as height, income, eye color, or race, and not descriptive statistical constants such as rates or means. In an ecological correlation the statistical object is a group of persons. The correlation between the percentage of the population which is Negro and the percentage of the population which is illiterate for the 48 states, shown later as Figure 2, is an ecological correlation. The thing described is the population of a state, and not a single individual. The variables are percentages, descriptive properties of groups, and not descriptive properties of individuals. Ecological correlations are used in an impressive number of quantitative sociological studies, some of which by now have attained the status of classics: Cowles’ ‘‘Statistical Study of Climate in Relation to Pulmonary Tuberculosis’’; Gosnell’s ‘‘Analysis of the 1932 Presidential Vote in Chicago,’’ Factorial and Correlational Analysis of the 1934 Vote in Chicago,’’ and the more elaborate factor analysis in Machine Politics; Ogburn’s ‘‘How women vote,’’ ‘‘Measurement of the Factors in the Presidential Election of 1928,’’ ‘‘Factors in the Variation of Crime Among Cities,’’ and Groves and Ogburn’s correlation analyses in American Marriage and Family Relationships; Ross’ study of school attendance in Texas; Shaw’s Delinquency Areas study of the correlates of delinquency, as well as The more recent analyses in Juvenile Delinquency in Urban Areas; Thompson’s ‘‘Some Factors Influencing the Ratios of Children to Women in American Cities, 1930’’; Whelpton’s study of the correlates of birth rates, in ‘‘Geographic and Economic Differentials in Fertility;’’ and White’s ‘‘The Relation of Felonies to Environmental Factors in Indianapolis.’’ Although these studies and scores like them depend upon ecological correlations, it is not because their authors are interested in correlations between the properties of areas as such. Even out-and-out ecologists, in studying delinquency, for example, rely primarily upon data describing individuals, not areas. In each study which uses ecological correlations, the obvious purpose is to discover something about the behavior of individuals. Ecological correlations are used simply because correlations between the properties of individuals are not available. In each instance, however, the substitution is made tacitly rather than explicitly. The purpose of this paper is to clarify the ecological correlation problem by stating, mathematically, the exact relation between ecological and individual correlations, and by showing the bearing of that relation upon the practice of using ecological correlations as substitutes for individual correlations.",
"title": ""
},
{
"docid": "f71d0084ebb315a346b52c7630f36fb2",
"text": "A theory of motivation and emotion is proposed in which causal ascriptions play a key role. It is first documented that in achievement-related contexts there are a few dominant causal perceptions. The perceived causes of success and failure share three common properties: locus, stability, and controllability, with intentionality and globality as other possible causal structures. The perceived stability of causes influences changes in expectancy of success; all three dimensions of causality affect a variety of common emotional experiences, including anger, gratitude, guilt, hopelessness, pity, pride, and shame. Expectancy and affect, in turn, are presumed to guide motivated behavior. The theory therefore relates the structure of thinking to the dynamics of feeling and action. Analysis of a created motivational episode involving achievement strivings is offered, and numerous empirical observations are examined from this theoretical position. The strength of the empirical evidence, the capability of this theory to address prevalent human emotions, and the potential generality of the conception are stressed.",
"title": ""
}
] | [
{
"docid": "264aa89aa10fe05cff2f0e1a239e79ff",
"text": "While the terminology has changed over time, the basic concept of the Digital Twin model has remained fairly stable from its inception in 2001. It is based on the idea that a digital informational construct about a physical system could be created as an entity on its own. This digital information would be a “twin” of the information that was embedded within the physical system itself and be linked with that physical system through the entire lifecycle of the system.",
"title": ""
},
{
"docid": "fce170ad2238ad6066c9e17a3a388e7d",
"text": "Language resources that systematically organize paraphrases for binary relations are of great value for various NLP tasks and have recently been advanced in projects like PATTY, WiseNet and DEFIE. This paper presents a new method for building such a resource and the resource itself, called POLY. Starting with a very large collection of multilingual sentences parsed into triples of phrases, our method clusters relational phrases using probabilistic measures. We judiciously leverage fine-grained semantic typing of relational arguments for identifying synonymous phrases. The evaluation of POLY shows significant improvements in precision and recall over the prior works on PATTY and DEFIE. An extrinsic use case demonstrates the benefits of POLY for question answering.",
"title": ""
},
{
"docid": "d8ce92b054fc425a5db5bf17a62c6308",
"text": "The possibility that wind turbine noise (WTN) affects human health remains controversial. The current analysis presents results related to WTN annoyance reported by randomly selected participants (606 males, 632 females), aged 18-79, living between 0.25 and 11.22 km from wind turbines. WTN levels reached 46 dB, and for each 5 dB increase in WTN levels, the odds of reporting to be either very or extremely (i.e., highly) annoyed increased by 2.60 [95% confidence interval: (1.92, 3.58), p < 0.0001]. Multiple regression models had R(2)'s up to 58%, with approximately 9% attributed to WTN level. Variables associated with WTN annoyance included, but were not limited to, other wind turbine-related annoyances, personal benefit, noise sensitivity, physical safety concerns, property ownership, and province. Annoyance was related to several reported measures of health and well-being, although these associations were statistically weak (R(2 )< 9%), independent of WTN levels, and not retained in multiple regression models. The role of community tolerance level as a complement and/or an alternative to multiple regression in predicting the prevalence of WTN annoyance is also provided. The analysis suggests that communities are between 11 and 26 dB less tolerant of WTN than of other transportation noise sources.",
"title": ""
},
{
"docid": "7b4dd695182f7e15e58f44e309bf897c",
"text": "Phosphorus is one of the most abundant elements preserved in earth, and it comprises a fraction of ∼0.1% of the earth crust. In general, phosphorus has several allotropes, and the two most commonly seen allotropes, i.e. white and red phosphorus, are widely used in explosives and safety matches. In addition, black phosphorus, though rarely mentioned, is a layered semiconductor and has great potential in optical and electronic applications. Remarkably, this layered material can be reduced to one single atomic layer in the vertical direction owing to the van der Waals structure, and is known as phosphorene, in which the physical properties can be tremendously different from its bulk counterpart. In this review article, we trace back to the research history on black phosphorus of over 100 years from the synthesis to material properties, and extend the topic from black phosphorus to phosphorene. The physical and transport properties are highlighted for further applications in electronic and optoelectronics devices.",
"title": ""
},
{
"docid": "c7808ecbca4c5bf8e8093dce4d8f1ea7",
"text": "41 Abstract— This project deals with a design and motion planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80–100-mm pipelines in an indoor pipeline environment. The robot system consists of a Robot body, a control system, a CMOS camera, an accelerometer, a temperature sensor, a ZigBee module. The robot module will be designed with the help of CAD tool. The control system consists of Atmega16 micro controller and Atmel studio IDE. The robot system uses a differential drive to steer the robot and spring loaded four-bar mechanisms to assure that the robot expands to have grip of the pipe walls. Unique features of this robot are the caterpillar wheel, the four-bar mechanism supports the well grip of wall, a simple and easy user interface.",
"title": ""
},
{
"docid": "d3fda1730c1297ed3b63a1d4f133d893",
"text": "Registered nurses were queried about their knowledge and attitudes regarding pain management. Results suggest knowledge of pain management principles and interventions is insufficient.",
"title": ""
},
{
"docid": "42cfea27f8dcda6c58d2ae0e86f2fb1a",
"text": "Most of the lane marking detection algorithms reported in the literature are suitable for highway scenarios. This paper presents a novel clustered particle filter based approach to lane detection, which is suitable for urban streets in normal traffic conditions. Furthermore, a quality measure for the detection is calculated as a measure of reliability. The core of this approach is the usage of weak models, i.e. the avoidance of strong assumptions about the road geometry. Experiments were carried out in Sydney urban areas with a vehicle mounted laser range scanner and a ccd camera. Through experimentations, we have shown that a clustered particle filter can be used to efficiently extract lane markings.",
"title": ""
},
{
"docid": "76d22feb7da3dbc14688b0d999631169",
"text": "Guilt proneness is a personality trait indicative of a predisposition to experience negative feelings about personal wrongdoing, even when the wrongdoing is private. It is characterized by the anticipation of feeling bad about committing transgressions rather than by guilty feelings in a particular moment or generalized guilty feelings that occur without an eliciting event. Our research has revealed that guilt proneness is an important character trait because knowing a person’s level of guilt proneness helps us to predict the likelihood that they will behave unethically. For example, online studies of adults across the U.S. have shown that people who score high in guilt proneness (compared to low scorers) make fewer unethical business decisions, commit fewer delinquent behaviors, and behave more honestly when they make economic decisions. In the workplace, guilt-prone employees are less likely to engage in counterproductive behaviors that harm their organization.",
"title": ""
},
{
"docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0",
"text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.",
"title": ""
},
{
"docid": "bc1f7e30b8dcef97c1d8de2db801c4f6",
"text": "In this paper a novel method is introduced based on the use of an unsupervised version of kernel least mean square (KLMS) algorithm for solving ordinary differential equations (ODEs). The algorithm is unsupervised because here no desired signal needs to be determined by user and the output of the model is generated by iterating the algorithm progressively. However, there are several new implementation, fast convergence and also little error. Furthermore, it is also a KLMS with obvious characteristics. In this paper the ability of KLMS is used to estimate the answer of ODE. First a trial solution of ODE is written as a sum of two parts, the first part satisfies the initial condition and the second part is trained using the KLMS algorithm so as the trial solution solves the ODE. The accuracy of the method is illustrated by solving several problems. Also the sensitivity of the convergence is analyzed by changing the step size parameters and kernel functions. Finally, the proposed method is compared with neuro-fuzzy [21] approach. Crown Copyright & 2011 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "042431e96028ed9729e6b174a78d642d",
"text": "We address the problem of multi-class classification in the case where the number of classes is very large. We propose a double sampling strategy on top of a multi-class to binary reduction strategy, which transforms the original multi-class problem into a binary classification problem over pairs of examples. The aim of the sampling strategy is to overcome the curse of long-tailed class distributions exhibited in majority of large-scale multi-class classification problems and to reduce the number of pairs of examples in the expanded data. We show that this strategy does not alter the consistency of the empirical risk minimization principle defined over the double sample reduction. Experiments are carried out on DMOZ and Wikipedia collections with 10,000 to 100,000 classes where we show the efficiency of the proposed approach in terms of training and prediction time, memory consumption, and predictive performance with respect to state-of-the-art approaches.",
"title": ""
},
{
"docid": "a5ace543a0e9b87d54cbe77c6a86c40f",
"text": "Packet capture is an essential function for many network applications. However, packet drop is a major problem with packet capture in high-speed networks. This paper presents WireCAP, a novel packet capture engine for commodity network interface cards (NICs) in high-speed networks. WireCAP provides lossless zero-copy packet capture and delivery services by exploiting multi-queue NICs and multicore architectures. WireCAP introduces two new mechanisms-the ring-buffer-pool mechanism and the buddy-group-based offloading mechanism-to address the packet drop problem of packet capture in high-speed network. WireCAP is efficient. It also facilitates the design and operation of a user-space packet-processing application. Experiments have demonstrated that WireCAP achieves better packet capture performance when compared to existing packet capture engines.\n In addition, WireCAP implements a packet transmit function that allows captured packets to be forwarded, potentially after the packets are modified or inspected in flight. Therefore, WireCAP can be used to support middlebox-type applications. Thus, at a high level, WireCAP provides a new packet I/O framework for commodity NICs in high-speed networks.",
"title": ""
},
{
"docid": "a87da46ab4026c566e3e42a5695fd8c9",
"text": "Micro aerial vehicles (MAVs) are an excellent platform for autonomous exploration. Most MAVs rely mainly on cameras for buliding a map of the 3D environment. Therefore, vision-based MAVs require an efficient exploration algorithm to select viewpoints that provide informative measurements. In this paper, we propose an exploration approach that selects in real time the next-best-view that maximizes the expected information gain of new measurements. In addition, we take into account the cost of reaching a new viewpoint in terms of distance and predictability of the flight path for a human observer. Finally, our approach selects a path that reduces the risk of crashes when the expected battery life comes to an end, while still maximizing the information gain in the process. We implemented and thoroughly tested our approach and the experiments show that it offers an improved performance compared to other state-of-the-art algorithms in terms of precision of the reconstruction, execution time, and smoothness of the path.",
"title": ""
},
{
"docid": "2f5d428b8da4d5b5009729fc1794e53d",
"text": "The resolution of a synthetic aperture radar (SAR) image, in range and azimuth, is determined by the transmitted bandwidth and the synthetic aperture length, respectively. Various superresolution techniques for improving resolution have been proposed, and we have proposed an algorithm that we call polarimetric bandwidth extrapolation (PBWE). To apply PBWE to a radar image, one needs to first apply PBWE in the range direction and then in the azimuth direction, or vice versa . In this paper, PBWE is further extended to the 2-D case. This extended case (2D-PBWE) utilizes a 2-D polarimetric linear prediction model and expands the spatial frequency bandwidth in range and azimuth directions simultaneously. The performance of the 2D-PBWE is shown through a simulated radar image and a real polarimetric SAR image",
"title": ""
},
{
"docid": "3a75cf54ace0ebb56b985e1452151a91",
"text": "Ubiquitous networks support the roaming service for mobile communication devices. The mobile user can use the services in the foreign network with the help of the home network. Mutual authentication plays an important role in the roaming services, and researchers put their interests on the authentication schemes. Recently, in 2016, Gope and Hwang found that mutual authentication scheme of He et al. for global mobility networks had security disadvantages such as vulnerability to forgery attacks, unfair key agreement, and destitution of user anonymity. Then, they presented an improved scheme. However, we find that the scheme cannot resist the off-line guessing attack and the de-synchronization attack. Also, it lacks strong forward security. Moreover, the session key is known to HA in that scheme. To get over the weaknesses, we propose a new two-factor authentication scheme for global mobility networks. We use formal proof with random oracle model, formal verification with the tool Proverif, and informal analysis to demonstrate the security of the proposed scheme. Compared with some very recent schemes, our scheme is more applicable. Copyright © 2016 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "56bad8cef0c8ed0af6882dbc945298ef",
"text": "We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.",
"title": ""
},
{
"docid": "f5ba54c76166eed39da96f86a8bbd2a1",
"text": "The digital divide refers to the separation between those who have access to digital information and communications technology (ICT) and those who do not. Many believe that universal access to ICT would bring about a global community of interaction, commerce, and learning resulting in higher standards of living and improved social welfare. However, the digital divide threatens this outcome, leading many public policy makers to debate the best way to bridge the divide. Much of the research on the digital divide focuses on first order effects regarding who has access to the technology, but some work addresses the second order effects of inequality in the ability to use the technology among those who do have access. In this paper, we examine both first and second order effects of the digital divide at three levels of analysis the individual level, the organizational level, and the global level. At each level, we survey the existing research noting the theoretical perspective taken in the work, the research methodology employed, and the key results that were obtained. We then suggest a series of research questions at each level of analysis to guide researchers seeking to further examine the digital divide and how it impacts citizens, managers, and economies.",
"title": ""
},
{
"docid": "258e931d5c8d94f73be41cbb0058f49b",
"text": "VerSum allows lightweight clients to outsource expensive computations over large and frequently changing data structures, such as the Bitcoin or Namecoin blockchains, or a Certificate Transparency log. VerSum clients ensure that the output is correct by comparing the outputs from multiple servers. VerSum assumes that at least one server is honest, and crucially, when servers disagree, VerSum uses an efficient conflict resolution protocol to determine which server(s) made a mistake and thus obtain the correct output.\n VerSum's contribution lies in achieving low server-side overhead for both incremental re-computation and conflict resolution, using three key ideas: (1) representing the computation as a functional program, which allows memoization of previous results; (2) recording the evaluation trace of the functional program in a carefully designed computation history to help clients determine which server made a mistake; and (3) introducing a new authenticated data structure for sequences, called SeqHash, that makes it efficient for servers to construct summaries of computation histories in the presence of incremental re-computation. Experimental results with an implementation of VerSum show that VerSum can be used for a variety of computations, that it can support many clients, and that it can easily keep up with Bitcoin's rate of new blocks with transactions.",
"title": ""
},
{
"docid": "43ca9719740147e88e86452bb42f5644",
"text": "Currently in the US, over 97% of food waste is estimated to be buried in landfills. There is nonetheless interest in strategies to divert this waste from landfills as evidenced by a number of programs and policies at the local and state levels, including collection programs for source separated organic wastes (SSO). The objective of this study was to characterize the state-of-the-practice of food waste treatment alternatives in the US and Canada. Site visits were conducted to aerobic composting and two anaerobic digestion facilities, in addition to meetings with officials that are responsible for program implementation and financing. The technology to produce useful products from either aerobic or anaerobic treatment of SSO is in place. However, there are a number of implementation issues that must be addressed, principally project economics and feedstock purity. Project economics varied by region based on landfill disposal fees. Feedstock purity can be obtained by enforcement of contaminant standards and/or manual or mechanical sorting of the feedstock prior to and after treatment. Future SSO diversion will be governed by economics and policy incentives, including landfill organics bans and climate change mitigation policies.",
"title": ""
},
{
"docid": "c7b9c324171d40cec24ed089933a06ce",
"text": "With the proliferation of the internet and increased global access to online media, cybercrime is also occurring at an increasing rate. Currently, both personal users and companies are vulnerable to cybercrime. A number of tools including firewalls and Intrusion Detection Systems (IDS) can be used as defense mechanisms. A firewall acts as a checkpoint which allows packets to pass through according to predetermined conditions. In extreme cases, it may even disconnect all network traffic. An IDS, on the other hand, automates the monitoring process in computer networks. The streaming nature of data in computer networks poses a significant challenge in building IDS. In this paper, a method is proposed to overcome this problem by performing online classification on datasets. In doing so, an incremental naive Bayesian classifier is employed. Furthermore, active learning enables solving the problem using a small set of labeled data points which are often very expensive to acquire. The proposed method includes two groups of actions i.e. offline and online. The former involves data preprocessing while the latter introduces the NADAL online method. The proposed method is compared to the incremental naive Bayesian classifier using the NSL-KDD standard dataset. There are three advantages with the proposed method: (1) overcoming the streaming data challenge; (2) reducing the high cost associated with instance labeling; and (3) improved accuracy and Kappa compared to the incremental naive Bayesian approach. Thus, the method is well-suited to IDS applications.",
"title": ""
}
] | scidocsrr |
a1a4c99e02f541e789f8618ca65b41f3 | Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction | [
{
"docid": "e3d212f67713f6a902fe0f3eb468eddf",
"text": "We propose a novel LSTM-based deep multi-task learning framework for aspect term extraction from user review sentences. Two LSTMs equipped with extended memories and neural memory operations are designed for jointly handling the extraction tasks of aspects and opinions via memory interactions. Sentimental sentence constraint is also added for more accurate prediction via another LSTM. Experiment results over two benchmark datasets demonstrate the effectiveness of our framework.",
"title": ""
}
] | [
{
"docid": "9ebdf3493d6a80d12c97348a2d203d3e",
"text": "Agile software development methodologies have been greeted with enthusiasm by many software developers, yet their widespread adoption has also resulted in closer examination of their strengths and weaknesses. While analyses and evaluations abound, the need still remains for an objective and systematic appraisal of Agile processes specifically aimed at defining strategies for their improvement. We provide a review of the strengths and weaknesses identified in Agile processes, based on which a strengths- weaknesses-opportunities-threats (SWOT) analysis of the processes is performed. We suggest this type of analysis as a useful tool for highlighting and addressing the problem issues in Agile processes, since the results can be used as improvement strategies.",
"title": ""
},
{
"docid": "b5097e718754c02cddd02a1c147c6398",
"text": "Semi-automatic parking system is a driver convenience system automating steering control required during parking operation. This paper proposes novel monocular-vision based target parking-slot recognition by recognizing parking-slot markings when driver designates a seed-point inside the target parking-slot with touch screen. Proposed method compensates the distortion of fisheye lens and constructs a bird’s eye view image using homography. Because adjacent vehicles are projected along the outward direction from camera in the bird’s eye view image, if marking line-segment distinguishing parking-slots from roadway and front-ends of marking linesegments dividing parking-slots are observed, proposed method successfully recognizes the target parking-slot marking. Directional intensity gradient, utilizing the width of marking line-segment and the direction of seed-point with respect to camera position as a prior knowledge, can detect marking linesegments irrespective of noise and illumination variation. Making efficient use of the structure of parking-slot markings in the bird’s eye view image, proposed method simply recognizes the target parking-slot marking. It is validated by experiments that proposed method can successfully recognize target parkingslot under various situations and illumination conditions.",
"title": ""
},
{
"docid": "8107b3dc36d240921571edfc778107ff",
"text": "FinFET devices have been proposed as a promising substitute for conventional bulk CMOS-based devices at the nanoscale due to their extraordinary properties such as improved channel controllability, a high on/off current ratio, reduced short-channel effects, and relative immunity to gate line-edge roughness. This brief builds standard cell libraries for the advanced 7-nm FinFET technology, supporting multiple threshold voltages and supply voltages. The circuit synthesis results of various combinational and sequential circuits based on the presented 7-nm FinFET standard cell libraries forecast 10× and 1000× energy reductions on average in a superthreshold regime and 16× and 3000× energy reductions on average in a near-threshold regime as compared with the results of the 14-nm and 45-nm bulk CMOS technology nodes, respectively.",
"title": ""
},
{
"docid": "ef65f603b9f0441378e53ec7cabf7940",
"text": "Event extraction has been well studied for more than two decades, through both the lens of document-level and sentence-level event extraction. However, event extraction methods to date do not yet offer a satisfactory solution to providing concise, structured, document-level summaries of events in news articles. Prior work on document-level event extraction methods have focused on highly specific domains, often with great reliance on handcrafted rules. Such approaches do not generalize well to new domains. In contrast, sentence-level event extraction methods have applied to a much wider variety of domains, but generate output at such fine-grained details that they cannot offer good document-level summaries of events. In this thesis, we propose a new framework for extracting document-level event summaries called macro-events, unifying together aspects of both information extraction and text summarization. The goal of this work is to extract concise, structured representations of documents that can clearly outline the main event of interest and all the necessary argument fillers to describe the event. Unlike work in abstractive and extractive summarization, we seek to create template-based, structured summaries, rather than plain text summaries. We propose three novel methods to address the macro-event extraction task. First, we introduce a structured prediction model based on the Learning to Search framework for jointly learning argument fillers both across and within event argument slots. Second, we propose a multi-layer neural network that is trained directly on macro-event annotated data. Finally, we propose a deep learning method that treats the problem as machine comprehension, which does not require training with any on-domain macro-event labeled data. Our experimental results on a variety of domains show that such algorithms can achieve stronger performance on this task compared to existing baseline approaches. On average across all datasets, neural networks can achieve a 1.76% and 3.96% improvement on micro-averaged and macro-averaged F1 respectively over baseline approaches, while Learning to Search achieves a 3.87% and 5.10% improvement over baseline approaches on the same metrics. Furthermore, under scenarios of limited training data, we find that machine comprehension models can offer very strong performance compared to directly supervised algorithms, while requiring very little human effort to adapt to new domains.",
"title": ""
},
{
"docid": "f20e0b50b72b4b2796b77757ff20210e",
"text": "The dominant neural architectures in question answer retrieval are based on recurrent or convolutional encoders configured with complex word matching layers. Given that recent architectural innovations are mostly new word interaction layers or attention-based matching mechanisms, it seems to be a well-established fact that these components are mandatory for good performance. Unfortunately, the memory and computation cost incurred by these complex mechanisms are undesirable for practical applications. As such, this paper tackles the question of whether it is possible to achieve competitive performance with simple neural architectures. We propose a simple but novel deep learning architecture for fast and efficient question-answer ranking and retrieval. More specifically, our proposed model, HyperQA, is a parameter efficient neural network that outperforms other parameter intensive models such as Attentive Pooling BiLSTMs and Multi-Perspective CNNs on multiple QA benchmarks. The novelty behind HyperQA is a pairwise ranking objective that models the relationship between question and answer embeddings in Hyperbolic space instead of Euclidean space. This empowers our model with a self-organizing ability and enables automatic discovery of latent hierarchies while learning embeddings of questions and answers. Our model requires no feature engineering, no similarity matrix matching, no complicated attention mechanisms nor over-parameterized layers and yet outperforms and remains competitive to many models that have these functionalities on multiple benchmarks.",
"title": ""
},
{
"docid": "17c9a72c46f63a7121ea9c9b6b893a2f",
"text": "This paper presents the artificial neural network approach namely Back propagation network (BPNs) and probabilistic neural network (PNN). It is used to classify the type of tumor in MRI images of different patients with Astrocytoma type of brain tumor. The image processing techniques have been developed for detection of the tumor in the MRI images. Gray Level Co-occurrence Matrix (GLCM) is used to achieve the feature extraction. The whole system worked in two modes firstly Training/Learning mode and secondly Testing/Recognition mode.",
"title": ""
},
{
"docid": "e724d4405f50fd74a2184187dcc52401",
"text": "This paper presents security of Internet of things. In the Internet of Things vision, every physical object has a virtual component that can produce and consume services Such extreme interconnection will bring unprecedented convenience and economy, but it will also require novel approaches to ensure its safe and ethical use. The Internet and its users are already under continual attack, and a growing economy-replete with business models that undermine the Internet's ethical use-is fully focused on exploiting the current version's foundational weaknesses.",
"title": ""
},
{
"docid": "29ec723fb3f26290f43af77210ca5022",
"text": "—Social media and Social Network Analysis (SNA) acquired a huge popularity and represent one of the most important social and computer science phenomena of recent years. One of the most studied problems in this research area is influence and information propagation. The aim of this paper is to analyze the information diffusion process and predict the influence (represented by the rate of infected nodes at the end of the diffusion process) of an initial set of nodes in two networks: Flickr user's contacts and YouTube videos users commenting these videos. These networks are dissimilar in their structure (size, type, diameter, density, components), and the type of the relationships (explicit relationship represented by the contacts links, and implicit relationship created by commenting on videos), they are extracted using NodeXL tool. Three models are used for modeling the dissemination process: Linear Threshold Model (LTM), Independent Cascade Model (ICM) and an extension of this last called Weighted Cascade Model (WCM). Networks metrics and visualization were manipulated by NodeXL as well. Experiments results show that the structure of the network affect the diffusion process directly. Unlike results given in the blog world networks, the information can spread farther through explicit connections than through implicit relations.",
"title": ""
},
{
"docid": "b10447097f8d513795b4f4e08e1838d8",
"text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. Extensive empirical analysis of these gains show that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) that there is still room for syntactic parsers to improve these results.",
"title": ""
},
{
"docid": "c0546dabfcd377af78ae65a6e0a6a255",
"text": "A hard real-time system is usually subject to stringent reliability and timing constraints since failure to produce correct results in a timely manner may lead to a disaster. One way to avoid missing deadlines is to trade the quality of computation results for timeliness, and software fault-tolerance is often achieved with the use of redundant programs. A deadline mechanism which combines these two methods is proposed to provide software faulttolerance in hard real-time periodic task systems. Specifically, we consider the problem of scheduling a set of realtime periodic tasks each of which has two versions:primary and alternate. The primary version contains more functions (thus more complex) and produces good quality results but its correctness is more difficult to verify because of its high level of complexity and resource usage. By contrast, the alternate version contains only the minimum required functions (thus simpler) and produces less precise but acceptable results, and its correctness is easy to verify. We propose a scheduling algorithm which (i) guarantees either the primary or alternate version of each critical task to be completed in time and (ii) attempts to complete as many primaries as possible. Our basic algorithm uses a fixed priority-driven preemptive scheduling scheme to pre-allocate time intervals to the alternates, and at run-time, attempts to execute primaries first. An alternate will be executed only (1) if its primary fails due to lack of time or manifestation of bugs, or (2) when the latest time to start execution of the alternate without missing the corresponding task deadline is reached. This algorithm is shown to be effective and easy to implement. This algorithm is enhanced further to prevent early failures in executing primaries from triggering failures in the subsequent job executions, thus improving efficiency of processor usage.",
"title": ""
},
{
"docid": "f69f8b58e926a8a4573dd650ee29f80b",
"text": "Zab is a crash-recovery atomic broadcast algorithm we designed for the ZooKeeper coordination service. ZooKeeper implements a primary-backup scheme in which a primary process executes clients operations and uses Zab to propagate the corresponding incremental state changes to backup processes1. Due the dependence of an incremental state change on the sequence of changes previously generated, Zab must guarantee that if it delivers a given state change, then all other changes it depends upon must be delivered first. Since primaries may crash, Zab must satisfy this requirement despite crashes of primaries.",
"title": ""
},
{
"docid": "8ae257994c6f412ceb843fcb98a67043",
"text": "Discovering the author's interest over time from documents has important applications in recommendation systems, authorship identification and opinion extraction. In this paper, we propose an interest drift model (IDM), which monitors the evolution of author interests in time-stamped documents. The model further uses the discovered author interest information to help finding better topics. Unlike traditional topic models, our model is sensitive to the ordering of words, thus it extracts more information from the semantic meaning of the context. The experiment results show that the IDM model learns better topics than state-of-the-art topic models.",
"title": ""
},
{
"docid": "95d767d1b9a2ba2aecdf26443b3dd4af",
"text": "Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A(-1), linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C(-1) with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids.",
"title": ""
},
{
"docid": "4c5b74544b1452ffe0004733dbeee109",
"text": "Literary genres are commonly viewed as being defined in terms of content and style. In this paper, we focus on one particular type of content feature, namely lexical expressions of emotion, and investigate the hypothesis that emotion-related information correlates with particular genres. Using genre classification as a testbed, we compare a model that computes lexiconbased emotion scores globally for complete stories with a model that tracks emotion arcs through stories on a subset of Project Gutenberg with five genres. Our main findings are: (a), the global emotion model is competitive with a largevocabulary bag-of-words genre classifier (80 % F1); (b), the emotion arc model shows a lower performance (59 % F1) but shows complementary behavior to the global model, as indicated by a very good performance of an oracle model (94 % F1) and an improved performance of an ensemble model (84 % F1); (c), genres differ in the extent to which stories follow the same emotional arcs, with particularly uniform behavior for anger (mystery) and fear (adventures, romance, humor, science fiction).",
"title": ""
},
{
"docid": "ce55485a60213c7656eb804b89be36cc",
"text": "In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors--namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)--that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.",
"title": ""
},
{
"docid": "e349ca11637dfad2d68a5082e27f11ff",
"text": "As the capabilities of artificial intelligence (AI) systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. We show that a Multiobjective Maximum Expected Utility paradigm based on the combination of vector utilities and non-linear action–selection can overcome many of the issues which limit MEU’s effectiveness in implementing aligned AI. We examine existing approaches to multiobjective AI, and identify how these can contribute to the development of human-aligned intelligent agents.",
"title": ""
},
{
"docid": "77bbd6d3e1f1ae64bda32cd057cf0580",
"text": "Although great progress has been made in automatic speech recognition, significant performance degradation still exists in noisy environments. Recently, very deep convolutional neural networks CNNs have been successfully applied to computer vision and speech recognition tasks. Based on our previous work on very deep CNNs, in this paper this architecture is further developed to improve recognition accuracy for noise robust speech recognition. In the proposed very deep CNN architecture, we study the best configuration for the sizes of filters, pooling, and input feature maps: the sizes of filters and poolings are reduced and dimensions of input features are extended to allow for adding more convolutional layers. Then the appropriate pooling, padding, and input feature map selection strategies are investigated and applied to the very deep CNN to make it more robust for speech recognition. In addition, an in-depth analysis of the architecture reveals key characteristics, such as compact model scale, fast convergence speed, and noise robustness. The proposed new model is evaluated on two tasks: Aurora4 task with multiple additive noise types and channel mismatch, and the AMI meeting transcription task with significant reverberation. Experiments on both tasks show that the proposed very deep CNNs can significantly reduce word error rate WER for noise robust speech recognition. The best architecture obtains a 10.0% relative reduction over the traditional CNN on AMI, competitive with the long short-term memory recurrent neural networks LSTM-RNN acoustic model. On Aurora4, even without feature enhancement, model adaptation, and sequence training, it achieves a WER of 8.81%, a 17.0% relative improvement over the LSTM-RNN. To our knowledge, this is the best published result on Aurora4.",
"title": ""
},
{
"docid": "8c60d78e9c4db8a457c7555393089f7c",
"text": "Artificially structured metamaterials have enabled unprecedented flexibility in manipulating electromagnetic waves and producing new functionalities, including the cloak of invisibility based on coordinate transformation. Unlike other cloaking approaches4–6, which are typically limited to subwavelength objects, the transformation method allows the design of cloaking devices that render a macroscopic object invisible. In addition, the design is not sensitive to the object that is being cloaked. The first experimental demonstration of such a cloak at microwave frequencies was recently reported7. We note, however, that that design cannot be implemented for an optical cloak, which is certainly of particular interest because optical frequencies are where the word ‘invisibility’ is conventionally defined. Here we present the design of a non-magnetic cloak operating at optical frequencies. The principle and structure of the proposed cylindrical cloak are analysed, and the general recipe for the implementation of such a device is provided. The coordinate transformation used in the proposed nonmagnetic optical cloak of cylindrical geometry is similar to that in ref. 7, by which a cylindrical region r , b is compressed into a concentric cylindrical shell a , r , b as shown in Fig. 1a. This transformation results in the following requirements for anisotropic permittivity and permeability in the cloaking shell:",
"title": ""
},
{
"docid": "b75a9a52296877783431af9447200747",
"text": "Sentiment analysis has been a major area of interest, for which the existence of highquality resources is crucial. In Arabic, there is a reasonable number of sentiment lexicons but with major deficiencies. The paper presents a large-scale Standard Arabic Sentiment Lexicon (SLSA) that is publicly available for free and avoids the deficiencies in the current resources. SLSA has the highest up-to-date reported coverage. The construction of SLSA is based on linking the lexicon of AraMorph with SentiWordNet along with a few heuristics and powerful back-off. SLSA shows a relative improvement of 37.8% over a state-of-theart lexicon when tested for accuracy. It also outperforms it by an absolute 3.5% of F1-score when tested for sentiment analysis.",
"title": ""
}
] | scidocsrr |
785e7bc9e4b13685cc55441a65a157d2 | A Bayesian approach to covariance estimation and data fusion | [
{
"docid": "2d787b0deca95ce212e11385ae60c36d",
"text": "In this paper, we introduce three novel distributed Kalman filtering (DKF) algorithms for sensor networks. The first algorithm is a modification of a previous DKF algorithm presented by the author in CDC-ECC '05. The previous algorithm was only applicable to sensors with identical observation matrices which meant the process had to be observable by every sensor. The modified DKF algorithm uses two identical consensus filters for fusion of the sensor data and covariance information and is applicable to sensor networks with different observation matrices. This enables the sensor network to act as a collective observer for the processes occurring in an environment. Then, we introduce a continuous-time distributed Kalman filter that uses local aggregation of the sensor data but attempts to reach a consensus on estimates with other nodes in the network. This peer-to-peer distributed estimation method gives rise to two iterative distributed Kalman filtering algorithms with different consensus strategies on estimates. Communication complexity and packet-loss issues are discussed. The performance and effectiveness of these distributed Kalman filtering algorithms are compared and demonstrated on a target tracking task.",
"title": ""
},
{
"docid": "e9d0c366c241e1fc071d82ca810d1be2",
"text": "The problem of distributed Kalman filtering (DKF) for sensor networks is one of the most fundamental distributed estimation problems for scalable sensor fusion. This paper addresses the DKF problem by reducing it to two separate dynamic consensus problems in terms of weighted measurements and inverse-covariance matrices. These to data fusion problems are solved is a distributed way using low-pass and band-pass consensus filters. Consensus filters are distributed algorithms that allow calculation of average-consensus of time-varying signals. The stability properties of consensus filters is discussed in a companion CDC ’05 paper [24]. We show that a central Kalman filter for sensor networks can be decomposed into n micro-Kalman filters with inputs that are provided by two types of consensus filters. This network of micro-Kalman filters collectively are capable to provide an estimate of the state of the process (under observation) that is identical to the estimate obtained by a central Kalman filter given that all nodes agree on two central sums. Later, we demonstrate that our consensus filters can approximate these sums and that gives an approximate distributed Kalman filtering algorithm. A detailed account of the computational and communication architecture of the algorithm is provided. Simulation results are presented for a sensor network with 200 nodes and more than 1000 links.",
"title": ""
}
] | [
{
"docid": "5931cb779b24065c5ef48451bc46fac4",
"text": "In order to provide a material that can facilitate the modeling and construction of a Furuta pendulum, this paper presents the deduction, step-by-step, of a Furuta pendulum mathematical model by using the Lagrange equations of motion. Later, a mechanical design of the Furuta pendulum is carried out via the software Solid Works and subsequently a prototype is built. Numerical simulations of the Furuta pendulum model are performed via Mat lab-Simulink. Furthermore, the Furuta pendulum prototype built is experimentally tested by using Mat lab-Simulink, Control Desk, and a DS1104 board from dSPACE.",
"title": ""
},
{
"docid": "5b341604b207e80ef444d11a9de82f72",
"text": "Digital deformities continue to be a common ailment among many patients who present to foot and ankle specialists. When conservative treatment fails to eliminate patient complaints, surgical correction remains a viable treatment option. Proximal interphalangeal joint arthrodesis remains the standard procedure among most foot and ankle surgeons. With continued advances in fixation technology and techniques, surgeons continue to have better options for the achievement of excellent digital surgery outcomes. This article reviews current trends in fixation of digital deformities while highlighting pertinent aspects of the physical examination, radiographic examination, and surgical technique.",
"title": ""
},
{
"docid": "c197fcf3042099003f3ed682f7b7f19c",
"text": "Interaction graphs are ubiquitous in many fields such as bioinformatics, sociology and physical sciences. There have been many studies in the literature targeted at studying and mining these graphs. However, almost all of them have studied these graphs from a static point of view. The study of the evolution of these graphs over time can provide tremendous insight on the behavior of entities, communities and the flow of information among them. In this work, we present an event-based characterization of critical behavioral patterns for temporally varying interaction graphs. We use non-overlapping snapshots of interaction graphs and develop a framework for capturing and identifying interesting events from them. We use these events to characterize complex behavioral patterns of individuals and communities over time. We demonstrate the application of behavioral patterns for the purposes of modeling evolution, link prediction and influence maximization. Finally, we present a diffusion model for evolving networks, based on our framework.",
"title": ""
},
{
"docid": "8c0b544b88ebe81ebe4b374a4e08bb5e",
"text": "We study 3D shape modeling from a single image and make contributions to it in three aspects. First, we present Pix3D, a large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc. Building such a large-scale dataset, however, is highly challenging; existing datasets either contain only synthetic data, or lack precise alignment between 2D images and 3D shapes, or only have a small number of images. Second, we calibrate the evaluation criteria for 3D shape reconstruction through behavioral studies, and use them to objectively and systematically benchmark cutting-edge reconstruction algorithms on Pix3D. Third, we design a novel model that simultaneously performs 3D reconstruction and pose estimation; our multi-task learning approach achieves state-of-the-art performance on both tasks.",
"title": ""
},
{
"docid": "b596be97699686e5e37cab71bee8fe4a",
"text": "The task of selecting project portfolios is an important and recurring activity in many organizations. There are many techniques available to assist in this process, but no integrated framework for carrying it out. This paper simpli®es the project portfolio selection process by developing a framework which separates the work into distinct stages. Each stage accomplishes a particular objective and creates inputs to the next stage. At the same time, users are free to choose the techniques they ®nd the most suitable for each stage, or in some cases to omit or modify a stage if this will simplify and expedite the process. The framework may be implemented in the form of a decision support system, and a prototype system is described which supports many of the related decision making activities. # 1999 Published by Elsevier Science Ltd and IPMA. All rights reserved",
"title": ""
},
{
"docid": "57ca7842e7ab21b51c4069e76121fc26",
"text": "This paper surveys and investigates the strengths and weaknesses of a number of recent approaches to advanced workflow modelling. Rather than inventing just another workflow language, we briefly describe recent workflow languages, and we analyse them with respect to their support for advanced workflow topics. Object Coordination Nets, Workflow Graphs, WorkFlow Nets, and an approach based on Workflow Evolution are described as dedicated workflow modelling approaches. In addition, the Unified Modelling Language as the de facto standard in objectoriented modelling is also investigated. These approaches are discussed with respect to coverage of workflow perspectives and support for flexibility and analysis issues in workflow management, which are today seen as two major areas for advanced workflow support. Given the different goals and backgrounds of the approaches mentioned, it is not surprising that each approach has its specific strengths and weaknesses. We clearly identify these strengths and weaknesses, and we conclude with ideas for combining their best features.",
"title": ""
},
{
"docid": "d93795318775df2c451eaf8c04a764cf",
"text": "The queries issued to search engines are often ambiguous or multifaceted, which requires search engines to return diverse results that can fulfill as many different information needs as possible; this is called search result diversification. Recently, the relational learning to rank model, which designs a learnable ranking function following the criterion of maximal marginal relevance, has shown effectiveness in search result diversification [Zhu et al. 2014]. The goodness of a diverse ranking model is usually evaluated with diversity evaluation measures such as α-NDCG [Clarke et al. 2008], ERR-IA [Chapelle et al. 2009], and D#-NDCG [Sakai and Song 2011]. Ideally the learning algorithm would train a ranking model that could directly optimize the diversity evaluation measures with respect to the training data. Existing relational learning to rank algorithms, however, only train the ranking models by optimizing loss functions that loosely relate to the evaluation measures. To deal with the problem, we propose a general framework for learning relational ranking models via directly optimizing any diversity evaluation measure. In learning, the loss function upper-bounding the basic loss function defined on a diverse ranking measure is minimized. We can derive new diverse ranking algorithms under the framework, and several diverse ranking algorithms are created based on different upper bounds over the basic loss function. We conducted comparisons between the proposed algorithms with conventional diverse ranking methods using the TREC benchmark datasets. Experimental results show that the algorithms derived under the diverse learning to rank framework always significantly outperform the state-of-the-art baselines.",
"title": ""
},
{
"docid": "8b71cb1b7cdaa434ac4b238b97a30e66",
"text": "Research on interoperability of technology-enhanced learning (TEL) repositories throughout the last decade has led to a fragmented landscape of competing approaches, such as metadata schemas and interface mechanisms. However, so far Web-scale integration of resources is not facilitated, mainly due to the lack of take-up of shared principles, datasets and schemas. On the other hand, the Linked Data approach has emerged as the de-facto standard for sharing data on the Web and offers a large potential to solve interoperability issues in the field of TEL. In this paper, we describe a general approach to exploit the wealth of already existing TEL data on the Web by allowing its exposure as Linked Data and by taking into account automated enrichment and interlinking techniques to provide rich and well-interlinked data for the educational domain. This approach has been implemented in the context of the mEducator project where data from a number of open TEL data repositories has been integrated, exposed and enriched by following Linked Data principles.",
"title": ""
},
{
"docid": "61e8deaaa02297ba3edb2eb14ffb7f26",
"text": "Given an edge-weighted graph G and two distinct vertices s and t of G, the next-to-shortest path problem asks for a path from s to t of minimum length among all paths from s to t except the shortest ones. In this article, we consider the version where G is directed and all edge weights are positive. Some properties of the requested path are derived when G is an arbitrary digraph. In addition, if G is planar, an O(n3)-time algorithm is proposed, where n is the number of vertices of G. © 2015 Wiley Periodicals, Inc. NETWORKS, Vol. 000(00), 000–00",
"title": ""
},
{
"docid": "07e2b3550183fd4d2a42591a9726f77c",
"text": "Modern cryptocurrency systems, such as Ethereum, permit complex financial transactions through scripts called smart contracts. These smart contracts are executed many, many times, always without real concurrency. First, all smart contracts are serially executed by miners before appending them to the blockchain. Later, those contracts are serially re-executed by validators to verify that the smart contracts were executed correctly by miners. Serial execution limits system throughput and fails to exploit today's concurrent multicore and cluster architectures. Nevertheless, serial execution appears to be required: contracts share state, and contract programming languages have a serial semantics.\n This paper presents a novel way to permit miners and validators to execute smart contracts in parallel, based on techniques adapted from software transactional memory. Miners execute smart contracts speculatively in parallel, allowing non-conflicting contracts to proceed concurrently, and \"discovering\" a serializable concurrent schedule for a block's transactions, This schedule is captured and encoded as a deterministic fork-join program used by validators to re-execute the miner's parallel schedule deterministically but concurrently.\n Smart contract benchmarks run on a JVM with ScalaSTM show that a speedup of 1.33x can be obtained for miners and 1.69x for validators with just three concurrent threads.",
"title": ""
},
{
"docid": "7c5ce3005c4529e0c34220c538412a26",
"text": "Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.",
"title": ""
},
{
"docid": "ce384939966654196aabbb076326c779",
"text": "We address the problem of detecting duplicate questions in forums, which is an important step towards automating the process of answering new questions. As finding and annotating such potential duplicates manually is very tedious and costly, automatic methods based on machine learning are a viable alternative. However, many forums do not have annotated data, i.e., questions labeled by experts as duplicates, and thus a promising solution is to use domain adaptation from another forum that has such annotations. Here we focus on adversarial domain adaptation, deriving important findings about when it performs well and what properties of the domains are important in this regard. Our experiments with StackExchange data show an average improvement of 5.6% over the best baseline across multiple pairs of domains.",
"title": ""
},
{
"docid": "f33ca4cfba0aab107eb8bd6d3d041b74",
"text": "Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temorary data structures differs significantly. Convolution of an input matrix with dimensions C × H × W , requires O(KCHW ) additional space using the classical im2col approach. More recently memory-efficient approaches requiring just O(KCHW ) auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just O(MHW ) and O(KW ) additional space respectively, where M is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems. Experimental evaluation shows that our lowmemory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low memory algorithms often outperform the best patch-building algorithms using multiple threads.",
"title": ""
},
{
"docid": "c6e6099599be3cd2d1d87c05635f4248",
"text": "PURPOSE\nThe Food Cravings Questionnaires are among the most often used measures for assessing the frequency and intensity of food craving experiences. However, there is a lack of studies that have examined specific cut-off scores that may indicate pathologically elevated levels of food cravings.\n\n\nMETHODS\nReceiver-Operating-Characteristic analysis was used to determine sensitivity and specificity of scores on the Food Cravings Questionnaire-Trait-reduced (FCQ-T-r) for discriminating between individuals with (n = 43) and without (n = 389) \"food addiction\" as assessed with the Yale Food Addiction Scale 2.0.\n\n\nRESULTS\nA cut-off score of 50 on the FCQ-T-r discriminated between individuals with and without \"food addiction\" with high sensitivity (85%) and specificity (93%).\n\n\nCONCLUSIONS\nFCQ-T-r scores of 50 and higher may indicate clinically relevant levels of trait food craving.\n\n\nLEVEL OF EVIDENCE\nLevel V, descriptive study.",
"title": ""
},
{
"docid": "104c71324594c907f87d483c8c222f0f",
"text": "Operational controls are designed to support the integration of wind and solar power within microgrids. An aggregated model of renewable wind and solar power generation forecast is proposed to support the quantification of the operational reserve for day-ahead and real-time scheduling. Then, a droop control for power electronic converters connected to battery storage is developed and tested. Compared with the existing droop controls, it is distinguished in that the droop curves are set as a function of the storage state-of-charge (SOC) and can become asymmetric. The adaptation of the slopes ensures that the power output supports the terminal voltage while at the same keeping the SOC within a target range of desired operational reserve. This is shown to maintain the equilibrium of the microgrid's real-time supply and demand. The controls are implemented for the special case of a dc microgrid that is vertically integrated within a high-rise host building of an urban area. Previously untapped wind and solar power are harvested on the roof and sides of a tower, thereby supporting delivery to electric vehicles on the ground. The microgrid vertically integrates with the host building without creating a large footprint.",
"title": ""
},
{
"docid": "fd576b16a55c8f6bc4922561ef0d80bd",
"text": "Abs t rad -Th i s paper presents all controllers for the general ~'® control problem (with no assumptions on the plant matrices). Necessary and sufficient conditions for the existence of an ~® controller of any order are given in terms of three Linear Matrix Inequalities (LMIs). Our existence conditions are equivalent to Scherer's results, but with a more elementary derivation. Furthermore, we provide the set of all ~(= controllers explicitly parametrized in the state space using the positive definite solutions to the LMIs. Even under standard assumptions (full rank, etc.), our controller parametrization has an advantage over the Q-parametrization. The freedom Q (a real-rational stable transfer matrix with the ~® norm bounded above by a specified number) is replaced by a constant matrix L of fixed dimension with a norm bound, and the solutions (X, Y) to the LMIs. The inequality formulation converts the existence conditions to a convex feasibility problem, and also the free matrix L and the pair (X, Y) define a finite dimensional design space, as opposed to the infinite dimensional space associated with the Q-parametrization.",
"title": ""
},
{
"docid": "e92ab865f33c7548c21ba99785912d03",
"text": "Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem belongs to NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb/Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e, candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb/Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb/Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.",
"title": ""
},
{
"docid": "3f00cb229ea1f64e8b60bebaff0d99fe",
"text": "It is widely known that in wireless sensor networks (WSN), energy efficiency is of utmost importance. WSN need to be energy efficient but also need to provide better performance, particularly latency. A common protocol design guideline has been to trade off some performance metrics such as throughput and delay for energy. This paper presents a novel MAC (Express Energy Efficient Media Access Control) protocol that not only preserves the energy efficiency of current alternatives but also coordinates the transfer of packets from source to destination in such a way that latency and jitter are improved considerably. Our simulations show how EX-MAC (Express Energy Efficient MAC) outperforms the well-known S-MAC protocols in several performance metrics.",
"title": ""
},
{
"docid": "2ba1321f64fc8567fd70c030ea49b9e0",
"text": "Datasets originating from social networks are very valuable to many fields such as sociology and psychology. However, the supports from technical perspective are far from enough, and specific approaches are urgently in need. This paper applies data mining to psychology area for detecting depressed users in social network services. Firstly, a sentiment analysis method is proposed utilizing vocabulary and man-made rules to calculate the depression inclination of each micro-blog. Secondly, a depression detection model is constructed based on the proposed method and 10 features of depressed users derived from psychological research. Then 180 users and 3 kinds of classifiers are used to verify the model, whose precisions are all around 80%. Also, the significance of each feature is analyzed. Lastly, an application is developed within the proposed model for mental health monitoring online. This study is supported by some psychologists, and facilitates them in data-centric aspect in turn.",
"title": ""
},
{
"docid": "7edddf437e1759b8b13821670f52f4ba",
"text": "This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the estimation of the trajectory comparing the displacement estimated with the internal odometry of the motors and the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference on less than 30 mm between the position estimated with the SLAM and odometry, and a difference in the angular orientation of the mobile robot lower than 5° in absolute displacements up to 1000 mm.",
"title": ""
}
] | scidocsrr |
a94558043aadec25b546b7c275f808ed | Deformable Pose Traversal Convolution for 3D Action and Gesture Recognition | [
{
"docid": "1d6e23fedc5fa51b5125b984e4741529",
"text": "Human action recognition from well-segmented 3D skeleton data has been intensively studied and attracting an increasing attention. Online action detection goes one step further and is more challenging, which identifies the action type and localizes the action positions on the fly from the untrimmed stream. In this paper, we study the problem of online action detection from the streaming skeleton data. We propose a multi-task end-to-end Joint Classification-Regression Recurrent Neural Network to better explore the action type and temporal localization information. By employing a joint classification and regression optimization objective, this network is capable of automatically localizing the start and end points of actions more accurately. Specifically, by leveraging the merits of the deep Long Short-Term Memory (LSTM) subnetwork, the proposed model automatically captures the complex long-range temporal dynamics, which naturally avoids the typical sliding window design and thus ensures high computational efficiency. Furthermore, the subtask of regression optimization provides the ability to forecast the action prior to its occurrence. To evaluate our proposed model, we build a large streaming video dataset with annotations. Experimental results on our dataset and the public G3D dataset both demonstrate very promising performance of our scheme.",
"title": ""
},
{
"docid": "401b2494b8b032751c219726671cb48e",
"text": "Current state-of-the-art approaches to skeleton-based action recognition are mostly based on recurrent neural networks (RNN). In this paper, we propose a novel convolutional neural networks (CNN) based framework for both action classification and detection. Raw skeleton coordinates as well as skeleton motion are fed directly into CNN for label prediction. A novel skeleton transformer module is designed to rearrange and select important skeleton joints automatically. With a simple 7-layer network, we obtain 89.3% accuracy on validation set of the NTU RGB+D dataset. For action detection in untrimmed videos, we develop a window proposal network to extract temporal segment proposals, which are further classified within the same network. On the recent PKU-MMD dataset, we achieve 93.7% mAP, surpassing the baseline by a large margin.",
"title": ""
}
] | [
{
"docid": "901174e2dd911afada2e8ccf245d25f3",
"text": "This article presents the state of the art in passive devices for enhancing limb movement in people with neuromuscular disabilities. Both upper- and lower-limb projects and devices are described. Special emphasis is placed on a passive functional upper-limb orthosis called the Wilmington Robotic Exoskeleton (WREX). The development and testing of the WREX with children with limited arm strength are described. The exoskeleton has two links and 4 degrees of freedom. It uses linear elastic elements that balance the effects of gravity in three dimensions. The experiences of five children with arthrogryposis who used the WREX are described.",
"title": ""
},
{
"docid": "11557714ac3bbd9fc9618a590722212e",
"text": "In Taobao, the largest e-commerce platform in China, billions of items are provided and typically displayed with their images.For better user experience and business effectiveness, Click Through Rate (CTR) prediction in online advertising system exploits abundant user historical behaviors to identify whether a user is interested in a candidate ad. Enhancing behavior representations with user behavior images will help understand user's visual preference and improve the accuracy of CTR prediction greatly. So we propose to model user preference jointly with user behavior ID features and behavior images. However, training with user behavior images brings tens to hundreds of images in one sample, giving rise to a great challenge in both communication and computation. To handle these challenges, we propose a novel and efficient distributed machine learning paradigm called Advanced Model Server (AMS). With the well-known Parameter Server (PS) framework, each server node handles a separate part of parameters and updates them independently. AMS goes beyond this and is designed to be capable of learning a unified image descriptor model shared by all server nodes which embeds large images into low dimensional high level features before transmitting images to worker nodes. AMS thus dramatically reduces the communication load and enables the arduous joint training process. Based on AMS, the methods of effectively combining the images and ID features are carefully studied, and then we propose a Deep Image CTR Model. Our approach is shown to achieve significant improvements in both online and offline evaluations, and has been deployed in Taobao display advertising system serving the main traffic.",
"title": ""
},
{
"docid": "8994470e355b5db188090be731ee4fe9",
"text": "A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.",
"title": ""
},
{
"docid": "557451621286ecd4fbf21909ff88450f",
"text": "BACKGROUND\nMany studies have demonstrated that honey has antibacterial activity in vitro, and a small number of clinical case studies have shown that application of honey to severely infected cutaneous wounds is capable of clearing infection from the wound and improving tissue healing. Research has also indicated that honey may possess anti-inflammatory activity and stimulate immune responses within a wound. The overall effect is to reduce infection and to enhance wound healing in burns, ulcers, and other cutaneous wounds. The objective of the study was to find out the results of topical wound dressings in diabetic wounds with natural honey.\n\n\nMETHODS\nThe study was conducted at department of Orthopaedics, Unit-1, Liaquat University of Medical and Health Sciences, Jamshoro from July 2006 to June 2007. Study design was experimental. The inclusion criteria were patients of either gender with any age group having diabetic foot Wagner type I, II, III and II. The exclusion criteria were patients not willing for studies and who needed urgent amputation due to deteriorating illness. Initially all wounds were washed thoroughly and necrotic tissues removed and dressings with honey were applied and continued up to healing of wounds.\n\n\nRESULTS\nTotal number of patients was 12 (14 feet). There were 8 males (66.67%) and 4 females (33.33%), 2 cases (16.67%) were presented with bilateral diabetic feet. The age range was 35 to 65 years (46 +/- 9.07 years). Amputations of big toe in 3 patients (25%), second and third toe ray in 2 patients (16.67%) and of fourth and fifth toes at the level of metatarsophalengeal joints were done in 3 patients (25%). One patient (8.33%) had below knee amputation.\n\n\nCONCLUSION\nIn our study we observed excellent results in treating diabetic wounds with dressings soaked with natural honey. The disability of diabetic foot patients was minimized by decreasing the rate of leg or foot amputations and thus enhancing the quality and productivity of individual life.",
"title": ""
},
{
"docid": "b24f07add0da3931b23f4a13ea6983b9",
"text": "Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and sensitivity of gyroscopes, the proposed algorithm extracts the frequency domain features from three-dimensional (3D) angular velocities of a smartphone through FFT (fast Fourier transform) and identifies whether its holder is walking or not irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted by involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves the precision of 93.76 % and recall of 93.65 % for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74 % , and is better than both of the several well-known counterparts and commercial products.",
"title": ""
},
{
"docid": "e464cde1434026c17b06716c6a416b7a",
"text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.",
"title": ""
},
{
"docid": "4d42e42469fcead51969f3e642920abc",
"text": "In this paper, we present a dual-band antenna for Long Term Evolution (LTE) handsets. The proposed antenna is composed of a meandered monopole operating in the 700 MHz band and a parasitic element which radiates in the 2.5–2.7 GHz band. Two identical antennas are then closely positioned on the same 120×50 mm2 ground plane (Printed Circuit Board) which represents a modern-size PDA-mobile phone. To enhance the port-to-port isolation of the antennas, a neutralization technique is implemented between them. Scattering parameters, radiations patterns and total efficiencies are presented to illustrate the performance of the antenna-system.",
"title": ""
},
{
"docid": "fff89d9e97dbb5a13febe48c35d08c94",
"text": "The positive effects of social popularity (i.e., information based on other consumers’ behaviors) and deal scarcity (i.e., information provided by product vendors) on consumers’ consumption behaviors are well recognized. However, few studies have investigated their potential joint and interaction effects and how such effects may differ at different timing of a shopping process. This study examines the individual and interaction effects of social popularity and deal scarcity as well as how such effects change as consumers’ shopping goals become more concrete. The results of a laboratory experiment show that in the initial shopping stage when consumers do not have specific shopping goals, social popularity and deal scarcity information weaken each other’s effects; whereas in the later shopping stage when consumers have constructed concrete shopping goals, these two information cues reinforce each other’s effects. Implications on theory and practice are discussed.",
"title": ""
},
{
"docid": "d0e977ab137cd004420bda28bd0b11be",
"text": "This study investigates the roles of cohesion and coherence in evaluations of essay quality. Cohesion generally has a facilitative effect on text comprehension and is assumed to be related to essay coherence. By contrast, recent studies of essay writing have demonstrated that computational indices of cohesion are not predictive of evaluations of writing quality. This study investigates expert ratings of individual text features, including coherence, in order to examine their relation to evaluations of holistic essay quality. The results suggest that coherence is an important attribute of overall essay quality, but that expert raters evaluate coherence based on the absence of cohesive cues in the essays rather than their presence. This finding has important implications for text understanding and the role of coherence in writing quality.",
"title": ""
},
{
"docid": "733f5029329072adf5635f0b4d0ad1cb",
"text": "We present a new approach to scalable training of deep learning machines by incremental block training with intra-block parallel optimization to leverage data parallelism and blockwise model-update filtering to stabilize learning process. By using an implementation on a distributed GPU cluster with an MPI-based HPC machine learning framework to coordinate parallel job scheduling and collective communication, we have trained successfully deep bidirectional long short-term memory (LSTM) recurrent neural networks (RNNs) and fully-connected feed-forward deep neural networks (DNNs) for large vocabulary continuous speech recognition on two benchmark tasks, namely 309-hour Switchboard-I task and 1,860-hour \"Switch-board+Fisher\" task. We achieve almost linear speedup up to 16 GPU cards on LSTM task and 64 GPU cards on DNN task, with either no degradation or improved recognition accuracy in comparison with that of running a traditional mini-batch based stochastic gradient descent training on a single GPU.",
"title": ""
},
{
"docid": "08353c7d40a0df4909b09f2d3e5ab4fe",
"text": "Object detection has made great progress in the past few years along with the development of deep learning. However, most current object detection methods are resource hungry, which hinders their wide deployment to many resource restricted usages such as usages on always-on devices, battery-powered low-end devices, etc. This paper considers the resource and accuracy trade-off for resource-restricted usages during designing the whole object detection framework. Based on the deeply supervised object detection (DSOD) framework, we propose Tiny-DSOD dedicating to resource-restricted usages. Tiny-DSOD introduces two innovative and ultra-efficient architecture blocks: depthwise dense block (DDB) based backbone and depthwise feature-pyramid-network (D-FPN) based front-end. We conduct extensive experiments on three famous benchmarks (PASCAL VOC 2007, KITTI, and COCO), and compare Tiny-DSOD to the state-of-the-art ultra-efficient object detection solutions such as Tiny-YOLO, MobileNet-SSD (v1 & v2), SqueezeDet, Pelee, etc. Results show that Tiny-DSOD outperforms these solutions in all the three metrics (parameter-size, FLOPs, accuracy) in each comparison. For instance, Tiny-DSOD achieves 72.1% mAP with only 0.95M parameters and 1.06B FLOPs, which is by far the state-of-the-arts result with such a low resource requirement.∗",
"title": ""
},
{
"docid": "2665314258f4b7f59a55702166f59fcc",
"text": "In this paper, a wireless power transfer system with magnetically coupled resonators is studied. The idea to use metamaterials to enhance the coupling coefficient and the transfer efficiency is proposed and analyzed. With numerical calculations of a system with and without metamaterials, we show that the transfer efficiency can be improved with metamaterials.",
"title": ""
},
{
"docid": "be1c50de2963341423960ba0f59fbc1f",
"text": "Deep neural networks have been shown to be very successful at learning feature hierarchies in supervised learning tasks. Generative models, on the other hand, have benefited less from hierarchical models with multiple layers of latent variables. In this paper, we prove that hierarchical latent variable models do not take advantage of the hierarchical structure when trained with some existing variational methods, and provide some limitations on the kind of features existing models can learn. Finally we propose an alternative architecture that does not suffer from these limitations. Our model is able to learn highly interpretable and disentangled hierarchical features on several natural image datasets with no taskspecific regularization.",
"title": ""
},
{
"docid": "00602badbfba6bc97dffbdd6c5a2ae2d",
"text": "Accurately drawing 3D objects is difficult for untrained individuals, as it requires an understanding of perspective and its effects on geometry and proportions. Step-by-step tutorials break the complex task of sketching an entire object down into easy-to-follow steps that even a novice can follow. However, creating such tutorials requires expert knowledge and is time-consuming. As a result, the availability of tutorials for a given object or viewpoint is limited. How2Sketch (H2S) addresses this problem by automatically generating easy-to-follow tutorials for arbitrary 3D objects. Given a segmented 3D model and a camera viewpoint, H2S computes a sequence of steps for constructing a drawing scaffold comprised of geometric primitives, which helps the user draw the final contours in correct perspective and proportion. To make the drawing scaffold easy to construct, the algorithm solves for an ordering among the scaffolding primitives and explicitly makes small geometric modifications to the size and location of the object parts to simplify relative positioning. Technically, we formulate this scaffold construction as a single selection problem that simultaneously solves for the ordering and geometric changes of the primitives. We generate different tutorials on man-made objects using our method and evaluate how easily the tutorials can be followed with a user study.",
"title": ""
},
{
"docid": "efec2ff9384e17a698c88e742e41bcc9",
"text": "— A new versatile Hydraulically-powered Quadruped robot (HyQ) has been developed to serve as a platform to study not only highly dynamic motions such as running and jumping, but also careful navigation over very rough terrain. HyQ stands 1 meter tall, weighs roughly 90kg and features 12 torque-controlled joints powered by a combination of hydraulic and electric actuators. The hydraulic actuation permits the robot to perform powerful and dynamic motions that are hard to achieve with more traditional electrically actuated robots. This paper describes design and specifications of the robot and presents details on the hardware of the quadruped platform, such as the mechanical design of the four articulated legs and of the torso frame, and the configuration of the hydraulic power system. Results from the first walking experiments are presented along with test studies using a previously built prototype leg. 1 INTRODUCTION The development of mobile robotic platforms is an important and active area of research. Within this domain, the major focus has been to develop wheeled or tracked systems that cope very effectively with flat and well-structured solid surfaces (e.g. laboratories and roads). In recent years, there has been considerable success with robotic vehicles even for off-road conditions [1]. However, wheeled robots still have major limitations and difficulties in navigating uneven and rough terrain. These limitations and the capabilities of legged animals encouraged researchers for the past decades to focus on the construction of biologically inspired legged machines. These robots have the potential to outperform the more traditional designs with wheels and tracks in terms of mobility and versatility. The vast majority of the existing legged robots have been, and continue to be, actuated by electric motors with high gear-ratio reduction drives, which are popular because of their size, price, ease of use and accuracy of control. However, electric motors produce small torques relative to their size and weight, thereby making reduction drives with high ratios essential to convert velocity into torque. Unfortunately, this approach results in systems with reduced speed capability and limited passive back-driveability and therefore not very suitable for highly dynamic motions and interactions with unforeseen terrain variance. Significant examples of such legged robots are: the biped series of HRP robots [2], Toyota humanoid robot [3], and Honda's Asimo [4]; and the quadruped robot series of Hirose et al. [5], Sony's AIBO [6] and Little Dog [7]. In combination with high position gain control and …",
"title": ""
},
{
"docid": "01295570af41ff14f0b55d6fe7139c9d",
"text": "YES is a simplified stroke-based method for sorting Chinese characters. It is free from stroke counting and grouping, and thus much faster and more accurate than the traditional method. This paper presents a collation element table built in YES for a large joint Chinese character set covering (a) all 20,902 characters of Unicode CJK Unified Ideographs, (b) all 11,408 characters in the Complete List of Chinese Characters Used by the Media in 2013, (c) all 13,000 plus characters in the latest versions of Xinhua Dictionary(v11) and Contemporary Chinese Dictionary(v6). Of the 20,902 Chinese characters in Unicode, 97.23% have one-to-one relationship with their stroke order codes in YES, comparing with 90.69% of the traditional method. Enhanced with the secondary and tertiary sorting levels of stroke layout and Unicode value, there is a guarantee of one-to-one relationship between the characters and collation elements. The collation element table has been successfully applied to sorting CC-CEDICT, a Chinese-English dictionary of over 112,000 word entries.",
"title": ""
},
{
"docid": "dbe0b895c78dd90c69cc1a1f8289aadf",
"text": "This paper presents the design procedure of monolithic microwave integrated circuit (MMIC) high-power amplifiers (HPAs) as well as implementation of high-efficiency and compact-size HPAs in a 0.25- μm AlGaAs-InGaAs pHEMT technology. Presented design techniques used to extend bandwidth, improve efficiency, and reduce chip area of the HPAs are described in detail. The first HPA delivers 5 W of output power with 40% power-added efficiency (PAE) in the frequency band of 8.5-12.5 GHz, while providing 20 dB of small-signal gain. The second HPA delivers 8 W of output power with 35% PAE in the frequency band of 7.5-12 GHz, while maintaining a small-signal gain of 17.5 dB. The 8-W HPA chip area is 8.8 mm2, which leads to the maximum power/area ratio of 1.14 W/mm2. These are the lowest area and highest power/area ratio reported in GaAs HPAs operating within the same frequency band.",
"title": ""
},
{
"docid": "e8ef5dfb9aafb4a2b453ebdda6e923ea",
"text": "This paper addresses the problem of vegetation detection from laser measurements. The ability to detect vegetation is important for robots operating outdoors, since it enables a robot to navigate more efficiently and safely in such environments. In this paper, we propose a novel approach for detecting low, grass-like vegetation using laser remission values. In our algorithm, the laser remission is modeled as a function of distance, incidence angle, and material. We classify surface terrain based on 3D scans of the surroundings of the robot. The model is learned in a self-supervised way using vibration-based terrain classification. In all real world experiments we carried out, our approach yields a classification accuracy of over 99%. We furthermore illustrate how the learned classifier can improve the autonomous navigation capabilities of mobile robots.",
"title": ""
},
{
"docid": "2793f528a9b29345b1ee8ce1202933e3",
"text": "Neural Networks are prevalent in todays NLP research. Despite their success for different tasks, training time is relatively long. We use Hogwild! to counteract this phenomenon and show that it is a suitable method to speed up training Neural Networks of different architectures and complexity. For POS tagging and translation we report considerable speedups of training, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.",
"title": ""
},
{
"docid": "884281b32a82a1d1f9811acc73257387",
"text": "The low power wide area network (LPWAN) technologies, which is now embracing a booming era with the development in the Internet of Things (IoT), may offer a brand new solution for current smart grid communications due to their excellent features of low power, long range, and high capacity. The mission-critical smart grid communications require secure and reliable connections between the utilities and the devices with high quality of service (QoS). This is difficult to achieve for unlicensed LPWAN technologies due to the crowded license-free band. Narrowband IoT (NB-IoT), as a licensed LPWAN technology, is developed based on the existing long-term evolution specifications and facilities. Thus, it is able to provide cellular-level QoS, and henceforth can be viewed as a promising candidate for smart grid communications. In this paper, we introduce NB-IoT to the smart grid and compare it with the existing representative communication technologies in the context of smart grid communications in terms of data rate, latency, range, etc. The overall requirements of communications in the smart grid from both quantitative and qualitative perspectives are comprehensively investigated and each of them is carefully examined for NB-IoT. We further explore the representative applications in the smart grid and analyze the corresponding feasibility of NB-IoT. Moreover, the performance of NB-IoT in typical scenarios of the smart grid communication environments, such as urban and rural areas, is carefully evaluated via Monte Carlo simulations.",
"title": ""
}
] | scidocsrr |
dbd0d01702a50dcaab924ba4033ab378 | An information theoretical approach to prefrontal executive function | [
{
"docid": "5dde27787ee92c2e56729b25b9ca4311",
"text": "The prefrontal cortex (PFC) subserves cognitive control: the ability to coordinate thoughts or actions in relation with internal goals. Its functional architecture, however, remains poorly understood. Using brain imaging in humans, we showed that the lateral PFC is organized as a cascade of executive processes from premotor to anterior PFC regions that control behavior according to stimuli, the present perceptual context, and the temporal episode in which stimuli occur, respectively. The results support an unified modular model of cognitive control that describes the overall functional organization of the human lateral PFC and has basic methodological and theoretical implications.",
"title": ""
}
] | [
{
"docid": "594bbdf08b7c3d0a31b2b0f60e50bae3",
"text": "This paper concerns the behavior of spatially extended dynamical systems —that is, systems with both temporal and spatial degrees of freedom. Such systems are common in physics, biology, and even social sciences such as economics. Despite their abundance, there is little understanding of the spatiotemporal evolution of these complex systems. ' Seemingly disconnected from this problem are two widely occurring phenomena whose very generality require some unifying underlying explanation. The first is a temporal effect known as 1/f noise or flicker noise; the second concerns the evolution of a spatial structure with scale-invariant, self-similar (fractal) properties. Here we report the discovery of a general organizing principle governing a class of dissipative coupled systems. Remarkably, the systems evolve naturally toward a critical state, with no intrinsic time or length scale. The emergence of the self-organized critical state provides a connection between nonlinear dynamics, the appearance of spatial self-similarity, and 1/f noise in a natural and robust way. A short account of some of these results has been published previously. The usual strategy in physics is to reduce a given problem to one or a few important degrees of freedom. The effect of coupling between the individual degrees of freedom is usually dealt with in a perturbative manner —or in a \"mean-field manner\" where the surroundings act on a given degree of freedom as an external field —thus again reducing the problem to a one-body one. In dynamics theory one sometimes finds that complicated systems reduce to a few collective degrees of freedom. This \"dimensional reduction'* has been termed \"selforganization, \" or the so-called \"slaving principle, \" and much insight into the behavior of dynamical systems has been achieved by studying the behavior of lowdimensional at tractors. On the other hand, it is well known that some dynamical systems act in a more concerted way, where the individual degrees of freedom keep each other in a more or less stab1e balance, which cannot be described as a \"perturbation\" of some decoupled state, nor in terms of a few collective degrees of freedom. For instance, ecological systems are organized such that the different species \"support\" each other in a way which cannot be understood by studying the individual constituents in isolation. The same interdependence of species also makes the ecosystem very susceptible to small changes or \"noise.\" However, the system cannot be too sensitive since then it could not have evolved into its present state in the first place. Owing to this balance we may say that such a system is \"critical. \" We shall see that this qualitative concept of criticality can be put on a firm quantitative basis. Such critical systems are abundant in nature. We shaB see that the dynamics of a critical state has a specific ternporal fingerprint, namely \"flicker noise, \" in which the power spectrum S(f) scales as 1/f at low frequencies. Flicker noise is characterized by correlations extended over a wide range of time scales, a clear indication of some sort of cooperative effect. Flicker noise has been observed, for example, in the light from quasars, the intensity of sunspots, the current through resistors, the sand flow in an hour glass, the flow of rivers such as the Nile, and even stock exchange price indices. ' All of these may be considered to be extended dynamical systems. Despite the ubiquity of flicker noise, its origin is not well understood. 
Indeed, one may say that because of its ubiquity, no proposed mechanism to data can lay claim as the single general underlying root of 1/f noise. We shall argue that flicker noise is in fact not noise but reflects the intrinsic dynamics of self-organized critical systems. Another signature of criticality is spatial selfsimilarity. It has been pointed out that nature is full of self-similar \"fractal\" structures, though the physical reason for this is not understood. \" Most notably, the whole universe is an extended dynamical system where a self-similar cosmic string structure has been claimed. Turbulence is a phenomenon where self-similarity is believed to occur in both space and time. Cooperative critical phenomena are well known in the context of phase transitions in equilibrium statistical mechanics. ' At the transition point, spatial selfsirnilarity occurs, and the dynamical response function has a characteristic power-law \"1/f\" behavior. (We use quotes because often flicker noise involves frequency spectra with dependence f ~ with P only roughly equal to 1.0.) Low-dimensional nonequilibrium dynamical systems also undergo phase transitions (bifurcations, mode locking, intermittency, etc.) where the properties of the attractors change. However, the critical point can be reached only by fine tuning a parameter (e.g. , temperature), and so may occur only accidentally in nature: It",
"title": ""
},
{
"docid": "3fcce3664db5812689c121138e2af280",
"text": "We examine and compare simulation-based algorithms for solving the agent scheduling problem in a multiskill call center. This problem consists in minimizing the total costs of agents under constraints on the expected service level per call type, per period, and aggregated. We propose a solution approach that combines simulation with integer or linear programming, with cut generation. In our numerical experiments with realistic problem instances, this approach performs better than all other methods proposed previously for this problem. We also show that the two-step approach, which is the standard method for solving this problem, sometimes yield solutions that are highly suboptimal and inferior to those obtained by our proposed method. 2009 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "63c2662fdac3258587c5b1baa2133df9",
"text": "Automatic design via Bayesian optimization holds great promise given the constant increase of available data across domains. However, it faces difficulties from high-dimensional, potentially discrete, search spaces. We propose to probabilistically embed inputs into a lower dimensional, continuous latent space, where we perform gradient-based optimization guided by a Gaussian process. Building on variational autoncoders, we use both labeled and unlabeled data to guide the encoding and increase its accuracy. In addition, we propose an adversarial extension to render the latent representation invariant with respect to specific design attributes, which allows us to transfer these attributes across structures. We apply the framework both to a functional-protein dataset and to perform optimization of drag coefficients directly over high-dimensional shapes without incorporating domain knowledge or handcrafted features.",
"title": ""
},
{
"docid": "072b17732d8b628d3536e7045cd0047d",
"text": "In this paper, we propose a high-speed parallel 128 bit multiplier for Ghash Function in conjunction with its FPGA implementation. Through the use of Verilog the designs are evaluated by using Xilinx Vertax5 with 65nm technic and 30,000 logic cells. The highest throughput of 30.764Gpbs can be achieved on virtex5 with the consumption of 8864 slices LUT. The proposed design of the multiplier can be utilized as a design IP core for the implementation of the Ghash Function. The architecture of the multiplier can also apply in more general polynomial basis. Moreover it can be used as arithmetic module in other encryption field.",
"title": ""
},
{
"docid": "561b37c506657693d27fa65341faf51e",
"text": "Currently, much of machine learning is opaque, just like a “black box”. However, in order for humans to understand, trust and effectively manage the emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.",
"title": ""
},
{
"docid": "f8e3b21fd5481137a80063e04e9b5488",
"text": "On the basis of the notion that the ability to exert self-control is critical to the regulation of aggressive behaviors, we suggest that mindfulness, an aspect of the self-control process, plays a key role in curbing workplace aggression. In particular, we note the conceptual and empirical distinctions between dimensions of mindfulness (i.e., mindful awareness and mindful acceptance) and investigate their respective abilities to regulate workplace aggression. In an experimental study (Study 1), a multiwave field study (Study 2a), and a daily diary study (Study 2b), we established that the awareness dimension, rather than the acceptance dimension, of mindfulness plays a more critical role in attenuating the association between hostility and aggression. In a second multiwave field study (Study 3), we found that mindful awareness moderates the association between hostility and aggression by reducing the extent to which individuals use dysfunctional emotion regulation strategies (i.e., surface acting), rather than by reducing the extent to which individuals engage in dysfunctional thought processes (i.e., rumination). The findings are discussed in terms of the implications of differentiating the dimensions and mechanisms of mindfulness for regulating workplace aggression. (PsycINFO Database Record",
"title": ""
},
{
"docid": "4502ba935124c2daa9a49fc24ec5865b",
"text": "Medical image processing is the most challenging and emerging field now a day’s. In this field, detection of brain tumor from MRI brain scan has become one of the most challenging problems, due to complex structure of brain. The quantitative analysis of MRI brain tumor allows obtaining useful key indicators of disease progression. A computer aided diagnostic system has been proposed here for detecting the tumor texture in biological study. This is an attempt made which describes the proposed strategy for detection of tumor with the help of segmentation techniques in MATLAB; which incorporates preprocessing stages of noise removal, image enhancement and edge detection. Processing stages includes segmentation like intensity and watershed based segmentation, thresholding to extract the area of unwanted cells from the whole image. Here algorithms are proposed to calculate area and percentage of the tumor. Keywords— MRI, FCM, MKFCM, SVM, Otsu, threshold, fudge factor",
"title": ""
},
{
"docid": "11c245ca7bc133155ff761374dfdea6e",
"text": "Received Nov 12, 2017 Revised Jan 20, 2018 Accepted Feb 11, 2018 In this paper, a modification of PVD (Pixel Value Differencing) algorithm is used for Image Steganography in spatial domain. It is normalizing secret data value by encoding method to make the new pixel edge difference less among three neighbors (horizontal, vertical and diagonal) and embedding data only to less intensity pixel difference areas or regions. The proposed algorithm shows a good improvement for both color and gray-scale images compared to other algorithms. Color images performances are better than gray images. However, in this work the focus is mainly on gray images. The strenght of this scheme is that any random hidden/secret data do not make any shuttle differences to Steg-image compared to original image. The bit plane slicing is used to analyze the maximum payload that has been embeded into the cover image securely. The simulation results show that the proposed algorithm is performing better and showing great consistent results for PSNR, MSE values of any images, also against Steganalysis attack.",
"title": ""
},
{
"docid": "05b1be7a90432eff4b62675826b77e09",
"text": "People invest time, attention, and emotion while engaging in various activities in the real-world, for either purposes of awareness or participation. Social media platforms such as Twitter offer tremendous opportunities for people to become engaged in such real-world events through information sharing and communicating about these events. However, little is understood about the factors that affect people’s Twitter engagement in such real-world events. In this paper, we address this question by first operationalizing a person’s Twitter engagement in real-world events such as posting, retweeting, or replying to tweets about such events. Next, we construct statistical models that examine multiple predictive factors associated with four different perspectives of users’ Twitter engagement, and quantify their potential influence on predicting the (i) presence; and (ii) degree – of the user’s engagement with 643 real-world events. We also consider the effect of these factors with respect to a finer granularization of the different categories of events. We find that the measures of people’s prior Twitter activities, topical interests, geolocation, and social network structures are all variously correlated to their engagement with real-world events.",
"title": ""
},
{
"docid": "d6f322f4dd7daa9525f778ead18c8b5e",
"text": "Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system in humans that is comprised of multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces. The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system is comprised of occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system is comprised of regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces.",
"title": ""
},
{
"docid": "8a1e94245d8fbdaf97402923d4dbc213",
"text": "This is the first study to measure the 'sense of community' reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belongingness, nor gym type was an independent predictor of gym attendance. Exercise and health professionals may benefit from evaluating further the 'sense of community' offered by gym-based exercise programmes.",
"title": ""
},
{
"docid": "840d4b26eec402038b9b3462fc0a98ac",
"text": "A bench model of the new generation intelligent universal transformer (IUT) has been recently developed for distribution applications. The distribution IUT employs high-voltage semiconductor device technologies along with multilevel converter circuits for medium-voltage grid connection. This paper briefly describes the basic operation of the IUT and its experimental setup. Performances under source and load disturbances are characterized with extensive tests using a voltage sag generator and various linear and nonlinear loads. Experimental results demonstrate that IUT input and output can avoid direct impact from its opposite side disturbances. The output voltage is well regulated when the voltage sag is applied to the input. The input voltage and current maintains clean sinusoidal and unity power factor when output is nonlinear load. Under load transients, the input and output voltages remain well regulated. These key features prove that the power quality performance of IUT is far superior to that of conventional copper-and-iron based transformers",
"title": ""
},
{
"docid": "e6dba9e9ad2db632caed6b19b9f5a010",
"text": "Efficient and accurate similarity searching on a large time series data set is an important but non- trivial problem. In this work, we propose a new approach to improve the quality of similarity search on time series data by combining symbolic aggregate approximation (SAX) and piecewise linear approximation. The approach consists of three steps: transforming real valued time series sequences to symbolic strings via SAX, pattern matching on the symbolic strings and a post-processing via Piecewise Linear Approximation.",
"title": ""
},
{
"docid": "d6cf367f29ed1c58fb8fd0b7edf69458",
"text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear. Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. Resistin is thus a hormone that potentially links obesity to diabetes.",
"title": ""
},
{
"docid": "641d09ff15b731b679dbe3e9004c1578",
"text": "In recent years, geological disposal of radioactive waste has focused on placement of highand intermediate-level wastes in mined underground caverns at depths of 500–800 m. Notwithstanding the billions of dollars spent to date on this approach, the difficulty of finding suitable sites and demonstrating to the public and regulators that a robust safety case can be developed has frustrated attempts to implement disposal programmes in several countries, and no disposal facility for spent nuclear fuel exists anywhere. The concept of deep borehole disposal was first considered in the 1950s, but was rejected as it was believed to be beyond existing drilling capabilities. Improvements in drilling and associated technologies and advances in sealing methods have prompted a re-examination of this option for the disposal of high-level radioactive wastes, including spent fuel and plutonium. Since the 1950s, studies of deep boreholes have involved minimal investment. However, deep borehole disposal offers a potentially safer, more secure, cost-effective and environmentally sound solution for the long-term management of high-level radioactive waste than mined repositories. Potentially it could accommodate most of the world’s spent fuel inventory. This paper discusses the concept, the status of existing supporting equipment and technologies and the challenges that remain.",
"title": ""
},
{
"docid": "ab677299ffa1e6ae0f65daf5de75d66c",
"text": "This paper proposes a new theory of the relationship between the sentence processing mechanism and the available computational resources. This theory--the Syntactic Prediction Locality Theory (SPLT)--has two components: an integration cost component and a component for the memory cost associated with keeping track of obligatory syntactic requirements. Memory cost is hypothesized to be quantified in terms of the number of syntactic categories that are necessary to complete the current input string as a grammatical sentence. Furthermore, in accordance with results from the working memory literature both memory cost and integration cost are hypothesized to be heavily influenced by locality (1) the longer a predicted category must be kept in memory before the prediction is satisfied, the greater is the cost for maintaining that prediction; and (2) the greater the distance between an incoming word and the most local head or dependent to which it attaches, the greater the integration cost. The SPLT is shown to explain a wide range of processing complexity phenomena not previously accounted for under a single theory, including (1) the lower complexity of subject-extracted relative clauses compared to object-extracted relative clauses, (2) numerous processing overload effects across languages, including the unacceptability of multiply center-embedded structures, (3) the lower complexity of cross-serial dependencies relative to center-embedded dependencies, (4) heaviness effects, such that sentences are easier to understand when larger phrases are placed later and (5) numerous ambiguity effects, such as those which have been argued to be evidence for the Active Filler Hypothesis.",
"title": ""
},
{
"docid": "e7f91b90eab54dfd7f115a3a0225b673",
"text": "The recent trend of outsourcing network functions, aka. middleboxes, raises confidentiality and integrity concern on redirected packet, runtime state, and processing result. The outsourced middleboxes must be protected against cyber attacks and malicious service provider. It is challenging to simultaneously achieve strong security, practical performance, complete functionality and compatibility. Prior software-centric approaches relying on customized cryptographic primitives fall short of fulfilling one or more desired requirements. In this paper, after systematically addressing key challenges brought to the fore, we design and build a secure SGX-assisted system, LightBox, which supports secure and generic middlebox functions, efficient networking, and most notably, lowoverhead stateful processing. LightBox protects middlebox from powerful adversary, and it allows stateful network function to run at nearly native speed: it adds only 3μs packet processing delay even when tracking 1.5M concurrent flows.",
"title": ""
},
{
"docid": "684b9d64f4476a6b9dd3df1bd18bcb1d",
"text": "We present the cases of three children with patent ductus arteriosus (PDA), pulmonary arterial hypertension (PAH), and desaturation. One of them had desaturation associated with atrial septal defect (ASD). His ASD, PAH, and desaturation improved after successful device closure of the PDA. The other two had desaturation associated with Down syndrome. One had desaturation only at room air oxygen (21% oxygen) but well saturated with 100% oxygen, subsequently underwent successful device closure of the PDA. The other had experienced desaturation at a younger age but spontaneously recovered when he was older, following attempted device closure of the PDA, with late embolization of the device.",
"title": ""
},
{
"docid": "527e750a6047100cba1f78a3036acb9b",
"text": "This paper presents a Generative Adversarial Network (GAN) to model multi-turn dialogue generation, which trains a latent hierarchical recurrent encoder-decoder simultaneously with a discriminative classifier that make the prior approximate to the posterior. Experiments show that our model achieves better results.",
"title": ""
},
{
"docid": "27ddea786e06ffe20b4f526875cdd76b",
"text": "It , is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized-as myth, folklore, and common sense had long understood-that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see AcFIIa ION_sYNTHESIS xxroTESis .) Contemporary Dream Research During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles ;.dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic .(waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and .although of great interest to the study of the mind-body problem, these .findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment .",
"title": ""
}
] | scidocsrr |
7b2ed986ed98f67cdc3456f543a73f54 | In-DBMS Sampling-based Sub-trajectory Clustering | [
{
"docid": "03aba9a44f1ee13cc7f16aadbebb7165",
"text": "The increasing pervasiveness of location-acquisition technologies has enabled collection of huge amount of trajectories for almost any kind of moving objects. Discovering useful patterns from their movement behaviors can convey valuable knowledge to a variety of critical applications. In this light, we propose a novel concept, called gathering, which is a trajectory pattern modeling various group incidents such as celebrations, parades, protests, traffic jams and so on. A key observation is that these incidents typically involve large congregations of individuals, which form durable and stable areas with high density. In this work, we first develop a set of novel techniques to tackle the challenge of efficient discovery of gathering patterns on archived trajectory dataset. Afterwards, since trajectory databases are inherently dynamic in many real-world scenarios such as traffic monitoring, fleet management and battlefield surveillance, we further propose an online discovery solution by applying a series of optimization schemes, which can keep track of gathering patterns while new trajectory data arrive. Finally, the effectiveness of the proposed concepts and the efficiency of the approaches are validated by extensive experiments based on a real taxicab trajectory dataset.",
"title": ""
}
] | [
{
"docid": "2089f931cf6fca595898959cbfbca28a",
"text": "Continuum robotic manipulators articulate due to their inherent compliance. Tendon actuation leads to compression of the manipulator, extension of the actuators, and is limited by the practical constraint that tendons cannot support compression. In light of these observations, we present a new linear model for transforming desired beam configuration to tendon displacements and vice versa. We begin from first principles in solid mechanics by analyzing the effects of geometrically nonlinear tendon loads. These loads act both distally at the termination point and proximally along the conduit contact interface. The resulting model simplifies to a linear system including only the bending and axial modes of the manipulator as well as the actuator compliance. The model is then manipulated to form a concise mapping from beam configuration-space parameters to n redundant tendon displacements via the internal loads and strains experienced by the system. We demonstrate the utility of this model by implementing an optimal feasible controller. The controller regulates axial strain to a constant value while guaranteeing positive tendon forces and minimizing their magnitudes over a range of articulations. The mechanics-based model from this study provides insight as well as performance gains for this increasingly ubiquitous class of manipulators.",
"title": ""
},
{
"docid": "c551e19208e367cc5546a3d46f7534c8",
"text": "We propose a novel approach for solving the approximate nearest neighbor search problem in arbitrary metric spaces. The distinctive feature of our approach is that we can incrementally build a non-hierarchical distributed structure for given metric space data with a logarithmic complexity scaling on the size of the structure and adjustable accuracy probabilistic nearest neighbor queries. The structure is based on a small world graph with vertices corresponding to the stored elements, edges for links between them and the greedy algorithm as base algorithm for searching. Both search and addition algorithms require only local information from the structure. The performed simulation for data in the Euclidian space shows that the structure built using the proposed algorithm has navigable small world properties with logarithmic search complexity at fixed accuracy and has weak (power law) scalability with the dimensionality of the stored data.",
"title": ""
},
{
"docid": "880aa3de3b839739927cbd82b7abcf8a",
"text": "Can parents burn out? The aim of this research was to examine the construct validity of the concept of parental burnout and to provide researchers which an instrument to measure it. We conducted two successive questionnaire-based online studies, the first with a community-sample of 379 parents using principal component analyses and the second with a community- sample of 1,723 parents using both principal component analyses and confirmatory factor analyses. We investigated whether the tridimensional structure of the burnout syndrome (i.e., exhaustion, inefficacy, and depersonalization) held in the parental context. We then examined the specificity of parental burnout vis-à-vis professional burnout assessed with the Maslach Burnout Inventory, parental stress assessed with the Parental Stress Questionnaire and depression assessed with the Beck Depression Inventory. The results support the validity of a tri-dimensional burnout syndrome including exhaustion, inefficacy and emotional distancing with, respectively, 53.96 and 55.76% variance explained in study 1 and study 2, and reliability ranging from 0.89 to 0.94. The final version of the Parental Burnout Inventory (PBI) consists of 22 items and displays strong psychometric properties (CFI = 0.95, RMSEA = 0.06). Low to moderate correlations between parental burnout and professional burnout, parental stress and depression suggests that parental burnout is not just burnout, stress or depression. The prevalence of parental burnout confirms that some parents are so exhausted that the term \"burnout\" is appropriate. The proportion of burnout parents lies somewhere between 2 and 12%. The results are discussed in light of their implications at the micro-, meso- and macro-levels.",
"title": ""
},
{
"docid": "9441113599194d172b6f618058b2ba88",
"text": "Vegetable quality is frequently referred to size, shape, mass, firmness, color and bruises from which fruits can be classified and sorted. However, technological by small and middle producers implementation to assess this quality is unfeasible, due to high costs of software, equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate a new open software that enables the classification system by recognizing fruit shape, volume, color and possibly bruises at a unique glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.",
"title": ""
},
{
"docid": "997a1ec16394a20b3a7f2889a583b09d",
"text": "This second article of our series looks at the process of designing a survey. The design process begins with reviewing the objectives, examining the target population identified by the objectives, and deciding how best to obtain the information needed to address those objectives. However, we also need to consider factors such as determining the appropriate sample size and ensuring the largest possible response rate.To illustrate our ideas, we use the three surveys described in Part 1 of this series to suggest good and bad practice in software engineering survey research.",
"title": ""
},
{
"docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21",
"text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.",
"title": ""
},
{
"docid": "1583d8c41b15fb77787deef955ace886",
"text": "The primary focus of autonomous driving research is to improve driving accuracy. While great progress has been made, state-of-the-art algorithms still fail at times. Such failures may have catastrophic consequences. It therefore is im- portant that automated cars foresee problems ahead as early as possible. This is also of paramount importance if the driver will be asked to take over. We conjecture that failures do not occur randomly. For instance, driving models may fail more likely at places with heavy traffic, at complex intersections, and/or under adverse weather/illumination conditions. This work presents a method to learn to predict the occurrence of these failures, i.e., to assess how difficult a scene is to a given driving model and to possibly give the human driver an early headsup. A camera- based driving model is developed and trained over real driving datasets. The discrepancies between the model's predictions and the human 'ground-truth' maneuvers were then recorded, to yield the 'failure' scores. Experimental results show that the failure score can indeed be learned and predicted. Thus, our prediction method is able to improve the overall safety of an automated driving model by alerting the human driver timely, leading to better human-vehicle collaborative driving.",
"title": ""
},
{
"docid": "f81059b5ff3d621dfa9babc8e68bc0ab",
"text": "A zero voltage switching (ZVS) isolated Sepic converter with active clamp topology is presented. The buck-boost type of active clamp is connected in parallel with the primary side of the transformer to absorb all the energy stored in the transformer leakage inductance and to limit the peak voltage on the switching device. During the transition interval between the main and auxiliary switches, the resonance based on the output capacitor of switch and the transformer leakage inductor can achieve ZVS for both switches. The operational principle, steady state analysis and design consideration of the proposed converter are presented. Finally, the proposed converter is verified by the experimental results based on an 180 W prototype circuit.",
"title": ""
},
{
"docid": "c57c69fd1858b50998ec9706e34f6c46",
"text": "Hashing has recently attracted considerable attention for large scale similarity search. However, learning compact codes with good performance is still a challenge. In many cases, the real-world data lies on a low-dimensional manifold embedded in high-dimensional ambient space. To capture meaningful neighbors, a compact hashing representation should be able to uncover the intrinsic geometric structure of the manifold, e.g., the neighborhood relationships between subregions. Most existing hashing methods only consider this issue during mapping data points into certain projected dimensions. When getting the binary codes, they either directly quantize the projected values with a threshold, or use an orthogonal matrix to refine the initial projection matrix, which both consider projection and quantization separately, and will not well preserve the locality structure in the whole learning process. In this paper, we propose a novel hashing algorithm called Locality Preserving Hashing to effectively solve the above problems. Specifically, we learn a set of locality preserving projections with a joint optimization framework, which minimizes the average projection distance and quantization loss simultaneously. Experimental comparisons with other state-of-the-art methods on two large scale datasets demonstrate the effectiveness and efficiency of our method.",
"title": ""
},
{
"docid": "fd32f2117ae01049314a0c1cfb565724",
"text": "Smart phones, tablets, and the rise of the Internet of Things are driving an insatiable demand for wireless capacity. This demand requires networking and Internet infrastructures to evolve to meet the needs of current and future multimedia applications. Wireless HetNets will play an important role toward the goal of using a diverse spectrum to provide high quality-of-service, especially in indoor environments where most data are consumed. An additional tier in the wireless HetNets concept is envisioned using indoor gigabit small-cells to offer additional wireless capacity where it is needed the most. The use of light as a new mobile access medium is considered promising. In this article, we describe the general characteristics of WiFi and VLC (or LiFi) and demonstrate a practical framework for both technologies to coexist. We explore the existing research activity in this area and articulate current and future research challenges based on our experience in building a proof-of-concept prototype VLC HetNet.",
"title": ""
},
{
"docid": "638c9e4ba1c3d35fdb766c17b188529d",
"text": "Association football is a popular sport, but it is also a big business. From a managerial perspective, the most important decisions that team managers make concern player transfers, so issues related to player valuation, especially the determination of transfer fees and market values, are of major concern. Market values can be understood as estimates of transfer fees—that is, prices that could be paid for a player on the football market—so they play an important role in transfer negotiations. These values have traditionally been estimated by football experts, but crowdsourcing has emerged as an increasingly popular approach to estimating market value. While researchers have found high correlations between crowdsourced market values and actual transfer fees, the process behind crowd judgments is not transparent, crowd estimates are not replicable, and they are updated infrequently because they require the participation of many users. Data analytics may thus provide a sound alternative or a complementary approach to crowd-based estimations of market value. Based on a unique data set that is comprised of 4217 players from the top five European leagues and a period of six playing seasons, we estimate players’ market values using multilevel regression analysis. The regression results suggest that data-driven estimates of market value can overcome several of the crowd’s practical limitations while producing comparably accurate numbers. Our results have important implications for football managers and scouts, as data analytics facilitates precise, objective, and reliable estimates of market value that can be updated at any time. © 2017 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license. ( http://creativecommons.org/licenses/by-nc-nd/4.0/ )",
"title": ""
},
{
"docid": "5dda89fbe7f5757588b5dff0e6c2565d",
"text": "Introductory psychology students (120 females and 120 males) rated attractiveness and fecundity of one of six computer-altered female gures representing three body-weight categories (underweight, normal weight and overweight) and two levels of waist-to-hip ratio (WHR), one in the ideal range (0.72) and one in the non-ideal range (0.86). Both females and males judged underweight gures to be more attractive than normal or overweight gures, regardless of WHR. The female gure with the high WHR (0.86) was judged to be more attractive than the gure with the low WHR (0.72) across all body-weight conditions. Analyses of fecundity ratings revealed an interaction between weight and WHR such that the models did not differ in the normal weight category, but did differ in the underweight (model with WHR of 0.72 was less fecund) and overweight (model with WHR of 0.86 was more fecund) categories. These ndings lend stronger support to sociocultural rather than evolutionary hypotheses.",
"title": ""
},
{
"docid": "a492dcdbb9ec095cdfdab797c4b4e659",
"text": "We present a new class of methods for high-dimensional nonparametric regression and classification called sparse additive models (SpAM). Our methods combine ideas from sparse linear modeling and additive nonparametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. SpAM is essentially a functional version of the grouped lasso of Yuan and Lin (2006). SpAM is also closely related to the COSSO model of Lin and Zhang (2006), but decouples smoothing and sparsity, enabling the use of arbitrary nonparametric smoothers. We give an analysis of the theoretical properties of sparse additive models, and present empirical results on synthetic and real data, showing that SpAM can be effective in fitting sparse nonparametric models in high dimensional data.",
"title": ""
},
{
"docid": "813b4607e9675ad4811ba181a912bbe9",
"text": "The end-Permian mass extinction was the most severe biodiversity crisis in Earth history. To better constrain the timing, and ultimately the causes of this event, we collected a suite of geochronologic, isotopic, and biostratigraphic data on several well-preserved sedimentary sections in South China. High-precision U-Pb dating reveals that the extinction peak occurred just before 252.28 ± 0.08 million years ago, after a decline of 2 per mil (‰) in δ(13)C over 90,000 years, and coincided with a δ(13)C excursion of -5‰ that is estimated to have lasted ≤20,000 years. The extinction interval was less than 200,000 years and synchronous in marine and terrestrial realms; associated charcoal-rich and soot-bearing layers indicate widespread wildfires on land. A massive release of thermogenic carbon dioxide and/or methane may have caused the catastrophic extinction.",
"title": ""
},
{
"docid": "fe94febc520eab11318b49391d46476b",
"text": "BACKGROUND\nDiabetes is a chronic disease, with high prevalence across many nations, which is characterized by elevated level of blood glucose and risk of acute and chronic complication. The Kingdom of Saudi Arabia (KSA) has one of the highest levels of diabetes prevalence globally. It is well-known that the treatment of diabetes is complex process and requires both lifestyle change and clear pharmacologic treatment plan. To avoid the complication from diabetes, the effective behavioural change and extensive education and self-management is one of the key approaches to alleviate such complications. However, this process is lengthy and expensive. The recent studies on the user of smart phone technologies for diabetes self-management have proven to be an effective tool in controlling hemoglobin (HbA1c) levels especially in type-2 diabetic (T2D) patients. However, to date no reported study addressed the effectiveness of this approach in the in Saudi patients. This study investigates the impact of using mobile health technologies for the self-management of diabetes in Saudi Arabia.\n\n\nMETHODS\nIn this study, an intelligent mobile diabetes management system (SAED), tailored for T2D patients in KSA was developed. A pilot study of the SAED system was conducted in Saudi Arabia with 20 diabetic patients for 6 months duration. The patients were randomly categorized into a control group who did not use the SAED system and an intervention group whom used the SAED system for their diabetes management during this period. At the end of the follow-up period, the HbA1c levels in the patients in both groups were measure together with a diabetes knowledge test was also conducted to test the diabetes awareness of the patients.\n\n\nRESULTS\nThe results of SAED pilot study showed that the patients in the intervention group were able to significantly decrease their HbA1c levels compared to the control group. The SAED system also enhanced the diabetes awareness amongst the patients in the intervention group during the trial period. These outcomes confirm the global studies on the effectiveness of smart phone technologies in diabetes management. The significance of the study is that this was one of the first such studies conducted on Saudi patients and of their acceptance for such technology in their diabetes self-management treatment plans.\n\n\nCONCLUSIONS\nThe pilot study of the SAED system showed that a mobile health technology can significantly improve the HbA1C levels among Saudi diabetic and improve their disease management plans. The SAED system can also be an effective and low-cost solution in improving the quality of life of diabetic patients in the Kingdom considering the high level of prevalence and the increasing economic burden of this disease.",
"title": ""
},
{
"docid": "98d40e5a6df5b6a3ab39a04bf04c6a65",
"text": "T Internet has increased the flexibility of retailers, allowing them to operate an online arm in addition to their physical stores. The online channel offers potential benefits in selling to customer segments that value the convenience of online shopping, but it also raises new challenges. These include the higher likelihood of costly product returns when customers’ ability to “touch and feel” products is important in determining fit. We study competing retailers that can operate dual channels (“bricks and clicks”) and examine how pricing strategies and physical store assistance levels change as a result of the additional Internet outlet. A central result we obtain is that when differentiation among competing retailers is not too high, having an online channel can actually increase investment in store assistance levels (e.g., greater shelf display, more-qualified sales staff, floor samples) and decrease profits. Consequently, when the decision to open an Internet channel is endogenized, there can exist an asymmetric equilibrium where only one retailer elects to operate an online arm but earns lower profits than its bricks-only rival. We also characterize equilibria where firms open an online channel, even though consumers only use it for research and learning purposes but buy in stores. A number of extensions are discussed, including retail settings where firms carry multiple product categories, shipping and handling costs, and the role of store assistance in impacting consumer perceived benefits.",
"title": ""
},
{
"docid": "ecd7fca4f2ea0207582755a2b9733419",
"text": "This work introduces a novel framework for quantifying the presence and strength of recurrent dynamics in video data. Specifically, we provide continuous measures of periodicity (perfect repetition) and quasiperiodicity (superposition of periodic modes with non-commensurate periods), in a way which does not require segmentation, training, object tracking or 1-dimensional surrogate signals. Our methodology operates directly on video data. The approach combines ideas from nonlinear time series analysis (delay embeddings) and computational topology (persistent homology), by translating the problem of finding recurrent dynamics in video data, into the problem of determining the circularity or toroidality of an associated geometric space. Through extensive testing, we show the robustness of our scores with respect to several noise models/levels; we show that our periodicity score is superior to other methods when compared to human-generated periodicity rankings; and furthermore, we show that our quasiperiodicity score clearly indicates the presence of biphonation in videos of vibrating vocal folds, which has never before been accomplished end to end quantitatively.",
"title": ""
},
{
"docid": "2a89fb135d7c53bda9b1e3b8598663a5",
"text": "We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.",
"title": ""
},
{
"docid": "850a7daa56011e6c53b5f2f3e33d4c49",
"text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.",
"title": ""
},
{
"docid": "dc54b73eb740bc1bbdf1b834a7c40127",
"text": "This paper discusses the design and evaluation of an online social network used within twenty-two established after school programs across three major urban areas in the Northeastern United States. The overall goal of this initiative is to empower students in grades K-8 to prevent obesity through healthy eating and exercise. The online social network was designed to support communication between program participants. Results from the related evaluation indicate that the online social network has potential for advancing awareness and community action around health related issues; however, greater attention is needed to professional development programs for program facilitators, and design features could better support critical thinking, social presence, and social activity.",
"title": ""
}
] | scidocsrr |
b0fe005c63685b8e6c294dd475fc55e9 | BilBOWA: Fast Bilingual Distributed Representations without Word Alignments | [
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
},
{
"docid": "8acd410ff0757423d09928093e7e8f63",
"text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast align .",
"title": ""
}
] | [
{
"docid": "2639c6ed94ad68f5e0c4579f84f52f35",
"text": "This article introduces the Swiss Army Menu (SAM), a radial menu that enables a very large number of functions on a single small tactile screen. The design of SAM relies on four different kinds of items, support for navigating in hierarchies of items and a control based on small thumb movements. SAM can thus offer a set of functions so large that it would typically have required a number of widgets that could not have been displayed in a single viewport at the same time.",
"title": ""
},
{
"docid": "feca1bd8b881f3d550f0f0912913081f",
"text": "There is an ever-increasing interest in the development of automatic medical diagnosis systems due to the advancement in computing technology and also to improve the service by medical community. The knowledge about health and disease is required for reliable and accurate medical diagnosis. Diabetic Retinopathy (DR) is one of the most common causes of blindness and it can be prevented if detected and treated early. DR has different signs and the most distinctive are microaneurysm and haemorrhage which are dark lesions and hard exudates and cotton wool spots which are bright lesions. Location and structure of blood vessels and optic disk play important role in accurate detection and classification of dark and bright lesions for early detection of DR. In this article, we propose a computer aided system for the early detection of DR. The article presents algorithms for retinal image preprocessing, blood vessel enhancement and segmentation and optic disk localization and detection which eventually lead to detection of different DR lesions using proposed hybrid fuzzy classifier. The developed methods are tested on four different publicly available databases. The presented methods are compared with recently published methods and the results show that presented methods outperform all others.",
"title": ""
},
{
"docid": "ea048488791219be809072862a061444",
"text": "Our object oriented programming approach have great ability to improve the programming behavior for modern system and software engineering but it does not give the proper interaction of real world .In real world , programming required powerful interlinking among properties and characteristics towards the various objects. Basically this approach of programming gives the better presentation of object with real world and provide the better relationship among the objects. I have explained the new concept of my neuro object oriented approach .This approach contains many new features like originty , new concept of inheritance , new concept of encapsulation , object relation with dimensions , originty relation with dimensions and time , category of NOOPA like high order thinking object and low order thinking object , differentiation model for achieving the various requirements from the user and a rotational model .",
"title": ""
},
{
"docid": "24632f6891d12600619e4bf7f9a444d1",
"text": "Product recommender systems are often deployed by e-commerce websites to improve user experience and increase sales. However, recommendation is limited by the product information hosted in those e-commerce sites and is only triggered when users are performing e-commerce activities. In this paper, we develop a novel product recommender system called METIS, a MErchanT Intelligence recommender System, which detects users' purchase intents from their microblogs in near real-time and makes product recommendation based on matching the users' demographic information extracted from their public profiles with product demographics learned from microblogs and online reviews. METIS distinguishes itself from traditional product recommender systems in the following aspects: 1) METIS was developed based on a microblogging service platform. As such, it is not limited by the information available in any specific e-commerce website. In addition, METIS is able to track users' purchase intents in near real-time and make recommendations accordingly. 2) In METIS, product recommendation is framed as a learning to rank problem. Users' characteristics extracted from their public profiles in microblogs and products' demographics learned from both online product reviews and microblogs are fed into learning to rank algorithms for product recommendation. We have evaluated our system in a large dataset crawled from Sina Weibo. The experimental results have verified the feasibility and effectiveness of our system. We have also made a demo version of our system publicly available and have implemented a live system which allows registered users to receive recommendations in real time.",
"title": ""
},
{
"docid": "9817009ca281ae09baf45b5f8bdef87d",
"text": "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks.",
"title": ""
},
{
"docid": "4290b4ba8000aeaf24cd7fb8640b4570",
"text": "Drawing on semi-structured interviews and cognitive mapping with 14 craftspeople, this paper analyzes the socio-technical arrangements of people and tools in the context of workspaces and productivity. Using actor-network theory and the concept of companionability, both of which emphasize the role of human and non-human actants in the socio-technical fabrics of everyday life, I analyze the relationships between people, productivity and technology through the following themes: embodiment, provenance, insecurity, flow and companionability. The discussion section develops these themes further through comparison with rhetoric surrounding the Internet of Things (IoT). By putting the experiences of craftspeople in conversation with IoT rhetoric, I suggest several policy interventions for understanding connectivity and inter-device operability as material, flexible and respectful of human agency.",
"title": ""
},
{
"docid": "4782e5fb1044fa5f6a54cf8130f8f6fb",
"text": "Despite significant progress in object categorization, in recent years, a number of important challenges remain, mainly, ability to learn from limited labeled data and ability to recognize object classes within large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited sized class vocabularies and typically requires separation between supervised and unsupervised classes, allowing former to inform the latter but not vice versa. We propose the notion of semi-supervised vocabulary-informed learning to alleviate the above mentioned challenges and address problems of supervised, zero-shot and open set recognition using a unified framework. Specifically, we propose a maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms, ensuring that labeled samples are projected closest to their correct prototypes, in the embedding space, than to others. We show that resulting model shows improvements in supervised, zero-shot, and large open set recognition, with up to 310K class vocabulary on AwA and ImageNet datasets.",
"title": ""
},
{
"docid": "48703205408e6ebd8f8fc357560acc41",
"text": "Two experiments found that when asked to perform the physically exerting tasks of clapping and shouting, people exhibit a sizable decrease in individual effort when performing in groups as compared to when they perform alone. This decrease, which we call social loafing, is in addition to losses due to faulty coordination of group efforts. Social loafing is discussed in terms of its experimental generality and theoretical importance. The widespread occurrence, the negative consequences for society, and some conditions that can minimize social loafing are also explored.",
"title": ""
},
{
"docid": "8b3ab5df68f71ff4be4d3902c81e35be",
"text": "When learning to program, frustrating experiences contribute to negative learning outcomes and poor retention in the field. Defining a common framework that explains why these experiences occur can lead to better interventions and learning mechanisms. To begin constructing such a framework, we asked 45 software developers about the severity of their frustration and to recall their most recent frustrating programming experience. As a result, 67% considered their frustration to be severe. Further, we distilled the reported experiences into 11 categories, which include issues with mapping behaviors to code and broken programming tools. Finally, we discuss future directions for defining our framework and designing future interventions.",
"title": ""
},
{
"docid": "05ea7a05b620c0dc0a0275f55becfbc3",
"text": "Automated story generation is the problem of automatically selecting a sequence of events, actions, or words that can be told as a story. We seek to develop a system that can generate stories by learning everything it needs to know from textual story corpora. To date, recurrent neural networks that learn language models at character, word, or sentence levels have had little success generating coherent stories. We explore the question of event representations that provide a midlevel of abstraction between words and sentences in order to retain the semantic information of the original data while minimizing event sparsity. We present a technique for preprocessing textual story data into event sequences. We then present a technique for automated story generation whereby we decompose the problem into the generation of successive events (event2event) and the generation of natural language sentences from events (event2sentence). We give empirical results comparing different event representations and their effects on event successor generation and the translation of events to natural language.",
"title": ""
},
{
"docid": "a81e4507632505b64f4839a1a23fa440",
"text": "Unity am e Deelopm nt w ith C# Alan Thorn In Pro Unity Game Development with C#, Alan Thorn, author of Learn Unity for 2D` Game Development and experienced game developer, takes you through the complete C# workflow for developing a cross-platform first person shooter in Unity. C# is the most popular programming language for experienced Unity developers, helping them get the most out of what Unity offers. If you’re already using C# with Unity and you want to take the next step in becoming an experienced, professional-level game developer, this is the book you need. Whether you are a student, an indie developer, or a seasoned game dev professional, you’ll find helpful C# examples of how to build intelligent enemies, create event systems and GUIs, develop save-game states, and lots more. You’ll understand and apply powerful programming concepts such as singleton classes, component based design, resolution independence, delegates, and event driven programming.",
"title": ""
},
{
"docid": "c6a7c67fa77d2a5341b8e01c04677058",
"text": "Human brain imaging studies have shown that greater amygdala activation to emotional relative to neutral events leads to enhanced episodic memory. Other studies have shown that fearful faces also elicit greater amygdala activation relative to neutral faces. To the extent that amygdala recruitment is sufficient to enhance recollection, these separate lines of evidence predict that recognition memory should be greater for fearful relative to neutral faces. Experiment 1 demonstrated enhanced memory for emotionally negative relative to neutral scenes; however, fearful faces were not subject to enhanced recognition across a variety of delays (15 min to 2 wk). Experiment 2 demonstrated that enhanced delayed recognition for emotional scenes was associated with increased sympathetic autonomic arousal, indexed by the galvanic skin response, relative to fearful faces. These results suggest that while amygdala activation may be necessary, it alone is insufficient to enhance episodic memory formation. It is proposed that a sufficient level of systemic arousal is required to alter memory consolidation resulting in enhanced recollection of emotional events.",
"title": ""
},
{
"docid": "0f20cfce49eaa9f447fc45b1d4c04be0",
"text": "Face recognition is a widely used technology with numerous large-scale applications, such as surveillance, social media and law enforcement. There has been tremendous progress in face recognition accuracy over the past few decades, much of which can be attributed to deep learning based approaches during the last five years. Indeed, automated face recognition systems are now believed to surpass human performance in some scenarios. Despite this progress, a crucial question still remains unanswered: given a face representation, how many identities can it resolve? In other words, what is the capacity of the face representation? A scientific basis for estimating the capacity of a given face representation will not only benefit the evaluation and comparison of different face representation methods, but will also establish an upper bound on the scalability of an automatic face recognition system. We cast the face capacity estimation problem under the information theoretic framework of capacity of a Gaussian noise channel. By explicitly accounting for two sources of representational noise: epistemic (model) uncertainty and aleatoric (data) variability, our approach is able to estimate the capacity of any given face representation. To demonstrate the efficacy of our approach, we estimate the capacity of a 128-dimensional deep neural network based face representation, FaceNet [1], and that of the classical Eigenfaces [2] representation of the same dimensionality. Our numerical experiments on unconstrained faces indicate that, (a) our capacity estimation model yields a capacity upper bound of 5.8×108 for FaceNet and 1×100 for Eigenface representation at a false acceptance rate (FAR) of 1%, (b) the capacity of the face representation reduces drastically as you lower the desired FAR (for FaceNet representation; the capacity at FAR of 0.1% and 0.001% is 2.4×106 and 7.0×102, respectively), and (c) the empirical performance of the FaceNet representation is significantly below the theoretical limit.",
"title": ""
},
{
"docid": "152122f523efc9150033dbf5798c650f",
"text": "Nowadays, computer systems are presented in almost all types of human activity and they support any kind of industry as well. Most of these systems are distributed where the communication between nodes is based on computer networks of any kind. Connectivity between system components is the key issue when designing distributed systems, especially systems of industrial informatics. The industrial area requires a wide range of computer communication means, particularly time-constrained and safety-enhancing ones. From fieldbus and industrial Ethernet technologies through wireless and internet-working solutions to standardization issues, there are many aspects of computer networks uses and many interesting research domains. Lots of them are quite sophisticated or even unique. The main goal of this paper is to present the survey of the latest trends in the communication domain of industrial distributed systems and to emphasize important questions as dependability, and standardization. Finally, the general assessment and estimation of the future development is provided. The presentation is based on the abstract description of dataflow within a system.",
"title": ""
},
{
"docid": "90c2121fc04c0c8d9c4e3d8ee7b8ecc0",
"text": "Measuring similarity between two data objects is a more challenging problem for data mining and knowledge discovery tasks. The traditional clustering algorithms have been mainly stressed on numerical data, the implicit property of which can be exploited to define distance function between the data points to define similarity measure. The problem of similarity becomes more complex when the data is categorical which do not have a natural ordering of values or can be called as non geometrical attributes. Clustering on relational data sets when majority of its attributes are of categorical types makes interesting facts. No earlier work has been done on clustering categorical attributes of relational data set types making use of the property of functional dependency as parameter to measure similarity. This paper is an extension of earlier work on clustering relational data sets where domains are unique and similarity is context based and introduces a new notion of similarity based on dependency of an attribute on other attributes prevalent in the relational data set. This paper also gives a brief overview of popular similarity measures of categorical attributes. This novel similarity measure can be used to apply on tuples and their respective values. The important property of categorical domain is that they have smaller number of attribute values. The similarity measure of relational data sets then can be applied to the smaller data sets for efficient results.",
"title": ""
},
{
"docid": "28e1c4c2622353fc87d3d8a971b9e874",
"text": "In-memory key/value store (KV-store) is a key building block for many systems like databases and large websites. Two key requirements for such systems are efficiency and availability, which demand a KV-store to continuously handle millions of requests per second. A common approach to availability is using replication, such as primary-backup (PBR), which, however, requires M+1 times memory to tolerate M failures. This renders scarce memory unable to handle useful user jobs.\n This article makes the first case of building highly available in-memory KV-store by integrating erasure coding to achieve memory efficiency, while not notably degrading performance. A main challenge is that an in-memory KV-store has much scattered metadata. A single KV put may cause excessive coding operations and parity updates due to excessive small updates to metadata. Our approach, namely Cocytus, addresses this challenge by using a hybrid scheme that leverages PBR for small-sized and scattered data (e.g., metadata and key), while only applying erasure coding to relatively large data (e.g., value). To mitigate well-known issues like lengthy recovery of erasure coding, Cocytus uses an online recovery scheme by leveraging the replicated metadata information to continuously serve KV requests. To further demonstrate the usefulness of Cocytus, we have built a transaction layer by using Cocytus as a fast and reliable storage layer to store database records and transaction logs. We have integrated the design of Cocytus to Memcached and extend it to support in-memory transactions. Evaluation using YCSB with different KV configurations shows that Cocytus incurs low overhead for latency and throughput, can tolerate node failures with fast online recovery, while saving 33% to 46% memory compared to PBR when tolerating two failures. A further evaluation using the SmallBank OLTP benchmark shows that in-memory transactions can run atop Cocytus with high throughput, low latency, and low abort rate and recover fast from consecutive failures.",
"title": ""
},
{
"docid": "fee1419f689259bc5fe7e4bfd8f0242c",
"text": "One of the challenges in computer vision is how to learn an accurate classifier for a new domain by using labeled images from an old domain under the condition that there is no available labeled images in the new domain. Domain adaptation is an outstanding solution that tackles this challenge by employing available source-labeled datasets, even with significant difference in distribution and properties. However, most prior methods only reduce the difference in subspace marginal or conditional distributions across domains while completely ignoring the source data label dependence information in a subspace. In this paper, we put forward a novel domain adaptation approach, referred to as Enhanced Subspace Distribution Matching. Specifically, it aims to jointly match the marginal and conditional distributions in a kernel principal dimensionality reduction procedure while maximizing the source label dependence in a subspace, thus raising the subspace distribution matching degree. Extensive experiments verify that it can significantly outperform several state-of-the-art methods for cross-domain image classification problems.",
"title": ""
},
{
"docid": "2d6ea84dcdae28291c5fdca01495d51f",
"text": "This paper presents how to generate questions from given passages using neural networks, where large scale QA pairs are automatically crawled and processed from Community-QA website, and used as training data. The contribution of the paper is 2-fold: First, two types of question generation approaches are proposed, one is a retrieval-based method using convolution neural network (CNN), the other is a generation-based method using recurrent neural network (RNN); Second, we show how to leverage the generated questions to improve existing question answering systems. We evaluate our question generation method for the answer sentence selection task on three benchmark datasets, including SQuAD, MS MARCO, and WikiQA. Experimental results show that, by using generated questions as an extra signal, significant QA improvement can be achieved.",
"title": ""
},
{
"docid": "0a35370e6c99e122b8051a977029d77a",
"text": "To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.",
"title": ""
},
{
"docid": "a30de4a213fe05c606fb16d204b9b170",
"text": "– The recent work on cross-country regressions can be compared to looking at “a black cat in a dark room”. Whether or not all this work has accomplished anything on the substantive economic issues is a moot question. But the search for “a black cat ” has led to some progress on the econometric front. The purpose of this paper is to comment on this progress. We discuss the problems with the use of cross-country panel data in the context of two problems: The analysis of economic growth and that of the purchasing power parity (PPP) theory. A propos de l’emploi des méthodes de panel sur des données inter-pays RÉSUMÉ. – Les travaux récents utilisant des régressions inter-pays peuvent être comparés à la recherche d'« un chat noir dans une pièce sans lumière ». La question de savoir si ces travaux ont apporté quelque chose de significatif à la connaissance économique est assez controversée. Mais la recherche du « chat noir » a conduit à quelques progrès en économétrie. L'objet de cet article est de discuter de ces progrès. Les problèmes posés par l'utilisation de panels de pays sont discutés dans deux contextes : celui de la croissance économique et de la convergence d'une part ; celui de la théorie de la parité des pouvoirs d'achat d'autre part. * G.S. MADDALA: Department of Economics, The Ohio State University. I would like to thank M. NERLOVE, P. SEVESTRE and an anonymous referee for helpful comments. Responsability for the omissions and any errors is my own. ANNALES D’ÉCONOMIE ET DE STATISTIQUE. – N° 55-56 – 1999 « The Gods love the obscure and hate the obvious » BRIHADARANYAKA UPANISHAD",
"title": ""
}
] | scidocsrr |
519cad491c492024d286bfcba25e17a6 | A Heuristics Approach for Fast Detecting Suspicious Money Laundering Cases in an Investment Bank | [
{
"docid": "e67dc912381ebbae34d16aad0d3e7d92",
"text": "In this paper, we study the problem of applying data mining to facilitate the investigation of money laundering crimes (MLCs). We have identified a new paradigm of problems --- that of automatic community generation based on uni-party data, the data in which there is no direct or explicit link information available. Consequently, we have proposed a new methodology for Link Discovery based on Correlation Analysis (LDCA). We have used MLC group model generation as an exemplary application of this problem paradigm, and have focused on this application to develop a specific method of automatic MLC group model generation based on timeline analysis using the LDCA methodology, called CORAL. A prototype of CORAL method has been implemented, and preliminary testing and evaluations based on a real MLC case data are reported. The contributions of this work are: (1) identification of the uni-party data community generation problem paradigm, (2) proposal of a new methodology LDCA to solve for problems in this paradigm, (3) formulation of the MLC group model generation problem as an example of this paradigm, (4) application of the LDCA methodology in developing a specific solution (CORAL) to the MLC group model generation problem, and (5) development, evaluation, and testing of the CORAL prototype in a real MLC case data.",
"title": ""
},
{
"docid": "0a0f4f5fc904c12cacb95e87f62005d0",
"text": "This text is intended to provide a balanced introduction to machine vision. Basic concepts are introduced with only essential mathematical elements. The details to allow implementation and use of vision algorithm in practical application are provided, and engineering aspects of techniques are emphasized. This text intentionally omits theories of machine vision that do not have sufficient practical applications at the time.",
"title": ""
}
] | [
{
"docid": "5666b1a6289f4eac05531b8ff78755cb",
"text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.",
"title": ""
},
{
"docid": "bfa178f35027a55e8fd35d1c87789808",
"text": "We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional reg ularities that are salient in the data.",
"title": ""
},
{
"docid": "56cf91a279fdcee59841cb9b8c866626",
"text": "This paper describes a new maximum-power-point-tracking method for a photovoltaic system based on the Lagrange Interpolation Formula and proposes the particle swarm optimization method. The proposed control scheme eliminates the problems of conventional methods by using only a simple numerical calculation to initialize the particles around the global maximum power point. Hence, the suggested control scheme will utilize less iterations to reach the maximum power point. Simulation study is carried out using MATLAB/SIMULINK and compared with the Perturb and Observe method, the Incremental Conductance method, and the conventional Particle Swarm Optimization algorithm. The proposed algorithm is verified with the OPAL-RT real-time simulator. The simulation results confirm that the proposed algorithm can effectively enhance the stability and the fast tracking capability under abnormal insolation conditions.",
"title": ""
},
{
"docid": "70d7c838e7b5c4318e8764edb5a70555",
"text": "This research developed and tested a model of turnover contagion in which the job embeddedness and job search behaviors of coworkers influence employees’ decisions to quit. In a sample of 45 branches of a regional bank and 1,038 departments of a national hospitality firm, multilevel analysis revealed that coworkers’ job embeddedness and job search behaviors explain variance in individual “voluntary turnover” over and above that explained by other individual and group-level predictors. Broadly speaking, these results suggest that coworkers’ job embeddedness and job search behaviors play critical roles in explaining why people quit their jobs. Implications are discussed.",
"title": ""
},
{
"docid": "9fab400cba6d9c91aba707c6952889f8",
"text": "Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction. To better understand such attacks, a characterization is needed of the properties of regions (the so-called ‘adversarial subspaces’) in which adversarial examples lie. We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID). LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors. We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks. As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets . Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.",
"title": ""
},
{
"docid": "db1d87d3e5ab39ef639d7c53a740340a",
"text": "Plants are natural producers of chemical substances, providing potential treatment of human ailments since ancient times. Some herbal chemicals in medicinal plants of traditional and modern medicine carry the risk of herb induced liver injury (HILI) with a severe or potentially lethal clinical course, and the requirement of a liver transplant. Discontinuation of herbal use is mandatory in time when HILI is first suspected as diagnosis. Although, herbal hepatotoxicity is of utmost clinical and regulatory importance, lack of a stringent causality assessment remains a major issue for patients with suspected HILI, while this problem is best overcome by the use of the hepatotoxicity specific CIOMS (Council for International Organizations of Medical Sciences) scale and the evaluation of unintentional reexposure test results. Sixty five different commonly used herbs, herbal drugs, and herbal supplements and 111 different herbs or herbal mixtures of the traditional Chinese medicine (TCM) are reported causative for liver disease, with levels of causality proof that appear rarely conclusive. Encouraging steps in the field of herbal hepatotoxicity focus on introducing analytical methods that identify cases of intrinsic hepatotoxicity caused by pyrrolizidine alkaloids, and on omics technologies, including genomics, proteomics, metabolomics, and assessing circulating micro-RNA in the serum of some patients with intrinsic hepatotoxicity. It remains to be established whether these new technologies can identify idiosyncratic HILI cases. To enhance its globalization, herbal medicine should universally be marketed as herbal drugs under strict regulatory surveillance in analogy to regulatory approved chemical drugs, proving a positive risk/benefit profile by enforcing evidence based clinical trials and excellent herbal drug quality.",
"title": ""
},
{
"docid": "57290d8e0a236205c4f0ce887ffed3ab",
"text": "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.",
"title": ""
},
{
"docid": "a6e2652aa074719ac2ca6e94d12fed03",
"text": "■ Lincoln Laboratory led the nation in the development of high-power wideband radar with a unique capability for resolving target scattering centers and producing three-dimensional images of individual targets. The Laboratory fielded the first wideband radar, called ALCOR, in 1970 at Kwajalein Atoll. Since 1970 the Laboratory has developed and fielded several other wideband radars for use in ballistic-missile-defense research and space-object identification. In parallel with these radar systems, the Laboratory has developed high-capacity, high-speed signal and data processing techniques and algorithms that permit generation of target images and derivation of other target features in near real time. It has also pioneered new ways to realize improved resolution and scatterer-feature identification in wideband radars by the development and application of advanced signal processing techniques. Through the analysis of dynamic target images and other wideband observables, we can acquire knowledge of target form, structure, materials, motion, mass distribution, identifying features, and function. Such capability is of great benefit in ballistic missile decoy discrimination and in space-object identification.",
"title": ""
},
{
"docid": "e82cd7c22668b0c9ed62b4afdf49d1f4",
"text": "This paper presents a tutorial on delta-sigma fractional-N PLLs for frequency synthesis. The presentation assumes the reader has a working knowledge of integer-N PLLs. It builds on this knowledge by introducing the additional concepts required to understand ΔΣ fractional-N PLLs. After explaining the limitations of integerN PLLs with respect to tuning resolution, the paper introduces the delta-sigma fractional-N PLL as a means of avoiding these limitations. It then presents a selfcontained explanation of the relevant aspects of deltasigma modulation, an extension of the well known integerN PLL linearized model to delta-sigma fractional-N PLLs, a design example, and techniques for wideband digital modulation of the VCO within a delta-sigma fractional-N PLL.",
"title": ""
},
{
"docid": "10d9758469a1843d426f56a379c2fecb",
"text": "A novel compact-size branch-line coupler using composite right/left-handed transmission lines is proposed in this paper. In order to obtain miniaturization, composite right/left-handed transmission lines with novel complementary split single ring resonators which are realized by loading a pair of meander-shaped-slots in the split of the ring are designed. This novel coupler occupies only 22.8% of the area of the conventional approach at 0.7 GHz. The proposed coupler can be implemented by using the standard printed-circuit-board etching processes without any implementation of lumped elements and via-holes, making it very useful for wireless communication systems. The agreement between measured and stimulated results validates the feasible configuration of the proposed coupler.",
"title": ""
},
{
"docid": "58858f0cd3561614f1742fe7b0380861",
"text": "This study focuses on how technology can encourage and ease awkwardness-free communications between people in real-world scenarios. We propose a device, The Wearable Aura, able to project a personalized animation onto one's Personal Distance zone. This projection, as an extension of one-self is reactive to user's cognitive status, aware of its environment, context and user's activity. Our user study supports the idea that an interactive projection around an individual can indeed benefit the communications with other individuals.",
"title": ""
},
{
"docid": "e5539337c36ec7a03bf327069156ea2c",
"text": "An approach is proposed to estimate the location, velocity, and acceleration of a target vehicle to avoid a possible collision. Radial distance, velocity, and acceleration are extracted from the hybrid linear frequency modulation (LFM)/frequency-shift keying (FSK) echoed signals and then processed using the Kalman filter and the trilateration process. This approach proves to converge fast with good accuracy. Two other approaches, i.e., an extended Kalman filter (EKF) and a two-stage Kalman filter (TSKF), are used as benchmarks for comparison. Several scenarios of vehicle movement are also presented to demonstrate the effectiveness of this approach.",
"title": ""
},
{
"docid": "1ad353e3d7765e1681c062c777087be7",
"text": "The cyber world provides an anonymous environment for criminals to conduct malicious activities such as spamming, sending ransom e-mails, and spreading botnet malware. Often, these activities involve textual communication between a criminal and a victim, or between criminals themselves. The forensic analysis of online textual documents for addressing the anonymity problem called authorship analysis is the focus of most cybercrime investigations. Authorship analysis is the statistical study of linguistic and computational characteristics of the written documents of individuals. This paper is the first work that presents a unified data mining solution to address authorship analysis problems based on the concept of frequent pattern-based writeprint. Extensive experiments on real-life data suggest that our proposed solution can precisely capture the writing styles of individuals. Furthermore, the writeprint is effective to identify the author of an anonymous text from ∗Corresponding author Email addresses: [email protected] (Farkhund Iqbal), [email protected] (Hamad Binsalleeh), [email protected] (Benjamin C. M. Fung), [email protected] (Mourad Debbabi) Preprint submitted to Information Sciences March 10, 2011 a group of suspects and to infer sociolinguistic characteristics of the author.",
"title": ""
},
{
"docid": "fb6494dcf01a927597ff784a3323e8c2",
"text": "Detection of defects in induction machine rotor bars for unassembled motors is required to evaluate machines considered for repair as well as fulfilling incremental quality assurance checks in the manufacture of new machines. Detection of rotor bar defects prior to motor assembly are critical in increasing repair efficiency and assuring the quality of newly manufactured machines. Many methods of detecting rotor bar defects in unassembled motors lack the sensitivity to find both major and minor defects in both cast and fabricated rotors along with additional deficiencies in quantifiable test results and arc-flash safety hazards. A process of direct magnetic field analysis can examine measurements from induced currents in a rotor separated from its stator yielding a high-resolution fingerprint of a rotor's magnetic field. This process identifies both major and minor rotor bar defects in a repeatable and quantifiable manner appropriate for numerical evaluation without arc-flash safety hazards.",
"title": ""
},
{
"docid": "d0e5ddcc0aa85ba6a3a18796c335dcd2",
"text": "A novel planar end-fire circularly polarized (CP) complementary Yagi array antenna is proposed. The antenna has a compact and complementary structure, and exhibits excellent properties (low profile, single feed, broadband, high gain, and CP radiation). It is based on a compact combination of a pair of complementary Yagi arrays with a common driven element. In the complementary structure, the vertical polarization is contributed by a microstrip patch Yagi array, while the horizontal polarization is yielded by a strip dipole Yagi array. With the combination of the two orthogonally polarized Yagi arrays, a CP antenna with high gain and wide bandwidth is obtained. With a profile of <inline-formula> <tex-math notation=\"LaTeX\">$0.05\\lambda _{\\mathrm{0}}$ </tex-math></inline-formula> (3 mm), the antenna has a gain of about 8 dBic, an impedance bandwidth (<inline-formula> <tex-math notation=\"LaTeX\">$\\vert S_{11}\\vert < -10 $ </tex-math></inline-formula> dB) of 13.09% (4.57–5.21 GHz) and a 3-dB axial-ratio bandwidth of 10.51% (4.69–5.21 GHz).",
"title": ""
},
{
"docid": "70c6da9da15ad40b4f64386b890ccf51",
"text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.",
"title": ""
},
{
"docid": "0fb45311d5e6a7348917eaa12ffeab46",
"text": "Question Answering is a task which requires building models capable of providing answers to questions expressed in human language. Full question answering involves some form of reasoning ability. We introduce a neural network architecture for this task, which is a form of Memory Network, that recognizes entities and their relations to answers through a focus attention mechanism. Our model is named Question Dependent Recurrent Entity Network and extends Recurrent Entity Network by exploiting aspects of the question during the memorization process. We validate the model on both synthetic and real datasets: the bAbI question answering dataset and the CNN & Daily News reading comprehension dataset. In our experiments, the models achieved a State-ofThe-Art in the former and competitive results in the latter.",
"title": ""
},
{
"docid": "decbbd09bcf7a36a3886d52864e9a08c",
"text": "INTRODUCTION\nBirth preparedness and complication readiness (BPCR) is a strategy to promote timely use of skilled maternal and neonatal care during childbirth. According to World Health Organization, BPCR should be a key component of focused antenatal care. Dakshina Kannada, a coastal district of Karnataka state, is categorized as a high-performing district (institutional delivery rate >25%) under the National Rural Health Mission. However, a substantial proportion of women in the district experience complications during pregnancy (58.3%), childbirth (45.7%), and postnatal (17.4%) period. There is a paucity of data on BPCR practice and the factors associated with it in the district. Exploring this would be of great use in the evidence-based fine-tuning of ongoing maternal and child health interventions.\n\n\nOBJECTIVE\nTo assess BPCR practice and the factors associated with it among the beneficiaries of two rural Primary Health Centers (PHCs) of Dakshina Kannada district, Karnataka, India.\n\n\nMETHODS\nA facility-based cross-sectional study was conducted among 217 pregnant (>28 weeks of gestation) and recently delivered (in the last 6 months) women in two randomly selected PHCs from June -September 2013. Exit interviews were conducted using a pre-designed semi-structured interview schedule. Information regarding socio-demographic profile, obstetric variables, and knowledge of key danger signs was collected. BPCR included information on five key components: identified the place of delivery, saved money to pay for expenses, mode of transport identified, identified a birth companion, and arranged a blood donor if the need arises. In this study, a woman who recalled at least two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (total six) was considered as knowledgeable on key danger signs. Optimal BPCR practice was defined as following at least three out of five key components of BPCR.\n\n\nOUTCOME MEASURES\nProportion, Odds ratio, and adjusted Odds ratio (adj OR) for optimal BPCR practice.\n\n\nRESULTS\nA total of 184 women completed the exit interview (mean age: 26.9±3.9 years). Optimal BPCR practice was observed in 79.3% (95% CI: 73.5-85.2%) of the women. Multivariate logistic regression revealed that age >26 years (adj OR = 2.97; 95%CI: 1.15-7.7), economic status of above poverty line (adj OR = 4.3; 95%CI: 1.12-16.5), awareness of minimum two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (adj OR = 3.98; 95%CI: 1.4-11.1), preference to private health sector for antenatal care/delivery (adj OR = 2.9; 95%CI: 1.1-8.01), and woman's discussion about the BPCR with her family members (adj OR = 3.4; 95%CI: 1.1-10.4) as the significant factors associated with optimal BPCR practice.\n\n\nCONCLUSION\nIn this study population, BPCR practice was better than other studies reported from India. Healthcare workers at the grassroots should be encouraged to involve women's family members while explaining BPCR and key danger signs with a special emphasis on young (<26 years) and economically poor women. Ensuring a reinforcing discussion between woman and her family members may further enhance the BPCR practice.",
"title": ""
},
{
"docid": "91eaef6e482601533656ca4786b7a023",
"text": "Budget optimization is one of the primary decision-making issues faced by advertisers in search auctions. A quality budget optimization strategy can significantly improve the effectiveness of search advertising campaigns, thus helping advertisers to succeed in the fierce competition of online marketing. This paper investigates budget optimization problems in search advertisements and proposes a novel hierarchical budget optimization framework (BOF), with consideration of the entire life cycle of advertising campaigns. Then, we formulated our BOF framework, made some mathematical analysis on some desirable properties, and presented an effective solution algorithm. Moreover, we established a simple but illustrative instantiation of our BOF framework which can help advertisers to allocate and adjust the budget of search advertising campaigns. Our BOF framework provides an open testbed environment for various strategies of budget allocation and adjustment across search advertising markets. With field reports and logs from real-world search advertising campaigns, we designed some experiments to evaluate the effectiveness of our BOF framework and instantiated strategies. Experimental results are quite promising, where our BOF framework and instantiated strategies perform better than two baseline budget strategies commonly used in practical advertising campaigns.",
"title": ""
},
{
"docid": "bba4d637cf40e81ea89e61e875d3c425",
"text": "Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images is in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method is comprised of a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also comparable global accuracy as the reference images.",
"title": ""
}
] | scidocsrr |
d85aa425e7c3ca40f0275b09af8446bf | A fuzzy spatial coherence-based approach to background/foreground separation for moving object detection | [
{
"docid": "00e8c142e7f059c10cd9eabdb78e0120",
"text": "Running average method and its modified version are two simple and fast methods for background modeling. In this paper, some weaknesses of running average method and standard background subtraction are mentioned. Then, a fuzzy approach for background modeling and background subtraction is proposed. For fuzzy background modeling, fuzzy running average is suggested. Background modeling and background subtraction algorithms are very commonly used in vehicle detection systems. To demonstrate the advantages of fuzzy running average and fuzzy background subtraction, these methods and their standard versions are compared in vehicle detection application. Experimental results show that fuzzy approach is relatively more accurate than classical approach.",
"title": ""
}
] | [
{
"docid": "4c5dd43f350955b283f1a04ddab52d41",
"text": "This thesis deals with interaction design for a class of upcoming computer technologies for human use characterized by being different from traditional desktop computers in their physical appearance and the contexts in which they are used. These are typically referred to as emerging technologies. Emerging technologies often imply interaction dissimilar from how computers are usually operated. This challenges the scope and applicability of existing knowledge about human-computer interaction design. The thesis focuses on three specific technologies: virtual reality, augmented reality and mobile computer systems. For these technologies, five themes are addressed: current focus of research, concepts, interaction styles, methods and tools. These themes inform three research questions, which guide the conducted research. The thesis consists of five published research papers and a summary. In the summary, current focus of research is addressed from the perspective of research methods and research purpose. Furthermore, the notions of human-computer interaction design and emerging technologies are discussed and two central distinctions are introduced. Firstly, interaction design is divided into two categories with focus on systems and processes respectively. Secondly, the three studied emerging technologies are viewed in relation to immersion into virtual space and mobility in physical space. These distinctions are used to relate the five paper contributions, each addressing one of the three studied technologies with focus on properties of systems or the process of creating them respectively. Three empirical sources contribute to the results. Experiments with interaction design inform the development of concepts and interaction styles suitable for virtual reality, augmented reality and mobile computer systems. Experiments with designing interaction inform understanding of how methods and tools support design processes for these technologies. Finally, a literature survey informs a review of existing research, and identifies current focus, limitations and opportunities for future research. The primary results of the thesis are: 1) Current research within human-computer interaction design for the studied emerging technologies focuses on building systems ad-hoc and evaluating them in artificial settings. This limits the generation of cumulative theoretical knowledge. 2) Interaction design for the emerging technologies studied requires the development of new suitable concepts and interaction styles. Suitable concepts describe unique properties and challenges of a technology. Suitable interaction styles respond to these challenges by exploiting the technology’s unique properties. 3) Designing interaction for the studied emerging technologies involves new use situations, a distance between development and target platforms and complex programming. Elements of methods exist, which are useful for supporting the design of interaction, but they are fragmented and do not support the process as a whole. The studied tools do not support the design process as a whole either but support aspects of interaction design by bridging the gulf between development and target platforms and providing advanced programming environments. Menneske-maskine interaktionsdesign for opkommende teknologier Virtual Reality, Augmented Reality og Mobile Computersystemer",
"title": ""
},
{
"docid": "b04ba2e942121b7a32451f0b0f690553",
"text": "Due to the growing number of vehicles on the roads worldwide, road traffic accidents are currently recognized as a major public safety problem. In this context, connected vehicles are considered as the key enabling technology to improve road safety and to foster the emergence of next generation cooperative intelligent transport systems (ITS). Through the use of wireless communication technologies, the deployment of ITS will enable vehicles to autonomously communicate with other nearby vehicles and roadside infrastructures and will open the door for a wide range of novel road safety and driver assistive applications. However, connecting wireless-enabled vehicles to external entities can make ITS applications vulnerable to various security threats, thus impacting the safety of drivers. This article reviews the current research challenges and opportunities related to the development of secure and safe ITS applications. It first explores the architecture and main characteristics of ITS systems and surveys the key enabling standards and projects. Then, various ITS security threats are analyzed and classified, along with their corresponding cryptographic countermeasures. Finally, a detailed ITS safety application case study is analyzed and evaluated in light of the European ETSI TC ITS standard. An experimental test-bed is presented, and several elliptic curve digital signature algorithms (ECDSA) are benchmarked for signing and verifying ITS safety messages. To conclude, lessons learned, open research challenges and opportunities are discussed. Electronics 2015, 4 381",
"title": ""
},
{
"docid": "9aa24f6e014ac5104c5b9ff68dc45576",
"text": "The development of social networks has led the public in general to find easy accessibility for communication with respect to rapid communication to each other at any time. Such services provide the quick transmission of information which is its positive side but its negative side needs to be kept in mind thereby misinformation can spread. Nowadays, in this era of digitalization, the validation of such information has become a real challenge, due to lack of information authentication method. In this paper, we design a framework for the rumors detection from the Facebook events data, which is based on inquiry comments. The proposed Inquiry Comments Detection Model (ICDM) identifies inquiry comments utilizing a rule-based approach which entails regular expressions to categorize the sentences as an inquiry into those starting with an intransitive verb (like is, am, was, will, would and so on) and also those sentences ending with a question mark. We set the threshold value to compare with the ratio of Inquiry to English comments and identify the rumors. We verified the proposed ICDM on labeled data, collected from snopes.com. Our experiments revealed that the proposed method achieved considerably well in comparison to the existing machine learning techniques. The proposed ICDM approach attained better results of 89% precision, 77% recall, and 82% F-measure. We are of the opinion that our experimental findings of this study will be useful for the worldwide adoption. Keywords—Social networks; rumors; inquiry comments; question identification",
"title": ""
},
{
"docid": "153f452486e2eacb9dc1cf95275dd015",
"text": "This paper presents a Fuzzy Neural Network (FNN) control system for a traveling-wave ultrasonic motor (TWUSM) driven by a dual mode modulation non-resonant driving circuit. First, the motor configuration and the proposed driving circuit of a TWUSM are introduced. To drive a TWUSM effectively, a novel driving circuit, that simultaneously employs both the driving frequency and phase modulation control scheme, is proposed to provide two-phase balance voltage for a TWUSM. Since the dynamic characteristics and motor parameters of the TWUSM are highly nonlinear and time-varying, a FNN control system is therefore investigated to achieve high-precision speed control. The proposed FNN control system incorporates neuro-fuzzy control and the driving frequency and phase modulation to solve the problem of nonlinearities and variations. The proposed control system is digitally implemented by a low-cost digital signal processor based microcontroller, hence reducing the system hardware size and cost. The effectiveness of the proposed driving circuit and control system is verified with hardware experiments under the occurrence of uncertainties. In addition, the advantages of the proposed control scheme are indicated in comparison with a conventional proportional-integral control system.",
"title": ""
},
{
"docid": "096bc66bb6f4c04109cf26d9d474421c",
"text": "A statistical analysis of full text downloads of articles in Elsevier's ScienceDirect covering all disciplines reveals large differences in download frequencies, their skewness, and their correlation with Scopus-based citation counts, between disciplines, journals, and document types. Download counts tend to be two orders of magnitude higher and less skewedly distributed than citations. A mathematical model based on the sum of two exponentials does not adequately capture monthly download counts. The degree of correlation at the article level within a journal is similar to that at the journal level in the discipline covered by that journal, suggesting that the differences between journals are to a large extent discipline-specific. Despite the fact that in all study journals download and citation counts per article positively correlate, little overlap may exist between the set of articles appearing in the top of the citation distribution and that with the most frequently downloaded ones. Usage and citation leaks, bulk downloading, differences between reader and author populations in a subject field, the type of document or its content, differences in obsolescence patterns between downloads and citations, different functions of reading and citing in the research process, all provide possible explanations of differences between download and citation distributions.",
"title": ""
},
{
"docid": "9728b73d9b5075b5b0ee878ddfc9379a",
"text": "The security research community has invested significant effort in improving the security of Android applications over the past half decade. This effort has addressed a wide range of problems and resulted in the creation of many tools for application analysis. In this article, we perform the first systematization of Android security research that analyzes applications, characterizing the work published in more than 17 top venues since 2010. We categorize each paper by the types of problems they solve, highlight areas that have received the most attention, and note whether tools were ever publicly released for each effort. Of the released tools, we then evaluate a representative sample to determine how well application developers can apply the results of our community’s efforts to improve their products. We find not only that significant work remains to be done in terms of research coverage but also that the tools suffer from significant issues ranging from lack of maintenance to the inability to produce functional output for applications with known vulnerabilities. We close by offering suggestions on how the community can more successfully move forward.",
"title": ""
},
{
"docid": "1585d7e1f1e6950949dc954c2d0bba51",
"text": "The state-of-the-art techniques for aspect-level sentiment analysis focus on feature modeling using a variety of deep neural networks (DNN). Unfortunately, their practical performance may fall short of expectations due to semantic complexity of natural languages. Motivated by the observation that linguistic hints (e.g. explicit sentiment words and shift words) can be strong indicators of sentiment, we present a joint framework, SenHint, which integrates the output of deep neural networks and the implication of linguistic hints into a coherent reasoning model based on Markov Logic Network (MLN). In SenHint, linguistic hints are used in two ways: (1) to identify easy instances, whose sentiment can be automatically determined by machine with high accuracy; (2) to capture implicit relations between aspect polarities. We also empirically evaluate the performance of SenHint on both English and Chinese benchmark datasets. Our experimental results show that SenHint can effectively improve accuracy compared with the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "bf445955186e2f69f4ef182850090ffc",
"text": "The majority of online display ads are served through real-time bidding (RTB) --- each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods.",
"title": ""
},
{
"docid": "63dcb42d456ab4b6512c47437e354f7b",
"text": "The deep learning revolution brought us an extensive array of neural network architectures that achieve state-of-the-art performance in a wide variety of Computer Vision tasks including among others classification, detection and segmentation. In parallel, we have also been observing an unprecedented demand in computational and memory requirements, rendering the efficient use of neural networks in low-powered devices virtually unattainable. Towards this end, we propose a threestage compression and acceleration pipeline that sparsifies, quantizes and entropy encodes activation maps of Convolutional Neural Networks. Sparsification increases the representational power of activation maps leading to both acceleration of inference and higher model accuracy. Inception-V3 and MobileNet-V1 can be accelerated by as much as 1.6× with an increase in accuracy of 0.38% and 0.54% on the ImageNet and CIFAR-10 datasets respectively. Quantizing and entropy coding the sparser activation maps lead to higher compression over the baseline, reducing the memory cost of the network execution. Inception-V3 and MobileNet-V1 activation maps, quantized to 16 bits, are compressed by as much as 6× with an increase in accuracy of 0.36% and 0.55% respectively.",
"title": ""
},
{
"docid": "023fa0ac94b2ea1740f1bbeb8de64734",
"text": "The establishment of an endosymbiotic relationship typically seems to be driven through complementation of the host's limited metabolic capabilities by the biochemical versatility of the endosymbiont. The most significant examples of endosymbiosis are represented by the endosymbiotic acquisition of plastids and mitochondria, introducing photosynthesis and respiration to eukaryotes. However, there are numerous other endosymbioses that evolved more recently and repeatedly across the tree of life. Recent advances in genome sequencing technology have led to a better understanding of the physiological basis of many endosymbiotic associations. This review focuses on endosymbionts in protists (unicellular eukaryotes). Selected examples illustrate the incorporation of various new biochemical functions, such as photosynthesis, nitrogen fixation and recycling, and methanogenesis, into protist hosts by prokaryotic endosymbionts. Furthermore, photosynthetic eukaryotic endosymbionts display a great diversity of modes of integration into different protist hosts. In conclusion, endosymbiosis seems to represent a general evolutionary strategy of protists to acquire novel biochemical functions and is thus an important source of genetic innovation.",
"title": ""
},
{
"docid": "2d492d66d0abee5d5dd41cf73a83e943",
"text": "Using a novel replacement gate SOI FinFET device structure, we have fabricated FinFETs with fin width (D<inf>Fin</inf>) of 4nm, fin pitch (FP) of 40nm, and gate length (L<inf>G</inf>) of 20nm. With this structure, we have achieved arrays of thousands of fins for D<inf>Fin</inf> down to 4nm with robust yield and structural integrity. We observe performance degradation, increased variability, and V<inf>T</inf> shift as D<inf>Fin</inf> is reduced. Capacitance measurements agree with quantum confinement behavior which has been predicted to pose a fundamental limit to scaling FinFETs below 10nm L<inf>G</inf>.",
"title": ""
},
{
"docid": "b3a775719d87c3837de671001c77568b",
"text": "Regularization of Deep Neural Networks (DNNs) for the sake of improving their generalization capability is important and challenging. The development in this line benefits theoretical foundation of DNNs and promotes their usability in different areas of artificial intelligence. In this paper, we investigate the role of Rademacher complexity in improving generalization of DNNs and propose a novel regularizer rooted in Local Rademacher Complexity (LRC). While Rademacher complexity is well known as a distribution-free complexity measure of function class that help boost generalization of statistical learning methods, extensive study shows that LRC, its counterpart focusing on a restricted function class, leads to sharper convergence rates and potential better generalization given finite training sample. Our LRC based regularizer is developed by estimating the complexity of the function class centered at the minimizer of the empirical loss of DNNs. Experiments on various types of network architecture demonstrate the effectiveness of LRC regularization in improving generalization. Moreover, our method features the state-of-the-art result on the CIFAR-10 dataset with network architecture found by neural architecture search.",
"title": ""
},
{
"docid": "c41038d0e3cf34e8a1dcba07a86cce9a",
"text": "Alzheimer's disease (AD) is a major neurodegenerative disease and is one of the most common cause of dementia in older adults. Among several factors, neuroinflammation is known to play a critical role in the pathogenesis of chronic neurodegenerative diseases. In particular, studies of brains affected by AD show a clear involvement of several inflammatory pathways. Furthermore, depending on the brain regions affected by the disease, the nature and the effect of inflammation can vary. Here, in order to shed more light on distinct and common features of inflammation in different brain regions affected by AD, we employed a computational approach to analyze gene expression data of six site-specific neuronal populations from AD patients. Our network based computational approach is driven by the concept that a sustained inflammatory environment could result in neurotoxicity leading to the disease. Thus, our method aims to infer intracellular signaling pathways/networks that are likely to be constantly activated or inhibited due to persistent inflammatory conditions. The computational analysis identified several inflammatory mediators, such as tumor necrosis factor alpha (TNF-a)-associated pathway, as key upstream receptors/ligands that are likely to transmit sustained inflammatory signals. Further, the analysis revealed that several inflammatory mediators were mainly region specific with few commonalities across different brain regions. Taken together, our results show that our integrative approach aids identification of inflammation-related signaling pathways that could be responsible for the onset or the progression of AD and can be applied to study other neurodegenerative diseases. Furthermore, such computational approaches can enable the translation of clinical omics data toward the development of novel therapeutic strategies for neurodegenerative diseases.",
"title": ""
},
{
"docid": "4cbec8031ea32380675b1d8dff107cab",
"text": "Quorum-sensing bacteria communicate with extracellular signal molecules called autoinducers. This process allows community-wide synchronization of gene expression. A screen for additional components of the Vibrio harveyi and Vibrio cholerae quorum-sensing circuits revealed the protein Hfq. Hfq mediates interactions between small, regulatory RNAs (sRNAs) and specific messenger RNA (mRNA) targets. These interactions typically alter the stability of the target transcripts. We show that Hfq mediates the destabilization of the mRNA encoding the quorum-sensing master regulators LuxR (V. harveyi) and HapR (V. cholerae), implicating an sRNA in the circuit. Using a bioinformatics approach to identify putative sRNAs, we identified four candidate sRNAs in V. cholerae. The simultaneous deletion of all four sRNAs is required to stabilize hapR mRNA. We propose that Hfq, together with these sRNAs, creates an ultrasensitive regulatory switch that controls the critical transition into the high cell density, quorum-sensing mode.",
"title": ""
},
{
"docid": "329487a07d4f71e30b64da5da1c6684a",
"text": "The purpose was to investigate the effect of 25 weeks heavy strength training in young elite cyclists. Nine cyclists performed endurance training and heavy strength training (ES) while seven cyclists performed endurance training only (E). ES, but not E, resulted in increases in isometric half squat performance, lean lower body mass, peak power output during Wingate test, peak aerobic power output (W(max)), power output at 4 mmol L(-1)[la(-)], mean power output during 40-min all-out trial, and earlier occurrence of peak torque during the pedal stroke (P < 0.05). ES achieved superior improvements in W(max) and mean power output during 40-min all-out trial compared with E (P < 0.05). The improvement in 40-min all-out performance was associated with the change toward achieving peak torque earlier in the pedal stroke (r = 0.66, P < 0.01). Neither of the groups displayed alterations in VO2max or cycling economy. In conclusion, heavy strength training leads to improved cycling performance in elite cyclists as evidenced by a superior effect size of ES training vs E training on relative improvements in power output at 4 mmol L(-1)[la(-)], peak power output during 30-s Wingate test, W(max), and mean power output during 40-min all-out trial.",
"title": ""
},
{
"docid": "a059fc50eb0e4cab21b04a75221b3160",
"text": "This paper presents the design of an X-band active antenna self-oscillating down-converter mixer in substrate integrated waveguide technology (SIW). Electromagnetic analysis is used to design a SIW cavity backed patch antenna with resonance at 9.9 GHz used as the receiving antenna, and subsequently harmonic balance analysis combined with optimization techniques are used to synthesize a self-oscillating mixer with oscillating frequency of 6.525 GHz. The conversion gain is optimized for the mixing product involving the second harmonic of the oscillator and the RF input signal, generating an IF frequency of 3.15 GHz to have conversion gain in at least 600 MHz bandwidth around the IF frequency. The active antenna circuit finds application in compact receiver front-end modules as well as active self-oscillating mixer arrays.",
"title": ""
},
{
"docid": "d5e5d79b8a06d4944ee0c3ddcd84ce4c",
"text": "Recent years have observed a significant progress in information retrieval and natural language processing with deep learning technologies being successfully applied into almost all of their major tasks. The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangement of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future.\n The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in details. The tasks are search, question answering (from either documents, database, or knowledge base), and image retrieval.",
"title": ""
},
{
"docid": "094906bcd076ae3207ba04755851c73a",
"text": "The paper describes our approach for SemEval-2018 Task 1: Affect Detection in Tweets. We perform experiments with manually compelled sentiment lexicons and word embeddings. We test their performance on twitter affect detection task to determine which features produce the most informative representation of a sentence. We demonstrate that general-purpose word embeddings produces more informative sentence representation than lexicon features. However, combining lexicon features with embeddings yields higher performance than embeddings alone.",
"title": ""
},
{
"docid": "598744a94cbff466c42e6788d5e23a79",
"text": "The energy consumption of DRAM is a critical concern in modern computing systems. Improvements in manufacturing process technology have allowed DRAM vendors to lower the DRAM supply voltage conservatively, which reduces some of the DRAM energy consumption. We would like to reduce the DRAM supply voltage more aggressively, to further reduce energy. Aggressive supply voltage reduction requires a thorough understanding of the effect voltage scaling has on DRAM access latency and DRAM reliability.\n In this paper, we take a comprehensive approach to understanding and exploiting the latency and reliability characteristics of modern DRAM when the supply voltage is lowered below the nominal voltage level specified by DRAM standards. Using an FPGA-based testing platform, we perform an experimental study of 124 real DDR3L (low-voltage) DRAM chips manufactured recently by three major DRAM vendors. We find that reducing the supply voltage below a certain point introduces bit errors in the data, and we comprehensively characterize the behavior of these errors. We discover that these errors can be avoided by increasing the latency of three major DRAM operations (activation, restoration, and precharge). We perform detailed DRAM circuit simulations to validate and explain our experimental findings. We also characterize the various relationships between reduced supply voltage and error locations, stored data patterns, DRAM temperature, and data retention.\n Based on our observations, we propose a new DRAM energy reduction mechanism, called Voltron. The key idea of Voltron is to use a performance model to determine by how much we can reduce the supply voltage without introducing errors and without exceeding a user-specified threshold for performance loss. Our evaluations show that Voltron reduces the average DRAM and system energy consumption by 10.5% and 7.3%, respectively, while limiting the average system performance loss to only 1.8%, for a variety of memory-intensive quad-core workloads. We also show that Voltron significantly outperforms prior dynamic voltage and frequency scaling mechanisms for DRAM.",
"title": ""
}
] | scidocsrr |
0bd9c78ab4332552b8a0deee10c732db | Programming models for sensor networks: A survey | [
{
"docid": "f3574f1e3f0ef3a5e1d20cb15b040105",
"text": "Composed of tens of thousands of tiny devices with very limited resources (\"motes\"), sensor networks are subject to novel systems problems and constraints. The large number of motes in a sensor network means that there will often be some failing nodes; networks must be easy to repopulate. Often there is no feasible method to recharge motes, so energy is a precious resource. Once deployed, a network must be reprogrammable although physically unreachable, and this reprogramming can be a significant energy cost.We present Maté, a tiny communication-centric virtual machine designed for sensor networks. Maté's high-level interface allows complex programs to be very short (under 100 bytes), reducing the energy cost of transmitting new programs. Code is broken up into small capsules of 24 instructions, which can self-replicate through the network. Packet sending and reception capsules enable the deployment of ad-hoc routing and data aggregation algorithms. Maté's concise, high-level program representation simplifies programming and allows large networks to be frequently reprogrammed in an energy-efficient manner; in addition, its safe execution environment suggests a use of virtual machines to provide the user/kernel boundary on motes that have no hardware protection mechanisms.",
"title": ""
}
] | [
{
"docid": "0f3cad05c9c267f11c4cebd634a12c59",
"text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has reached also Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamless integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use-case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed such use-case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.",
"title": ""
},
{
"docid": "49fa638e44d13695217c7f1bbb3f6ebd",
"text": "Kernel methods enable the direct usage of structured representations of textual data during language learning and inference tasks. Expressive kernels, such as Tree Kernels, achieve excellent performance in NLP. On the other side, deep neural networks have been demonstrated effective in automatically learning feature representations during training. However, their input is tensor data, i.e., they cannot manage rich structured information. In this paper, we show that expressive kernels and deep neural networks can be combined in a common framework in order to (i) explicitly model structured information and (ii) learn non-linear decision functions. We show that the input layer of a deep architecture can be pre-trained through the application of the Nyström low-rank approximation of kernel spaces. The resulting “kernelized” neural network achieves state-of-the-art accuracy in three different tasks.",
"title": ""
},
{
"docid": "4b68d3c94ef785f80eac9c4c6ca28cfe",
"text": "We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of l2-norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time.",
"title": ""
},
{
"docid": "54b43b5e3545710dfe37f55b93084e34",
"text": "Cloud computing is a model for delivering information technology services, wherein resources are retrieved from the Internet through web-based tools and applications instead of a direct connection to a server. The capability to provision and release cloud computing resources with minimal management effort or service provider interaction led to the rapid increase of the use of cloud computing. Therefore, balancing cloud computing resources to provide better performance and services to end users is important. Load balancing in cloud computing means balancing three important stages through which a request is processed. The three stages are data center selection, virtual machine scheduling, and task scheduling at a selected data center. User task scheduling plays a significant role in improving the performance of cloud services. This paper presents a review of various energy-efficient task scheduling methods in a cloud environment. A brief analysis of various scheduling parameters considered in these methods is also presented. The results show that the best power-saving percentage level can be achieved by using both DVFS and DNS.",
"title": ""
},
{
"docid": "ca8bb290339946e2d3d3e14c01023aa5",
"text": "OBJECTIVE\nTo establish a centile chart of cervical length between 18 and 32 weeks of gestation in a low-risk population of women.\n\n\nMETHODS\nA prospective longitudinal cohort study of women with a low risk, singleton pregnancy using public healthcare facilities in Cape Town, South Africa. Transvaginal measurement of cervical length was performed between 16 and 32 weeks of gestation and used to construct centile charts. The distribution of cervical length was determined for gestational ages and was used to establish estimates of longitudinal percentiles. Centile charts were constructed for nulliparous and multiparous women together and separately.\n\n\nRESULTS\nCentile estimation was based on data from 344 women. Percentiles showed progressive cervical shortening with increasing gestational age. Averaged over the entire follow-up period, mean cervical length was 1.5 mm shorter in nulliparous women compared with multiparous women (95% CI, 0.4-2.6).\n\n\nCONCLUSIONS\nEstablishment of longitudinal reference values of cervical length in a low-risk population will contribute toward a better understanding of cervical length in women at risk for preterm labor.",
"title": ""
},
{
"docid": "2d0cc17115692f1e72114c636ba74811",
"text": "A new inline coupling topology for narrowband helical resonator filters is proposed that allows to introduce selectively located transmission zeros (TZs) in the stopband. We show that a pair of helical resonators arranged in an interdigital configuration can realize a large range of in-band coupling coefficient values and also selectively position a TZ in the stopband. The proposed technique dispenses the need for auxiliary elements, so that the size, complexity, power handling and insertion loss of the filter are not compromised. A second order prototype filter with dimensions of the order of 0.05λ, power handling capability up to 90 W, measured insertion loss of 0.18 dB and improved selectivity is presented.",
"title": ""
},
{
"docid": "b5d3c7822f2ba9ca89d474dda5f180b6",
"text": "We consider a class of a nested optimization problems involving inner and outer objectives. We observe that by taking into explicit account the optimization dynamics for the inner objective it is possible to derive a general framework that unifies gradient-based hyperparameter optimization and meta-learning (or learning-to-learn). Depending on the specific setting, the variables of the outer objective take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We show that some recently proposed methods in the latter setting can be instantiated in our framework and tackled with the same gradient-based algorithms. Finally, we discuss possible design patterns for learning-to-learn and present encouraging preliminary experiments for few-shot learning.",
"title": ""
},
{
"docid": "d8752c40782d8189d454682d1d30738e",
"text": "This article reviews the empirical literature on personality, leadership, and organizational effectiveness to make 3 major points. First, leadership is a real and vastly consequential phenomenon, perhaps the single most important issue in the human sciences. Second, leadership is about the performance of teams, groups, and organizations. Good leadership promotes effective team and group performance, which in turn enhances the well-being of the incumbents; bad leadership degrades the quality of life for everyone associated with it. Third, personality predicts leadership—who we are is how we lead—and this information can be used to select future leaders or improve the performance of current incumbents.",
"title": ""
},
{
"docid": "1461157186183f11d7270d89eecd926a",
"text": "This review analyzes trends and commonalities among prominent theories of media effects. On the basis of exemplary meta-analyses of media effects and bibliometric studies of well-cited theories, we identify and discuss five features of media effects theories as well as their empirical support. Each of these features specifies the conditions under which media may produce effects on certain types of individuals. Our review ends with a discussion of media effects in newer media environments. This includes theories of computer-mediated communication, the development of which appears to share a similar pattern of reformulation from unidirectional, receiver-oriented views, to theories that recognize the transactional nature of communication. We conclude by outlining challenges and promising avenues for future research.",
"title": ""
},
{
"docid": "1a69b777e03d2d2589dd9efb9cda2a10",
"text": "Three-dimensional measurement of joint motion is a promising tool for clinical evaluation and therapeutic treatment comparisons. Although many devices exist for joints kinematics assessment, there is a need for a system that could be used in routine practice. Such a system should be accurate, ambulatory, and easy to use. The combination of gyroscopes and accelerometers (i.e., inertial measurement unit) has proven to be suitable for unrestrained measurement of orientation during a short period of time (i.e., few minutes). However, due to their inability to detect horizontal reference, inertial-based systems generally fail to measure differential orientation, a prerequisite for computing the three-dimentional knee joint angle recommended by the Internal Society of Biomechanics (ISB). A simple method based on a leg movement is proposed here to align two inertial measurement units fixed on the thigh and shank segments. Based on the combination of the former alignment and a fusion algorithm, the three-dimensional knee joint angle is measured and compared with a magnetic motion capture system during walking. The proposed system is suitable to measure the absolute knee flexion/extension and abduction/adduction angles with mean (SD) offset errors of -1 degree (1 degree ) and 0 degrees (0.6 degrees ) and mean (SD) root mean square (RMS) errors of 1.5 degrees (0.4 degrees ) and 1.7 degrees (0.5 degrees ). The system is also suitable for the relative measurement of knee internal/external rotation (mean (SD) offset error of 3.4 degrees (2.7 degrees )) with a mean (SD) RMS error of 1.6 degrees (0.5 degrees ). The method described in this paper can be easily adapted in order to measure other joint angular displacements such as elbow or ankle.",
"title": ""
},
{
"docid": "88def96b7287ce217f1abf8fb1b413a5",
"text": "Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In a such situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed the framework for unsupervised sequence generation, in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn better metric than SeqGAN’s: partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate a partial correctness as well as a coherence of a sequence, as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN frameworks. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. Specifically, whereas SeqGAN gains 0.42 improvement of NLL over MLE on synthetic data, our best model gains 3.02 improvement, and whereas SeqGAN gains 0.029 improvement of BLEU over MLE, our best model gains 0.250 improvement.",
"title": ""
},
{
"docid": "2de3078c249eb87b041a2a74b6efcfdf",
"text": "To lay the groundwork for devising, improving and implementing strategies to prevent or delay the onset of disability in the elderly, we conducted a systematic literature review of longitudinal studies published between 1985 and 1997 that reported statistical associations between individual base-line risk factors and subsequent functional status in community-living older persons. Functional status decline was defined as disability or physical function limitation. We used MEDLINE, PSYCINFO, SOCA, EMBASE, bibliographies and expert consultation to select the articles, 78 of which met the selection criteria. Risk factors were categorized into 14 domains and coded by two independent abstractors. Based on the methodological quality of the statistical analyses between risk factors and functional outcomes (e.g. control for base-line functional status, control for confounding, attrition rate), the strength of evidence was derived for each risk factor. The association of functional decline with medical findings was also analyzed. The highest strength of evidence for an increased risk in functional status decline was found for (alphabetical order) cognitive impairment, depression, disease burden (comorbidity), increased and decreased body mass index, lower extremity functional limitation, low frequency of social contacts, low level of physical activity, no alcohol use compared to moderate use, poor self-perceived health, smoking and vision impairment. The review revealed that some risk factors (e.g. nutrition, physical environment) have been neglected in past research. This review will help investigators set priorities for future research of the Disablement Process, plan health and social services for elderly persons and develop more cost-effective programs for preventing disability among them.",
"title": ""
},
{
"docid": "96af2e34acf9f1e9c0c57cc24795d0f9",
"text": "Poker games provide a useful testbed for modern Artificial Intelligence techniques. Unlike many classical game domains such as chess and checkers, poker includes elements of imperfect information, stochastic events, and one or more adversarial agents to interact with. Furthermore, in poker it is possible to win or lose by varying degrees. Therefore, it can be advantageous to adapt ones’ strategy to exploit a weak opponent. A poker agent must address these challenges, acting in uncertain environments and exploiting other agents, in order to be highly successful. Arguably, poker games more closely resemble many real world problems than games with perfect information. In this brief paper, we outline Polaris, a Texas Hold’em poker program. Polaris recently defeated top human professionals at the Man vs. Machine Poker Championship and it is currently the reigning AAAI Computer Poker Competition winner in the limit equilibrium and no-limit events.",
"title": ""
},
{
"docid": "80c9f1d983bc3ddfd73cdf2abc936600",
"text": "Jazz guitar solos are improvised melody lines played on one instrument on top of a chordal accompaniment (comping). As the improvisation happens spontaneously, a reference score is non-existent, only a lead sheet. There are situations, however, when one would like to have the original melody lines in the form of notated music, see the Real Book. The motivation is either for the purpose of practice and imitation or for musical analysis. In this work, an automatic transcriber for jazz guitar solos is developed. It resorts to a very intuitive representation of tonal music signals: the pitchgram. No instrument-specific modeling is involved, so the transcriber should be applicable to other pitched instruments as well. Neither is there the need to learn any note profiles prior to or during the transcription. Essentially, the proposed transcriber is a decision tree, thus a classifier, with a depth of 3. It has a (very) low computational complexity and can be run on-line. The decision rules can be refined or extended with no or little musical education. The transcriber’s performance is evaluated on a set of ten jazz solo excerpts and compared with a state-of-the-art transcription system for the guitar plus PYIN. We achieve an improvement of 34 % w.r.t. the reference system and 19 % w.r.t. PYIN in terms of the F-measure. Another measure of accuracy, the error score, attests that the number of erroneous pitch detections is reduced by more than 50 % w.r.t. the reference system and by 45 % w.r.t. PYIN.",
"title": ""
},
{
"docid": "c0cbea5f38a04e0d123fc51af30d08c0",
"text": "This brief presents a high-efficiency current-regulated charge pump for a white light-emitting diode driver. The charge pump incorporates no series current regulator, unlike conventional voltage charge pump circuits. Output current regulation is accomplished by the proposed pumping current control. The experimental system, with two 1-muF flying and load capacitors, delivers a regulated 20-mA current from an input supply voltage of 2.8-4.2 V. The measured variation is less than 0.6% at a pumping frequency of 200 kHz. The active area of the designed chip is 0.43 mm2 in a 0.5-mum CMOS process.",
"title": ""
},
{
"docid": "334e97a1f50b5081ac08651c1d7ed943",
"text": "Veterans of all war eras have a high rate of chronic disease, mental health disorders, and chronic multi-symptom illnesses (CMI).(1-3) Many veterans report symptoms that affect multiple biological systems as opposed to isolated disease states. Standard medical treatments often target isolated disease states such as headaches, insomnia, or back pain and at times may miss the more complex, multisystem dysfunction that has been documented in the veteran population. Research has shown that veterans have complex symptomatology involving physical, cognitive, psychological, and behavioral disturbances, such as difficult to diagnose pain patterns, irritable bowel syndrome, chronic fatigue, anxiety, depression, sleep disturbance, or neurocognitive dysfunction.(2-4) Meditation and acupuncture are each broad-spectrum treatments designed to target multiple biological systems simultaneously, and thus, may be well suited for these complex chronic illnesses. The emerging literature indicates that complementary and integrative medicine (CIM) approaches augment standard medical treatments to enhance positive outcomes for those with chronic disease, mental health disorders, and CMI.(5-12.)",
"title": ""
},
{
"docid": "a6a98d0599c1339c1f2c6a6c7525b843",
"text": "We consider a generalized version of the Steiner problem in graphs, motivated by the wire routing phase in physical VLSI design: given a connected, undirected distance graph with required classes of vertices and Steiner vertices, find a shortest connected subgraph containing at least one vertex of each required class. We show that this problem is NP-hard, even if there are no Steiner vertices and the graph is a tree. Moreover, the same complexity result holds if the input class Steiner graph additionally is embedded in a unit grid, if each vertex has degree at most three, and each class consists of no more than three vertices. For similar restricted versions, we prove MAX SNP-hardness and we show that there exists no polynomial-time approximation algorithm with a constant bound on the relative error, unless P = NP. We propose two efficient heuristics computing different approximate solutions in time 0(/E] + /VI log IV]) and in time O(c(lEl + IV1 log (VI)), respectively, where E is the set of edges in the given graph, V is the set of vertices, and c is the number of classes. We present some promising implementation results.",
"title": ""
},
{
"docid": "c9f2fd6bdcca5e55c5c895f65768e533",
"text": "We implemented live-textured geometry model creation with immediate coverage feedback visualizations in AR on the Microsoft HoloLens. A user walking and looking around a physical space can create a textured model of the space, ready for remote exploration and AR collaboration. Out of the box, a HoloLens builds a triangle mesh of the environment while scanning and being tracked in a new environment. The mesh contains vertices, triangles, and normals, but not color. We take the video stream from the color camera and use it to color a UV texture to be mapped to the mesh. Due to the limited graphics memory of the HoloLens, we use a fixed-size texture. Since the mesh generation dynamically changes in real time, we use an adaptive mapping scheme that evenly distributes every triangle of the dynamic mesh onto the fixed-size texture and adapts to new geometry without compromising existing color data. Occlusion is also considered. The user can walk around their environment and continuously fill in the texture while growing the mesh in real-time. We describe our texture generation algorithm and illustrate benefits and limitations of our system with example modeling sessions. Having first-person immediate AR feedback on the quality of modeled physical infrastructure, both in terms of mesh resolution and texture quality, helps the creation of high-quality colored meshes with this standalone wireless device and a fixed memory footprint in real-time.",
"title": ""
},
{
"docid": "160726aa34ba677292a2ae14666727e8",
"text": "Child sex tourism is an obscure industry where the tourist‟s primary purpose is to engage in a sexual experience with a child. Under international legislation, tourism with the intent of having sexual relations with a minor is in violation of the UN Convention of the Rights of a Child. The intent and act is a crime and in violation of human rights. This paper examines child sex tourism in the Philippines, a major destination country for the purposes of child prostitution. The purpose is to bring attention to the atrocities that occur under the guise of tourism. It offers a definition of the crisis, a description of the victims and perpetrators, and a discussion of the social and cultural factors that perpetuate the problem. Research articles and reports from non-government organizations, advocacy groups, governments and educators were examined. Although definitional challenges did emerge, it was found that several of the articles and reports varied little in their definitions of child sex tourism and in the descriptions of the victims and perpetrators. A number of differences emerged that identified the social and cultural factors responsible for the creation and perpetuation of the problem.",
"title": ""
}
] | scidocsrr |
451a376941d11616feea90f81cf4ea7d | Gami fi cation and Mobile Marketing Effectiveness | [
{
"docid": "84647b51dbbe755534e1521d9d9cf843",
"text": "Social Mediator is a forum exploring the ways that HCI research and principles interact---or might interact---with practices in the social media world.<br /><b><i>Joe McCarthy, Editor</i></b>",
"title": ""
}
] | [
{
"docid": "1286a39cec0d00f269c7490fb38f422b",
"text": "BACKGROUND\nAttention-deficit/hyperactivity disorder (ADHD) is one of the most common developmental disorders experienced in childhood and can persist into adulthood. The disorder has early onset and is characterized by a combination of overactive, poorly modulated behavior with marked inattention. In the long term it can impair academic performance, vocational success and social-emotional development. Meditation is increasingly used for psychological conditions and could be used as a tool for attentional training in the ADHD population.\n\n\nOBJECTIVES\nTo assess the effectiveness of meditation therapies as a treatment for ADHD.\n\n\nSEARCH STRATEGY\nOur extensive search included: CENTRAL, MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, C2-SPECTR, dissertation abstracts, LILACS, Virtual Health Library (VHL) in BIREME, Complementary and Alternative Medicine specific databases, HSTAT, Informit, JST, Thai Psychiatric databases and ISI Proceedings, plus grey literature and trial registries from inception to January 2010.\n\n\nSELECTION CRITERIA\nRandomized controlled trials that investigated the efficacy of meditation therapy in children or adults diagnosed with ADHD.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo authors extracted data independently using a pre-designed data extraction form. We contacted study authors for additional information required. We analyzed data using mean difference (MD) to calculate the treatment effect. The results are presented in tables, figures and narrative form.\n\n\nMAIN RESULTS\nFour studies, including 83 participants, are included in this review. Two studies used mantra meditation while the other two used yoga compared with drugs, relaxation training, non-specific exercises and standard treatment control. Design limitations caused high risk of bias across the studies. Only one out of four studies provided data appropriate for analysis. For this study there was no statistically significant difference between the meditation therapy group and the drug therapy group on the teacher rating ADHD scale (MD -2.72, 95% CI -8.49 to 3.05, 15 patients). Likewise, there was no statistically significant difference between the meditation therapy group and the standard therapy group on the teacher rating ADHD scale (MD -0.52, 95% CI -5.88 to 4.84, 17 patients). There was also no statistically significant difference between the meditation therapy group and the standard therapy group in the distraction test (MD -8.34, 95% CI -107.05 to 90.37, 17 patients).\n\n\nAUTHORS' CONCLUSIONS\nAs a result of the limited number of included studies, the small sample sizes and the high risk of bias, we are unable to draw any conclusions regarding the effectiveness of meditation therapy for ADHD. The adverse effects of meditation have not been reported. More trials are needed.",
"title": ""
},
{
"docid": "1b5dd28d1cb6fedeb24d7ac5195595c6",
"text": "Modulation recognition algorithms have recently received a great deal of attention in academia and industry. In addition to their application in the military field, these algorithms found civilian use in reconfigurable systems, such as cognitive radios. Most previously existing algorithms are focused on recognition of a single modulation. However, a multiple-input multiple-output two-way relaying channel (MIMO TWRC) with physical-layer network coding (PLNC) requires the recognition of the pair of sources modulations from the superposed constellation at the relay. In this paper, we propose an algorithm for recognition of sources modulations for MIMO TWRC with PLNC. The proposed algorithm is divided in two steps. The first step uses the higher order statistics based features in conjunction with genetic algorithm as a features selection method, while the second step employs AdaBoost as a classifier. Simulation results show the ability of the proposed algorithm to provide a good recognition performance at acceptable signal-to-noise values.",
"title": ""
},
{
"docid": "ca24d679117baf6f262609a5e4c1acfa",
"text": "Fake news pose serious threat to our society nowadays, particularly due to its wide spread through social networks. While human fact checkers cannot handle such tremendous information online in real time, AI technology can be leveraged to automate fake news detection. The first step leading to a sophisticated fake news detection system is the stance detection between statement and body text. In this work, we analyze the dataset from Fake News Challenge (FNC1) and explore several neural stance detection models based on the ideas of natural language inference and machine comprehension. Experiment results show that all neural network models can outperform the hand-crafted feature based system. By improving Attentive Reader with a full attention mechanism between body text and headline and implementing bilateral multi-perspective mathcing models, we are able to further bring up the performance and reach metric score close to 87%.",
"title": ""
},
{
"docid": "4d6559e3216836c475b4b069aa924a88",
"text": "HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Asteroids. From Observations to Models. D. Hestroffer, Paolo Tanga",
"title": ""
},
{
"docid": "cf6d0e1b0fd5a258fdcdb5a9fe8d2b65",
"text": "UNLABELLED\nPrevious studies have shown that resistance training with restricted venous blood flow (Kaatsu) results in significant strength gains and muscle hypertrophy. However, few studies have examined the concurrent vascular responses following restrictive venous blood flow training protocols.\n\n\nPURPOSE\nThe purpose of this study was to examine the effects of 4 wk of handgrip exercise training, with and without venous restriction, on handgrip strength and brachial artery flow-mediated dilation (BAFMD).\n\n\nMETHODS\nTwelve participants (mean +/- SD: age = 22 +/- 1 yr, men = 5, women = 7) completed 4 wk of bilateral handgrip exercise training (duration = 20 min, intensity = 60% of the maximum voluntary contraction, cadence = 15 grips per minute, frequency = three sessions per week). During each session, venous blood flow was restricted in one arm (experimental (EXP) arm) using a pneumatic cuff placed 4 cm proximal to the antecubital fossa and inflated to 80 mm Hg for the duration of each exercise session. The EXP and the control (CON) arms were randomly selected. Handgrip strength was measured using a hydraulic hand dynamometer. Brachial diameters and blood velocity profiles were assessed, using Doppler ultrasonography, before and after 5 min of forearm occlusion (200 mm Hg) before and at the end of the 4-wk exercise.\n\n\nRESULTS\nAfter exercise training, handgrip strength increased 8.32% (P = 0.05) in the CON arm and 16.17% (P = 0.05) in the EXP arm. BAFMD increased 24.19% (P = 0.0001) in the CON arm and decreased 30.36% (P = 0.0001) in the EXP arm.\n\n\nCONCLUSIONS\nThe data indicate handgrip training combined with venous restriction results in superior strength gains but reduced BAFMD compared with the nonrestricted arm.",
"title": ""
},
{
"docid": "984f7a2023a14efbbd5027abfc12a586",
"text": "Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation. In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall.",
"title": ""
},
{
"docid": "eff903cb53fc7f7e9719a2372d517ab3",
"text": "The freshwater angelfishes (Pterophyllum) are South American cichlids that have become very popular among aquarists, yet scarce information on their culture and aquarium husbandry exists. We studied Pterophyllum scalare to analyze dietary effects on fecundity, growth, and survival of eggs and larvae during 135 days. Three diets were used: A) decapsulated cysts of Artemia, B) commercial dry fish food, and C) a mix diet of the rotifer Brachionus plicatilis and the cladoceran Daphnia magna. The initial larval density was 100 organisms in each 40 L aquarium. With diet A, larvae reached a maximum weight of 3.80 g, a total length of 6.3 cm, and a height of 5.8 cm; with diet B: 2.80 g, 4.81 cm, and 4.79 cm, and with diet C: 3.00 g, 5.15 cm, and 5.10 cm, respectively. Significant differences were observed between diet A, and diet B and C, but no significantly differences were observed between diets B and C. Fecundity varied from 234 to 1,082 eggs in 20 and 50 g females, respectively. Egg survival ranged from 87.4% up to 100%, and larvae survival (80 larvae/40 L aquarium) from 50% to 66.3% using diet B and A, respectively. Live food was better for growing fish than the commercial balanced food diet. Fecundity and survival are important factors in planning a good production of angelfish.",
"title": ""
},
{
"docid": "2d73a7ab1e5a784d4755ed2fe44078db",
"text": "Over the last years, many papers have been published about how to use machine learning for classifying postings on microblogging platforms like Twitter, e.g., in order to assist users to reach tweets that interest them. Typically, the automatic classification results are then evaluated against a gold standard classification which consists of either (i) the hashtags of the tweets' authors, or (ii) manual annotations of independent human annotators. In this paper, we show that there are fundamental differences between these two kinds of gold standard classifications, i.e., human annotators are more likely to classify tweets like other human annotators than like the tweets' authors. Furthermore, we discuss how these differences may influence the evaluation of automatic classifications, like they may be achieved by Latent Dirichlet Allocation (LDA). We argue that researchers who conduct machine learning experiments for tweet classification should pay particular attention to the kind of gold standard they use. One may even argue that hashtags are not appropriate as a gold standard for tweet classification.",
"title": ""
},
{
"docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d",
"text": "Creating short summaries of documents with respect to a query has applications in for example search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.",
"title": ""
},
{
"docid": "f48639ad675b863a28bb1bc773664ab0",
"text": "The definition and phenomenological features of 'burnout' and its eventual relationship with depression and other clinical conditions are reviewed. Work is an indispensable way to make a decent and meaningful way of living, but can also be a source of stress for a variety of reasons. Feelings of inadequate control over one's work, frustrated hopes and expectations and the feeling of losing of life's meaning, seem to be independent causes of burnout, a term that describes a condition of professional exhaustion. It is not synonymous with 'job stress', 'fatigue', 'alienation' or 'depression'. Burnout is more common than generally believed and may affect every aspect of the individual's functioning, have a deleterious effect on interpersonal and family relationships and lead to a negative attitude towards life in general. Empirical research suggests that burnout and depression are separate entities, although they may share several 'qualitative' characteristics, especially in the more severe forms of burnout, and in vulnerable individuals, low levels of satisfaction derived from their everyday work. These final issues need further clarification and should be the focus of future clinical research.",
"title": ""
},
{
"docid": "92e7a7603ec6e10d5066634955386d9b",
"text": "Obfuscation-based private web search (OB-PWS) solutions allow users to search for information in the Internet while concealing their interests. The basic privacy mechanism in OB-PWS is the automatic generation of dummy queries that are sent to the search engine along with users' real requests. These dummy queries prevent the accurate inference of search profiles and provide query deniability. In this paper we propose an abstract model and an associated analysis framework to systematically evaluate the privacy protection offered by OB-PWS systems. We analyze six existing OB-PWS solutions using our framework and uncover vulnerabilities in their designs. Based on these results, we elicit a set of features that must be taken into account when analyzing the security of OB-PWS designs to avoid falling into the same pitfalls as previous proposals.",
"title": ""
},
{
"docid": "945c5c7cd9eb2046c1b164e64318e52f",
"text": "This thesis explores the design and application of artificial immune systems (AISs), problem-solving systems inspired by the human and other immune systems. AISs to date have largely been modelled on the biological adaptive immune system and have taken little inspiration from the innate immune system. The first part of this thesis examines the biological innate immune system, which controls the adaptive immune system. The importance of the innate immune system suggests that AISs should also incorporate models of the innate immune system as well as the adaptive immune system. This thesis presents and discusses a number of design principles for AISs which are modelled on both innate and adaptive immunity. These novel design principles provided a structured framework for developing AISs which incorporate innate and adaptive immune systems in general. These design principles are used to build a software system which allows such AISs to be implemented and explored. AISs, as well as being inspired by the biological immune system, are also built to solve problems. In this thesis, using the software system and design principles we have developed, we implement several novel AISs and apply them to the problem of detecting attacks on computer systems. These AISs monitor programs running on a computer and detect whether the program is behaving abnormally or being attacked. The development of these AISs shows in more detail how AISs built on the design principles can be instantiated. In particular, we show how the use of AISs which incorporate both innate and adaptive immune system mechanisms can be used to reduce the number of false alerts and improve the performance of current approaches.",
"title": ""
},
{
"docid": "b9261a0d56a6305602ff27da5ec160e8",
"text": "In psychology the Rubber Hand Illusion (RHI) is an experiment where participants get the feeling that a fake hand is becoming their own. Recently, new testing methods using an action based paradigm have induced stronger RHI. However, these experiments are facing limitations because they are difficult to implement and lack of rigorous experimental conditions. This paper proposes a low-cost open source robotic hand which is easy to manufacture and removes these limitations. This device reproduces fingers movement of the participants in real time. A glove containing sensors is worn by the participant and records fingers flexion. Then a microcontroller drives hobby servo-motors on the robotic hand to reproduce the corresponding fingers position. A connection between the robotic device and a computer can be established, enabling the experimenters to tune precisely the desired parameters using Matlab. Since this is the first time a robotic hand is developed for the RHI, a validation study has been conducted. This study confirms previous results found in the literature. This study also illustrates the fact that the robotic hand can be used to conduct innovative experiments in the RHI field. Understanding such RHI is important because it can provide guidelines for prosthetic design.",
"title": ""
},
{
"docid": "eed5c66d0302c492f2480a888678d1dc",
"text": "In 1988 Kennedy and Chua introduced the dynamical canonical nonlinear programming circuit (NPC) to solve in real time nonlinear programming problems where the objective function and the constraints are smooth (twice continuously differentiable) functions. In this paper, a generalized circuit is introduced (G-NPC), which is aimed at solving in real time a much wider class of nonsmooth nonlinear programming problems where the objective function and the constraints are assumed to satisfy only the weak condition of being regular functions. G-NPC, which derives from a natural extension of NPC, has a neural-like architecture and also features the presence of constraint neurons modeled by ideal diodes with infinite slope in the conducting region. By using the Clarke's generalized gradient of the involved functions, G-NPC is shown to obey a gradient system of differential inclusions, and its dynamical behavior and optimization capabilities, both for convex and nonconvex problems, are rigorously analyzed in the framework of nonsmooth analysis and the theory of differential inclusions. In the special important case of linear and quadratic programming problems, salient dynamical features of G-NPC, namely the presence of sliding modes , trajectory convergence in finite time, and the ability to compute the exact optimal solution of the problem being modeled, are uncovered and explained in the developed analytical framework.",
"title": ""
},
{
"docid": "b91204ac8a118fcde9a774e925f24a7e",
"text": "Document clustering has been recognized as a central problem in text data management. Such a problem becomes particularly challenging when document contents are characterized by subtopical discussions that are not necessarily relevant to each other. Existing methods for document clustering have traditionally assumed that a document is an indivisible unit for text representation and similarity computation, which may not be appropriate to handle documents with multiple topics. In this paper, we address the problem of multi-topic document clustering by leveraging the natural composition of documents in text segments that are coherent with respect to the underlying subtopics. We propose a novel document clustering framework that is designed to induce a document organization from the identification of cohesive groups of segment-based portions of the original documents. We empirically give evidence of the significance of our segment-based approach on large collections of multi-topic documents, and we compare it to conventional methods for document clustering.",
"title": ""
},
{
"docid": "c7a13f85fdeb234c09237581b7a83238",
"text": "Acoustic structures of sound in Gunnison's prairie dog alarm calls are described, showing how these acoustic structures may encode information about three different predator species (red-tailed hawk-Buteo jamaicensis; domestic dog-Canis familaris; and coyote-Canis latrans). By dividing each alarm call into 25 equal-sized partitions and using resonant frequencies within each partition, commonly occurring acoustic structures were identified as components of alarm calls for the three predators. Although most of the acoustic structures appeared in alarm calls elicited by all three predator species, the frequency of occurrence of these acoustic structures varied among the alarm calls for the different predators, suggesting that these structures encode identifying information for each of the predators. A classification analysis of alarm calls elicited by each of the three predators showed that acoustic structures could correctly classify 67% of the calls elicited by domestic dogs, 73% of the calls elicited by coyotes, and 99% of the calls elicited by red-tailed hawks. The different distributions of acoustic structures associated with alarm calls for the three predator species suggest a duality of function, one of the design elements of language listed by Hockett [in Animal Sounds and Communication, edited by W. E. Lanyon and W. N. Tavolga (American Institute of Biological Sciences, Washington, DC, 1960), pp. 392-430].",
"title": ""
},
{
"docid": "cbf10563c5eb251f765b93be554b7439",
"text": "BACKGROUND\nAlthough fine-needle aspiration (FNA) is a safe and accurate diagnostic procedure for assessing thyroid nodules, it has limitations in diagnosing follicular neoplasms due to its relatively high false-positive rate. The purpose of the present study was to evaluate the diagnostic role of core-needle biopsy (CNB) for thyroid nodules with follicular neoplasm (FN) in comparison with FNA.\n\n\nMETHODS\nA series of 107 patients (24 men, 83 women; mean age, 47.4 years) from 231 FNAs and 107 patients (29 men, 78 women; mean age, 46.3 years) from 186 CNBs with FN readings, all of whom underwent surgery, from October 2008 to December 2013 were retrospectively analyzed. The false-positive rate, unnecessary surgery rate, and malignancy rate for the FNA and CNB patients according to the final diagnosis following surgery were evaluated.\n\n\nRESULTS\nThe CNB showed a significantly lower false-positive and unnecessary surgery rate than the FNA (4.7% versus 30.8%, 3.7% versus 26.2%, p < 0.001, respectively). In the FNA group, 33 patients (30.8%) had non-neoplasms, including nodular hyperplasia (n = 32) and chronic lymphocytic thyroiditis (n = 1). In the CNB group, 5 patients (4.7%) had non-neoplasms, all of which were nodular hyperplasia. Moreover, the CNB group showed a significantly higher malignancy rate than FNA (57.9% versus 28%, p < 0.001).\n\n\nCONCLUSIONS\nCNB showed a significantly lower false-positive rate and a higher malignancy rate than FNA in diagnosing FN. Therefore, CNB could minimize unnecessary surgery and provide diagnostic confidence when managing patients with FN to perform surgery.",
"title": ""
},
{
"docid": "c09f3698f350ef749d3ef3e626c86788",
"text": "The te rm \"reactive system\" was introduced by David Harel and Amir Pnueli [HP85], and is now commonly accepted to designate permanent ly operating systems, and to distinguish them from \"trans]ormational systems\" i.e, usual programs whose role is to terminate with a result, computed from an initial da ta (e.g., a compiler). In synchronous programming, we understand it in a more restrictive way, distinguishing between \"interactive\" and \"reactive\" systems: Interactive systems permanent ly communicate with their environment, but at their own speed. They are able to synchronize with their environment, i.e., making it wait. Concurrent processes considered in operat ing systems or in data-base management , are generally interactive. Reactive systems, in our meaning, have to react to an environment which cannot wait. Typical examples appear when the environment is a physical process. The specific features of reactive systems have been pointed out many times [Ha193,BCG88,Ber89]:",
"title": ""
},
{
"docid": "7e720290d507c3370fc50782df3e90c4",
"text": "Photobacterium damselae subsp. piscicida is the causative agent of pasteurellosis in wild and farmed marine fish worldwide. Although serologically homogeneous, recent molecular advances have led to the discovery of distinct genetic clades, depending on geographical origin. Further details of the strategies for host colonisation have arisen including information on the role of capsule, susceptibility to oxidative stress, confirmation of intracellular survival in host epithelial cells, and induced apoptosis of host macrophages. This improved understanding has given rise to new ideas and advances in vaccine technologies, which are reviewed in this paper.",
"title": ""
}
] | scidocsrr |
34330a7b716612a45a2972cf020b7b37 | Towards a Reduced-Wire Interface for CMUT-Based Intravascular Ultrasound Imaging Systems | [
{
"docid": "ffadf882ac55d9cb06b77b3ce9a6ad8c",
"text": "Three experimental techniques based on automatic swept-frequency network and impedance analysers were used to measure the dielectric properties of tissue in the frequency range 10 Hz to 20 GHz. The technique used in conjunction with the impedance analyser is described. Results are given for a number of human and animal tissues, at body temperature, across the frequency range, demonstrating that good agreement was achieved between measurements using the three pieces of equipment. Moreover, the measured values fall well within the body of corresponding literature data.",
"title": ""
},
{
"docid": "170a1dba20901d88d7dc3988647e8a22",
"text": "This paper discusses two antennas monolithically integrated on-chip to be used respectively for wireless powering and UWB transmission of a tag designed and fabricated in 0.18-μm CMOS technology. A multiturn loop-dipole structure with inductive and resistive stubs is chosen for both antennas. Using these on-chip antennas, the chip employs asymmetric communication links: at downlink, the tag captures the required supply wirelessly from the received RF signal transmitted by a reader and, for the uplink, ultra-wideband impulse-radio (UWB-IR), in the 3.1-10.6-GHz band, is employed instead of backscattering to achieve extremely low power and a high data rate up to 1 Mb/s. At downlink with the on-chip power-scavenging antenna and power-management unit circuitry properly designed, 7.5-cm powering distance has been achieved, which is a huge improvement in terms of operation distance compared with other reported tags with on-chip antenna. Also, 7-cm operating distance is achieved with the implemented on-chip UWB antenna. The tag can be powered up at all the three ISM bands of 915 MHz and 2.45 GHz, with off-chip antennas, and 5.8 GHz with the integrated on-chip antenna. The tag receives its clock and the commands wirelessly through the modulated RF powering-up signal. Measurement results show that the tag can operate up to 1 Mb/s data rate with a minimum input power of -19.41 dBm at 915-MHz band, corresponding to 15.7 m of operation range with an off-chip 0-dB gain antenna. This is a great improvement compared with conventional passive RFIDs in term of data rate and operation distance. The power consumption of the chip is measured to be just 16.6 μW at the clock frequency of 10 MHz at 1.2-V supply. In addition, in this paper, for the first time, the radiation pattern of an on-chip antenna at such a frequency is measured. The measurement shows that the antenna has an almost omnidirectional radiation pattern so that the chip's performance is less direction-dependent.",
"title": ""
}
] | [
{
"docid": "01ed88c12ed9b2ca96cdf46700005493",
"text": "Using soft tissue fillers to correct postrhinoplasty deformities in the nose is appealing. Fillers are minimally invasive and can potentially help patients who are concerned with the financial expense, anesthetic risk, or downtime generally associated with a surgical intervention. A variety of filler materials are currently available and have been used for facial soft tissue augmentation. Of these, hyaluronic acid (HA) derivatives, calcium hydroxylapatite gel (CaHA), and silicone have most frequently been used for treating nasal deformities. While effective, silicone is known to cause severe granulomatous reactions in some patients and should be avoided. HA and CaHA are likely safer, but still may occasionally lead to complications such as infection, thinning of the skin envelope, and necrosis. Nasal injection technique must include sub-SMAS placement to eliminate visible or palpable nodularity. Restricting the use of fillers to the nasal dorsum and sidewalls minimizes complications because more adverse events occur after injections to the nasal tip and alae. We believe that HA and CaHA are acceptable for the treatment of postrhinoplasty deformities in carefully selected patients; however, patients who are treated must be followed closely for complications. The use of any soft tissue filler in the nose should always be approached with great caution and with a thorough consideration of a patient's individual circumstances.",
"title": ""
},
{
"docid": "36c4b2ab451c24d2d0d6abcbec491116",
"text": "A key advantage of scientific workflow systems over traditional scripting approaches is their ability to automatically record data and process dependencies introduced during workflow runs. This information is often represented through provenance graphs, which can be used by scientists to better understand, reproduce, and verify scientific results. However, while most systems record and store data and process dependencies, few provide easy-to-use and efficient approaches for accessing and querying provenance information. Instead, users formulate provenance graph queries directly against physical data representations (e.g., relational, XML, or RDF), leading to queries that are difficult to express and expensive to evaluate. We address these problems through a high-level query language tailored for expressing provenance graph queries. The language is based on a general model of provenance supporting scientific workflows that process XML data and employ update semantics. Query constructs are provided for querying both structure and lineage information. Unlike other languages that return sets of nodes as answers, our query language is closed, i.e., answers to lineage queries are sets of lineage dependencies (edges) allowing answers to be further queried. We provide a formal semantics for the language and present novel techniques for efficiently evaluating lineage queries. Experimental results on real and synthetic provenance traces demonstrate that our lineage based optimizations outperform an in-memory and standard database implementation by orders of magnitude. We also show that our strategies are feasible and can significantly reduce both provenance storage size and query execution time when compared with standard approaches.",
"title": ""
},
{
"docid": "c65f050e911abb4b58b4e4f9b9aec63b",
"text": "The abundant spatial and contextual information provided by the advanced remote sensing technology has facilitated subsequent automatic interpretation of the optical remote sensing images (RSIs). In this paper, a novel and effective geospatial object detection framework is proposed by combining the weakly supervised learning (WSL) and high-level feature learning. First, deep Boltzmann machine is adopted to infer the spatial and structural information encoded in the low-level and middle-level features to effectively describe objects in optical RSIs. Then, a novel WSL approach is presented to object detection where the training sets require only binary labels indicating whether an image contains the target object or not. Based on the learnt high-level features, it jointly integrates saliency, intraclass compactness, and interclass separability in a Bayesian framework to initialize a set of training examples from weakly labeled images and start iterative learning of the object detector. A novel evaluation criterion is also developed to detect model drift and cease the iterative learning. Comprehensive experiments on three optical RSI data sets have demonstrated the efficacy of the proposed approach in benchmarking with several state-of-the-art supervised-learning-based object detection approaches.",
"title": ""
},
{
"docid": "09d7b14190056f357aa24ca7db71a74c",
"text": "Thirty-six blast-exposed patients and twenty-nine non-blast-exposed control subjects were tested on a battery of behavioral and electrophysiological tests that have been shown to be sensitive to central auditory processing deficits. Abnormal performance among the blast-exposed patients was assessed with reference to normative values established as the mean performance on each test by the control subjects plus or minus two standard deviations. Blast-exposed patients performed abnormally at rates significantly above that which would occur by chance on three of the behavioral tests of central auditory processing: the Gaps-In-Noise, Masking Level Difference, and Staggered Spondaic Words tests. The proportion of blast-exposed patients performing abnormally on a speech-in-noise test (Quick Speech-In-Noise) was also significantly above that expected by chance. These results suggest that, for some patients, blast exposure may lead to difficulties with hearing in complex auditory environments, even when peripheral hearing sensitivity is near normal limits.",
"title": ""
},
{
"docid": "54ab143dc18413c58c20612dbae142eb",
"text": "Elderly adults may master challenging cognitive demands by additionally recruiting the cross-hemispheric counterparts of otherwise unilaterally engaged brain regions, a strategy that seems to be at odds with the notion of lateralized functions in cerebral cortex. We wondered whether bilateral activation might be a general coping strategy that is independent of age, task content and brain region. While using functional magnetic resonance imaging (fMRI), we pushed young and old subjects to their working memory (WM) capacity limits in verbal, spatial, and object domains. Then, we compared the fMRI signal reflecting WM maintenance between hemispheric counterparts of various task-relevant cerebral regions that are known to exhibit lateralization. Whereas language-related areas kept their lateralized activation pattern independent of age in difficult tasks, we observed bilaterality in dorsolateral and anterior prefrontal cortex across WM domains and age groups. In summary, the additional recruitment of cross-hemispheric counterparts seems to be an age-independent domain-general strategy to master cognitive challenges. This phenomenon is largely confined to prefrontal cortex, which is arguably less specialized and more flexible than other parts of the brain.",
"title": ""
},
{
"docid": "a77517d692ec646474a5c77b9f188ef0",
"text": "Accurate segmentation of the heart is an important step towards evaluating cardiac function. In this paper, we present a fully automated framework for segmentation of the left (LV) and right (RV) ventricular cavities and the myocardium (Myo) on short-axis cardiac MR images. We investigate various 2D and 3D convolutional neural network architectures for this task. Experiments were performed on the ACDC 2017 challenge training dataset comprising cardiac MR images of 100 patients, where manual reference segmentations were made available for end-diastolic (ED) and end-systolic (ES) frames. We find that processing the images in a slice-by-slice fashion using 2D networks is beneficial due to a relatively large slice thickness. However, the exact network architecture only plays a minor role. We report mean Dice coefficients of 0.950 (LV), 0.893 (RV), and 0.899 (Myo), respectively with an average evaluation time of 1.1 seconds per volume on a modern GPU.",
"title": ""
},
{
"docid": "6e9e687db8f202a8fa6d49c5996e7141",
"text": "Although various scalable deep learning software packages have been proposed, it remains unclear how to best leverage parallel and distributed computing infrastructure to accelerate their training and deployment. Moreover, the effectiveness of existing parallel and distributed systems varies widely based on the neural network architecture and dataset under consideration. In order to efficiently explore the space of scalable deep learning systems and quickly diagnose their effectiveness for a given problem instance, we introduce an analytical performance model called PALEO. Our key observation is that a neural network architecture carries with it a declarative specification of the computational requirements associated with its training and evaluation. By extracting these requirements from a given architecture and mapping them to a specific point within the design space of software, hardware and communication strategies, PALEO can efficiently and accurately model the expected scalability and performance of a putative deep learning system. We show that PALEO is robust to the choice of network architecture, hardware, software, communication schemes, and parallelization strategies. We further demonstrate its ability to accurately model various recently published scalability results for CNNs such as NiN, Inception and AlexNet.",
"title": ""
},
{
"docid": "c5c46fb727ff9447ebe75e3625ad375b",
"text": "Plenty of face detection and recognition methods have been proposed and got delightful results in decades. Common face recognition pipeline consists of: 1) face detection, 2) face alignment, 3) feature extraction, 4) similarity calculation, which are separated and independent from each other. The separated face analyzing stages lead the model redundant calculation and are hard for end-to-end training. In this paper, we proposed a novel end-to-end trainable convolutional network framework for face detection and recognition, in which a geometric transformation matrix was directly learned to align the faces, instead of predicting the facial landmarks. In training stage, our single CNN model is supervised only by face bounding boxes and personal identities, which are publicly available from WIDER FACE [36] dataset and CASIA-WebFace [37] dataset. Tested on Face Detection Dataset and Benchmark (FDDB) [11] dataset and Labeled Face in the Wild (LFW) [9] dataset, we have achieved 89.24% recall for face detection task and 98.63% verification accuracy for face recognition task simultaneously, which are comparable to state-of-the-art results.",
"title": ""
},
{
"docid": "dcad8812d2d5f22cd940f45ce64fb16b",
"text": "Bioinformatics software quality assurance is essential in genomic medicine. Systematic verification and validation of bioinformatics software is difficult because it is often not possible to obtain a realistic \"gold standard\" for systematic evaluation. Here we apply a technique that originates from the software testing literature, namely Metamorphic Testing (MT), to systematically test three widely used short-read sequence alignment programs. MT alleviates the problems associated with the lack of gold standard by checking that the results from multiple executions of a program satisfy a set of expected or desirable properties that can be derived from the software specification or user expectations. We tested BWA, Bowtie and Bowtie2 using simulated data and one HapMap dataset. It is interesting to observe that multiple executions of the same aligner using slightly modified input FASTQ sequence file, such as after randomly re-ordering of the reads, may affect alignment results. Furthermore, we found that the list of variant calls can be affected unless strict quality control is applied during variant calling. Thorough testing of bioinformatics software is important in delivering clinical genomic medicine. This paper demonstrates a different framework to test a program that involves checking its properties, thus greatly expanding the number and repertoire of test cases we can apply in practice.",
"title": ""
},
{
"docid": "8dcb0f20c000a30c0d3330f6ac6b373b",
"text": "Although social networking sites (SNSs) have attracted increased attention and members in recent years, there has been little research on it: particularly on how a users’ extroversion or introversion can affect their intention to pay for these services and what other factors might influence them. We therefore proposed and tested a model that measured the users’ value and satisfaction perspectives by examining the influence of these factors in an empirical survey of 288 SNS members. At the same time, the differences due to their psychological state were explored. The causal model was validated using PLSGraph 3.0; six out of eight study hypotheses were supported. The results indicated that perceived value significantly influenced the intention to pay SNS subscription fees while satisfaction did not. Moreover, extroverts thought more highly of the social value of the SNS, while introverts placed more importance on its emotional and price value. The implications of these findings are discussed. Crown Copyright 2010 Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "c851bad8a1f7c8526d144453b3f2aa4f",
"text": "Taxonomies of person characteristics are well developed, whereas taxonomies of psychologically important situation characteristics are underdeveloped. A working model of situation perception implies the existence of taxonomizable dimensions of psychologically meaningful, important, and consequential situation characteristics tied to situation cues, goal affordances, and behavior. Such dimensions are developed and demonstrated in a multi-method set of 6 studies. First, the \"Situational Eight DIAMONDS\" dimensions Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, and Sociality (Study 1) are established from the Riverside Situational Q-Sort (Sherman, Nave, & Funder, 2010, 2012, 2013; Wagerman & Funder, 2009). Second, their rater agreement (Study 2) and associations with situation cues and goal/trait affordances (Studies 3 and 4) are examined. Finally, the usefulness of these dimensions is demonstrated by examining their predictive power of behavior (Study 5), particularly vis-à-vis measures of personality and situations (Study 6). Together, we provide extensive and compelling evidence that the DIAMONDS taxonomy is useful for organizing major dimensions of situation characteristics. We discuss the DIAMONDS taxonomy in the context of previous taxonomic approaches and sketch future research directions.",
"title": ""
},
{
"docid": "b7d13c090e6d61272f45b1e3090f0341",
"text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"title": ""
},
{
"docid": "96db5cbe83ce9fbee781b8cc26d97fc8",
"text": "We present a novel method to obtain a 3D Euclidean reconstruction of both the background and moving objects in a video sequence. We assume that, multiple objects are moving rigidly on a ground plane observed by a moving camera. The video sequence is first segmented into static background and motion blobs by a homography-based motion segmentation method. Then classical \"Structure from Motion\" (SfM) techniques are applied to obtain a Euclidean reconstruction of the static background. The motion blob corresponding to each moving object is treated as if there were a static object observed by a hypothetical moving camera, called a \"virtual camera\". This virtual camera shares the same intrinsic parameters with the real camera but moves differently due to object motion. The same SfM techniques are applied to estimate the 3D shape of each moving object and the pose of the virtual camera. We show that the unknown scale of moving objects can be approximately determined by the ground plane, which is a key contribution of this paper. Another key contribution is that we prove that the 3D motion of moving objects can be solved from the virtual camera motion with a linear constraint imposed on the object translation. In our approach, a planartranslation constraint is formulated: \"the 3D instantaneous translation of moving objects must be parallel to the ground plane\". Results on real-world video sequences demonstrate the effectiveness and robustness of our approach.",
"title": ""
},
{
"docid": "4e8c0810a7869b5b4cddf27c12aea4d9",
"text": "The success of deep learning has been a catalyst to solving increasingly complex machine-learning problems, which often involve multiple data modalities. We review recent advances in deep multimodal learning and highlight the state-of the art, as well as gaps and challenges in this active research field. We first classify deep multimodal learning architectures and then discuss methods to fuse learned multimodal representations in deep-learning architectures. We highlight two areas of research–regularization strategies and methods that learn or optimize multimodal fusion structures–as exciting areas for future work.",
"title": ""
},
{
"docid": "d9b75ed31fefa68e5b43e803cafe286b",
"text": "Flavor and color of roasted peanuts are important research areas due to their significant influence on consumer preference. The aim of the present study was to explore correlations between sensory attributes of peanuts, volatile headspace compounds and color parameters. Different raw peanuts were selected to be representative of common market types, varieties, growing locations and grades used in Europe. Peanuts were roasted by a variety of processing technologies, resulting in 134 unique samples, which were analyzed for color, volatile composition and flavor profile by expert panel. Several headspace volatile compounds which positively or negatively correlated to \"roasted peanut\", \"raw bean\", \"dark roast\" and \"sweet\" attributes were identified. Results demonstrated that the correlation of CIELAB color parameters with roast related aromas, often taken for granted by the industry, is not strong when samples of different raw materials are subjected to different processing conditions.",
"title": ""
},
{
"docid": "d4cd46d9c8f0c225d4fe7e34b308e8f1",
"text": "In this paper, a 10 kW current-fed DC-DC converter using resonant push-pull topology is demonstrated and analyzed. The grounds for component dimensioning are given and the advantages and disadvantages of the resonant push-pull topology are discussed. The converter characteristics and efficiencies are demonstrated by calculations and prototype measurements.",
"title": ""
},
{
"docid": "ffc8c9a339d05c9b24d64fc52ee341ef",
"text": "This paper presents a proposed smartphone application for the unique SmartAbility Framework that supports interaction with technology for people with reduced physical ability, through focusing on the actions that they can perform independently. The Framework is a culmination of knowledge obtained through previously conducted technology feasibility trials and controlled usability evaluations involving the user community. The Framework is an example of ability-based design that focuses on the abilities of users instead of their disabilities. The paper includes a summary of Versions 1 and 2 of the Framework, including the results of a two-phased validation approach, conducted at the UK Mobility Roadshow and via a focus group of domain experts. A holistic model developed by adapting the House of Quality (HoQ) matrix of the Quality Function Deployment (QFD) approach is also described. A systematic literature review of sensor technologies built into smart devices establishes the capabilities of sensors in the Android and iOS operating systems. The review defines a set of inclusion and exclusion criteria, as well as search terms used to elicit literature from online repositories. The key contribution is the mapping of ability-based sensor technologies onto the Framework, to enable the future implementation of a smartphone application. Through the exploitation of the SmartAbility application, the Framework will increase technology amongst people with reduced physical ability and provide a promotional tool for assistive technology manufacturers.",
"title": ""
},
{
"docid": "ce636f568fc8c07b5a44190ae171c043",
"text": "Students, researchers and professional analysts lack effective tools to make personal and collective sense of problems while working in distributed teams. Central to this work is the process of sharing—and contesting—interpretations via different forms of argument. How does the “Web 2.0” paradigm challenge us to deliver useful, usable tools for online argumentation? This paper reviews the current state of the art in Web Argumentation, describes key features of the Web 2.0 orientation, and identifies some of the tensions that must be negotiated in bringing these worlds together. It then describes how these design principles are interpreted in Cohere, a web tool for social bookmarking, idea-linking, and argument visualization.",
"title": ""
},
{
"docid": "bfcef77dedf22118700737904be13c0e",
"text": "Autonomous operation is becoming an increasingly important factor for UAVs. It enables a vehicle to decide on the most appropriate action under consideration of the current vehicle and environment state. We investigated the decision-making process using the cognitive agent-based architecture Soar, which uses techniques adapted from human decision-making. Based on Soar an agent was developed which enables UAVs to autonomously make decisions and interact with a dynamic environment. One or more UAV agents were then tested in a simulation environment which has been developed using agent-based modelling. By simulating a dynamic environment, the capabilities of a UAV agent can be tested under defined conditions and additionally its behaviour can be visualised. The agent’s abilities were demonstrated using a scenario consisting of a highly dynamic border-surveillance mission with multiple autonomous UAVs. We can show that the autonomous agents are able to execute the mission successfully and can react adaptively to unforeseen events. We conclude that using a cognitive architecture is a promising approach for modelling autonomous behaviour.",
"title": ""
},
{
"docid": "4229e2db880628ea2f0922a94c30efe0",
"text": "Since the end of the 20th century, it has become clear that web browsers will play a crucial role in accessing Internet resources such as the World Wide Web. They evolved into complex software suites that are able to process a multitude of data formats. Just-In-Time (JIT) compilation was incorporated to speed up the execution of script code, but is also used besides web browsers for performance reasons. Attackers happily welcomed JIT in their own way, and until today, JIT compilers are an important target of various attacks. This includes for example JIT-Spray, JIT-based code-reuse attacks and JIT-specific flaws to circumvent mitigation techniques in order to simplify the exploitation of memory-corruption vulnerabilities. Furthermore, JIT compilers are complex and provide a large attack surface, which is visible in the steady stream of critical bugs appearing in them. In this paper, we survey and systematize the jungle of JIT compilers of major (client-side) programs, and provide a categorization of offensive techniques for abusing JIT compilation. Thereby, we present techniques used in academic as well as in non-academic works which try to break various defenses against memory-corruption vulnerabilities. Additionally, we discuss what mitigations arouse to harden JIT compilers to impede exploitation by skilled attackers wanting to abuse Just-In-Time compilers.",
"title": ""
}
] | scidocsrr |
563185b3c4f805438a9fbd53f5aeb52c | A Knowledge-Grounded Neural Conversation Model | [
{
"docid": "5cc1f15c45f57d1206e9181dc601ee4a",
"text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using an End-to-End Memory Network, MemN2N, a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset. The corpus has been converted for the occasion in order to frame the hidden state variable inference as a questionanswering task based on a sequence of utterances extracted from a dialog. We show that the proposed tracker gives encouraging results. Then, we propose to extend the DSTC-2 dataset and the definition of this dialog state task with specific reasoning capabilities like counting, list maintenance, yes-no question answering and indefinite knowledge management. Finally, we present encouraging results using our proposed MemN2N based tracking model.",
"title": ""
},
{
"docid": "9b30a07edc14ed2d1132421d8f372cd2",
"text": "Even when the role of a conversational agent is well known users persist in confronting them with Out-of-Domain input. This often results in inappropriate feedback, leaving the user unsatisfied. In this paper we explore the automatic creation/enrichment of conversational agents’ knowledge bases by taking advantage of natural language interactions present in the Web, such as movies subtitles. Thus, we introduce Filipe, a chatbot that answers users’ request by taking advantage of a corpus of turns obtained from movies subtitles (the Subtle corpus). Filipe is based on Say Something Smart, a tool responsible for indexing a corpus of turns and selecting the most appropriate answer, which we fully describe in this paper. Moreover, we show how this corpus of turns can help an existing conversational agent to answer Out-of-Domain interactions. A preliminary evaluation is also presented.",
"title": ""
},
{
"docid": "56bad8cef0c8ed0af6882dbc945298ef",
"text": "We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.",
"title": ""
}
] | [
{
"docid": "c2cb1c6fcf040fa6514c2e281b3bfacb",
"text": "We analyze the line simpli cation algorithm reported by Douglas and Peucker and show that its worst case is quadratic in n, the number of input points. Then we give a algorithm, based on path hulls, that uses the geometric structure of the problem to attain a worst-case running time proportional to n log 2 n, which is the best case of the Douglas algorithm. We give complete C code and compare the two algorithms theoretically, by operation counts, and practically, by machine timings.",
"title": ""
},
{
"docid": "87e0bec51e1188b7c8ae88c2e111b2b5",
"text": "For the last few years, the EC Commission has been reviewing its application of Article 82EC which prohibits the abuse of a dominant position on the Common Market. The review has resulted in a Communication from the EC Commission which for the first time sets out its enforcement priorities under Article 82EC. The review had been limited to the so-called ‘exclusionary’ abuses and excluded ‘exploitative’ abuses; the enforcement priorities of the EC Commission set out in the Guidance (2008) are also limited to ‘exclusionary’ abuses. This is, however, odd since the EC Commission expresses the objective of Article 82EC as enhancing consumer welfare: exploitative abuses can directly harm consumers unlike exclusionary abuses which can only indirectly harm consumers as the result of exclusion of competitors. This paper questions whether and under which circumstances exploitation can and/or should be found ‘abusive’. It argues that ‘exploitative’ abuse can and should be used as the test of anticompetitive effects on the market under an effects-based approach and thus conduct should only be found abusive if it is ‘exploitative’. Similarly, mere exploitation does not demonstrate harm to competition and without the latter, exploitation on its own should not be found abusive. December 2008",
"title": ""
},
{
"docid": "059aed9f2250d422d76f3e24fd62bed8",
"text": "Single case studies led to the discovery and phenomenological description of Gelotophobia and its definition as the pathological fear of appearing to social partners as a ridiculous object (Titze 1995, 1996, 1997). The aim of the present study is to empirically examine the core assumptions about the fear of being laughed at in a sample comprising a total of 863 clinical and non-clinical participants. Discriminant function analysis yielded that gelotophobes can be separated from other shame-based neurotics, non-shamebased neurotics, and controls. Separation was best for statements specifically describing the gelotophobic symptomatology and less potent for more general questions describing socially avoidant behaviors. Factor analysis demonstrates that while Gelotophobia is composed of a set of correlated elements in homogenous samples, overall the concept is best conceptualized as unidimensional. Predicted and actual group membership converged well in a cross-classification (approximately 69% of correctly classified cases). Overall, it can be concluded that the fear of being laughed at varies tremendously among adults and might hold a key to understanding certain forms",
"title": ""
},
{
"docid": "2eebc7477084b471f9e9872ba8751359",
"text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.",
"title": ""
},
{
"docid": "a425425658207587c079730a68599572",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstoLorg/aboutiterms.html. JSTOR's Terms and Conditions ofDse provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission. Operations Research is published by INFORMS. Please contact the publisher for further permissions regarding the use of this work. Publisher contact information may be obtained at http://www.jstor.org/jowllalslinforms.html.",
"title": ""
},
{
"docid": "37e7ee6d3cc3a999ba7f4bd6dbaa27e7",
"text": "Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multi-modal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. This review article provides a factual listing of methods and summarizes the broad scientific challenges faced in the field of medical image fusion. We characterize the medical image fusion research based on (1) the widely used image fusion methods, (2) imaging modalities, and (3) imaging of organs that are under study. This review concludes that even though there exists several open ended technological and scientific challenges, the fusion of medical images has proved to be useful for advancing the clinical reliability of using medical imaging for medical diagnostics and analysis, and is a scientific discipline that has the potential to significantly grow in the coming years.",
"title": ""
},
{
"docid": "7b7b0c7ef54255839f9ff9d09669fe11",
"text": "Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista’s news recommender system, and Docear’s research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as the second-best approach, while in another scenario the same content-based filtering approach was the worst performing approach. We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies. For instance, the optimal size of an algorithms’ user model depended on users’ age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach’s performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research.",
"title": ""
},
{
"docid": "ed6e8f1d3bcfdd7586af7ed2541bf23b",
"text": "Many real-world datasets are comprised of different representations or views which often provide information complementary to each other. To integrate information from multiple views in the unsupervised setting, multiview clustering algorithms have been developed to cluster multiple views simultaneously to derive a solution which uncovers the common latent structure shared by multiple views. In this paper, we propose a novel NMFbased multi-view clustering algorithm by searching for a factorization that gives compatible clustering solutions across multiple views. The key idea is to formulate a joint matrix factorization process with the constraint that pushes clustering solution of each view towards a common consensus instead of fixing it directly. The main challenge is how to keep clustering solutions across different views meaningful and comparable. To tackle this challenge, we design a novel and effective normalization strategy inspired by the connection between NMF and PLSA. Experimental results on synthetic and several real datasets demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "edf52710738647f7ebd4c017ddf56c2c",
"text": "Tasks like search-and-rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed in order to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human-robot interfaces. This paper describes our 14-robot team, designed to perform urban reconnaissance missions, that won the MAGIC 2010 competition. This paper describes a variety of autonomous systems which require minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, essential for autonomous planning and for giving humans situational awareness, required the development of fast loop-closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. In this paper, we will describe technical contributions throughout our system that played a significant role in the performance of our system. We will also present results from our system both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain.",
"title": ""
},
{
"docid": "758eb7a0429ee116f7de7d53e19b3e02",
"text": "With the rapid development of the Internet, many types of websites have been developed. This variety of websites makes it necessary to adopt systemized evaluation criteria with a strong theoretical basis. This study proposes a set of evaluation criteria derived from an architectural perspective which has been used for over a 1000 years in the evaluation of buildings. The six evaluation criteria are internal reliability and external security for structural robustness, useful content and usable navigation for functional utility, and system interface and communication interface for aesthetic appeal. The impacts of the six criteria on user satisfaction and loyalty have been investigated through a large-scale survey. The study results indicate that the six criteria have different impacts on user satisfaction for different types of websites, which can be classified along two dimensions: users’ goals and users’ activity levels.",
"title": ""
},
{
"docid": "81a45cb4ca02c38839a81ad567eb1491",
"text": "Big data is often mined using clustering algorithms. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is a popular spatial clustering algorithm. However, it is computationally expensive and thus for clustering big data, parallel processing is required. The two prevalent paradigms for parallel processing are High-Performance Computing (HPC) based on Message Passing Interface (MPI) or Open Multi-Processing (OpenMP) and the newer big data frameworks such as Apache Spark or Hadoop. This report surveys for these two different paradigms publicly available implementations that aim at parallelizing DBSCAN and compares their performance. As a result, it is found that the big data implementations are not yet mature and in particular for skewed data, the implementation’s decomposition of the input data into parallel tasks has a huge influence on the performance in terms of running time.",
"title": ""
},
{
"docid": "69ab1b5f07c307397253f6619681a53f",
"text": "BACKGROUND\nIncreasing evidence demonstrates that motor-skill memories improve across a night of sleep, and that non-rapid eye movement (NREM) sleep commonly plays a role in orchestrating these consolidation enhancements. Here we show the benefit of a daytime nap on motor memory consolidation and its relationship not simply with global sleep-stage measures, but unique characteristics of sleep spindles at regionally specific locations; mapping to the corresponding memory representation.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nTwo groups of subjects trained on a motor-skill task using their left hand - a paradigm known to result in overnight plastic changes in the contralateral, right motor cortex. Both groups trained in the morning and were tested 8 hr later, with one group obtaining a 60-90 minute intervening midday nap, while the other group remained awake. At testing, subjects that did not nap showed no significant performance improvement, yet those that did nap expressed a highly significant consolidation enhancement. Within the nap group, the amount of offline improvement showed a significant correlation with the global measure of stage-2 NREM sleep. However, topographical sleep spindle analysis revealed more precise correlations. Specifically, when spindle activity at the central electrode of the non-learning hemisphere (left) was subtracted from that in the learning hemisphere (right), representing the homeostatic difference following learning, strong positive relationships with offline memory improvement emerged-correlations that were not evident for either hemisphere alone.\n\n\nCONCLUSIONS/SIGNIFICANCE\nThese results demonstrate that motor memories are dynamically facilitated across daytime naps, enhancements that are uniquely associated with electrophysiological events expressed at local, anatomically discrete locations of the brain.",
"title": ""
},
{
"docid": "cc33bcc919e5878fa17fd17b63bb8a34",
"text": "This paper deals with mean-field Eshelby-based homogenization techniques for multi-phase composites and focuses on three subjects which in our opinion deserved more attention than they did in the existing literature. Firstly, for two-phase composites, that is when in a given representative volume element all the inclusions have the same material properties, aspect ratio and orientation, an interpolative double inclusion model gives perhaps the best predictions to date for a wide range of volume fractions and stiffness contrasts. Secondly, for multi-phase composites (including two-phase composites with non-aligned inclusions as a special case), direct homogenization schemes might lead to a non-symmetric overall stiffness tensor, while a two-step homogenization procedure gives physically acceptable results. Thirdly, a general procedure allows to formulate the thermo-elastic version of any homogenization model defined by its isothermal strain concentration tensors. For all three subjects, the theory is presented in detail and validated against experimental data or finite element results for numerous composite systems. 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "224a2739ade3dd64e474f5c516db89a7",
"text": "Big data storage and processing are considered as one of the main applications for cloud computing systems. Furthermore, the development of the Internet of Things (IoT) paradigm has advanced the research on Machine to Machine (M2M) communications and enabled novel tele-monitoring architectures for E-Health applications. However, there is a need for converging current decentralized cloud systems, general software for processing big data and IoT systems. The purpose of this paper is to analyze existing components and methods of securely integrating big data processing with cloud M2M systems based on Remote Telemetry Units (RTUs) and to propose a converged E-Health architecture built on Exalead CloudView, a search based application. Finally, we discuss the main findings of the proposed implementation and future directions.",
"title": ""
},
{
"docid": "c3eaaa0812eb9ab7e5402339733daa28",
"text": "BACKGROUND\nHypovitaminosis D and a low calcium intake contribute to increased parathyroid function in elderly persons. Calcium and vitamin D supplements reduce this secondary hyperparathyroidism, but whether such supplements reduce the risk of hip fractures among elderly people is not known.\n\n\nMETHODS\nWe studied the effects of supplementation with vitamin D3 (cholecalciferol) and calcium on the frequency of hip fractures and other nonvertebral fractures, identified radiologically, in 3270 healthy ambulatory women (mean [+/- SD] age, 84 +/- 6 years). Each day for 18 months, 1634 women received tricalcium phosphate (containing 1.2 g of elemental calcium) and 20 micrograms (800 IU) of vitamin D3, and 1636 women received a double placebo. We measured serial serum parathyroid hormone and 25-hydroxyvitamin D (25(OH)D) concentrations in 142 women and determined the femoral bone mineral density at base line and after 18 months in 56 women.\n\n\nRESULTS\nAmong the women who completed the 18-month study, the number of hip fractures was 43 percent lower (P = 0.043) and the total number of nonvertebral fractures was 32 percent lower (P = 0.015) among the women treated with vitamin D3 and calcium than among those who received placebo. The results of analyses according to active treatment and according to intention to treat were similar. In the vitamin D3-calcium group, the mean serum parathyroid hormone concentration had decreased by 44 percent from the base-line value at 18 months (P < 0.001) and the serum 25(OH)D concentration had increased by 162 percent over the base-line value (P < 0.001). The bone density of the proximal femur increased 2.7 percent in the vitamin D3-calcium group and decreased 4.6 percent in the placebo group (P < 0.001).\n\n\nCONCLUSIONS\nSupplementation with vitamin D3 and calcium reduces the risk of hip fractures and other nonvertebral fractures among elderly women.",
"title": ""
},
{
"docid": "f62b7597dd84e4bb18a32fc1e5713394",
"text": "Automated personality prediction from social media is gaining increasing attention in natural language processing and social sciences communities. However, due to high labeling costs and privacy issues, the few publicly available datasets are of limited size and low topic diversity. We address this problem by introducing a large-scale dataset derived from Reddit, a source so far overlooked for personality prediction. The dataset is labeled with Myers-Briggs Type Indicators (MBTI) and comes with a rich set of features for more than 9k users. We carry out a preliminary feature analysis, revealing marked differences between the MBTI dimensions and poles. Furthermore, we use the dataset to train and evaluate benchmark personality prediction models, achieving macro F1-scores between 67% and 82% on the individual dimensions and 82% accuracy for exact or one-off accurate type prediction. These results are encouraging and comparable with the reliability of standardized tests.",
"title": ""
},
{
"docid": "bd88c04b8862f699e122e248ef416963",
"text": "Optic ataxia is a high-order deficit in reaching to visual goals that occurs with posterior parietal cortex (PPC) lesions. It is a component of Balint's syndrome that also includes attentional and gaze disorders. Aspects of optic ataxia are misreaching in the contralesional visual field, difficulty preshaping the hand for grasping, and an inability to correct reaches online. Recent research in nonhuman primates (NHPs) suggests that many aspects of Balint's syndrome and optic ataxia are a result of damage to specific functional modules for reaching, saccades, grasp, attention, and state estimation. The deficits from large lesions in humans are probably composite effects from damage to combinations of these functional modules. Interactions between these modules, either within posterior parietal cortex or downstream within frontal cortex, may account for more complex behaviors such as hand-eye coordination and reach-to-grasp.",
"title": ""
},
{
"docid": "8e770bdbddbf28c1a04da0f9aad4cf16",
"text": "This paper presents a novel switch-mode power amplifier based on a multicell multilevel circuit topology. The total output voltage of the system is formed by series connection of several switching cells having a low dc-link voltage. Therefore, the cells can be realized using modern low-voltage high-current power MOSFET devices and the dc link can easily be buffered by rechargeable batteries or “super” capacitors to achieve very high amplifier peak output power levels (“flying-battery” concept). The cells are operated in a phase-shifted interleaved pulsewidth-modulation mode, which, in connection with the low partial voltage of each cell, reduces the filtering effort at the output of the total amplifier to a large extent and, consequently, improves the dynamic system behavior. The paper describes the operating principle of the system, analyzes the fundamental relationships being relevant for the circuit design, and gives guidelines for the dimensioning of the control circuit. Furthermore, simulation results as well as results of measurements taken from a laboratory setup are presented.",
"title": ""
},
{
"docid": "b8a5d42e3ca09ac236414cd0081f5d48",
"text": "Convolution Neural Networks on Graphs are important generalization and extension of classical CNNs. While previous works generally assumed that the graph structures of samples are regular with unified dimensions, in many applications, they are highly diverse or even not well defined. Under some circumstances, e.g. chemical molecular data, clustering or coarsening for simplifying the graphs is hard to be justified chemically. In this paper, we propose a more general and flexible graph convolution network (EGCN) fed by batch of arbitrarily shaped data together with their evolving graph Laplacians trained in supervised fashion. Extensive experiments have been conducted to demonstrate the superior performance in terms of both the acceleration of parameter fitting and the significantly improved prediction accuracy on multiple graph-structured datasets.",
"title": ""
}
] | scidocsrr |
a4e6969885378a9a417b58a5ddf66d67 | Circularly Polarized Substrate-Integrated Waveguide Tapered Slot Antenna for Millimeter-Wave Applications | [
{
"docid": "e50355a29533bc7a91468aae1053873d",
"text": "A substrate integrated waveguide (SIW)-fed circularly polarized (CP) antenna array with a broad bandwidth of axial ratio (AR) is presented for 60-GHz wireless personal area networks (WPAN) applications. The widened AR bandwidth of an antenna element is achieved by positioning a slot-coupled rotated strip above a slot cut onto the broadwall of an SIW. A 4 × 4 antenna array is designed and fabricated using low temperature cofired ceramic (LTCC) technology. A metal-topped via fence is introduced around the strip to reduce the mutual coupling between the elements of the array. The measured results show that the AR bandwidth is more than 7 GHz. A stable boresight gain is greater than 12.5 dBic across the desired bandwidth of 57-64 GHz.",
"title": ""
},
{
"docid": "e43ede0fe674fe92fbfa2f76165cf034",
"text": "In this communication, a compact circularly polarized (CP) substrate integrated waveguide (SIW) horn antenna is proposed and investigated. Through etching a sloping slot on the common broad wall of two SIWs, mode coupling is generated between the top and down SIWs, and thus, a new field component as TE01 mode is produced. During the coupling process along the sloping slot, the difference in guide wavelengths of the two orthogonal modes also brings a phase shift between the two modes, which provides a possibility for radiating the CP wave. Moreover, the two different ports will generate the electric field components of TE01 mode with the opposite direction, which indicates the compact SIW horn antenna with a dual CP property can be realized as well. Measured results indicate that the proposed antenna operates with a wide 3-dB axial ratio bandwidth of 11.8% ranging from 17.6 to 19.8 GHz. The measured results are in good accordance with the simulated ones.",
"title": ""
}
] | [
{
"docid": "37f157cdcd27c1647548356a5194f2bc",
"text": "Purpose – The aim of this paper is to propose a novel evaluation framework to explore the “root causes” that hinder the acceptance of using internal cloud services in a university. Design/methodology/approach – The proposed evaluation framework incorporates the duo-theme DEMATEL (decision making trial and evaluation laboratory) with TAM (technology acceptance model). The operational procedures were proposed and tested on a university during the post-implementation phase after introducing the internal cloud services. Findings – According to the results, clear understanding and operational ease under the theme perceived ease of use (PEOU) are more imperative; whereas improved usefulness and productivity under the theme perceived usefulness (PU) are more urgent to foster the usage of internal clouds in the case university. Research limitations/implications – Based on the findings, some intervention activities were suggested to enhance the level of users’ acceptance of internal cloud solutions in the case university. However, the results should not be generalized to apply to other educational establishments. Practical implications – To reduce the resistance from using internal clouds, some necessary intervention activities such as developing attractive training programs, creating interesting workshops, and rewriting user friendly manual or handbook are recommended. Originality/value – The novel two-theme DEMATEL has greatly contributed to the conventional one-theme DEMATEL theory. The proposed two-theme DEMATEL procedures were the first attempt to evaluate the acceptance of using internal clouds in university. The results have provided manifest root-causes under two distinct themes, which help derive effectual intervention activities to foster the acceptance of usage of internal clouds in a university.",
"title": ""
},
{
"docid": "55f11df001ffad95e07cd20b3b27406d",
"text": "CNNs have proven to be a very successful yet computationally expensive technique which made them slow to be adopted in mobile and embedded systems. There is a number of possible optimizations: minimizing the memory footprint, using lower precision and approximate computation, reducing computation cost of convolutions with FFTs. These have been explored recently and were shown to work. This project take ideas of using FFTs further and develops an alternative way to computing CNN – purely in frequency domain. As a side result it develops intuition about nonlinear elements: why do they work and how new types can be created.",
"title": ""
},
{
"docid": "865306ad6f5288cf62a4082769e8068a",
"text": "The rapid growth of the Internet has brought with it an exponential increase in the type and frequency of cyber attacks. Many well-known cybersecurity solutions are in place to counteract these attacks. However, the generation of Big Data over computer networks is rapidly rendering these traditional solutions obsolete. To cater for this problem, corporate research is now focusing on Security Analytics, i.e., the application of Big Data Analytics techniques to cybersecurity. Analytics can assist network managers particularly in the monitoring and surveillance of real-time network streams and real-time detection of both malicious and suspicious (outlying) patterns. Such a behavior is envisioned to encompass and enhance all traditional security techniques. This paper presents a comprehensive survey on the state of the art of Security Analytics, i.e., its description, technology, trends, and tools. It hence aims to convince the reader of the imminent application of analytics as an unparalleled cybersecurity solution in the near future.",
"title": ""
},
{
"docid": "ac08d20a1430ee10c7ff761cae9d9ada",
"text": "OBJECTIVES\nTo evaluate the clinical response at 12 month in a cohort of patients with rheumatoid arthritis treated with Etanar (rhTNFR:Fc), and to register the occurrence of adverse effects.\n\n\nMETHODS\nThis is a multicentre observational cohort study. It included patients over 18 years of age with an active rheumatoid arthritis diagnosis for which the treating physician had begun a treatment scheme of 25 mg of subcutaneous etanercept (Etanar ® 25 mg: biologic type rhTNFR:Fc), twice per week. Follow-up was done during 12 months, with assessments at weeks 12, 24, 36 and 48. Evaluated outcomes included tender joint count, swollen joint count, ACR20, ACR50, ACR70, HAQ and DAS28.\n\n\nRESULTS\nOne-hundred and five (105) subjects were entered into the cohort. The median of tender and swollen joint count, ranged from 19 and 14, respectively at onset to 1 at the 12th month. By month 12, 90.5% of the subjects reached ACR20, 86% ACR50, and 65% ACR70. The median of DAS28 went from 4.7 to 2, and the median HAQ went from 1.3 to 0.2. The rate of adverse effects was 14 for every 100 persons per year. No serious adverse effects were reported. The most frequent were pruritus (5 cases), and rhinitis (3 cases).\n\n\nCONCLUSIONS\nAfter a year of following up a patient cohort treated with etanercept 25 mg twice per week, significant clinical results were observed, resulting in adequate disease control in a high percentage of patients with an adequate level of safety.",
"title": ""
},
{
"docid": "85b169515b4e4b86117abcdd83f002ea",
"text": "While Bitcoin (Peer-to-Peer Electronic Cash) [Nak]solved the double spend problem and provided work withtimestamps on a public ledger, it has not to date extendedthe functionality of a blockchain beyond a transparent andpublic payment system. Satoshi Nakamoto's original referenceclient had a decentralized marketplace service which was latertaken out due to a lack of resources [Deva]. We continued withNakamoto's vision by creating a set of commercial-grade ser-vices supporting a wide variety of business use cases, includinga fully developed blockchain-based decentralized marketplace,secure data storage and transfer, and unique user aliases thatlink the owner to all services controlled by that alias.",
"title": ""
},
{
"docid": "6a64d064220681e83751938ce0190151",
"text": "Forensic dentistry can be defined in many ways. One of the more elegant definitions is simply that forensic dentistry represents the overlap between the dental and the legal professions. This two-part series presents the field of forensic dentistry by outlining two of the major aspects of the profession: human identification and bite marks. This first paper examines the use of the human dentition and surrounding structures to enable the identification of found human remains. Conventional and novel techniques are presented.",
"title": ""
},
{
"docid": "2f20f587bb46f7133900fd8c22cea3ab",
"text": "Recent years have witnessed the significant advance in fine-grained visual categorization, which targets to classify the objects belonging to the same species. To capture enough subtle visual differences and build discriminative visual description, most of the existing methods heavily rely on the artificial part annotations, which are expensive to collect in real applications. Motivated to conquer this issue, this paper proposes a multi-level coarse-to-fine object description. This novel description only requires the original image as input, but could automatically generate visual descriptions discriminative enough for fine-grained visual categorization. This description is extracted from five sources representing coarse-to-fine visual clues: 1) original image is used as the source of global visual clue; 2) object bounding boxes are generated using convolutional neural network (CNN); 3) with the generated bounding box, foreground is segmented using the proposed k nearest neighbour-based co-segmentation algorithm; and 4) two types of part segmentations are generated by dividing the foreground with an unsupervised part learning strategy. The final description is generated by feeding these sources into CNN models and concatenating their outputs. Experiments on two public benchmark data sets show the impressive performance of this coarse-to-fine description, i.e., classification accuracy achieves 82.5% on CUB-200-2011, and 86.9% on fine-grained visual categorization-Aircraft, respectively, which outperform many recent works.",
"title": ""
},
{
"docid": "c75388c19397bf1e743970cb32649b17",
"text": "In recent years, there has been a substantial amount of work on large-scale data analytics using Hadoop-based platforms running on large clusters of commodity machines. A lessexplored topic is how those data, dominated by application logs, are collected and structured to begin with. In this paper, we present Twitter’s production logging infrastructure and its evolution from application-specific logging to a unified “client events” log format, where messages are captured in common, well-formatted, flexible Thrift messages. Since most analytics tasks consider the user session as the basic unit of analysis, we pre-materialize “session sequences”, which are compact summaries that can answer a large class of common queries quickly. The development of this infrastructure has streamlined log collection and data analysis, thereby improving our ability to rapidly experiment and iterate on various aspects of the service.",
"title": ""
},
{
"docid": "a5e4199c16668f66656474f4eeb5d663",
"text": "Advances in information technology, particularly in the e-business arena, are enabling firms to rethink their supply chain strategies and explore new avenues for inter-organizational cooperation. However, an incomplete understanding of the value of information sharing and physical flow coordination hinder these efforts. This research attempts to help fill these gaps by surveying prior research in the area, categorized in terms of information sharing and flow coordination. We conclude by highlighting gaps in the current body of knowledge and identifying promising areas for future research. Subject Areas: e-Business, Inventory Management, Supply Chain Management, and Survey Research.",
"title": ""
},
{
"docid": "2f8eb33eed4aabce1d31f8b7dfe8e7de",
"text": "A pre-trained convolutional deep neural network (CNN) is a feed-forward computation perspective, which is widely used for the embedded systems, requires high power-and-area efficiency. This paper realizes a binarized CNN which treats only binary 2-values (+1/-1) for the inputs and the weights. In this case, the multiplier is replaced into an XNOR circuit instead of a dedicated DSP block. For hardware implementation, using binarized inputs and weights is more suitable. However, the binarized CNN requires the batch normalization techniques to retain the classification accuracy. In that case, the additional multiplication and addition require extra hardware, also, the memory access for its parameters reduces system performance. In this paper, we propose the batch normalization free CNN which is mathematically equivalent to the CNN using batch normalization. The proposed CNN treats the binarized inputs and weights with the integer bias. We implemented the VGG-16 benchmark CNN on the NetFPGA-SUME FPGA board, which has the Xilinx Inc. Virtex7 FPGA and three off-chip QDR II+ Synchronous SRAMs. Compared with the conventional FPGA realizations, although the classification error rate is 6.5% decayed, the performance is 2.82 times faster, the power efficiency is 1.76 times lower, and the area efficiency is 11.03 times smaller. Thus, our method is suitable for the embedded computer system.",
"title": ""
},
{
"docid": "6c8a6a1713473ae94d610891d917133f",
"text": "68 Computer Music Journal As digitization and information technologies advance, document analysis and optical-characterrecognition technologies have become more widely used. Optical Music Recognition (OMR), also commonly known as OCR (Optical Character Recognition) for Music, was first attempted in the 1960s (Pruslin 1966). Standard OCR techniques cannot be used in music-score recognition, because music notation has a two-dimensional structure. In a staff, the horizontal position denotes different durations of notes, and the vertical position defines the height of the note (Roth 1994). Models for nonmusical OCR assessment have been proposed and largely used (Kanai et al. 1995; Ventzislav 2003). An ideal system that could reliably read and “understand” music notation could be used in music production for educational and entertainment applications. OMR is typically used today to accelerate the conversion from image music sheets into a symbolic music representation that can be manipulated, thus creating new and revised music editions. Other applications use OMR systems for educational purposes (e.g., IMUTUS; see www.exodus.gr/imutus), generating customized versions of music exercises. A different use involves the extraction of symbolic music representations to be used as incipits or as descriptors in music databases and related retrieval systems (Byrd 2001). OMR systems can be classified on the basis of the granularity chosen to recognize the music score’s symbols. The architecture of an OMR system is tightly related to the methods used for symbol extraction, segmentation, and recognition. Generally, the music-notation recognition process can be divided into four main phases: (1) the segmentation of the score image to detect and extract symbols; (2) the recognition of symbols; (3) the reconstruction of music information; and (4) the construction of the symbolic music notation model to represent the information (Bellini, Bruno, and Nesi 2004). Music notation may present very complex constructs and several styles. This problem has been recently addressed by the MUSICNETWORK and Motion Picture Experts Group (MPEG) in their work on Symbolic Music Representation (www .interactivemusicnetwork.org/mpeg-ahg). Many music-notation symbols exist, and they can be combined in different ways to realize several complex configurations, often without using well-defined formatting rules (Ross 1970; Heussenstamm 1987). Despite various research systems for OMR (e.g., Prerau 1970; Tojo and Aoyama 1982; Rumelhart, Hinton, and McClelland 1986; Fujinaga 1988, 1996; Carter 1989, 1994; Kato and Inokuchi 1990; Kobayakawa 1993; Selfridge-Field 1993; Ng and Boyle 1994, 1996; Coüasnon and Camillerapp 1995; Bainbridge and Bell 1996, 2003; Modayur 1996; Cooper, Ng, and Boyle 1997; Bellini and Nesi 2001; McPherson 2002; Bruno 2003; Byrd 2006) as well as commercially available products, optical music recognition—and more generally speaking, music recognition—is a research field affected by many open problems. The meaning of “music recognition” changes depending on the kind of applications and goals (Blostein and Carter 1992): audio generation from a musical score, music indexing and searching in a library database, music analysis, automatic transcription of a music score into parts, transcoding a score into interchange data formats, etc. 
For such applications, we must employ common tools to provide answers to questions such as “What does a particular percentagerecognition rate that is claimed by this particular algorithm really mean?” and “May I invoke a common methodology to compare different OMR tools on the basis of my music?” As mentioned in Blostein and Carter (1992) and Miyao and Haralick (2000), there is no standard for expressing the results of the OMR process. Assessing Optical Music Recognition Tools",
"title": ""
},
{
"docid": "f4422ff5d89e2035d6480f6bc6eb5fb2",
"text": "Hashing, or learning binary embeddings of data, is frequently used in nearest neighbor retrieval. In this paper, we develop learning to rank formulations for hashing, aimed at directly optimizing ranking-based evaluation metrics such as Average Precision (AP) and Normalized Discounted Cumulative Gain (NDCG). We first observe that the integer-valued Hamming distance often leads to tied rankings, and propose to use tie-aware versions of AP and NDCG to evaluate hashing for retrieval. Then, to optimize tie-aware ranking metrics, we derive their continuous relaxations, and perform gradient-based optimization with deep neural networks. Our results establish the new state-of-the-art for image retrieval by Hamming ranking in common benchmarks.",
"title": ""
},
{
"docid": "2031114bd1dc1a3ca94bdd8a13ad3a86",
"text": "Crude extracts of curcuminoids and essential oil of Curcuma longa varieties Kasur, Faisalabad and Bannu were studied for their antibacterial activity against 4 bacterial strains viz., Bacillus subtilis, Bacillus macerans, Bacillus licheniformis and Azotobacter using agar well diffusion method. Solvents used to determine antibacterial activity were ethanol and methanol. Ethanol was used for the extraction of curcuminoids. Essential oil was extracted by hydrodistillation and diluted in methanol by serial dilution method. Both Curcuminoids and oil showed zone of inhibition against all tested strains of bacteria. Among all the three turmeric varieties, Kasur variety had the most inhibitory effect on the growth of all bacterial strains tested as compared to Faisalabad and Bannu varieties. Among all the bacterial strains B. subtilis was the most sensitive to turmeric extracts of curcuminoids and oil. The MIC value for different strains and varieties ranged from 3.0 to 20.6 mm in diameter.",
"title": ""
},
{
"docid": "f1ce50e0b787c1d10af44252b3a7e656",
"text": "This paper proposes a scalable approach for distinguishing malicious files from clean files by investigating the behavioural features using logs of various API calls. We also propose, as an alternative to the traditional method of manually identifying malware files, an automated classification system using runtime features of malware files. For both projects, we use an automated tool running in a virtual environment to extract API call features from executables and apply pattern recognition algorithms and statistical methods to differentiate between files. Our experimental results, based on a dataset of 1368 malware and 456 cleanware files, provide an accuracy of over 97% in distinguishing malware from cleanware. Our techniques provide a similar accuracy for classifying malware into families. In both cases, our results outperform comparable previously published techniques.",
"title": ""
},
{
"docid": "9ae435f5169e867dc9d4dc0da56ec9fb",
"text": "Renewable energy is currently the main direction of development of electric power. Because of its own characteristics, the reliability of renewable energy generation is low. Renewable energy generation system needs lots of energy conversion devices which are made of power electronic devices. Too much power electronic components can damage power quality in microgrid. High Frequency AC (HFAC) microgrid is an effective way to solve the problems of renewable energy generation system. Transmitting electricity by means of HFAC is a novel idea in microgrid. Although the HFAC will cause more loss of power, it can improve the power quality in microgrid. HFAC can also reduce the impact of fluctuations of renewable energy in microgrid. This paper mainly simulates the HFAC with Matlab/Simulink and analyzes the feasibility of HFAC in microgrid.",
"title": ""
},
{
"docid": "aece5f900543384df4464c6c0cd431d0",
"text": "AIM\nThe aim of the study was to evaluate the bleaching effect, morphological changes, and variations in calcium (Ca) and phosphate (P) in the enamel with hydrogen peroxide (HP) and carbamide peroxide (CP) after the use of different application regimens.\n\n\nMATERIALS AND METHODS\nFour groups of five teeth were randomly assigned, according to the treatment protocol: HP 37.5% applied for 30 or 60 minutes (HP30, HP60), CP 16% applied for 14 or 28 hours (CP14, CP28). Changes in dental color were evaluated, according to the following formula: ΔE = [(La-Lb)2+(aa-ab)2 + (ba-bb)2]1/2. Enamel morphology and Ca and P compositions were evaluated by confocal laser scanning microscope and environmental scanning electron microscopy.\n\n\nRESULTS\nΔE HP30 was significantly greater than CP14 (10.37 ± 2.65/8.56 ± 1.40), but not between HP60 and CP28. HP60 shows greater morphological changes than HP30. No morphological changes were observed in the groups treated with CP. The reduction in Ca and P was significantly greater in HP60 than in CP28 (p < 0.05).\n\n\nCONCLUSION\nBoth formulations improved tooth color; HP produced morphological changes and Ca and P a gradual decrease, while CP produced no morphological changes, and the decrease in mineral component was smaller.\n\n\nCLINICAL SIGNIFICANCE\nCP 16% applied during 2 weeks could be equally effective and safer for tooth whitening than to administer two treatment sessions with HP 37.5%.",
"title": ""
},
{
"docid": "43a668b9f37492f8b6657929b679b6e5",
"text": "Wireless multimedia sensor networks (WMSNs) attracts significant attention in the field of agriculture where disease detection plays an important role. To improve the cultivation yield of plants it is necessary to detect the onset of diseases in plants and provide advice to farmers who will act based on the received suggestion. Due to the limitations of WMSN, it is necessary to design a simple system which can provide higher accuracy with less complexity. In this paper a novel disease detection system (DDS) is proposed to detect and classify the diseases in leaves. Statistical based thresholding strategy is proposed for segmentation which is less complex compared to k-means clustering method. The features extracted from the segmented image will be transmitted through sensor nodes to the monitoring site where the analysis and classification is done using Support Vector Machine classifier. The performance of the proposed DDS has been evaluated in terms of accuracy and is compared with the existing k-means clustering technique. The results show that the proposed method provides an overall accuracy of around 98%. The transmission energy is also analyzed in real time using TelosB nodes.",
"title": ""
},
{
"docid": "d3cfa1f05310b89067f85b115eb593e8",
"text": "NK fitness landscapes are stochastically generated fitness functions on bit strings, parameterized (with genes and interactions between genes) so as to make them tunably ‘rugged’. Under the ‘natural’ genetic operators of bit-flipping mutation or recombination, NK landscapes produce multiple domains of attraction for the evolutionary dynamics. NK landscapes have been used in models of epistatic gene interactions, coevolution, genome growth, and Wright’s shifting balance model of adaptation. Theory for adaptive walks on NK landscapes has been derived, and generalizations that extend beyond Kauffman’s original framework have been utilized in these applications.",
"title": ""
},
{
"docid": "cbbb2c0a9d2895c47c488bed46d8f468",
"text": "We propose a new generative language model for sentences that first samples a prototype sentence from the training corpus and then edits it into a new sentence. Compared to traditional language models that generate from scratch either left-to-right or by first sampling a latent sentence vector, our prototype-then-edit model improves perplexity on language modeling and generates higher quality outputs according to human evaluation. Furthermore, the model gives rise to a latent edit vector that captures interpretable semantics such as sentence similarity and sentence-level analogies.",
"title": ""
},
{
"docid": "831845dfb48d2bd9d7d86031f3862fa5",
"text": "This paper presents the analysis and implementation of an LCLC resonant converter working as maximum power point tracker (MPPT) in a PV system. This converter must guarantee a constant DC output voltage and must vary its effective input resistance in order to extract the maximum power of the PV generator. Preliminary analysis concludes that not all resonant load topologies can achieve the design conditions for a MPPT. Only the LCLC and LLC converter are suitable for this purpose.",
"title": ""
}
] | scidocsrr |
5eb55d47e3845b1d7aa97071b70fbeb5 | TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections | [
{
"docid": "f6266e5c4adb4fa24cc353dccccaf6db",
"text": "Clustering plays an important role in many large-scale data analyses providing users with an overall understanding of their data. Nonetheless, clustering is not an easy task due to noisy features and outliers existing in the data, and thus the clustering results obtained from automatic algorithms often do not make clear sense. To remedy this problem, automatic clustering should be complemented with interactive visualization strategies. This paper proposes an interactive visual analytics system for document clustering, called iVisClustering, based on a widelyused topic modeling method, latent Dirichlet allocation (LDA). iVisClustering provides a summary of each cluster in terms of its most representative keywords and visualizes soft clustering results in parallel coordinates. The main view of the system provides a 2D plot that visualizes cluster similarities and the relation among data items with a graph-based representation. iVisClustering provides several other views, which contain useful interaction methods. With help of these visualization modules, we can interactively refine the clustering results in various ways.",
"title": ""
},
{
"docid": "07846c1e97f72a02d876baf4c5435da6",
"text": "Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely-used methods based on probabilistic modeling have drawbacks in terms of consistency from multiple runs and empirical convergence. Furthermore, due to the complicatedness in the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpuses such as InfoVis/VAST paper data set and product review data sets.",
"title": ""
}
] | [
{
"docid": "6c1a1e47ce91b2d9ae60a0cfc972b7e4",
"text": "We investigate automatic classification of speculative language (‘hedging’), in biomedical text using weakly supervised machine learning. Our contributions include a precise description of the task with annotation guidelines, analysis and discussion, a probabilistic weakly supervised learning model, and experimental evaluation of the methods presented. We show that hedge classification is feasible using weakly supervised ML, and point toward avenues for future research.",
"title": ""
},
{
"docid": "1613f8b73465d52a3e850c894578ef2a",
"text": "In this paper, we evaluate the performance of Multicarrier-Low Density Spreading Multiple Access (MC-LDSMA) as a multiple access technique for mobile communication systems. The MC-LDSMA technique is compared with current multiple access techniques, OFDMA and SC-FDMA. The performance is evaluated in terms of cubic metric, block error rate, spectral efficiency and fairness. The aim is to investigate the expected gains of using MC-LDSMA in the uplink for next generation cellular systems. The simulation results of the link and system-level performance evaluation show that MC-LDSMA has significant performance improvements over SC-FDMA and OFDMA. It is shown that using MC-LDSMA can considerably reduce the required transmission power and increase the spectral efficiency and fairness among the users.",
"title": ""
},
{
"docid": "4a22a7dbcd1515e2b1b6e7748ffa3e02",
"text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.",
"title": ""
},
{
"docid": "c1046ee16110438cb7d7bd0b5a9c4870",
"text": "Although integrating multiple levels of data into an analysis can often yield better inferences about the phenomenon under study, traditional methodologies used to combine multiple levels of data are problematic. In this paper, we discuss several methodologies under the rubric of multil evel analysis. Multil evel methods, we argue, provide researchers, particularly researchers using comparative data, substantial leverage in overcoming the typical problems associated with either ignoring multiple levels of data, or problems associated with combining lower-level and higherlevel data (including overcoming implicit assumptions of fixed and constant effects). The paper discusses several variants of the multil evel model and provides an application of individual-level support for European integration using comparative politi cal data from Western Europe.",
"title": ""
},
{
"docid": "71cf493e0026fe057b1100c5ad1118ad",
"text": "We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one.",
"title": ""
},
{
"docid": "628947fa49383b73eda8ad374423f8ce",
"text": "The proposed system for the cloud based automatic system involves the automatic updating of the data to the lighting system. It also reads the data from the base station in case of emergencies. Zigbee devices are used for wireless transmission of the data from the base station to the light system thus enabling an efficient street lamp control system. Infrared sensor and dimming control circuit is used to track the movement of human in a specific range and dims/bright the street lights accordingly hence saving a large amount of power. In case of emergencies data is sent from the particular light or light system and effective measures are taken accordingly.",
"title": ""
},
{
"docid": "068321516540ed9f5f05638bdfb7235a",
"text": "Cloud of Things (CoT) is a computing model that combines the widely popular cloud computing with Internet of Things (IoT). One of the major problems with CoT is the latency of accessing distant cloud resources from the devices, where the data is captured. To address this problem, paradigms such as fog computing and Cloudlets have been proposed to interpose another layer of computing between the clouds and devices. Such a three-layered cloud-fog-device computing architecture is touted as the most suitable approach for deploying many next generation ubiquitous computing applications. Programming applications to run on such a platform is quite challenging because disconnections between the different layers are bound to happen in a large-scale CoT system, where the devices can be mobile. This paper presents a programming language and system for a three-layered CoT system. We illustrate how our language and system addresses some of the key challenges in the three-layered CoT. A proof-of-concept prototype compiler and runtime have been implemented and several example applications are developed using it.",
"title": ""
},
{
"docid": "7abd63dac92df4b17fa1d7cd9e1ee039",
"text": "PURPOSE\nThis study aimed to prospectively analyze the outcomes of 304 feldspathic porcelain veneers prepared by the same operator, in 100 patients, that were in situ for up to 16 years.\n\n\nMATERIALS AND METHODS\nA total of 304 porcelain veneers on incisors, canines, and premolars in 100 patients completed by one prosthodontist between 1988 and 2003 were sequentially included. Preparations were designed with chamfer margins, incisal reduction, and palatal overlap. At least 80% of each preparation was in enamel. Feldspathic porcelain veneers from refractory dies were etched (hydrofluoric acid), silanated, and cemented (Vision 2, Mirage Dental Systems). Outcomes were expressed as percentages (success, survival, unknown, dead, repair, failure). The results were statistically analyzed using the chi-square test and Kaplan-Meier survival estimation. Statistical significance was set at P < .05.\n\n\nRESULTS\nThe cumulative survival for veneers was 96% +/- 1% at 5 to 6 years, 93% +/- 2% at 10 to 11 years, 91% +/- 3% at 12 to 13 years, and 73% +/- 16% at 15 to 16 years. The marked drop in survival between 13 and 16 years was the result of the death of 1 patient and the low number of veneers in that period. The cumulative survival was greater when different statistical methods were employed. Sixteen veneers in 14 patients failed. Failed veneers were associated with esthetics (31%), mechanical complications (31%), periodontal support (12.5%), loss of retention >2 (12.5%), caries (6%), and tooth fracture (6%). Statistically significantly fewer veneers survived as the time in situ increased.\n\n\nCONCLUSIONS\nFeldspathic porcelain veneers, when bonded to enamel substrate, offer a predictable long-term restoration with a low failure rate. The statistical methods used to calculate the cumulative survival can markedly affect the apparent outcome and thus should be clearly defined in outcome studies.",
"title": ""
},
{
"docid": "7a180e503a0b159d545047443524a05a",
"text": "We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone.",
"title": ""
},
{
"docid": "f463ee2dd3a9243ed7536d88d8c2c568",
"text": "A new silicon controlled rectifier-based power-rail electrostatic discharge (ESD) clamp circuit was proposed with a novel trigger circuit that has very low leakage current in a small layout area for implementation. This circuit was successfully verified in a 40-nm CMOS process by using only low-voltage devices. The novel trigger circuit uses a diode-string based level-sensing ESD detection circuit, but not using MOS capacitor, which has very large leakage current. Moreover, the leakage current on the ESD detection circuit is further reduced, adding a diode in series with the trigger transistor. By combining these two techniques, the total silicon area of the power-rail ESD clamp circuit can be reduced three times, whereas the leakage current is three orders of magnitude smaller than that of the traditional design.",
"title": ""
},
{
"docid": "7fc6b08b5ceea71503ac2b1da7a8bdcb",
"text": "This paper introduces a method for optimizing the tiles of a quad-mesh. Given a quad-based surface, the goal is to generate a set of K quads whose instances can produce a tiled surface that approximates the input surface. A solution to the problem is a K-set tilable surface, which can lead to an effective cost reduction in the physical construction of the given surface. Rather than molding lots of different building blocks, a K-set tilable surface requires the construction of K prefabricated components only. To realize the K-set tilable surface, we use a cluster-optimize approach. First, we iteratively cluster and analyze: clusters of similar shapes are merged, while edge connections between the K quads on the target surface are analyzed to learn the induced flexibility of the K-set tilable surface. Then, we apply a non-linear optimization model with constraints that maintain the K quads connections and shapes, and show how quad-based surfaces are optimized into K-set tilable surfaces. Our algorithm is demonstrated on various surfaces, including some that mimic the exteriors of certain renowned building landmarks.",
"title": ""
},
{
"docid": "14fcb5c784de5fcb6950212f5b3eabb4",
"text": "This paper presents a pure textile, capacitive pressure sensor designed for integration into clothing to measure pressure on human body. The applications fields cover all domains where a soft and bendable sensor with a high local resolution is needed, e.g. in rehabilitation, pressure-sore prevention or motion detection due to muscle activities. We developed several textile sensors with spatial resolution of 2 times 2 cm and an average error below 4 percent within the measurement range 0 to 10 N/cm2. Applied on the upper arm the textile pressure sensor determines the deflection of the forearm between 0 and 135 degrees due to the muscle bending.",
"title": ""
},
{
"docid": "2f23d51ffd54a6502eea07883709d016",
"text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.",
"title": ""
},
{
"docid": "3db6fc042a82319935bf5dd0d1491e89",
"text": "We present a piezoelectric-on-silicon Lorentz force magnetometer (LFM) based on a mechanically coupled array of clamped–clamped beam resonators for the detection of lateral ( $xy$ plane) magnetic fields with an extended operating bandwidth of 1.36 kHz. The proposed device exploits piezoelectric transduction to greatly enhance the electromechanical coupling efficiency, which benefits the device sensitivity. Coupling multiple clamped–clamped beams increases the area for piezoelectric transduction, which further increases the sensitivity. The reported device has the widest operating bandwidth among LFMs reported to date with comparable normalized sensitivity despite the quality factor being limited to 30 when operating at ambient pressure instead of vacuum as in most cases of existing LFMs.",
"title": ""
},
{
"docid": "6e4f0a770fe2a34f99957f252110b6bd",
"text": "Universal Dependencies (UD) provides a cross-linguistically uniform syntactic representation, with the aim of advancing multilingual applications of parsing and natural language understanding. Reddy et al. (2016) recently developed a semantic interface for (English) Stanford Dependencies, based on the lambda calculus. In this work, we introduce UDEPLAMBDA, a similar semantic interface for UD, which allows mapping natural language to logical forms in an almost language-independent framework. We evaluate our approach on semantic parsing for the task of question answering against Freebase. To facilitate multilingual evaluation, we provide German and Spanish translations of the WebQuestions and GraphQuestions datasets. Results show that UDEPLAMBDA outperforms strong baselines across languages and datasets. For English, it achieves the strongest result to date on GraphQuestions, with competitive results on WebQuestions.",
"title": ""
},
{
"docid": "dd170ec01ee5b969605dace70e283664",
"text": "This work discusses the regulation of the ball and plate system, the problemis to design a control laws which generates a voltage u for the servomotors to move the ball from the actual position to a desired one. The controllers are constructed by introducing nonlinear compensation terms into the traditional PD controller. In this paper, a complete physical system and controller design is explored from conception to modeling to testing and implementation. The stability of the control is presented. Experiment results are obtained via our prototype of the ball and plate system.",
"title": ""
},
{
"docid": "ae0d8d1dec27539502cd7e3030a3fe42",
"text": "Thee KL divergence is the most commonly used measure for comparing query and document language models in the language modeling framework to ad hoc retrieval. Since KL is rank equivalent to a specific weighted geometric mean, we examine alternative weighted means for language-model comparison, as well as alternative divergence measures. The study includes analysis of the inverse document frequency (IDF) effect of the language-model comparison methods. Empirical evaluation, performed with different types of queries (short and verbose) and query-model induction approaches, shows that there are methods that often outperform the KL divergence in some settings.",
"title": ""
},
{
"docid": "f91007844639e431b2f332f6f32df33b",
"text": "Moore type II Entire Condyle fractures of the tibia plateau represent a rare and highly unstable fracture pattern that usually results from high impact traumas. Specific recommendations regarding the surgical treatment of these fractures are sparse. We present a series of Moore type II fractures treated by open reduction and internal fixation through a direct dorsal approach. Five patients (3 females, 2 males) with Entire Condyle fractures were retrospectively analyzed after a mean follow-up period of 39 months (range 12–61 months). Patient mean age at the time of operation was 36 years (range 26–43 years). Follow-up included clinical and radiological examination. Furthermore, all patient finished a SF36 and Lysholm knee score questionnaire. Average range of motion was 127/0/1° with all patients reaching full extension at the time of last follow up. Patients reached a mean Lysholm score of 81.2 points (range 61–100 points) and an average SF36 of 82.36 points (range 53.75–98.88 points). One patient sustained deep wound infection after elective implant removal 1 year after the initial surgery. Overall all patients were highly satisfied with the postoperative result. The direct dorsal approach to the tibial plateau represents an adequate method to enable direct fracture exposure, open reduction, and internal fixation in posterior shearing medial Entire Condyle fractures and is especially valuable when also the dorso-lateral plateau is depressed.",
"title": ""
},
{
"docid": "11357967d7e83c45bb1a6ba3edfebac2",
"text": "We report a unique MEMS magnetometer based on a disk shaped radial contour mode thin-film piezoelectric on silicon (TPoS) CMOS-compatible resonator. This is the first device of its kind that targets operation under atmospheric pressure conditions as opposed that existing Lorentz force MEMS magnetometers that depend on vacuum. We exploit the chosen vibration mode to enhance coupling to deliver a field sensitivity of 10.92 mV/T while operating at a resonant frequency of 6.27 MHz, despite of a sub-optimal mechanical quality (Q) factor of 697 under ambient conditions in air.",
"title": ""
},
{
"docid": "cd23761c6e6eb8be8915612c995c29e4",
"text": "In this paper, we propose a novel representation learning framework, namely HIN2Vec, for heterogeneous information networks (HINs). The core of the proposed framework is a neural network model, also called HIN2Vec, designed to capture the rich semantics embedded in HINs by exploiting different types of relationships among nodes. Given a set of relationships specified in forms of meta-paths in an HIN, HIN2Vec carries out multiple prediction training tasks jointly based on a target set of relationships to learn latent vectors of nodes and meta-paths in the HIN. In addition to model design, several issues unique to HIN2Vec, including regularization of meta-path vectors, node type selection in negative sampling, and cycles in random walks, are examined. To validate our ideas, we learn latent vectors of nodes using four large-scale real HIN datasets, including Blogcatalog, Yelp, DBLP and U.S. Patents, and use them as features for multi-label node classification and link prediction applications on those networks. Empirical results show that HIN2Vec soundly outperforms the state-of-the-art representation learning models for network data, including DeepWalk, LINE, node2vec, PTE, HINE and ESim, by 6.6% to 23.8% of $micro$-$f_1$ in multi-label node classification and 5% to 70.8% of $MAP$ in link prediction.",
"title": ""
}
] | scidocsrr |
c25516cd1ad53cdea15feb51571a2de6 | Suspecting Less and Doing Better: New Insights on Palmprint Identification for Faster and More Accurate Matching | [
{
"docid": "8fd5b3cead78b47e95119ac1a70e44db",
"text": "Two-dimensional (2-D) hand-geometry features carry limited discriminatory information and therefore yield moderate performance when utilized for personal identification. This paper investigates a new approach to achieve performance improvement by simultaneously acquiring and combining three-dimensional (3-D) and 2-D features from the human hand. The proposed approach utilizes a 3-D digitizer to simultaneously acquire intensity and range images of the presented hands of the users in a completely contact-free manner. Two new representations that effectively characterize the local finger surface features are extracted from the acquired range images and are matched using the proposed matching metrics. In addition, the characterization of 3-D palm surface using SurfaceCode is proposed for matching a pair of 3-D palms. The proposed approach is evaluated on a database of 177 users acquired in two sessions. The experimental results suggest that the proposed 3-D hand-geometry features have significant discriminatory information to reliably authenticate individuals. Our experimental results demonstrate that consolidating 3-D and 2-D hand-geometry features results in significantly improved performance that cannot be achieved with the traditional 2-D hand-geometry features alone. Furthermore, this paper also investigates the performance improvement that can be achieved by integrating five biometric features, i.e., 2-D palmprint, 3-D palmprint, finger texture, along with 3-D and 2-D hand-geometry features, that are simultaneously extracted from the user's hand presented for authentication.",
"title": ""
}
] | [
{
"docid": "978dd8a7f33df74d4a5cea149be6ebb0",
"text": "A tutorial on the design and development of automatic speakerrecognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or toverify a person’s claimed identity. Speech processing and the basic components of automatic speakerrecognition systems are shown and design tradeoffs are discussed. Then, a new automatic speaker-recognition system is given. This recognizer performs with 98.9% correct identification. Last, the performances of various systems are compared.",
"title": ""
},
{
"docid": "77564f157ea8ab43d6d9f95a212e7948",
"text": "We consider the problem of mining association rules on a shared-nothing multiprocessor. We present three algorithms that explore a spectrum of trade-oos between computation, communication, memory usage, synchronization, and the use of problem-speciic information. The best algorithm exhibits near perfect scaleup behavior, yet requires only minimal overhead compared to the current best serial algorithm.",
"title": ""
},
{
"docid": "54a47a57296658ca0e8bae74fd99e8f0",
"text": "Road traffic accidents are among the top leading causes of deaths and injuries of various levels. Ethiopia is experiencing highest rate of such accidents resulting in fatalities and various levels of injuries. Addis Ababa, the capital city of Ethiopia, takes the lion’s share of the risk having higher number of vehicles and traffic and the cost of these fatalities and injuries has a great impact on the socio-economic development of a society. This research is focused on developing adaptive regression trees to build a decision support system to handle road traffic accident analysis for Addis Ababa city traffic office. The study focused on injury severity levels resulting from an accident using real data obtained from the Addis Ababa traffic office. Empirical results show that the developed models could classify accidents within reasonable accuracy.",
"title": ""
},
{
"docid": "4667b31c7ee70f7bc3709fc40ec6140f",
"text": "This article presents a method for rectifying and stabilising video from cell-phones with rolling shutter (RS) cameras. Due to size constraints, cell-phone cameras have constant, or near constant focal length, making them an ideal application for calibrated projective geometry. In contrast to previous RS rectification attempts that model distortions in the image plane, we model the 3D rotation of the camera. We parameterise the camera rotation as a continuous curve, with knots distributed across a short frame interval. Curve parameters are found using non-linear least squares over inter-frame correspondences from a KLT tracker. By smoothing a sequence of reference rotations from the estimated curve, we can at a small extra cost, obtain a high-quality image stabilisation. Using synthetic RS sequences with associated ground-truth, we demonstrate that our rectification improves over two other methods. We also compare our video stabilisation with the methods in iMovie and Deshaker.",
"title": ""
},
{
"docid": "da7f869037f40ab8666009d85d9540ff",
"text": "A boomerang-shaped alar base excision is described to narrow the nasal base and correct the excessive alar flare. The boomerang excision combined the external alar wedge resection with an internal vestibular floor excision. The internal excision was inclined 30 to 45 degrees laterally to form the inner limb of the boomerang. The study included 46 patients presenting with wide nasal base and excessive alar flaring. All cases were followed for a mean period of 18 months (range, 8 to 36 months). The laterally oriented vestibular floor excision allowed for maximum preservation of the natural curvature of the alar rim where it meets the nostril floor and upon its closure resulted in a considerable medialization of alar lobule, which significantly reduced the amount of alar flare and the amount of external alar excision needed. This external alar excision measured, on average, 3.8 mm (range, 2 to 8 mm), which is significantly less than that needed when a standard vertical internal excision was used ( P < 0.0001). Such conservative external excisions eliminated the risk of obliterating the natural alar-facial crease, which did not occur in any of our cases. No cases of postoperative bleeding, infection, or vestibular stenosis were encountered. Keloid or hypertrophic scar formation was not encountered; however, dermabrasion of the scars was needed in three (6.5%) cases to eliminate apparent suture track marks. The boomerang alar base excision proved to be a safe and effective technique for narrowing the nasal base and elimination of the excessive flaring and resulted in a natural, well-proportioned nasal base with no obvious scarring.",
"title": ""
},
{
"docid": "da5ad61c492419515e8449b435b42e80",
"text": "Camera tracking is an important issue in many computer vision and robotics applications, such as, augmented reality and Simultaneous Localization And Mapping (SLAM). In this paper, a feature-based technique for monocular camera tracking is proposed. The proposed approach is based on tracking a set of sparse features, which are successively tracked in a stream of video frames. In the developed system, camera initially views a chessboard with known cell size for few frames to be enabled to construct initial map of the environment. Thereafter, Camera pose estimation for each new incoming frame is carried out in a framework that is merely working with a set of visible natural landmarks. Estimation of 6-DOF camera pose parameters is performed using a particle filter. Moreover, recovering depth of newly detected landmarks, a linear triangulation method is used. The proposed method is applied on real world videos and positioning error of the camera pose is less than 3 cm in average that indicates effectiveness and accuracy of the proposed method.",
"title": ""
},
{
"docid": "1edd6cb3c6ed4657021b6916efbc23d9",
"text": "Siamese-like networks, Streetscore-CNN (SS-CNN) and Ranking SS-CNN, to predict pairwise comparisons Figure 1: User Interface for Crowdsourced Online Game Performance Analysis • SS-CNN: We calculate the % of pairwise comparisons in test set predicted correctly by (1) Softmax of output neurons in final layer (2) comparing TrueSkill scores [2] obtained from synthetic pairwise comparisons from the CNN (3) extracting features from penultimate layer of CNN and feeding pairwise feature representations to a RankSVM [3] • RSS-CNN: We compare the ranking function outputs for both images in a test pair to decide which image wins, and calculate the binary prediction accuracy.",
"title": ""
},
{
"docid": "32977df591e90db67bf09b0412f56d7b",
"text": "In an electronic warfare (EW) battlefield environment, it is highly necessary for a fighter aircraft to intercept and identify the several interleaved radar signals that it receives from the surrounding emitters, so as to prepare itself for countermeasures. The main function of the Electronic Support Measure (ESM) receiver is to receive, measure, deinterleave pulses and then identify alternative threat emitters. Deinterleaving of radar signals is based on time of arrival (TOA) analysis and the use of the sequential difference (SDIF) histogram method for determining the pulse repetition interval (PRI), which is an important pulse parameter. Once the pulse repetition intervals are determined, check for the existence of staggered PRI (level-2) is carried out, implemented in MATLAB. Keywordspulse deinterleaving, pulse repetition interval, stagger PRI, sequential difference histogram, time of arrival.",
"title": ""
},
{
"docid": "8343f34186fc387bfe28db3f7b8bd5fc",
"text": "Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping them with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art.",
"title": ""
},
{
"docid": "ab6b26f6f1abf07aa91a0a933a7b6c43",
"text": "This paper describes a machine learningbased approach that uses word embedding features to recognize drug names from biomedical texts. As a starting point, we developed a baseline system based on Conditional Random Field (CRF) trained with standard features used in current Named Entity Recognition (NER) systems. Then, the system was extended to incorporate new features, such as word vectors and word clusters generated by the Word2Vec tool and a lexicon feature from the DINTO ontology. We trained the Word2vec tool over two different corpus: Wikipedia and MedLine. Our main goal is to study the effectiveness of using word embeddings as features to improve performance on our baseline system, as well as to analyze whether the DINTO ontology could be a valuable complementary data source integrated in a machine learning NER system. To evaluate our approach and compare it with previous work, we conducted a series of experiments on the dataset of SemEval-2013 Task 9.1 Drug Name Recognition.",
"title": ""
},
{
"docid": "a8d616897b7cbb1182d5f6e8cf4318a9",
"text": "User behaviour targeting is essential in online advertising. Compared with sponsored search keyword targeting and contextual advertising page content targeting, user behaviour targeting builds users’ interest profiles via tracking their online behaviour and then delivers the relevant ads according to each user’s interest, which leads to higher targeting accuracy and thus more improved advertising performance. The current user profiling methods include building keywords and topic tags or mapping users onto a hierarchical taxonomy. However, to our knowledge, there is no previous work that explicitly investigates the user online visits similarity and incorporates such similarity into their ad response prediction. In this work, we propose a general framework which learns the user profiles based on their online browsing behaviour, and transfers the learned knowledge onto prediction of their ad response. Technically, we propose a transfer learning model based on the probabilistic latent factor graphic models, where the users’ ad response profiles are generated from their online browsing profiles. The large-scale experiments based on real-world data demonstrate significant improvement of our solution over some strong baselines.",
"title": ""
},
{
"docid": "7cbe504e03ab802389c48109ed1f1802",
"text": "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of “one-shot learning.” Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory locationbased focusing mechanisms.",
"title": ""
},
{
"docid": "b6a8f45bd10c30040ed476b9d11aa908",
"text": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://github.com/openai/pixel-cnn. Our implementation contains a number of modifications to the original model that both simplify its structure and improve its performance. 1) We use a discretized logistic mixture likelihood on the pixels, rather than a 256-way softmax, which we find to speed up training. 2) We condition on whole pixels, rather than R/G/B sub-pixels, simplifying the model structure. 3) We use downsampling to efficiently capture structure at multiple resolutions. 4) We introduce additional short-cut connections to further speed up optimization. 5) We regularize the model using dropout. Finally, we present state-of-the-art log likelihood results on CIFAR-10 to demonstrate the usefulness of these modifications.",
"title": ""
},
{
"docid": "3ec9c5459d08204025edb57e05583f29",
"text": "Cell-based sensing represents a new paradigm for performing direct and accurate detection of cell- or tissue-specific responses by incorporating living cells or tissues as an integral part of a sensor. Here we report a new magnetic cell-based sensing platform by combining magnetic sensors implemented in the complementary metal-oxide-semiconductor (CMOS) integrated microelectronics process with cardiac progenitor cells that are differentiated directly on-chip. We show that the pulsatile movements of on-chip cardiac progenitor cells can be monitored in a real-time manner. Our work provides a new low-cost approach to enable high-throughput screening systems as used in drug development and hand-held devices for point-of-care (PoC) biomedical diagnostic applications.",
"title": ""
},
{
"docid": "a8edc02eb78637f18fc948d81397fc75",
"text": "When we are investigating an object in a data set, which itself may or may not be an outlier, can we identify unusual (i.e., outlying) aspects of the object? In this paper, we identify the novel problem of mining outlying aspects on numeric data. Given a query object $$o$$ o in a multidimensional numeric data set $$O$$ O , in which subspace is $$o$$ o most outlying? Technically, we use the rank of the probability density of an object in a subspace to measure the outlyingness of the object in the subspace. A minimal subspace where the query object is ranked the best is an outlying aspect. Computing the outlying aspects of a query object is far from trivial. A naïve method has to calculate the probability densities of all objects and rank them in every subspace, which is very costly when the dimensionality is high. We systematically develop a heuristic method that is capable of searching data sets with tens of dimensions efficiently. Our empirical study using both real data and synthetic data demonstrates that our method is effective and efficient.",
"title": ""
},
{
"docid": "ddb804eec29ebb8d7f0c80223184305a",
"text": "Near Field Communication (NFC) enables physically proximate devices to communicate over very short ranges in a peer-to-peer manner without incurring complex network configuration overheads. However, adoption of NFC-enabled applications has been stymied by the low levels of penetration of NFC hardware. In this paper, we address the challenge of enabling NFC-like capability on the existing base of mobile phones. To this end, we develop Dhwani, a novel, acoustics-based NFC system that uses the microphone and speakers on mobile phones, thus eliminating the need for any specialized NFC hardware. A key feature of Dhwani is the JamSecure technique, which uses self-jamming coupled with self-interference cancellation at the receiver, to provide an information-theoretically secure communication channel between the devices. Our current implementation of Dhwani achieves data rates of up to 2.4 Kbps, which is sufficient for most existing NFC applications.",
"title": ""
},
{
"docid": "d31c6830ee11fc73b53c7930ad0e638f",
"text": "This paper proposes two rectangular ring planar monopole antennas for wideband and ultra-wideband applications. Simple planar rectangular rings are used to design the planar antennas. These rectangular rings are designed in a way to achieve the wideband operations. The operating frequency band ranges from 1.85 GHz to 4.95 GHz and 3.12 GHz to 14.15 GHz. The gain varies from 1.83 dBi to 2.89 dBi for rectangular ring wideband antenna and 1.89 dBi to 5.2 dBi for rectangular ring ultra-wideband antenna. The design approach and the results are discussed.",
"title": ""
},
{
"docid": "ac1e1d7daed4a960ff3a17a03155ddfa",
"text": "This paper explores the role of the business model in capturing value from early stage technology. A successful business model creates a heuristic logic that connects technical potential with the realization of economic value. The business model unlocks latent value from a technology, but its logic constrains the subsequent search for new, alternative models for other technologies later on—an implicit cognitive dimension overlooked in most discourse on the topic. We explore the intellectual roots of the concept, offer a working definition and show how the Xerox Corporation arose by employing an effective business model to commercialize a technology rejected by other leading companies of the day. We then show the long shadow that this model cast upon Xerox’s later management of selected spin-off companies from Xerox PARC. Xerox evaluated the technical potential of these spin-offs through its own business model, while those spin-offs that became successful did so through evolving business models that came to differ substantially from that of Xerox. The search and learning for an effective business model in failed ventures, by contrast, were quite limited.",
"title": ""
},
{
"docid": "12b075837d52d5c73a155466c28f2996",
"text": "Banks in Nigeria need to understand the perceptual difference in both male and female employees to better develop adequate policy on sexual harassment. This study investigated the perceptual differences on sexual harassment among male and female bank employees in two commercial cities (Kano and Lagos) of Nigeria.Two hundred and seventy five employees (149 males, 126 females) were conveniently sampled for this study. A survey design with a questionnaire adapted from Sexual Experience Questionnaire (SEQ) comprises of three dimension scalesof sexual harassment was used. The hypotheses were tested with independent samples t-test. The resultsindicated no perceptual differences in labelling sexual harassment clues between male and female bank employees in Nigeria. Thus, the study recommends that bank managers should support and establish the tone for sexual harassment-free workplace. KeywordsGender Harassment, Sexual Coercion, Unwanted Sexual Attention, Workplace.",
"title": ""
}
] | scidocsrr |
08f2b24f0b7bc1bc200f868e5fa932a7 | Facial volume restoration of the aging face with poly-l-lactic acid. | [
{
"docid": "41ac115647c421c44d7ef1600814dc3e",
"text": "PURPOSE\nThe bony skeleton serves as the scaffolding for the soft tissues of the face; however, age-related changes of bony morphology are not well defined. This study sought to compare the anatomic relationships of the facial skeleton and soft tissue structures between young and old men and women.\n\n\nMETHODS\nA retrospective review of CT scans of 100 consecutive patients imaged at Duke University Medical Center between 2004 and 2007 was performed using the Vitrea software package. The study population included 25 younger women (aged 18-30 years), 25 younger men, 25 older women (aged 55-65 years), and 25 older men. Using a standardized reference line, the distances from the anterior corneal plane to the superior orbital rim, lateral orbital rim, lower eyelid fat pad, inferior orbital rim, anterior cheek mass, and pyriform aperture were measured. Three-dimensional bony reconstructions were used to record the angular measurements of 4 bony regions: glabellar, orbital, maxillary, and pyriform aperture.\n\n\nRESULTS\nThe glabellar (p = 0.02), orbital (p = 0.0007), maxillary (p = 0.0001), and pyriform (p = 0.008) angles all decreased with age. The maxillary pyriform (p = 0.003) and infraorbital rim (p = 0.02) regressed with age. Anterior cheek mass became less prominent with age (p = 0.001), but the lower eyelid fat pad migrated anteriorly over time (p = 0.007).\n\n\nCONCLUSIONS\nThe facial skeleton appears to remodel throughout adulthood. Relative to the globe, the facial skeleton appears to rotate such that the frontal bone moves anteriorly and inferiorly while the maxilla moves posteriorly and superiorly. This rotation causes bony angles to become more acute and likely has an effect on the position of overlying soft tissues. These changes appear to be more dramatic in women.",
"title": ""
},
{
"docid": "0802735955b52c1dae64cf34a97a33fb",
"text": "Cutaneous facial aging is responsible for the increasingly wrinkled and blotchy appearance of the skin, whereas aging of the facial structures is attributed primarily to gravity. This article purports to show, however, that the primary etiology of structural facial aging relates instead to repeated contractions of certain facial mimetic muscles, the age marker fascicules, whereas gravity only secondarily abets an aging process begun by these muscle contractions. Magnetic resonance imaging (MRI) has allowed us to study the contrasts in the contour of the facial mimetic muscles and their associated deep and superficial fat pads in patients of different ages. The MRI model shows that the facial mimetic muscles in youth have a curvilinear contour presenting an anterior surface convexity. This curve reflects an underlying fat pad lying deep to these muscles, which acts as an effective mechanical sliding plane. The muscle’s anterior surface convexity constitutes the key evidence supporting the authors’ new aging theory. It is this youthful convexity that dictates a specific characteristic to the muscle contractions conveyed outwardly as youthful facial expression, a specificity of both direction and amplitude of facial mimetic movement. With age, the facial mimetic muscles (specifically, the age marker fascicules), as seen on MRI, gradually straighten and shorten. The authors relate this radiologic end point to multiple repeated muscle contractions over years that both expel underlying deep fat from beneath the muscle plane and increase the muscle resting tone. Hence, over time, structural aging becomes more evident as the facial appearance becomes more rigid.",
"title": ""
}
] | [
{
"docid": "ecca793cace7cbf6cc142f2412847df4",
"text": "The development of capacitive power transfer (CPT) as a competitive wireless/contactless power transfer solution over short distances is proving viable in both consumer and industrial electronic products/systems. The CPT is usually applied in low-power applications, due to small coupling capacitance. Recent research has increased the coupling capacitance from the pF to the nF scale, enabling extension of CPT to kilowatt power level applications. This paper addresses the need of efficient power electronics suitable for CPT at higher power levels, while remaining cost effective. Therefore, to reduce the cost and losses single-switch-single-diode topologies are investigated. Four single active switch CPT topologies based on the canonical Ćuk, SEPIC, Zeta, and Buck-boost converters are proposed and investigated. Performance tradeoffs within the context of a CPT system are presented and corroborated with experimental results. A prototype single active switch converter demonstrates 1-kW power transfer at a frequency of 200 kHz with >90% efficiency.",
"title": ""
},
{
"docid": "0fc3976820ca76c630476647761f9c21",
"text": "Paper Mechatronics is a novel interdisciplinary design medium, enabled by recent advances in craft technologies: the term refers to a reappraisal of traditional papercraft in combination with accessible mechanical, electronic, and computational elements. I am investigating the design space of paper mechatronics as a new hands-on medium by developing a series of examples and building a computational tool, FoldMecha, to support non-experts to design and construct their own paper mechatronics models. This paper describes how I used the tool to create two kinds of paper mechatronics models: walkers and flowers and discuss next steps.",
"title": ""
},
{
"docid": "4c9d20c4d264a950cb89bd41401ec99a",
"text": "The primary goal of a recommender system is to generate high quality user-centred recommendations. However, the traditional evaluation methods and metrics were developed before researchers understood all the factors that increase user satisfaction. This study is an introduction to a novel user and item classification framework. It is proposed that this framework should be used during user-centred evaluation of recommender systems and the need for this framework is justified through experiments. User profiles are constructed and matched against other users’ profiles to formulate neighbourhoods and generate top-N recommendations. The recommendations are evaluated to measure the success of the process. In conjunction with the framework, a new diversity metric is presented and explained. The accuracy, coverage, and diversity of top-N recommendations is illustrated and discussed for groups of users. It is found that in contradiction to common assumptions, not all users suffer as expected from the data sparsity problem. In fact, the group of users that receive the most accurate recommendations do not belong to the least sparse area of the dataset.",
"title": ""
},
{
"docid": "3da6c20ba154de6fbea24c3cbb9c8ebb",
"text": "The tourism industry is characterized by ever-increasing competition, causing destinations to seek new methods to attract tourists. Traditionally, a decision to visit a destination is interpreted, in part, as a rational calculation of the costs/benefits of a set of alternative destinations, which were derived from external information sources, including e-WOM (word-of-mouth) or travelers' blogs. There are numerous travel blogs available for people to share and learn about travel experiences. Evidence shows, however, that not every blog exerts the same degree of influence on tourists. Therefore, which characteristics of these travel blogs attract tourists' attention and influence their decisions, becomes an interesting research question. Based on the concept of information relevance, a model is proposed for interrelating various attributes specific to blog's content and perceived enjoyment, an intrinsic motivation of information systems usage, to mitigate the above-mentioned gap. Results show that novelty, understandability, and interest of blogs' content affect behavioral intention through blog usage enjoyment. Finally, theoretical and practical implications are proposed. Tourism is a popular activity in modern life and has contributed significantly to economic development for decades. However, competition in almost every sector of this industry has intensified during recent years & Pan, 2008); tourism service providers are now finding it difficult to acquire and keep customers (Echtner & Ritchie, 1991; Ho, 2007). Therefore, methods of attracting tourists to a destination are receiving greater attention from researchers, policy makers, and marketers. Before choosing a destination, tourists may search for information to support their decision-making By understanding the relationships between various information sources' characteristics and destination choice, tourism managers can improve their marketing efforts. Recently, personal blogs have become an important source for acquiring travel information With personal blogs, many tourists can share their travel experiences with others and potential tourists can search for and respond to others' experiences. Therefore, a blog can be seen as an asynchronous and many-to-many channel for conveying travel-related electronic word-of-mouth (e-WOM). By using these forms of inter-personal influence media, companies in this industry can create a competitive advantage (Litvin et al., 2008; Singh et al., 2008). Weblogs are now widely available; therefore, it is not surprising that the quantity of available e-WOM has increased (Xiang & Gret-zel, 2010) to an extent where information overload has become a Empirical evidence , however, indicates that people may not consult numerous blogs for advice; the degree of inter-personal influence varies from blog to blog (Zafiropoulos, 2012). Determining …",
"title": ""
},
{
"docid": "8a91835866267ef83ba245c12ce1283d",
"text": "Due to the increasing demand in the agricultural industry, the need to effectively grow a plant and increase its yield is very important. In order to do so, it is important to monitor the plant during its growth period, as well as, at the time of harvest. In this paper image processing is used as a tool to monitor the diseases on fruits during farming, right from plantation to harvesting. For this purpose artificial neural network concept is used. Three diseases of grapes and two of apple have been selected. The system uses two image databases, one for training of already stored disease images and the other for implementation of query images. Back propagation concept is used for weight adjustment of training database. The images are classified and mapped to their respective disease categories on basis of three feature vectors, namely, color, texture and morphology. From these feature vectors morphology gives 90% correct result and it is more than other two feature vectors. This paper demonstrates effective algorithms for spread of disease and mango counting. Practical implementation of neural networks has been done using MATLAB.",
"title": ""
},
{
"docid": "c9e47bfe0f1721a937ba503ed9913dba",
"text": "The Web contains a vast amount of structured information such as HTML tables, HTML lists and deep-web databases; there is enormous potential in combining and re-purposing this data in creative ways. However, integrating data from this relational web raises several challenges that are not addressed by current data integration systems or mash-up tools. First, the structured data is usually not published cleanly and must be extracted (say, from an HTML list) before it can be used. Second, due to the vastness of the corpus, a user can never know all of the potentially-relevant databases ahead of time (much less write a wrapper or mapping for each one); the source databases must be discovered during the integration process. Third, some of the important information regarding the data is only present in its enclosing web page and needs to be extracted appropriately. This paper describes Octopus, a system that combines search, extraction, data cleaning and integration, and enables users to create new data sets from those found on the Web. The key idea underlying Octopus is to offer the user a set of best-effort operators that automate the most labor-intensive tasks. For example, the Search operator takes a search-style keyword query and returns a set of relevance-ranked and similarity-clustered structured data sources on the Web; the Context operator helps the user specify the semantics of the sources by inferring attribute values that may not appear in the source itself, and the Extend operator helps the user find related sources that can be joined to add new attributes to a table. Octopus executes some of these operators automatically, but always allows the user to provide feedback and correct errors. We describe the algorithms underlying each of these operators and experiments that demonstrate their efficacy.",
"title": ""
},
{
"docid": "c32d61da51308397d889db143c3e6f9d",
"text": "Children’s neurological development is influenced by their experiences. Early experiences and the environments in which they occur can alter gene expression and affect long-term neural development. Today, discretionary screen time, often involving multiple devices, is the single main experience and environment of children. Various screen activities are reported to induce structural and functional brain plasticity in adults. However, childhood is a time of significantly greater changes in brain anatomical structure and connectivity. There is empirical evidence that extensive exposure to videogame playing during childhood may lead to neuroadaptation and structural changes in neural regions associated with addiction. Digital natives exhibit a higher prevalence of screen-related ‘addictive’ behaviour that reflect impaired neurological rewardprocessing and impulse-control mechanisms. Associations are emerging between screen dependency disorders such as Internet Addiction Disorder and specific neurogenetic polymorphisms, abnormal neural tissue and neural function. Although abnormal neural structural and functional characteristics may be a precondition rather than a consequence of addiction, there may also be a bidirectional relationship. As is the case with substance addictions, it is possible that intensive routine exposure to certain screen activities during critical stages of neural development may alter gene expression resulting in structural, synaptic and functional changes in the developing brain leading to screen dependency disorders, particularly in children with predisposing neurogenetic profiles. There may also be compound/secondary effects on neural development. Screen dependency disorders, even at subclinical levels, involve high levels of discretionary screen time, inducing greater child sedentary behaviour thereby reducing vital aerobic fitness, which plays an important role in the neurological health of children, particularly in brain structure and function. Child health policy must therefore adhere to the principle of precaution as a prudent approach to protecting child neurological integrity and well-being. This paper explains the basis of current paediatric neurological concerns surrounding screen dependency disorders and proposes preventive strategies for child neurology and allied professions.",
"title": ""
},
{
"docid": "910fdcf9e9af05b5d1cb70a9c88e4143",
"text": "We propose NEURAL ENQUIRER — a neural network architecture for answering natural language (NL) questions given a knowledge base (KB) table. Unlike previous work on end-to-end training of semantic parsers, NEURAL ENQUIRER is fully “neuralized”: it gives distributed representations of queries and KB tables, and executes queries through a series of differentiable operations. The model can be trained with gradient descent using both endto-end and step-by-step supervision. During training the representations of queries and the KB table are jointly optimized with the query execution logic. Our experiments show that the model can learn to execute complex NL queries on KB tables with rich structures.",
"title": ""
},
{
"docid": "c56c392e1a7d58912eeeb1718379fa37",
"text": "The changing face of technology has played an integral role in the development of the hotel and restaurant industry. The manuscript investigated the impact that technology has had on the hotel and restaurant industry. A detailed review of the literature regarding the growth of technology in the industry was linked to the development of strategic direction. The manuscript also looked at the strategic analysis methodology for evaluating and taking advantage of current and future technological innovations for the hospitality industry. Identification and implementation of these technologies can help in building a sustainable competitive advantage for hotels and restaurants.",
"title": ""
},
{
"docid": "1040e96ab179d5705eeb2983bdef31d3",
"text": "Neural network-based systems can now learn to locate the referents of words and phrases in images, answer questions about visual scenes, and even execute symbolic instructions as first-person actors in partially-observable worlds. To achieve this so-called grounded language learning, models must overcome certain well-studied learning challenges that are also fundamental to infants learning their first words. While it is notable that models with no meaningful prior knowledge overcome these learning obstacles, AI researchers and practitioners currently lack a clear understanding of exactly how they do so. Here we address this question as a way of achieving a clearer general understanding of grounded language learning, both to inform future research and to improve confidence in model predictions. For maximum control and generality, we focus on a simple neural network-based language learning agent trained via policy-gradient methods to interpret synthetic linguistic instructions in a simulated 3D world. We apply experimental paradigms from developmental psychology to this agent, exploring the conditions under which established human biases and learning effects emerge. We further propose a novel way to visualise and analyse semantic representation in grounded language learning agents that yields a plausible computational account of the observed effects.",
"title": ""
},
{
"docid": "b0d959bdb58fbcc5e324a854e9e07b81",
"text": "It is well known that the road signs play’s a vital role in road safety its ignorance results in accidents .This Paper proposes an Idea for road safety by using a RFID based traffic sign recognition system. By using it we can prevent the road risk up to a great extend.",
"title": ""
},
{
"docid": "659e71fb9274c47f369c37de751a91b2",
"text": "The Timed Up and Go (TUG) is a clinical test used widely to measure balance and mobility, e.g. in Parkinson's disease (PD). The test includes a sequence of functional activities, namely: sit-to-stand, 3-meters walk, 180° turn, walk back, another turn and sit on the chair. Meanwhile the stopwatch is used to score the test by measuring the time which the patients with PD need to perform the test. Here, the work presents an instrumented TUG using a wearable inertial sensor unit attached on the lower back of the person. The approach is used to automate the process of assessment compared with the manual evaluation by using visual observation and a stopwatch. The developed algorithm is based on the Dynamic Time Warping (DTW) for multi-dimensional time series and has been applied with the augmented feature for detection and duration assessment of turn state transitions, while a 1-dimensional DTW is used to detect the sit-to-stand and stand-to-sit phases. The feature set is a 3-dimensional vector which consists of the angular velocity, derived angle and features from Linear Discriminant Analysis (LDA). The algorithm was tested on 10 healthy individuals and 20 patients with PD (10 patients with early and late disease phases respectively). The test demonstrates that the developed technique can successfully extract the time information of the sit-to-stand, both turns and stand-to-sit transitions in the TUG test.",
"title": ""
},
{
"docid": "3e83f454f66e8aba14733205c8e19753",
"text": "BACKGROUND\nNormal-weight adults gain lower-body fat via adipocyte hyperplasia and upper-body subcutaneous (UBSQ) fat via adipocyte hypertrophy.\n\n\nOBJECTIVES\nWe investigated whether regional fat loss mirrors fat gain and whether the loss of lower-body fat is attributed to decreased adipocyte number or size.\n\n\nDESIGN\nWe assessed UBSQ, lower-body, and visceral fat gains and losses in response to overfeeding and underfeeding in 23 normal-weight adults (15 men) by using dual-energy X-ray absorptiometry and abdominal computed tomography scans. Participants gained ∼5% of weight in 8 wk and lost ∼80% of gained fat in 8 wk. We measured abdominal subcutaneous and femoral adipocyte sizes and numbers after weight gain and loss.\n\n\nRESULTS\nVolunteers gained 3.1 ± 2.1 (mean ± SD) kg body fat with overfeeding and lost 2.4 ± 1.7 kg body fat with underfeeding. Although UBSQ and visceral fat gains were completely reversed after 8 wk of underfeeding, lower-body fat had not yet returned to baseline values. Abdominal and femoral adipocyte sizes, but not numbers, decreased with weight loss. Decreases in abdominal adipocyte size and UBSQ fat mass were correlated (ρ = 0.76, P = 0.001), as were decreases in femoral adipocyte size and lower-body fat (ρ = 0.49, P = 0.05).\n\n\nCONCLUSIONS\nUBSQ and visceral fat increase and decrease proportionately with a short-term weight gain and loss, whereas a gain of lower-body fat does not relate to the loss of lower-body fat. The loss of lower-body fat is attributed to a reduced fat cell size, but not number, which may result in long-term increases in fat cell numbers.",
"title": ""
},
{
"docid": "2c8e50194e4b2238b9af86806323e2c5",
"text": "Previous research suggests a possible link between eveningness and general difficulties with self-regulation (e.g., evening types are more likely than other chronotypes to have irregular sleep schedules and social rhythms and use substances). Our study investigated the relationship between eveningness and self-regulation by using two standardized measures of self-regulation: the Self-Control Scale and the Procrastination Scale. We predicted that an eveningness preference would be associated with poorer self-control and greater procrastination than would an intermediate or morningness preference. Participants were 308 psychology students (mean age=19.92 yrs) at a small Canadian college. Students completed the self-regulation questionnaires and Morningness/Eveningness Questionnaire (MEQ) online. The mean MEQ score was 46.69 (SD=8.20), which is intermediate between morningness and eveningness. MEQ scores ranged from definite morningness to definite eveningness, but the dispersion of scores was skewed toward more eveningness. Pearson and partial correlations (controlling for age) were used to assess the relationship between MEQ score and the Self-Control Scale (global score and 5 subscale scores) and Procrastination Scale (global score). All correlations were significant. The magnitude of the effects was medium for all measures except one of the Self-Control subscales, which was small. A multiple regression analysis to predict MEQ score using the Self-Control Scale (global score), Procrastination Scale, and age as predictors indicated the Self-Control Scale was a significant predictor (accounting for 20% of the variance). A multiple regression analysis to predict MEQ scores using the five subscales of the Self-Control Scale and age as predictors showed the subscales for reliability and work ethic were significant predictors (accounting for 33% of the variance). Our study showed a relationship between eveningness and low self-control, but it did not address whether the relationship is a causal one.",
"title": ""
},
{
"docid": "81b3562907a19a12f02b82f927d89dc7",
"text": "Warehouse automation systems that use robots to save human labor are becoming increasingly common. In a previous study, a picking system using a multi-joint type robot was developed. However, articulated robots are not ideal in warehouse scenarios, since inter-shelf space can limit their freedom of motion. Although the use of linear motion-type robots has been suggested as a solution, their drawback is that an additional cable carrier is needed. The authors therefore propose a new configuration for a robot manipulator that uses wireless power transmission (WPT), which delivers power without physical contact except at the base of the robot arm. We describe here a WPT circuit design suitable for rotating and sliding-arm mechanisms. Overall energy efficiency was confirmed to be 92.0%.",
"title": ""
},
{
"docid": "3609f4923b9aebc3d18f31ac6ae78bea",
"text": "Cloud computing is playing an ever larger role in the IT infrastructure. The migration into the cloud means that we must rethink and adapt our security measures. Ultimately, both the cloud provider and the customer have to accept responsibilities to ensure security best practices are followed. Firewalls are one of the most critical security features. Most IaaS providers make firewalls available to their customers. In most cases, the customer assumes a best-case working scenario which is often not assured. In this paper, we studied the filtering behavior of firewalls provided by five different cloud providers. We found that three providers have firewalls available within their infrastructure. Based on our findings, we developed an open-ended firewall monitoring tool which can be used by cloud customers to understand the firewall's filtering behavior. This information can then be efficiently used for risk management and further security considerations. Measuring today's firewalls has shown that they perform well for the basics, although may not be fully featured considering fragmentation or stateful behavior.",
"title": ""
},
{
"docid": "b3f5d9335cccf62797c86b76fa2c9e7e",
"text": "For most families with elderly relatives, care within their own home is by far the most preferred option both for the elderly and their carers. However, frequently these carers are the partners of the person with long-term care needs, and themselves are elderly and in need of support to cope with the burdens and stress associated with these duties. When it becomes too much for them, they may have to rely on professional care services, or even use residential care for a respite. In order to support the carers as well as the elderly person, an ambient assisted living platform has been developed. The system records information about the activities of daily living using unobtrusive sensors within the home, and allows the carers to record their own wellbeing state. By providing facilities to schedule and monitor the activities of daily care, and providing orientation and advice to improve the care given and their own wellbeing, the system helps to reduce the burden on the informal carers. Received on 30 August 2016; accepted on 03 February 2017; published on 21 March 2017",
"title": ""
},
{
"docid": "60971d26877ef62b816526f13bd76c24",
"text": "Breast cancer is one of the leading causes of cancer death among women worldwide. In clinical routine, automatic breast ultrasound (BUS) image segmentation is very challenging and essential for cancer diagnosis and treatment planning. Many BUS segmentation approaches have been studied in the last two decades, and have been proved to be effective on private datasets. Currently, the advancement of BUS image segmentation seems to meet its bottleneck. The improvement of the performance is increasingly challenging, and only few new approaches were published in the last several years. It is the time to look at the field by reviewing previous approaches comprehensively and to investigate the future directions. In this paper, we study the basic ideas, theories, pros and cons of the approaches, group them into categories, and extensively review each category in depth by discussing the principles, application issues, and advantages/disadvantages. Keyword: breast ultrasound (BUS) images; breast cancer; segmentation; benchmark; early detection; computer-aided diagnosis (CAD)",
"title": ""
},
{
"docid": "da5562859bfed0057e0566679a4aca3d",
"text": "Machine-to-Machine (M2M) paradigm enables machines (sensors, actuators, robots, and smart meter readers) to communicate with each other with little or no human intervention. M2M is a key enabling technology for the cyber-physical systems (CPSs). This paper explores CPS beyond M2M concept and looks at futuristic applications. Our vision is CPS with distributed actuation and in-network processing. We describe few particular use cases that motivate the development of the M2M communication primitives tailored to large-scale CPS. M2M communications in literature were considered in limited extent so far. The existing work is based on small-scale M2M models and centralized solutions. Different sources discuss different primitives. Few existing decentralized solutions do not scale well. There is a need to design M2M communication primitives that will scale to thousands and trillions of M2M devices, without sacrificing solution quality. The main paradigm shift is to design localized algorithms, where CPS nodes make decisions based on local knowledge. Localized coordination and communication in networked robotics, for matching events and robots, were studied to illustrate new directions.",
"title": ""
},
{
"docid": "72d74a0eaa768f46b17bf75f1a059d3f",
"text": "Cloud gaming represents a highly interactive service whereby game logic is rendered in the cloud and streamed as a video to end devices. While benefits include the ability to stream high-quality graphics games to practically any end user device, drawbacks include high bandwidth requirements and very low latency. Consequently, a challenge faced by cloud gaming service providers is the design of algorithms for adapting video streaming parameters to meet the end user system and network resource constraints. In this paper, we conduct an analysis of the commercial NVIDIA GeForce NOW game streaming platform adaptation mechanisms in light of variable network conditions. We further conduct an empirical user study involving the GeForce NOW platform to assess player Quality of Experience when such adaptation mechanisms are employed. The results provide insight into limitations of the currently deployed mechanisms, as well as aim to provide input for the proposal of designing future video encoding adaptation strategies.",
"title": ""
}
] | scidocsrr |
e4ba62e072c6b93ff2d661792496595b | Game theory based mitigation of Interest flooding in Named Data Network | [
{
"docid": "e253fe7f481dc9fbd14a69e4c7d3bf23",
"text": "Current Internet is reaching the limits of its capabilities due to its function transition from host-to-host communication to content dissemination. Named Data Networking (NDN) - an instantiation of Content-Centric Networking approach, embraces this shift by stressing the content itself, rather than where it locates. NDN tries to provide better security and privacy than current Internet does, and resilience to Distributed Denial of Service (DDoS) is a significant issue. In this paper, we present a specific and concrete scenario of DDoS attack in NDN, where perpetrators make use of NDN's packet forwarding rules to send out Interest packets with spoofed names as attacking packets. Afterwards, we identify the victims of NDN DDoS attacks include both the hosts and routers. But the largest victim is not the hosts, but the routers, more specifically, the Pending Interest Table (PIT) within the router. PIT brings NDN many elegant features, but it suffers from vulnerability. We propose Interest traceback as a counter measure against the studied NDN DDoS attacks, which traces back to the originator of the attacking Interest packets. At last, we assess the harmful consequences brought by these NDN DDoS attacks and evaluate the Interest traceback counter measure. Evaluation results reveal that the Interest traceback method effectively mitigates the NDN DDoS attacks studied in this paper.",
"title": ""
}
] | [
{
"docid": "2f201cd1fe90e0cd3182c672110ce96d",
"text": "BACKGROUND\nFor many years, high dose radiation therapy was the standard treatment for patients with locally or regionally advanced non-small-cell lung cancer (NSCLC), despite a 5-year survival rate of only 3%-10% following such therapy. From May 1984 through May 1987, the Cancer and Leukemia Group B (CALGB) conducted a randomized trial that showed that induction chemotherapy before radiation therapy improved survival during the first 3 years of follow-up.\n\n\nPURPOSE\nThis report provides data for 7 years of follow-up of patients enrolled in the CALGB trial.\n\n\nMETHODS\nThe patient population consisted of individuals who had clinical or surgical stage III, histologically documented NSCLC; a CALGB performance status of 0-1; less than 5% loss of body weight in the 3 months preceding diagnosis; and radiographically visible disease. Patients were randomly assigned to receive either 1) cisplatin (100 mg/m2 body surface area intravenously on days 1 and 29) and vinblastine (5 mg/m2 body surface area intravenously weekly on days 1, 8, 15, 22, and 29) followed by radiation therapy with 6000 cGy given in 30 fractions beginning on day 50 (CT-RT group) or 2) radiation therapy with 6000 cGy alone beginning on day 1 (RT group) for a maximum duration of 6-7 weeks. Patients were evaluated for tumor regression if they had measurable or evaluable disease and were monitored for toxic effects, disease progression, and date of death.\n\n\nRESULTS\nThere were 78 eligible patients randomly assigned to the CT-RT group and 77 randomly assigned to the RT group. Both groups were similar in terms of sex, age, histologic cell type, performance status, substage of disease, and whether staging had been clinical or surgical. All patients had measurable or evaluable disease at the time of random assignment to treatment groups. Both groups received a similar quantity and quality of radiation therapy. As previously reported, the rate of tumor response, as determined radiographically, was 56% for the CT-RT group and 43% for the RT group (P = .092). After more than 7 years of follow-up, the median survival remains greater for the CT-RT group (13.7 months) than for the RT group (9.6 months) (P = .012) as ascertained by the logrank test (two-sided). The percentages of patients surviving after years 1 through 7 were 54, 26, 24, 19, 17, 13, and 13 for the CT-RT group and 40, 13, 10, 7, 6, 6, and 6 for the RT group.\n\n\nCONCLUSIONS\nLong-term follow-up confirms that patients with stage III NSCLC who receive 5 weeks of chemotherapy with cisplatin and vinblastine before radiation therapy have a 4.1-month increase in median survival. The use of sequential chemotherapy-radiotherapy increases the projected proportion of 5-year survivors by a factor of 2.8 compared with that of radiotherapy alone. However, inasmuch as 80%-85% of such patients still die within 5 years and because treatment failure occurs both in the irradiated field and at distant sites in patients receiving either sequential chemotherapy-radiotherapy or radiotherapy alone, the need for further improvements in both the local and systemic treatment of this disease persists.",
"title": ""
},
{
"docid": "60d6869cadebea71ef549bb2a7d7e5c3",
"text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.",
"title": ""
},
{
"docid": "564c71ca08e39063f5de01fa5c8e74a3",
"text": "The Internet of Things (IoT) is a latest concept of machine-to-machine communication, that also gave birth to several information security problems. Many traditional software solutions fail to address these security issues such as trustworthiness of remote entities. Remote attestation is a technique given by Trusted Computing Group (TCG) to monitor and verify this trustworthiness. In this regard, various remote validation methods have been proposed. However, static techniques cannot provide resistance to recent attacks e.g. the latest Heartbleed bug, and the recent high profile glibc attack on Linux operating system. In this research, we have designed and implemented a lightweight Linux kernel security module for IoT devices that is scalable enough to monitor multiple applications in the kernel space. The newly built technique can measure and report multiple application’s static and dynamic behavior simultaneously. Verification of behavior of applications is performed via machine learning techniques. The result shows that deviating behavior can be detected successfully by the verifier.",
"title": ""
},
{
"docid": "51344373373bf04846ee40b049b086b9",
"text": "We present a new algorithm for real-time hand tracking on commodity depth-sensing devices. Our method does not require a user-specific calibration session, but rather learns the geometry as the user performs live in front of the camera, thus enabling seamless virtual interaction at the consumer level. The key novelty in our approach is an online optimization algorithm that jointly estimates pose and shape in each frame, and determines the uncertainty in such estimates. This knowledge allows the algorithm to integrate per-frame estimates over time, and build a personalized geometric model of the captured user. Our approach can easily be integrated in state-of-the-art continuous generative motion tracking software. We provide a detailed evaluation that shows how our approach achieves accurate motion tracking for real-time applications, while significantly simplifying the workflow of accurate hand performance capture. We also provide quantitative evaluation datasets at http://gfx.uvic.ca/datasets/handy",
"title": ""
},
{
"docid": "d67c9703ee45ad306384bbc8fe11b50e",
"text": "Approximately thirty-four percent of people who experience acute low back pain (LBP) will have recurrent episodes. It remains unclear why some people experience recurrences and others do not, but one possible cause is a loss of normal control of the back muscles. We investigated whether the control of the short and long fibres of the deep back muscles was different in people with recurrent unilateral LBP from healthy participants. Recurrent unilateral LBP patients, who were symptom free during testing, and a group of healthy volunteers, participated. Intramuscular and surface electrodes recorded the electromyographic activity (EMG) of the short and long fibres of the lumbar multifidus and the shoulder muscle, deltoid, during a postural perturbation associated with a rapid arm movement. EMG onsets of the short and long fibres, relative to that of deltoid, were compared between groups, muscles, and sides. In association with a postural perturbation, short fibre EMG onset occurred later in participants with recurrent unilateral LBP than in healthy participants (p=0.022). The short fibres were active earlier than long fibres on both sides in the healthy participants (p<0.001) and on the non-painful side in the LBP group (p=0.045), but not on the previously painful side in the LBP group. Activity of deep back muscles is different in people with a recurrent unilateral LBP, despite the resolution of symptoms. Because deep back muscle activity is critical for normal spinal control, the current results provide the first evidence of a candidate mechanism for recurrent episodes.",
"title": ""
},
{
"docid": "efc82cbdc904f03a93fd6797024bf3cf",
"text": "We propose a novel extension of the encoder-decoder framework, called a review network. The review network is generic and can enhance any existing encoderdecoder model: in this paper, we consider RNN decoders with both CNN and RNN encoders. The review network performs a number of review steps with attention mechanism on the encoder hidden states, and outputs a thought vector after each review step; the thought vectors are used as the input of the attention mechanism in the decoder. We show that conventional encoder-decoders are a special case of our framework. Empirically, we show that our framework improves over state-ofthe-art encoder-decoder systems on the tasks of image captioning and source code captioning.1",
"title": ""
},
{
"docid": "5fcb9873afd16e6705ab77d7e59aa453",
"text": "Charging PEVs (Plug-In Electric Vehicles) at public fast charging station can improve the public acceptance and increase their penetration level by solving problems related to vehicles' battery. However, the price for the impact of fast charging stations on the distribution grid has to be dealt with. The main purpose of this paper is to investigate the impacts of fast charging stations on a distribution grid using a stochastic fast charging model and to present the charging model with some of its results. The model is used to investigate the impacts on distribution transformer loading and system bus voltage profiles of the test distribution grid. Stochastic and deterministic modelling approaches are also compared. It is concluded that fast charging stations affect transformer loading and system bus voltage profiles. Hence, necessary measures such as using local energy storage and voltage conditioning devices, such as SVC (Static Var Compensator), have to be used at the charging station to handle the problems. It is also illustrated that stochastic modelling approach can produce a more sound and realistic results than deterministic approach.",
"title": ""
},
{
"docid": "107436d5f38f3046ef28495a14cc5caf",
"text": "There is a universal standard for facial beauty regardless of race, age, sex and other variables. Beautiful faces have ideal facial proportion. Ideal proportion is directly related to divine proportion, and that proportion is 1 to 1.618. All living organisms, including humans, are genetically encoded to develop to this proportion because there are extreme esthetic and physiologic benefits. The vast majority of us are not perfectly proportioned because of environmental factors. Establishment of a universal standard for facial beauty will significantly simplify the diagnosis and treatment of facial disharmonies and abnormalities. More important, treating to this standard will maximize facial esthetics, TMJ health, psychologic and physiologic health, fertility, and quality of life.",
"title": ""
},
{
"docid": "b88a79221efb5afc717cb2f97761271d",
"text": "BACKGROUND\nLymphangitic streaking, characterized by linear erythema on the skin, is most commonly observed in the setting of bacterial infection. However, a number of nonbacterial causes can result in lymphangitic streaking. We sought to elucidate the nonbacterial causes of lymphangitic streaking that may mimic bacterial infection to broaden clinicians' differential diagnosis for patients presenting with lymphangitic streaking.\n\n\nMETHODS\nWe performed a review of the literature, including all available reports pertaining to nonbacterial causes of lymphangitic streaking.\n\n\nRESULTS\nVarious nonbacterial causes can result in lymphangitic streaking, including viral and fungal infections, insect or spider bites, and iatrogenic etiologies.\n\n\nCONCLUSION\nAwareness of potential nonbacterial causes of superficial lymphangitis is important to avoid misdiagnosis and delay the administration of appropriate care.",
"title": ""
},
{
"docid": "3269b3574b19a976de305c99f9529fcd",
"text": "The objective of this master thesis is to identify \" key-drivers \" embedded in customer satisfaction data. The data was collected by a large transportation sector corporation during five years and in four different countries. The questionnaire involved several different sections of questions and ranged from demographical information to satisfaction attributes with the vehicle, dealer and several problem areas. Various regression, correlation and cooperative game theory approaches were used to identify the key satisfiers and dissatisfiers. The theoretical and practical advantages of using the Shapley value, Canonical Correlation Analysis and Hierarchical Logistic Regression has been demonstrated and applied to market research. ii iii Acknowledgements",
"title": ""
},
{
"docid": "18883fdb506d235fdf72b46e76923e41",
"text": "The Ponseti method for the management of idiopathic clubfoot has recently experienced a rise in popularity, with several centers reporting excellent outcomes. The challenge in achieving a successful outcome with this method lies not in correcting deformity but in preventing relapse. The most common cause of relapse is failure to adhere to the prescribed postcorrective bracing regimen. Socioeconomic status, cultural factors, and physician-parent communication may influence parental compliance with bracing. New, more user-friendly braces have been introduced in the hope of improving the rate of compliance. Strategies that may be helpful in promoting adherence include educating the family at the outset about the importance of bracing, encouraging calls and visits to discuss problems, providing clear written instructions, avoiding or promptly addressing skin problems, and refraining from criticism of the family when noncompliance is evident. A strong physician-family partnership and consideration of underlying cognitive, socioeconomic, and cultural issues may lead to improved adherence to postcorrective bracing protocols and better patient outcomes.",
"title": ""
},
{
"docid": "3021929187465029b9761aeb3eb20580",
"text": "We show that a deep convolutional network with an architecture inspired by the models used in image recognition can yield accuracy similar to a long-short term memory (LSTM) network, which achieves the state-of-the-art performance on the standard Switchboard automatic speech recognition task. Moreover, we demonstrate that merging the knowledge in the CNN and LSTM models via model compression further improves the accuracy of the convolutional model.",
"title": ""
},
{
"docid": "45c006e52bdb9cfa73fd4c0ebf692dfe",
"text": "Main memory capacities have grown up to a point where most databases fit into RAM. For main-memory database systems, index structure performance is a critical bottleneck. Traditional in-memory data structures like balanced binary search trees are not efficient on modern hardware, because they do not optimally utilize on-CPU caches. Hash tables, also often used for main-memory indexes, are fast but only support point queries. To overcome these shortcomings, we present ART, an adaptive radix tree (trie) for efficient indexing in main memory. Its lookup performance surpasses highly tuned, read-only search trees, while supporting very efficient insertions and deletions as well. At the same time, ART is very space efficient and solves the problem of excessive worst-case space consumption, which plagues most radix trees, by adaptively choosing compact and efficient data structures for internal nodes. Even though ART's performance is comparable to hash tables, it maintains the data in sorted order, which enables additional operations like range scan and prefix lookup.",
"title": ""
},
{
"docid": "11c106ac9e7002d138af49f1bf303c88",
"text": "The main purpose of Feature Subset Selection is to find a reduced subset of attributes from a data set described by a feature set. The task of a feature selection algorithm (FSA) is to provide with a computational solution motivated by a certain definition of relevance or by a reliable evaluation measure. In this paper several fundamental algorithms are studied to assess their performance in a controlled experimental scenario. A measure to evaluate FSAs is devised that computes the degree of matching between the output given by a FSA and the known optimal solutions. An extensive experimental study on synthetic problems is carried out to assess the behaviour of the algorithms in terms of solution accuracy and size as a function of the relevance, irrelevance, redundancy and size of the data samples. The controlled experimental conditions facilitate the derivation of better-supported and meaningful conclusions.",
"title": ""
},
{
"docid": "f8093849e9157475149d00782c60ae60",
"text": "Social media use, potential and challenges in innovation have received little attention in literature, especially from the standpoint of the business-to-business sector. Therefore, this paper focuses on bridging this gap with a survey of social media use, potential and challenges, combined with a social media - focused innovation literature review of state-of-the-art. The study also studies the essential differences between business-to-consumer and business-to-business in the above respects. The paper starts by defining of social media and web 2.0, and then characterizes social media in business, social media in business-to-business sector and social media in business-to-business innovation. Finally we present and analyze the results of our empirical survey of 122 Finnish companies. This paper suggests that there is a significant gap between perceived potential of social media and social media use in innovation activity in business-to-business companies, recognizes potentially effective ways to reduce the gap, and clarifies the found differences between B2B's and B2C's.",
"title": ""
},
{
"docid": "79fd1db13ce875945c7e11247eb139c8",
"text": "This paper provides a comprehensive review of outcome studies and meta-analyses of effectiveness studies of psychodynamic therapy (PDT) for the major categories of mental disorders. Comparisons with inactive controls (waitlist, treatment as usual and placebo) generally but by no means invariably show PDT to be effective for depression, some anxiety disorders, eating disorders and somatic disorders. There is little evidence to support its implementation for post-traumatic stress disorder, obsessive-compulsive disorder, bulimia nervosa, cocaine dependence or psychosis. The strongest current evidence base supports relatively long-term psychodynamic treatment of some personality disorders, particularly borderline personality disorder. Comparisons with active treatments rarely identify PDT as superior to control interventions and studies are generally not appropriately designed to provide tests of statistical equivalence. Studies that demonstrate inferiority of PDT to alternatives exist, but are small in number and often questionable in design. Reviews of the field appear to be subject to allegiance effects. The present review recommends abandoning the inherently conservative strategy of comparing heterogeneous \"families\" of therapies for heterogeneous diagnostic groups. Instead, it advocates using the opportunities provided by bioscience and computational psychiatry to creatively explore and assess the value of protocol-directed combinations of specific treatment components to address the key problems of individual patients.",
"title": ""
},
{
"docid": "6902e1604957fa21adbe90674bf5488d",
"text": "State-of-the-art distributed RDF systems partition data across multiple computer nodes (workers). Some systems perform cheap hash partitioning, which may result in expensive query evaluation. Others try to minimize inter-node communication, which requires an expensive data preprocessing phase, leading to a high startup cost. Apriori knowledge of the query workload has also been used to create partitions, which, however, are static and do not adapt to workload changes. In this paper, we propose AdPart, a distributed RDF system, which addresses the shortcomings of previous work. First, AdPart applies lightweight partitioning on the initial data, which distributes triples by hashing on their subjects; this renders its startup overhead low. At the same time, the locality-aware query optimizer of AdPart takes full advantage of the partitioning to (1) support the fully parallel processing of join patterns on subjects and (2) minimize data communication for general queries by applying hash distribution of intermediate results instead of broadcasting, wherever possible. Second, AdPart monitors the data access patterns and dynamically redistributes and replicates the instances of the most frequent ones among workers. As a result, the communication cost for future queries is drastically reduced or even eliminated. To control replication, AdPart implements an eviction policy for the redistributed patterns. Our experiments with synthetic and real data verify that AdPart: (1) starts faster than all existing systems; (2) processes thousands of queries before other systems become online; and (3) gracefully adapts to the query load, being able to evaluate queries on billion-scale RDF data in subseconds.",
"title": ""
},
{
"docid": "f3467adcca693e015c9dcc85db04d492",
"text": "For urban driving, knowledge of ego-vehicle’s position is a critical piece of information that enables advanced driver-assistance systems or self-driving cars to execute safety-related, autonomous driving maneuvers. This is because, without knowing the current location, it is very hard to autonomously execute any driving maneuvers for the future. The existing solutions for localization rely on a combination of a Global Navigation Satellite System, an inertial measurement unit, and a digital map. However, in urban driving environments, due to poor satellite geometry and disruption of radio signal reception, their longitudinal and lateral errors are too significant to be used for an autonomous system. To enhance the existing system’s localization capability, this work presents an effort to develop a vision-based lateral localization algorithm. The algorithm aims at reliably counting, with or without observations of lane-markings, the number of road-lanes and identifying the index of the road-lane on the roadway upon which our vehicle happens to be driving. Tests of the proposed algorithms against intercity and interstate highway videos showed promising results in terms of counting the number of road-lanes and the indices of the current road-lanes. C © 2015 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "5536f306c3633874299be57a19e35c01",
"text": "0957-4174/$ see front matter 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.eswa.2013.04.023 ⇑ Corresponding author. Tel.: +55 8197885665. E-mail addresses: [email protected] (Rafael Ferreira), [email protected] (L. de Souza Cabral), [email protected] (R.D. Lins), [email protected] (G. Pereira e Silva), [email protected] (F. Freitas), [email protected] (G.D.C. Cavalcanti), rjlima01@gmail. com (R. Lima), [email protected] (S.J. Simske), [email protected] (L. Favaro). Rafael Ferreira a,⇑, Luciano de Souza Cabral , Rafael Dueire Lins , Gabriel Pereira e Silva , Fred Freitas , George D.C. Cavalcanti , Rinaldo Lima , Steven J. Simske , Luciano Favaro c",
"title": ""
}
] | scidocsrr |
0055f77f1266c96c41d00c41c17015df | Query Rewriting for Horn-SHIQ Plus Rules | [
{
"docid": "205a5a9a61b6ac992f01c8c2fc09678a",
"text": "We present the OWL API, a high level Application Programming Interface (API) for working with OWL ontologies. The OWL API is closely aligned with the OWL 2 structural specification. It supports parsing and rendering in the syntaxes defined in the W3C specification (Functional Syntax, RDF/XML, OWL/XML and the Manchester OWL Syntax); manipulation of ontological structures; and the use of reasoning engines. The reference implementation of the OWL API, written in Java, includes validators for the various OWL 2 profiles OWL 2 QL, OWL 2 EL and OWL 2 RL. The OWL API has widespread usage in a variety of tools and applications.",
"title": ""
},
{
"docid": "de53086ad6d2f3a2c69aa37dde35bee7",
"text": "Towards the integration of rules and ontologies in the Semantic Web, we propose a combination of logic programming under the answer set semantics with the description logics SHIF(D) and SHOIN (D), which underly the Web ontology languages OWL Lite and OWL DL, respectively. This combination allows for building rules on top of ontologies but also, to a limited extent, building ontologies on top of rules. We introduce description logic programs (dl-programs), which consist of a description logic knowledge base L and a finite set of description logic rules (dl-rules) P . Such rules are similar to usual rules in logic programs with negation as failure, but may also contain queries to L, possibly default-negated, in their bodies. We define Herbrand models for dl-programs, and show that satisfiable positive dl-programs have a unique least Herbrand model. More generally, consistent stratified dl-programs can be associated with a unique minimal Herbrand model that is characterized through iterative least Herbrand models. We then generalize the (unique) minimal Herbrand model semantics for positive and stratified dl-programs to a strong answer set semantics for all dl-programs, which is based on a reduction to the least model semantics of positive dl-programs. We also define a weak answer set semantics based on a reduction to the answer sets of ordinary logic programs. Strong answer sets are weak answer sets, and both properly generalize answer sets of ordinary normal logic programs. We then give fixpoint characterizations for the (unique) minimal Herbrand model semantics of positive and stratified dl-programs, and show how to compute these models by finite fixpoint iterations. Furthermore, we give a precise picture of the complexity of deciding strong and weak answer set existence for a dl-program. 1Institut für Informationssysteme, Technische Universität Wien, Favoritenstraße 9-11, A-1040 Vienna, Austria; e-mail: {eiter, lukasiewicz, roman, tompits}@kr.tuwien.ac.at. 2Dipartimento di Informatica e Sistemistica, Università di Roma “La Sapienza”, Via Salaria 113, I-00198 Rome, Italy; e-mail: [email protected]. Acknowledgements: This work has been partially supported by the Austrian Science Fund project Z29N04 and a Marie Curie Individual Fellowship of the European Community programme “Human Potential” under contract number HPMF-CT-2001-001286 (disclaimer: The authors are solely responsible for information communicated and the European Commission is not responsible for any views or results expressed). We would like to thank Ian Horrocks and Ulrike Sattler for providing valuable information on complexityrelated issues during the preparation of this paper. Copyright c © 2004 by the authors INFSYS RR 1843-03-13 I",
"title": ""
}
] | [
{
"docid": "b2deb2c8ca5d03a2bd4651846c5a6d7c",
"text": "With the increasing user demand for elastic provisioning of resources coupled with ubiquitous and on-demand access to data, cloud computing has been recognized as an emerging technology to meet such dynamic user demands. In addition, with the introduction and rising use of mobile devices, the Internet of Things (IoT) has recently received considerable attention since the IoT has brought physical devices and connected them to the Internet, enabling each device to share data with surrounding devices and virtualized technologies in real-time. Consequently, the exploding data usage requires a new, innovative computing platform that can provide robust real-time data analytics and resource provisioning to clients. As a result, fog computing has recently been introduced to provide computation, storage and networking services between the end-users and traditional cloud computing data centers. This paper proposes a policy-based management of resources in fog computing, expanding the current fog computing platform to support secure collaboration and interoperability between different user-requested resources in fog computing.",
"title": ""
},
{
"docid": "7b6cf139cae3e9dae8a2886ddabcfef0",
"text": "An enhanced automated material handling system (AMHS) that uses a local FOUP buffer at each tool is presented as a method of enabling lot size reduction and parallel metrology sampling in the photolithography (litho) bay. The local FOUP buffers can be integrated with current OHT AMHS systems in existing fabs with little or no change to the AMHS or process equipment. The local buffers enhance the effectiveness of the OHT by eliminating intermediate moves to stockers, increasing the move rate capacity by 15-20%, and decreasing the loadport exchange time to 30 seconds. These enhancements can enable the AMHS to achieve the high move rates compatible with lot size reduction down to 12-15 wafers per FOUP. The implementation of such a system in a photolithography bay could result in a 60-74% reduction in metrology delay time, which is the time between wafer exposure at a litho tool and collection of metrology and inspection data.",
"title": ""
},
{
"docid": "4cf6a69833d7e553f0818aa72c99c938",
"text": "Work on the semantics of questions has argued that the relation between a question and its answer(s) can be cast in terms of logical entailment. In this paper, we demonstrate how computational systems designed to recognize textual entailment can be used to enhance the accuracy of current open-domain automatic question answering (Q/A) systems. In our experiments, we show that when textual entailment information is used to either filter or rank answers returned by a Q/A system, accuracy can be increased by as much as 20% overall.",
"title": ""
},
{
"docid": "1011879d0447a1e1ce2bd9b449daf15b",
"text": "Coreless substrates have been used in more and more advanced package designs for their benefits in electrical performance and reduction in thickness. However, coreless substrate causes severe package warpage due to the lack of a rigid and low CTE core. In this paper, both experimental measured warpage data and model simulation data are presented and illustrate that asymmetric designs in substrate thickness direction are capable of improving package warpage when compared to the traditional symmetric design. A few asymmetric design options are proposed, including Cu layer thickness asymmetric design, dielectric layer thickness asymmetric design and dielectric material property asymmetric design. These design options are then studied in depth by simulation to understand their mechanism and quantify their effectiveness for warpage improvement. From the results, it is found that the dielectric material property asymmetric design is the most effective option to improve package warpage, especially when using a lower CTE dielectric in the bottom layers of the substrate and a high CTE dielectric in top layers. Cu layer thickness asymmetric design is another effective way for warpage reduction. The bottom Cu layers should be thinner than the top Cu layers. It is also found that the dielectric layer thickness asymmetric design is only effective for high layer count substrate. It is not effective for low layer count substrate. In this approach, the bottom dielectric layers should be thicker than the top dielectric layers. Furthermore, the results show the asymmetric substrate designs are usually more effective for warpage improvement at high temperature than at room temperature. They are also more effective for a high layer count substrate than a low layer count substrate.",
"title": ""
},
{
"docid": "9dd75e407c25d46aa0eb303a948985b1",
"text": "Being a corner stone of the New testament and Christian religion, the evangelical narration about Jesus Christ crucifixion had been drawing attention of many millions people, both Christians and representatives of other religions and convictions, almost for two thousand years.If in the last centuries the crucifixion was considered mainly from theological and historical positions, the XX century was marked by surge of medical and biological researches devoted to investigation of thanatogenesis of the crucifixion. However the careful analysis of the suggested concepts of death at the crucifixion shows that not all of them are well-founded. Moreover, some authors sometimes do not consider available historic facts.Not only the analysis of the original Greek text of the Gospel is absent in the published works but authors ignore the Gospel itself at times.",
"title": ""
},
{
"docid": "cb9a54b8eeb6ca14bdbdf8ee3faa8bdb",
"text": "The problem of auto-focusing has been studied for long, but most techniques found in literature do not always work well for low-contrast images. In this paper, a robust focus measure based on the energy of the image is proposed. It performs equally well on ordinary and low-contrast images. In addition, it is computationally efficient.",
"title": ""
},
{
"docid": "a0b8475e0f50bc603d2280c4dcea8c0f",
"text": "We provide data on the extent to which computer-related audit procedures are used and whether two factors, control risk assessment and audit firm size, influence computer-related audit procedures use. We used a field-based questionnaire to collect data from 181 auditors representing Big 4, national, regional, and local firms. Results indicate that computer-related audit procedures are generally used when obtaining an understanding of the client system and business processes and testing computer controls. Furthermore, 42.9 percent of participants indicate that they relied on internal controls; however, this percentage increases significantly for auditors at Big 4 firms. Finally, our results raise questions for future research regarding computer-related audit procedure use.",
"title": ""
},
{
"docid": "aa1a97f8f6f9f1c2627f63e1ec13e8cf",
"text": "In this paper, we review recent emerging theoretical and technological advances of artificial intelligence (AI) in the big data settings. We conclude that integrating data-driven machine learning with human knowledge (common priors or implicit intuitions) can effectively lead to explainable, robust, and general AI, as follows: from shallow computation to deep neural reasoning; from merely data-driven model to data-driven with structured logic rules models; from task-oriented (domain-specific) intelligence (adherence to explicit instructions) to artificial general intelligence in a general context (the capability to learn from experience). Motivated by such endeavors, the next generation of AI, namely AI 2.0, is positioned to reinvent computing itself, to transform big data into structured knowledge, and to enable better decision-making for our society.",
"title": ""
},
{
"docid": "3205d04f2f5648397ee1524b682ad938",
"text": "Sequential models achieve state-of-the-art results in audio, visual and textual domains with respect to both estimating the data distribution and generating high-quality samples. Efficient sampling for this class of models has however remained an elusive problem. With a focus on text-to-speech synthesis, we describe a set of general techniques for reducing sampling time while maintaining high output quality. We first describe a single-layer recurrent neural network, the WaveRNN, with a dual softmax layer that matches the quality of the state-of-the-art WaveNet model. The compact form of the network makes it possible to generate 24 kHz 16-bit audio 4× faster than real time on a GPU. Second, we apply a weight pruning technique to reduce the number of weights in the WaveRNN. We find that, for a constant number of parameters, large sparse networks perform better than small dense networks and this relationship holds for sparsity levels beyond 96%. The small number of weights in a Sparse WaveRNN makes it possible to sample high-fidelity audio on a mobile CPU in real time. Finally, we propose a new generation scheme based on subscaling that folds a long sequence into a batch of shorter sequences and allows one to generate multiple samples at once. The Subscale WaveRNN produces 16 samples per step without loss of quality and offers an orthogonal method for increasing sampling efficiency.",
"title": ""
},
{
"docid": "8470245ef870eb5246d65fa3eb1e760a",
"text": "Educational spaces play an important role in enhancing learning productivity levels of society people as the most important places to human train. Considering the cost, time and energy spending on these spaces, trying to design efficient and optimized environment is a necessity. Achieving efficient environments requires changing environmental criteria so that they can have a positive impact on the activities and learning in users. Therefore, creating suitable conditions for promoting learning in users requires full utilization of the comprehensive knowledge of architecture and the design of the physical environment with respect to the environmental, social and aesthetic dimensions; Which will naturally increase the usefulness of people in space and make optimal use of the expenses spent on building schools and the time spent on education and training.The main aim of this study was to find physical variables affecting on increasing productivity in learning environments. This study is quantitative-qualitative and was done in two research methods: a) survey research methods (survey) b) correlation method. The samples were teachers and students in secondary schools’ in Zahedan city, the sample size was 310 people. Variables were extracted using the literature review and deep interviews with professors and experts. The questionnaire was obtained using variables and it is used to collect the views of teachers and students. Cronbach’s alpha coefficient was 0.89 which indicates that the information gathering tool is acceptable. The findings shows that there are four main physical factor as: 1. Physical comfort, 2. Space layouts, 3. Psychological factors and 4. Visual factors thet they are affecting positively on space productivity. Each of the environmental factors play an important role in improving the learning quality and increasing interest in attending learning environments; therefore, the desired environment improves the productivity of the educational spaces by improving the components of productivity.",
"title": ""
},
{
"docid": "ad004dd47449b977cd30f2454c5af77a",
"text": "Plants are a tremendous source for the discovery of new products of medicinal value for drug development. Today several distinct chemicals derived from plants are important drugs currently used in one or more countries in the world. Many of the drugs sold today are simple synthetic modifications or copies of the naturally obtained substances. The evolving commercial importance of secondary metabolites has in recent years resulted in a great interest in secondary metabolism, particularly in the possibility of altering the production of bioactive plant metabolites by means of tissue culture technology. Plant cell culture technologies were introduced at the end of the 1960’s as a possible tool for both studying and producing plant secondary metabolites. Different strategies, using an in vitro system, have been extensively studied to improve the production of plant chemicals. The focus of the present review is the application of tissue culture technology for the production of some important plant pharmaceuticals. Also, we describe the results of in vitro cultures and production of some important secondary metabolites obtained in our laboratory.",
"title": ""
},
{
"docid": "b89d42f836730a782a9b0f5df5bbd5bd",
"text": "This paper proposes a new usability evaluation checklist, UseLearn, and a related method for eLearning systems. UseLearn is a comprehensive checklist which incorporates both quality and usability evaluation perspectives in eLearning systems. Structural equation modeling is deployed to validate the UseLearn checklist quantitatively. The experimental results show that the UseLearn method supports the determination of usability problems by criticality metric analysis and the definition of relevant improvement strategies. The main advantage of the UseLearn method is the adaptive selection of the most influential usability problems, and thus significant reduction of the time and effort for usability evaluation can be achieved. At the sketching and/or design stage of eLearning systems, it will provide an effective guidance to usability analysts as to what problems should be focused on in order to improve the usability perception of the end-users. Relevance to industry: During the sketching or design stage of eLearning platforms, usability problems should be revealed and eradicated to create more usable and quality eLearning systems to satisfy the end-users. The UseLearn checklist along with its quantitative methodology proposed in this study would be helpful for usability experts to achieve this goal. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1768ecf6a2d8a42ea701d7f242edb472",
"text": "Satisfaction prediction is one of the prime concerns in search performance evaluation. It is a non-trivial task for two major reasons: (1) The definition of satisfaction is rather subjective and different users may have different opinions in satisfaction judgement. (2) Most existing studies on satisfaction prediction mainly rely on users' click-through or query reformulation behaviors but there are many sessions without such kind of interactions. To shed light on these research questions, we construct an experimental search engine that could collect users' satisfaction feedback as well as mouse click-through/movement data. Different from existing studies, we compare for the first time search users' and external assessors' opinions on satisfaction. We find that search users pay more attention to the utility of results while external assessors emphasize on the efforts spent in search sessions. Inspired by recent studies in predicting result relevance based on mouse movement patterns (namely motifs), we propose to estimate the utilities of search results and the efforts in search sessions with motifs extracted from mouse movement data on search result pages (SERPs). Besides the existing frequency-based motif selection method, two novel selection strategies (distance-based and distribution-based) are also adopted to extract high quality motifs for satisfaction prediction. Experimental results on over 1,000 user sessions show that the proposed strategies outperform existing methods and also have promising generalization capability for different users and queries.",
"title": ""
},
{
"docid": "8574612823cccbb5f8bcc80532dae74e",
"text": "The decentralized cryptocurrency Bitcoin has experienced great success but also encountered many challenges. One of the challenges has been the long confirmation time and low transaction throughput. Another challenge is the lack of incentives at certain steps of the protocol, raising concerns for transaction withholding, selfish mining, etc. To address these challenges, we propose Solidus, a decentralized cryptocurrency based on permissionless Byzantine consensus. A core technique in Solidus is to use proof of work for leader election to adapt the Practical Byzantine Fault Tolerance (PBFT) protocol to a permissionless setting. We also design Solidus to be incentive compatible and to mitigate selfish mining. Solidus improves on Bitcoin in confirmation time, and provides safety and liveness assuming Byzantine players and the largest coalition of rational players collectively control less than one-third of the computation power.",
"title": ""
},
{
"docid": "4f2112175c5d8175c5c0f8cb4d9185a2",
"text": "It is difficult to fully assess the quality of software inhouse, outside the actual time and context in which it will execute after deployment. As a result, it is common for software to manifest field failures, failures that occur on user machines due to untested behavior. Field failures are typically difficult to recreate and investigate on developer platforms, and existing techniques based on crash reporting provide only limited support for this task. In this paper, we present a technique for recording, reproducing, and minimizing failing executions that enables and supports inhouse debugging of field failures. We also present a tool that implements our technique and an empirical study that evaluates the technique on a widely used e-mail client.",
"title": ""
},
{
"docid": "aeb039a1e5ae76bf8e928e6b8cbfdf7f",
"text": "ZHENG, Traditional Chinese Medicine syndrome, is an integral and essential part of Traditional Chinese Medicine theory. It defines the theoretical abstraction of the symptom profiles of individual patients and thus, used as a guideline in disease classification in Chinese medicine. For example, patients suffering from gastritis may be classified as Cold or Hot ZHENG, whereas patients with different diseases may be classified under the same ZHENG. Tongue appearance is a valuable diagnostic tool for determining ZHENG in patients. In this paper, we explore new modalities for the clinical characterization of ZHENG using various supervised machine learning algorithms. We propose a novel-color-space-based feature set, which can be extracted from tongue images of clinical patients to build an automated ZHENG classification system. Given that Chinese medical practitioners usually observe the tongue color and coating to determine a ZHENG type and to diagnose different stomach disorders including gastritis, we propose using machine-learning techniques to establish the relationship between the tongue image features and ZHENG by learning through examples. The experimental results obtained over a set of 263 gastritis patients, most of whom suffering Cold Zheng or Hot ZHENG, and a control group of 48 healthy volunteers demonstrate an excellent performance of our proposed system.",
"title": ""
},
{
"docid": "9d7f80a70838f2ea9962de95e2b71827",
"text": "In this paper, one new machine family, i.e. named as flux-modulation machines which produce steady torque based on flux-modulation effect is proposed. The typical model including three components-one flux modulator, one armature and one excitation field exciters of flux-modulation machines is built. The torque relationships among the three components are developed based on the principle of electromechanical energy conversion. Then, some structure and performance features of flux-modulation machines are summarized, through which the flux-modulation topology distinguish criterion is proposed for the first time. Flux-modulation topologies can be further classified into stationary flux modulator, stationary excitation field, stationary armature field and dual-mechanical port flux-modulation machines. Many existed topologies, such as vernier, switched flux, flux reversal and transverse machines, are demonstrated that they can be classified into the flux-modulation family based on the criterion, and the processes how to convert typical models of flux-modulation machines to these machines are also given in this paper. Furthermore, in this new machine family, developed and developing theories on the vernier, switched flux, flux reversal and transverse machines can be shared with each other as well as some novel topologies in such a machine category. Based on the flux modulation principle, the nature and general theory, such as torque, power factor expressions and so on, of the flux-modulation machines are investigated. In additions, flux-modulation induction and electromagnetic transmission topologies are predicted and analyzed to enrich the flux-modulation electromagnetic topology family and the prospective applications are highlighted. Finally, one vernier permanent magnet prototype has been built and tested to verify the analysis results.",
"title": ""
},
{
"docid": "836eb904c483cd157807302997dd1aac",
"text": "Recent improvements in both the performance and scalability of shared-nothing, transactional, in-memory NewSQL databases have reopened the research question of whether distributed metadata for hierarchical file systems can be managed using commodity databases. In this paper, we introduce HopsFS, a next generation distribution of the Hadoop Distributed File System (HDFS) that replaces HDFS’ single node in-memory metadata service, with a distributed metadata service built on a NewSQL database. By removing the metadata bottleneck, HopsFS enables an order of magnitude larger and higher throughput clusters compared to HDFS. Metadata capacity has been increased to at least 37 times HDFS’ capacity, and in experiments based on a workload trace from Spotify, we show that HopsFS supports 16 to 37 times the throughput of Apache HDFS. HopsFS also has lower latency for many concurrent clients, and no downtime during failover. Finally, as metadata is now stored in a commodity database, it can be safely extended and easily exported to external systems for online analysis and free-text search.",
"title": ""
},
{
"docid": "a7135b1e6b9f5a506791915c2344c8b2",
"text": "There has been extensive research focusing on developing smart environments by integrating data mining techniques into environments that are equipped with sensors and actuators. The ultimate goal is to reduce the energy consumption in buildings while maintaining a maximum comfort level for occupants. However, there are few studies successfully demonstrating energy savings from occupancy behavioural patterns that have been learned in a smart environment because of a lack of a formal connection to building energy management systems. In this study, the objective is to develop and implement algorithms for sensor-based modelling and prediction of user behaviour in intelligent buildings and connect the behavioural patterns to building energy and comfort management systems through simulation tools. The results are tested on data from a room equipped with a distributed set of sensors, and building simulations through EnergyPlus suggest potential energy savings of 30% while maintaining an indoor comfort level when compared with other basic energy savings HVAC control strategies.",
"title": ""
},
{
"docid": "e7ac73f581ae7799021374ddd3e4d3a2",
"text": "Table: Coherence evaluation results on Discrimination and Insertion tasks. † indicates a neural model is significantly superior to its non-neural counterpart with p-value < 0.01. Discr. Ins. Acc F1 Random 50.00 50.00 12.60 Graph-based (G&S) 64.23 65.01 11.93 Dist. sentence (L&H) 77.54 77.54 19.32 Grid-all nouns (E&C) 81.58 81.60 22.13 Extended Grid (E&C) 84.95 84.95 23.28 Grid-CNN 85.57† 85.57† 23.12 Extended Grid-CNN 88.69† 88.69† 25.95†",
"title": ""
}
] | scidocsrr |
2b59c3f8ca29f7ebafd26cf004517e8c | Chainsaw: Chained Automated Workflow-based Exploit Generation | [
{
"docid": "d0eb7de87f3d6ed3fd6c34a1f0ce47a1",
"text": "STRANGER is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in P H applications. STRANGER uses symbolic forward and backward reachability analyses t o compute the possible values that the string expressions can take during progr am execution. STRANGER can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that c racterize all malicious inputs that can be used to generate attacks.",
"title": ""
}
] | [
{
"docid": "279c377e12cdb8aec7242e0e9da2dd26",
"text": "It is well accepted that pain is a multidimensional experience, but little is known of how the brain represents these dimensions. We used positron emission tomography (PET) to indirectly measure pain-evoked cerebral activity before and after hypnotic suggestions were given to modulate the perceived intensity of a painful stimulus. These techniques were similar to those of a previous study in which we gave suggestions to modulate the perceived unpleasantness of a noxious stimulus. Ten volunteers were scanned while tonic warm and noxious heat stimuli were presented to the hand during four experimental conditions: alert control, hypnosis control, hypnotic suggestions for increased-pain intensity and hypnotic suggestions for decreased-pain intensity. As shown in previous brain imaging studies, noxious thermal stimuli presented during the alert and hypnosis-control conditions reliably activated contralateral structures, including primary somatosensory cortex (S1), secondary somatosensory cortex (S2), anterior cingulate cortex, and insular cortex. Hypnotic modulation of the intensity of the pain sensation led to significant changes in pain-evoked activity within S1 in contrast to our previous study in which specific modulation of pain unpleasantness (affect), independent of pain intensity, produced specific changes within the ACC. This double dissociation of cortical modulation indicates a relative specialization of the sensory and the classical limbic cortical areas in the processing of the sensory and affective dimensions of pain.",
"title": ""
},
{
"docid": "da7f869037f40ab8666009d85d9540ff",
"text": "A boomerang-shaped alar base excision is described to narrow the nasal base and correct the excessive alar flare. The boomerang excision combined the external alar wedge resection with an internal vestibular floor excision. The internal excision was inclined 30 to 45 degrees laterally to form the inner limb of the boomerang. The study included 46 patients presenting with wide nasal base and excessive alar flaring. All cases were followed for a mean period of 18 months (range, 8 to 36 months). The laterally oriented vestibular floor excision allowed for maximum preservation of the natural curvature of the alar rim where it meets the nostril floor and upon its closure resulted in a considerable medialization of alar lobule, which significantly reduced the amount of alar flare and the amount of external alar excision needed. This external alar excision measured, on average, 3.8 mm (range, 2 to 8 mm), which is significantly less than that needed when a standard vertical internal excision was used ( P < 0.0001). Such conservative external excisions eliminated the risk of obliterating the natural alar-facial crease, which did not occur in any of our cases. No cases of postoperative bleeding, infection, or vestibular stenosis were encountered. Keloid or hypertrophic scar formation was not encountered; however, dermabrasion of the scars was needed in three (6.5%) cases to eliminate apparent suture track marks. The boomerang alar base excision proved to be a safe and effective technique for narrowing the nasal base and elimination of the excessive flaring and resulted in a natural, well-proportioned nasal base with no obvious scarring.",
"title": ""
},
{
"docid": "9a0b6db90dc15e04f4b860e4355996f2",
"text": "This work discusses a mix of challenges arising from Watson Discovery Advisor (WDA), an industrial strength descendant of the Watson Jeopardy! Question Answering system currently used in production in industry settings. Typical challenges include generation of appropriate training questions, adaptation to new industry domains, and iterative improvement of the system through manual error analyses.",
"title": ""
},
{
"docid": "cac081006bb1a7daefe3c62b6c80fe10",
"text": "A novel chaotic time-series prediction method based on support vector machines (SVMs) and echo-state mechanisms is proposed. The basic idea is replacing \"kernel trick\" with \"reservoir trick\" in dealing with nonlinearity, that is, performing linear support vector regression (SVR) in the high-dimension \"reservoir\" state space, and the solution benefits from the advantages from structural risk minimization principle, and we call it support vector echo-state machines (SVESMs). SVESMs belong to a special kind of recurrent neural networks (RNNs) with convex objective function, and their solution is global, optimal, and unique. SVESMs are especially efficient in dealing with real life nonlinear time series, and its generalization ability and robustness are obtained by regularization operator and robust loss function. The method is tested on the benchmark prediction problem of Mackey-Glass time series and applied to some real life time series such as monthly sunspots time series and runoff time series of the Yellow River, and the prediction results are promising",
"title": ""
},
{
"docid": "1e18f23ad8ddc4333406c4703d51d92b",
"text": "from its introductory beginning and across its 446 pages, centered around the notion that computer simulations and games are not at all disparate but very much aligning concepts. This not only makes for an interesting premise but also an engaging book overall which offers a resource into an educational subject (for it is educational simulations that the authors predominantly address) which is not overly saturated. The aim of the book as a result of this decision, which is explained early on, but also because of its subsequent structure, is to enlighten its intended audience in the way that effective and successful simulations/games operate (on a theoretical/conceptual and technical level, although in the case of the latter the book intentionally never delves into the realms of software programming specifics per se), can be designed, built and, finally, evaluated. The book is structured in three different and distinct parts, with four chapters in the first, six chapters in the second and six chapters in the third and final one. The first chapter is essentially a \" teaser \" , according to the authors. There are a couple of more traditional simulations described, a couple of well-known mainstream games (Mario Kart and Portal 2, interesting choices, especially the first one) and then the authors proceed to present applications which show the simulation and game convergence. These applications have a strong educational outlook (covering on this occasion very diverse topics, from flood prevention to drink driving awareness, amongst others). This chapter works very well in initiating the audience in the subject matter and drawing the necessary parallels. With all of the simula-tions/games/educational applications included BOOK REVIEW",
"title": ""
},
{
"docid": "9593712906aa8272716a7fe5b482b91d",
"text": "User stories are a widely used notation for formulating requirements in agile development projects. Despite their popularity in industry, little to no academic work is available on assessing their quality. The few existing approaches are too generic or employ highly qualitative metrics. We propose the Quality User Story Framework, consisting of 14 quality criteria that user story writers should strive to conform to. Additionally, we introduce the conceptual model of a user story, which we rely on to design the AQUSA software tool. AQUSA aids requirements engineers in turning raw user stories into higher-quality ones by exposing defects and deviations from good practice in user stories. We evaluate our work by applying the framework and a prototype implementation to three user story sets from industry.",
"title": ""
},
{
"docid": "511991822f427c3f62a4c091594e89e3",
"text": "Reinforcement learning has recently gained popularity due to its many successful applications in various fields. In this project reinforcement learning is implemented in a simple warehouse situation where robots have to learn to interact with each other while performing specific tasks. The aim is to study whether reinforcement learning can be used to train multiple agents. Two different methods have been used to achieve this aim, Q-learning and deep Q-learning. Due to practical constraints, this paper cannot provide a comprehensive review of real life robot interactions. Both methods are tested on single-agent and multi-agent models in Python computer simulations. The results show that the deep Q-learning model performed better in the multiagent simulations than the Q-learning model and it was proven that agents can learn to perform their tasks to some degree. Although, the outcome of this project cannot yet be considered sufficient for moving the simulation into reallife, it was concluded that reinforcement learning and deep learning methods can be seen as suitable for modelling warehouse robots and their interactions.",
"title": ""
},
{
"docid": "a6097c9898acd91feac6792251e77285",
"text": "Pregabalin is a substance which modulates monoamine release in \"hyper-excited\" neurons. It binds potently to the α2-δ subunit of calcium channels. Pilotstudies on alcohol- and benzodiazepine dependent patients reported a reduction of withdrawal symptoms through Pregabalin. To our knowledge, no studies have been conducted so far assessing this effect in opiate dependent patients. We report the case of a 43-year-old patient with Pregabalin intake during opiate withdrawal. Multiple inpatient and outpatient detoxifications from maintenance replacement therapy with Buprenorphine in order to reach complete abstinence did not show success because of extended withdrawal symptoms and repeated drug intake. Finally he disrupted his heroine intake with a simultaneously self administration of 300 mg Pregabaline per day and was able to control the withdrawal symptoms. In this time we did control the Pregabalin level in serum and urine in our outpatient clinic. In the course the patient reported that he could treat further relapse with opiate or opioids with Pregabalin successful. This case shows first details for Pregabalin to relief withdrawal symptoms in opiate withdrawal.",
"title": ""
},
{
"docid": "2eba092d19cc8fb35994e045f826e950",
"text": "Deep neural networks have proven to be particularly eective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardwareoriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy eciency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-ecient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their eectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. is article represents the rst survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the eld.",
"title": ""
},
{
"docid": "5a573ae9fad163c6dfe225f59b246b7f",
"text": "The sharp increase of plastic wastes results in great social and environmental pressures, and recycling, as an effective way currently available to reduce the negative impacts of plastic wastes, represents one of the most dynamic areas in the plastics industry today. Froth flotation is a promising method to solve the key problem of recycling process, namely separation of plastic mixtures. This review surveys recent literature on plastics flotation, focusing on specific features compared to ores flotation, strategies, methods and principles, flotation equipments, and current challenges. In terms of separation methods, plastics flotation is divided into gamma flotation, adsorption of reagents, surface modification and physical regulation.",
"title": ""
},
{
"docid": "b999fe9bd7147ef9c555131d106ea43e",
"text": "This paper presents the DeepCD framework which learns a pair of complementary descriptors jointly for image patch representation by employing deep learning techniques. It can be achieved by taking any descriptor learning architecture for learning a leading descriptor and augmenting the architecture with an additional network stream for learning a complementary descriptor. To enforce the complementary property, a new network layer, called data-dependent modulation (DDM) layer, is introduced for adaptively learning the augmented network stream with the emphasis on the training data that are not well handled by the leading stream. By optimizing the proposed joint loss function with late fusion, the obtained descriptors are complementary to each other and their fusion improves performance. Experiments on several problems and datasets show that the proposed method1 is simple yet effective, outperforming state-of-the-art methods.",
"title": ""
},
{
"docid": "82e5d8a3ee664f36afec3aa1b2e976f9",
"text": "Real-world tasks are often highly structured. Hierarchical reinforcement learning (HRL) has attracted research interest as an approach for leveraging the hierarchical structure of a given task in reinforcement learning (RL). However, identifying the hierarchical policy structure that enhances the performance of RL is not a trivial task. In this paper, we propose an HRL method that learns a latent variable of a hierarchical policy using mutual information maximization. Our approach can be interpreted as a way to learn a discrete and latent representation of the state-action space. To learn option policies that correspond to modes of the advantage function, we introduce advantage-weighted importance sampling. In our HRL method, the gating policy learns to select option policies based on an option-value function, and these option policies are optimized based on the deterministic policy gradient method. This framework is derived by leveraging the analogy between a monolithic policy in standard RL and a hierarchical policy in HRL by using a deterministic option policy. Experimental results indicate that our HRL approach can learn a diversity of options and that it can enhance the performance of RL in continuous control tasks.",
"title": ""
},
{
"docid": "44017678b3da8c8f4271a9832280201e",
"text": "Data warehouses are users driven; that is, they allow end-users to be in control of the data. As user satisfaction is commonly acknowledged as the most useful measurement of system success, we identify the underlying factors of end-user satisfaction with data warehouses and develop an instrument to measure these factors. The study demonstrates that most of the items in classic end-user satisfaction measure are still valid in the data warehouse environment, and that end-user satisfaction with data warehouses depends heavily on the roles and performance of organizational information centers. # 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "d97669811124f3c6f4cef5b2a144a46c",
"text": "Relational databases are queried using database query languages such as SQL. Natural language interfaces to databases (NLIDB) are systems that translate a natural language sentence into a database query. In this modern techno-crazy world, as more and more laymen access various systems and applications through their smart phones and tablets, the need for Natural Language Interfaces (NLIs) has increased manifold. The challenges in Natural language Query processing are interpreting the sentence correctly, removal of various ambiguity and mapping to the appropriate context. Natural language access problem is actually composed of two stages Linguistic processing and Database processing. NLIDB techniques encompass a wide variety of approaches. The approaches include traditional methods such as Pattern Matching, Syntactic Parsing and Semantic Grammar to modern systems such as Intermediate Query Generation, Machine Learning and Ontologies. In this report, various approaches to build NLIDB systems have been analyzed and compared along with their advantages, disadvantages and application areas. Also, a natural language interface to a flight reservation system has been implemented comprising of flight and booking inquiry systems.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "058340d519ade55db4d6db879df95253",
"text": "Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps to balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately, and staying on-topic in over 95% of responses. We also observed that Chorus has advantages over pairing an end user with a single crowd worker and end users completing their own tasks in terms of speed, quality, and breadth of assistance. Chorus demonstrates a new future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and may enable a useful new way of interacting with the crowds powering other systems.",
"title": ""
},
{
"docid": "d145bad318d074f036cf1aa1a49066b8",
"text": "Based on imbalanced data, the predictive models for 5year survivability of breast cancer using decision tree are proposed. After data preprocessing from SEER breast cancer datasets, it is obviously that the category of data distribution is imbalanced. Under-sampling is taken to make up the disadvantage of the performance of models caused by the imbalanced data. The performance of the models is evaluated by AUC under ROC curve, accuracy, specificity and sensitivity with 10-fold stratified cross-validation. The performance of models is best while the distribution of data is approximately equal. Bagging algorithm is used to build an integration decision tree model for predicting breast cancer survivability. Keywords-imbalanced data;decision tree;predictive breast cancer survivability;10-fold stratified cross-validation;bagging algorithm",
"title": ""
},
{
"docid": "406e06e00799733c517aff88c9c85e0b",
"text": "Matrix rank minimization problem is in general NP-hard. The nuclear norm is used to substitute the rank function in many recent studies. Nevertheless, the nuclear norm approximation adds all singular values together and the approximation error may depend heavily on the magnitudes of singular values. This might restrict its capability in dealing with many practical problems. In this paper, an arctangent function is used as a tighter approximation to the rank function. We use it on the challenging subspace clustering problem. For this nonconvex minimization problem, we develop an effective optimization procedure based on a type of augmented Lagrange multipliers (ALM) method. Extensive experiments on face clustering and motion segmentation show that the proposed method is effective for rank approximation.",
"title": ""
},
{
"docid": "1c78424b85b5ffd29e04e34639548bc8",
"text": "Datasets in the LOD cloud are far from being static in their nature and how they are exposed. As resources are added and new links are set, applications consuming the data should be able to deal with these changes. In this paper we investigate how LOD datasets change and what sensible measures there are to accommodate dataset dynamics. We compare our findings with traditional, document-centric studies concerning the “freshness” of the document collections and propose metrics for LOD datasets.",
"title": ""
},
{
"docid": "002acd845aa9776840dfe9e8755d7732",
"text": "A detailed study on the mechanism of band-to-band tunneling in carbon nanotube field-effect transistors (CNFETs) is presented. Through a dual-gated CNFET structure tunneling currents from the valence into the conduction band and vice versa can be enabled or disabled by changing the gate potential. Different from a conventional device where the Fermi distribution ultimately limits the gate voltage range for switching the device on or off, current flow is controlled here by the valence and conduction band edges in a bandpass-filter-like arrangement. We discuss how the structure of the nanotube is the key enabler of this particular one-dimensional tunneling effect.",
"title": ""
}
] | scidocsrr |
80ca22a5818ededa1b9e2126bf539f34 | Dataset, Ground-Truth and Performance Metrics for Table Detection Evaluation | [
{
"docid": "bd963a55c28304493118028fe5f47bab",
"text": "Tables are a common structuring element in many documents, s uch as PDF files. To reuse such tables, appropriate methods need to b e develop, which capture the structure and the content information. We have d e loped several heuristics which together recognize and decompose tables i n PDF files and store the extracted data in a structured data format (XML) for easi er reuse. Additionally, we implemented a prototype, which gives the user the ab ility of making adjustments on the extracted data. Our work shows that purel y heuristic-based approaches can achieve good results, especially for lucid t ables.",
"title": ""
},
{
"docid": "823c0e181286d917a610f90d1c9db0c3",
"text": "Table characteristics vary widely. Consequently, a great variety of computational approaches have been applied to table recognition. In this survey, the table recognition literature is presented as an interaction of table models, observations, transformations and inferences. A table model defines the physical and logical structure of tables; the model is used to detect tables, and to analyze and decompose the detected tables. Observations perform feature measurements and data lookup, transformations alter or restructure data, and inferences generate and test hypotheses. This presentation clarifies the decisions that are made by a table recognizer, and the assumptions and inferencing techniques that underlie these decisions.",
"title": ""
}
] | [
{
"docid": "56642ffad112346186a5c3f12133e59b",
"text": "The Skills for Inclusive Growth (S4IG) program is an initiative of the Australian Government’s aid program and implemented with the Sri Lankan Ministry of Skills Development and Vocational Training, Tourism Authorities, Provincial and District Level Government, Industry and Community Organisations. The Program will demonstrate how an integrated approach to skills development can support inclusive economic growth opportunities along the tourism value chain in the four districts of Trincomalee, Ampara, Batticaloa (Eastern Province) and Polonnaruwa (North Central Province). In doing this the S4IG supports sustainable job creation and increased incomes and business growth for the marginalised and the disadvantaged, particularly women and people with disabilities.",
"title": ""
},
{
"docid": "f3ca98a8e0600f0c80ef539cfc58e77e",
"text": "In this paper, we address a real life waste collection vehicle routing problem with time windows (VRPTW) with consideration of multiple disposal trips and drivers’ lunch breaks. Solomon’s well-known insertion algorithm is extended for the problem. While minimizing the number of vehicles and total traveling time is the major objective of vehicle routing problems in the literature, here we also consider the route compactness and workload balancing of a solution since they are very important aspects in practical applications. In order to improve the route compactness and workload balancing, a capacitated clustering-based waste collection VRPTW algorithm is developed. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems at Waste Management, Inc. A set of waste collection VRPTW benchmark problems is also presented in this paper. Waste collection problems are frequently considered as arc routing problems without time windows. However, that point of view can be applied only to residential waste collection problems. In the waste collection industry, there are three major areas: commercial waste collection, residential waste collection and roll-on-roll-off. In this paper, we mainly focus on the commercial waste collection problem. The problem can be characterized as a variant of VRPTW since commercial waste collection stops may have time windows. The major variation from a standard VRPTW is due to disposal operations and driver’s lunch break. When a vehicle is full, it needs to go to one of the disposal facilities (landfill or transfer station). Each vehicle can, and typically does, make multiple disposal trips per day. The purpose of this paper is to introduce the waste collection VRPTW, benchmark problem sets, and a solution approach for the problem. The proposed algorithms have been successfully implemented and deployed for the real life waste collection problems of Waste Management, the leading provider of comprehensive waste management services in North America with nearly 26,000 collection and transfer vehicles. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1655b927fa07bed8bf3769bf2dba01b6",
"text": "The non-central chi-square distribution plays an important role in communications, for example in the analysis of mobile and wireless communication systems. It not only includes the important cases of a squared Rayleigh distribution and a squared Rice distribution, but also the generalizations to a sum of independent squared Gaussian random variables of identical variance with or without mean, i.e., a \"squared MIMO Rayleigh\" and \"squared MIMO Rice\" distribution. In this paper closed-form expressions are derived for the expectation of the logarithm and for the expectation of the n-th power of the reciprocal value of a non-central chi-square random variable. It is shown that these expectations can be expressed by a family of continuous functions gm(ldr) and that these families have nice properties (monotonicity, convexity, etc.). Moreover, some tight upper and lower bounds are derived that are helpful in situations where the closed-form expression of gm(ldr) is too complex for further analysis.",
"title": ""
},
{
"docid": "a27d4083741f75f44cd85a8161f1b8b1",
"text": "Graves’ disease (GD) and Hashimoto's thyroiditis (HT) represent the commonest forms of autoimmune thyroid disease (AITD) each presenting with distinct clinical features. Progress has been made in determining association of HLA class II DRB1, DQB1 and DQA1 loci with GD demonstrating a predisposing effect for DR3 (DRB1*03-DQB1*02-DQA1*05) and a protective effect for DR7 (DRB1*07-DQB1*02-DQA1*02). Small data sets have hindered progress in determining HLA class II associations with HT. The aim of this study was to investigate DRB1-DQB1-DQA1 in the largest UK Caucasian HT case control cohort to date comprising 640 HT patients and 621 controls. A strong association between HT and DR4 (DRB1*04-DQB1*03-DQA1*03) was detected (P=6.79 × 10−7, OR=1.98 (95% CI=1.51–2.59)); however, only borderline association of DR3 was found (P=0.050). Protective effects were also detected for DR13 (DRB1*13-DQB1*06-DQA1*01) (P=0.001, OR=0.61 (95% CI=0.45–0.83)) and DR7 (P=0.013, OR=0.70 (95% CI=0.53–0.93)). Analysis of our unique cohort of subjects with well characterized AITD has demonstrated clear differences in association within the HLA class II region between HT and GD. Although HT and GD share a number of common genetic markers this study supports the suggestion that differences in HLA class II genotype may, in part, contribute to the different immunopathological processes and clinical presentation of these related diseases.",
"title": ""
},
{
"docid": "df10984391cfb52e8ece9ae3766754c1",
"text": "A major challenge that arises in Weakly Supervised Object Detection (WSOD) is that only image-level labels are available, whereas WSOD trains instance-level object detectors. A typical approach to WSOD is to 1) generate a series of region proposals for each image and assign the image-level label to all the proposals in that image; 2) train a classifier using all the proposals; and 3) use the classifier to select proposals with high confidence scores as the positive instances for another round of training. In this way, the image-level labels are iteratively transferred to instance-level labels.\n We aim to resolve the following two fundamental problems within this paradigm. First, existing proposal generation algorithms are not yet robust, thus the object proposals are often inaccurate. Second, the selected positive instances are sometimes noisy and unreliable, which hinders the training at subsequent iterations. We adopt two separate neural networks, one to focus on each problem, to better utilize the specific characteristic of region proposal refinement and positive instance selection. Further, to leverage the mutual benefits of the two tasks, the two neural networks are jointly trained and reinforced iteratively in a progressive manner, starting with easy and reliable instances and then gradually incorporating difficult ones at a later stage when the selection classifier is more robust. Extensive experiments on the PASCAL VOC dataset show that our method achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "3f569eccc71c6186d6163a2cc40be0fc",
"text": "Deep Packet Inspection (DPI) is the state-of-the-art technology for traffic classification. According to the conventional wisdom, DPI is the most accurate classification technique. Consequently, most popular products, either commercial or open-source, rely on some sort of DPI for traffic classification. However, the actual performance of DPI is still unclear to the research community, since the lack of public datasets prevent the comparison and reproducibility of their results. This paper presents a comprehensive comparison of 6 well-known DPI tools, which are commonly used in the traffic classification literature. Our study includes 2 commercial products (PACE and NBAR) and 4 open-source tools (OpenDPI, L7-filter, nDPI, and Libprotoident). We studied their performance in various scenarios (including packet and flow truncation) and at different classification levels (application protocol, application and web service). We carefully built a labeled dataset with more than 750 K flows, which contains traffic from popular applications. We used the Volunteer-Based System (VBS), developed at Aalborg University, to guarantee the correct labeling of the dataset. We released this dataset, including full packet payloads, to the research community. We believe this dataset could become a common benchmark for the comparison and validation of network traffic classifiers. Our results present PACE, a commercial tool, as the most accurate solution. Surprisingly, we find that some open-source tools, such as nDPI and Libprotoident, also achieve very high accuracy.",
"title": ""
},
{
"docid": "eea8a23547ea5a29be036285034fc0a0",
"text": "Co-fabrication of a nanoscale vacuum field emission transistor (VFET) and a metal-oxide-semiconductor field effect transistor (MOSFET) is demonstrated on a silicon-on-insulator wafer. The insulated-gate VFET with a gap distance of 100 nm is achieved by using a conventional 0.18-μm process technology and subsequent photoresist ashing process. The VFET shows a turn-on voltage of 2 V at a cell current of 2 nA and a cell current of 3 μA at the operation voltage of 10 V with an ON/OFF current ratio of 104. The gap distance between the cathode and anode in the VFET is defined to be less than the mean free path of electrons in air, and consequently, the operation voltage is reduced to be less than the ionization potential of air molecules. This allows the relaxation of the vacuum requirement. The present integration scheme can be useful as it combines the advantages of both structures on the same chip.",
"title": ""
},
{
"docid": "0ad47e79e9bea44a76029e1f24f0a16c",
"text": "The requirements for OLTP database systems are becoming ever more demanding. New OLTP applications require high degrees of scalability with controlled transaction latencies in in-memory databases. Deployments of these applications require low-level control of database system overhead and program-to-data affinity to maximize resource utilization in modern machines. Unfortunately, current solutions fail to meet these requirements. First, existing database solutions fail to expose a high-level programming abstraction in which latency of transactions can be reasoned about by application developers. Second, these solutions limit infrastructure engineers in exercising low-level control on the deployment of the system on a target infrastructure, further impacting performance. In this paper, we propose a relational actor programming model for in-memory databases. Conceptually, relational actors, or reactors for short, are application-defined, isolated logical actors encapsulating relations that process function calls asynchronously. Reactors ease reasoning about correctness by guaranteeing serializability of application-level function calls. In contrast to classic transactional models, however, reactors allow developers to take advantage of intra-transaction parallelism to reduce latency and improve performance. Moreover, reactors enable a new degree of flexibility in database deployment. We present REACTDB, a novel system design exposing reactors that allows for flexible virtualization of database architecture between the extremes of shared-nothing and shared-everything without changes to application code. Our experiments with REACTDB illustrate performance predictability, multi-core scalability, and low overhead in OLTP benchmarks.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "97f2e2ceeb4c1e2b8d8fbc8a46159730",
"text": "Novel scientific knowledge is constantly produced by the scientific community. Understanding the level of novelty characterized by scientific literature is key for modeling scientific dynamics and analyzing the growth mechanisms of scientific knowledge. Metrics derived from bibliometrics and citation analysis were effectively used to characterize the novelty in scientific development. However, time is required before we can observe links between documents such as citation links or patterns derived from the links, which makes these techniques more effective for retrospective analysis than predictive analysis. In this study, we present a new approach to measuring the novelty of a research topic in a scientific community over a specific period by tracking semantic changes of the terms and characterizing the research topic in their usage context. The semantic changes are derived from the text data of scientific literature by temporal embedding learning techniques. We validated the effects of the proposed novelty metric on predicting the future growth of scientific publications and investigated the relations between novelty and growth by panel data analysis applied in a largescale publication dataset (MEDLINE/PubMed). Key findings based on the statistical investigation indicate that the novelty metric has significant predictive effects on the growth of scientific literature and the predictive effects may last for more than ten years. We demonstrated the effectiveness and practical implications of the novelty metric in three case studies. ∗[email protected], [email protected]. Department of Information Science, Drexel University. 1 ar X iv :1 80 1. 09 12 1v 1 [ cs .D L ] 2 7 Ja n 20 18",
"title": ""
},
{
"docid": "39a63943fdc69942088fab0e5e7131f2",
"text": "Deep reinforcement learning (RL) has proven a powerful technique in many sequential decision making domains. However, Robotics poses many challenges for RL, most notably training on a physical system can be expensive and dangerous, which has sparked significant interest in learning control policies using a physics simulator. While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator. In this work, we exploit the full state observability in the simulator to train better policies which take as input only partial observations (RGBD images). We do this by employing an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) gets rendered images as input. We show experimentally on a range of simulated tasks that using these asymmetric inputs significantly improves performance. Finally, we combine this method with domain randomization and show real robot experiments for several tasks like picking, pushing, and moving a block. We achieve this simulation to real world transfer without training on any real world data. Videos of these experiments can be found at www.goo.gl/b57WTs.",
"title": ""
},
{
"docid": "99c1ad04419fa0028724a26e757b1b90",
"text": "Contrary to popular belief, despite decades of research in fingerprints, reliable fingerprint recognition is still an open problem. Extracting features out of poor quality prints is the most challenging problem faced in this area. This paper introduces a new approach for fingerprint enhancement based on Short Time Fourier Transform(STFT) Analysis. STFT is a well known technique in signal processing to analyze non-stationary signals. Here we extend its application to 2D fingerprint images. The algorithm simultaneously estimates all the intrinsic properties of the fingerprints such as the foreground region mask, local ridge orientation and local ridge frequency. Furthermore we propose a probabilistic approach of robustly estimating these parameters. We experimentally compare the proposed approach to other filtering approaches in literature and show that our technique performs favorably.",
"title": ""
},
{
"docid": "4c49cebd579b2fef196d7ce600b1a044",
"text": "A GPU cluster is a cluster equipped with GPU devices. Excellent acceleration is achievable for computation-intensive tasks (e. g. matrix multiplication and LINPACK) and bandwidth-intensive tasks with data locality (e. g. finite-difference simulation). Bandwidth-intensive tasks such as large-scale FFTs without data locality are harder to accelerate, as the bottleneck often lies with the PCI between main memory and GPU device memory or the communication network between workstation nodes. That means optimizing the performance of FFT for a single GPU device will not improve the overall performance. This paper uses large-scale FFT as an example to show how to achieve substantial speedups for these more challenging tasks on a GPU cluster. Three GPU-related factors lead to better performance: firstly the use of GPU devices improves the sustained memory bandwidth for processing large-size data; secondly GPU device memory allows larger subtasks to be processed in whole and hence reduces repeated data transfers between memory and processors; and finally some costly main-memory operations such as matrix transposition can be significantly sped up by GPUs if necessary data adjustment is performed during data transfers. This technique of manipulating array dimensions during data transfer is the main technical contribution of this paper. These factors (as well as the improved communication library in our implementation) attribute to 24.3x speedup with respect to FFTW and 7x speedup with respect to Intel MKL for 4096 3D single-precision FFT on a 16-node cluster with 32 GPUs. Around 5x speedup with respect to both standard libraries are achieved for double precision.",
"title": ""
},
{
"docid": "61051ddfb877064e477bea0131bddef4",
"text": "Portfolio diversification in capital markets is an accepted investment strategy. On the other hand corporate diversification has drawn many opponents especially the agency theorists who argue that executives must not diversify on behalf of share holders. Diversification is a strategic option used by many managers to improve their firm’s performance. While extensive literature investigates the diversification performance linkage, little agreements exist concerning the nature of this relationship. Both theoretical and empirical disagreements abound as the extensive research has neither reached a consensus nor any interpretable and acceptable findings. This paper looked at diversification as a corporate strategy and its effect on firm performance using Conglomerates in the Food and Beverages Sector listed on the ZSE. The study used a combination of primary and secondary data. Primary data was collected through interviews while secondary data were gathered from financial statements and management accounts. Data was analyzed using SPSS computer package. Three competing models were derived from literature (the linear model, Inverted U model and Intermediate model) and these were empirically assessed and tested.",
"title": ""
},
{
"docid": "f28472c17234096fa73d6bee95d99498",
"text": "The class average accuracies of different methods on the NYU V2: The Proposed Network Structure The model has a convolutional network and deconvolutional network for each modality, as well as a feature transformation network. In this structure, 1. The RGB and depth convolutional network have the same structure; 2. The deconvolutional networks are the mirrored version of the convolutional networks; 3. The feature transformation network extracts common features and modality specific features; 4. One modality can borrow the common features learned from the other modality.",
"title": ""
},
{
"docid": "bf1bcf55307b02adca47ff696be6f801",
"text": "INTRODUCTION\nMobile phones are ubiquitous in society and owned by a majority of psychiatric patients, including those with severe mental illness. Their versatility as a platform can extend mental health services in the areas of communication, self-monitoring, self-management, diagnosis, and treatment. However, the efficacy and reliability of publicly available applications (apps) have yet to be demonstrated. Numerous articles have noted the need for rigorous evaluation of the efficacy and clinical utility of smartphone apps, which are largely unregulated. Professional clinical organizations do not provide guidelines for evaluating mobile apps.\n\n\nMATERIALS AND METHODS\nGuidelines and frameworks are needed to evaluate medical apps. Numerous frameworks and evaluation criteria exist from the engineering and informatics literature, as well as interdisciplinary organizations in similar fields such as telemedicine and healthcare informatics.\n\n\nRESULTS\nWe propose criteria for both patients and providers to use in assessing not just smartphone apps, but also wearable devices and smartwatch apps for mental health. Apps can be evaluated by their usefulness, usability, and integration and infrastructure. Apps can be categorized by their usability in one or more stages of a mental health provider's workflow.\n\n\nCONCLUSIONS\nUltimately, leadership is needed to develop a framework for describing apps, and guidelines are needed for both patients and mental health providers.",
"title": ""
},
{
"docid": "42aa520e1c46749e7abc924c0f56442d",
"text": "Internet of Things is evolving heavily in these times. One of the major obstacle is energy consumption in the IoT devices (sensor nodes and wireless gateways). The IoT devices are often battery powered wireless devices and thus reducing the energy consumption in these devices is essential to lengthen the lifetime of the device without battery change. It is possible to lengthen battery lifetime by efficient but lightweight sensor data analysis in close proximity of the sensor. Performing part of the sensor data analysis in the end device can reduce the amount of data needed to transmit wirelessly. Transmitting data wirelessly is very energy consuming task. At the same time, the privacy and security should not be compromised. It requires effective but computationally lightweight encryption schemes. This survey goes thru many aspects to consider in edge and fog devices to minimize energy consumption and thus lengthen the device and the network lifetime.",
"title": ""
},
{
"docid": "7bd0d55e08ff4d94c021dd53142ef5aa",
"text": "From smart homes that prepare coffee when we wake, to phones that know not to interrupt us during important conversations, our collective visions of HCI imagine a future in which computers understand a broad range of human behaviors. Today our systems fall short of these visions, however, because this range of behaviors is too large for designers or programmers to capture manually. In this paper, we instead demonstrate it is possible to mine a broad knowledge base of human behavior by analyzing more than one billion words of modern fiction. Our resulting knowledge base, Augur, trains vector models that can predict many thousands of user activities from surrounding objects in modern contexts: for example, whether a user may be eating food, meeting with a friend, or taking a selfie. Augur uses these predictions to identify actions that people commonly take on objects in the world and estimate a user's future activities given their current situation. We demonstrate Augur-powered, activity-based systems such as a phone that silences itself when the odds of you answering it are low, and a dynamic music player that adjusts to your present activity. A field deployment of an Augur-powered wearable camera resulted in 96% recall and 71% precision on its unsupervised predictions of common daily activities. A second evaluation where human judges rated the system's predictions over a broad set of input images found that 94% were rated sensible.",
"title": ""
},
{
"docid": "0cbc2eb794f44b178a54d97aeff69c19",
"text": "Automatic identification of predatory conversations i chat logs helps the law enforcement agencies act proactively through early detection of predatory acts in cyberspace. In this paper, we describe the novel application of a deep learnin g method to the automatic identification of predatory chat conversations in large volumes of ch at logs. We present a classifier based on Convolutional Neural Network (CNN) to address this problem domain. The proposed CNN architecture outperforms other classification techn iques that are common in this domain including Support Vector Machine (SVM) and regular Neural Network (NN) in terms of classification performance, which is measured by F 1-score. In addition, our experiments show that using existing pre-trained word vectors are no t suitable for this specific domain. Furthermore, since the learning algorithm runs in a m ssively parallel environment (i.e., general-purpose GPU), the approach can benefit a la rge number of computation units (neurons) compared to when CPU is used. To the best of our knowledge, this is the first tim e that CNNs are adapted and applied to this application do main.",
"title": ""
},
{
"docid": "6ec4c9e6b3e2a9fd4da3663a5b21abcd",
"text": "In order to ensure the service quality, modern Internet Service Providers (ISPs) invest tremendously on their network monitoring and measurement infrastructure. Vast amount of network data, including device logs, alarms, and active/passive performance measurement across different network protocols and layers, are collected and stored for analysis. As network measurement grows in scale and sophistication, it becomes increasingly challenging to effectively “search” for the relevant information that best support the needs of network operations. In this paper, we look into techniques that have been widely applied in the information retrieval and search engine domain and explore their applicability in network management domain. We observe that unlike the textural information on the Internet, network data are typically annotated with time and location information, which can be further augmented using information based on network topology, protocol and service dependency. We design NetSearch, a system that pre-processes various network data sources on data ingestion, constructs index that matches both the network spatial hierarchy model and the inherent timing/textual information contained in the data, and efficiently retrieves the relevant information that network operators search for. Through case study, we demonstrate that NetSearch is an important capability for many critical network management functions such as complex impact analysis.",
"title": ""
}
] | scidocsrr |
a6bed6910aac2ca61a0877886423bd01 | Structured Sequence Modeling with Graph Convolutional Recurrent Networks | [
{
"docid": "8d83568ca0c89b1a6e344341bb92c2d0",
"text": "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.",
"title": ""
}
] | [
{
"docid": "73270e8140d763510d97f7bd2fdd969e",
"text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.",
"title": ""
},
{
"docid": "32d235c450be47d9f5bca03cb3d40f82",
"text": "Recent empirical results on long-term dependency tasks have shown that neural networks augmented with an external memory can learn the long-term dependency tasks more easily and achieve better generalization than vanilla recurrent neural networks (RNN). We suggest that memory augmented neural networks can reduce the effects of vanishing gradients by creating shortcut (or wormhole) connections. Based on this observation, we propose a novel memory augmented neural network model called TARDIS (Temporal Automatic Relation Discovery in Sequences). The controller of TARDIS can store a selective set of embeddings of its own previous hidden states into an external memory and revisit them as and when needed. For TARDIS, memory acts as a storage for wormhole connections to the past to propagate the gradients more effectively and it helps to learn the temporal dependencies. The memory structure of TARDIS has similarities to both Neural Turing Machines (NTM) and Dynamic Neural Turing Machines (D-NTM), but both read and write operations of TARDIS are simpler and more efficient. We use discrete addressing for read/write operations which helps to substantially to reduce the vanishing gradient problem with very long sequences. Read and write operations in TARDIS are tied with a heuristic once the memory becomes full, and this makes the learning problem simpler when compared to NTM or D-NTM type of architectures. We provide a detailed analysis on the gradient propagation in general for MANNs. We evaluate our models on different long-term dependency tasks and report competitive results in all of them.",
"title": ""
},
{
"docid": "890b1ed209b3e34c5b460dce310ee08f",
"text": "INTRODUCTION\nThe adequate use of compression in venous leg ulcer treatment is equally important to patients as well as clinicians. Currently, there is a lack of clarity on contraindications, risk factors, adverse events and complications, when applying compression therapy for venous leg ulcer patients.\n\n\nMETHODS\nThe project aimed to optimize prevention, treatment and maintenance approaches by recognizing contraindications, risk factors, adverse events and complications, when applying compression therapy for venous leg ulcer patients. A literature review was conducted of current guidelines on venous leg ulcer prevention, management and maintenance.\n\n\nRESULTS\nSearches took place from 29th February 2016 to 30th April 2016 and were prospectively limited to publications in the English and German languages and publication dates were between January 2009 and April 2016. Twenty Guidelines, clinical pathways and consensus papers on compression therapy for venous leg ulcer treatment and for venous disease, were included. Guidelines agreed on the following absolute contraindications: Arterial occlusive disease, heart failure and ankle brachial pressure index (ABPI) <0.5, but gave conflicting recommendations on relative contraindications, risks and adverse events. Moreover definitions were unclear and not consistent.\n\n\nCONCLUSIONS\nEvidence-based guidance is needed to inform clinicians on risk factor, adverse effects, complications and contraindications. ABPI values need to be specified and details should be given on the type of compression that is safe to use. Ongoing research challenges the present recommendations, shifting some contraindications into a list of potential indications. Complications of compression can be prevented when adequate assessment is performed and clinicians are skilled in applying compression.",
"title": ""
},
{
"docid": "c3a7d3fa13bed857795c4cce2e992b87",
"text": "Healthcare consumers, researchers, patients and policy makers increasingly use systematic reviews (SRs) to aid their decision-making process. However, the conduct of SRs can be a time-consuming and resource-intensive task. Often, clinical practice guideline developers or other decision-makers need to make informed decisions in a timely fashion (e.g. outbreaks of infection, hospital-based health technology assessments). Possible approaches to address the issue of timeliness in the production of SRs are to (a) implement process parallelisation, (b) adapt and apply innovative technologies, and/or (c) modify SR processes (e.g. study eligibility criteria, search sources, data extraction or quality assessment). Highly parallelised systematic reviewing requires substantial resources to support a team of experienced information specialists, reviewers and methodologists working alongside with clinical content experts to minimise the time for completing individual review steps while maximising the parallel progression of multiple steps. Effective coordination and management within the team and across external stakeholders are essential elements of this process. Emerging innovative technologies have a great potential for reducing workload and improving efficiency of SR production. The most promising areas of application would be to allow automation of specific SR tasks, in particular if these tasks are time consuming and resource intensive (e.g. language translation, study selection, data extraction). Modification of SR processes involves restricting, truncating and/or bypassing one or more SR steps, which may risk introducing bias to the review findings. Although the growing experiences in producing various types of rapid reviews (RR) and the accumulation of empirical studies exploring potential bias associated with specific SR tasks have contributed to the methodological development for expediting SR production, there is still a dearth of research examining the actual impact of methodological modifications and comparing the findings between RRs and SRs. This evidence would help to inform as to which SR tasks can be accelerated or truncated and to what degree, while maintaining the validity of review findings. Timely delivered SRs can be of value in informing healthcare decisions and recommendations, especially when there is practical urgency and there is no other relevant synthesised evidence.",
"title": ""
},
{
"docid": "7c5a80b0fef3e0e1fe5ce314b6e5aaf4",
"text": "OBJECTIVES\nGiven the large-scale adoption and deployment of mobile phones by health services and frontline health workers (FHW), we aimed to review and synthesise the evidence on the feasibility and effectiveness of mobile-based services for healthcare delivery.\n\n\nMETHODS\nFive databases - MEDLINE, EMBASE, Global Health, Google Scholar and Scopus - were systematically searched for relevant peer-reviewed articles published between 2000 and 2013. Data were extracted and synthesised across three themes as follows: feasibility of use of mobile tools by FHWs, training required for adoption of mobile tools and effectiveness of such interventions.\n\n\nRESULTS\nForty-two studies were included in this review. With adequate training, FHWs were able to use mobile phones to enhance various aspects of their work activities. Training of FHWs to use mobile phones for healthcare delivery ranged from a few hours to about 1 week. Five key thematic areas for the use of mobile phones by FHWs were identified as follows: data collection and reporting, training and decision support, emergency referrals, work planning through alerts and reminders, and improved supervision of and communication between healthcare workers. Findings suggest that mobile based data collection improves promptness of data collection, reduces error rates and improves data completeness. Two methodologically robust studies suggest that regular access to health information via SMS or mobile-based decision-support systems may improve the adherence of the FHWs to treatment algorithms. The evidence on the effectiveness of the other approaches was largely descriptive and inconclusive.\n\n\nCONCLUSIONS\nUse of mHealth strategies by FHWs might offer some promising approaches to improving healthcare delivery; however, the evidence on the effectiveness of such strategies on healthcare outcomes is insufficient.",
"title": ""
},
{
"docid": "d7bc62e7fca922f9b97e42deff85d010",
"text": "In this paper, we propose an extractive multi-document summarization (MDS) system using joint optimization and active learning for content selection grounded in user feedback. Our method interactively obtains user feedback to gradually improve the results of a state-of-the-art integer linear programming (ILP) framework for MDS. Our methods complement fully automatic methods in producing highquality summaries with a minimum number of iterations and feedbacks. We conduct multiple simulation-based experiments and analyze the effect of feedbackbased concept selection in the ILP setup in order to maximize the user-desired content in the summary.",
"title": ""
},
{
"docid": "c7f23ddb60394659cdf48ea4df68ae6b",
"text": "OBJECTIVES\nWe hypothesized reduction of 30 days' in-hospital morbidity, mortality, and length of stay postimplementation of the World Health Organization's Surgical Safety Checklist (SSC).\n\n\nBACKGROUND\nReductions of morbidity and mortality have been reported after SSC implementation in pre-/postdesigned studies without controls. Here, we report a randomized controlled trial of the SSC.\n\n\nMETHODS\nA stepped wedge cluster randomized controlled trial was conducted in 2 hospitals. We examined effects on in-hospital complications registered by International Classification of Diseases, Tenth Revision codes, length of stay, and mortality. The SSC intervention was sequentially rolled out in a random order until all 5 clusters-cardiothoracic, neurosurgery, orthopedic, general, and urologic surgery had received the Checklist. Data were prospectively recorded in control and intervention stages during a 10-month period in 2009-2010.\n\n\nRESULTS\nA total of 2212 control procedures were compared with 2263 SCC procedures. The complication rates decreased from 19.9% to 11.5% (P < 0.001), with absolute risk reduction 8.4 (95% confidence interval, 6.3-10.5) from the control to the SSC stages. Adjusted for possible confounding factors, the SSC effect on complications remained significant with odds ratio 1.95 (95% confidence interval, 1.59-2.40). Mean length of stay decreased by 0.8 days with SCC utilization (95% confidence interval, 0.11-1.43). In-hospital mortality decreased significantly from 1.9% to 0.2% in 1 of the 2 hospitals post-SSC implementation, but the overall reduction (1.6%-1.0%) across hospitals was not significant.\n\n\nCONCLUSIONS\nImplementation of the WHO SSC was associated with robust reduction in morbidity and length of in-hospital stay and some reduction in mortality.",
"title": ""
},
{
"docid": "0a929fa28caa0138c1283d7f54ecccc9",
"text": "While predictions abound that electronic books will supplant traditional paper-based books, many people bemoan the coming loss of the book as cultural artifact. In this project we deliberately keep the affordances of paper books while adding electronic augmentation. The Listen Reader combines the look and feel of a real book - a beautiful binding, paper pages and printed images and text - with the rich, evocative quality of a movie soundtrack. The book's multi-layered interactive soundtrack consists of music and sound effects. Electric field sensors located in the book binding sense the proximity of the reader's hands and control audio parameters, while RFID tags embedded in each page allow fast, robust page identification.\nThree different Listen Readers were built as part of a six-month museum exhibit, with more than 350,000 visitors. This paper discusses design, implementation, and lessons learned through the iterative design process, observation, and visitor interviews.",
"title": ""
},
{
"docid": "bc1efec6824aae80c9cae7ea2b2c4842",
"text": "State-of-the-art natural language processing systems rely on supervision in the form of annotated data to learn competent models. These models are generally trained on data in a single language (usually English), and cannot be directly used beyond that language. Since collecting data in every language is not realistic, there has been a growing interest in crosslingual language understanding (XLU) and low-resource cross-language transfer. In this work, we construct an evaluation set for XLU by extending the development and test sets of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15 languages, including low-resource languages such as Swahili and Urdu. We hope that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence understanding by providing an informative standard evaluation task. In addition, we provide several baselines for multilingual sentence understanding, including two based on machine translation systems, and two that use parallel data to train aligned multilingual bag-of-words and LSTM encoders. We find that XNLI represents a practical and challenging evaluation suite, and that directly translating the test data yields the best performance among available baselines.",
"title": ""
},
{
"docid": "45986bb7bb041f50fac577e562347b61",
"text": "In this paper, we study the human locomotor adaptation to the action of a powered exoskeleton providing assistive torque at the user's hip during walking. To this end, we propose a controller that provides the user's hip with a fraction of the nominal torque profile, adapted to the specific gait features of the user from Winter's reference data . The assistive controller has been implemented on the ALEX II exoskeleton and tested on ten healthy subjects. Experimental results show that when assisted by the exoskeleton, users can reduce the muscle effort compared to free walking. Despite providing assistance only to the hip joint, both hip and ankle muscles significantly reduced their activation, indicating a clear tradeoff between hip and ankle strategy to propel walking.",
"title": ""
},
{
"docid": "7916a261319dad5f257a0b8e0fa97fec",
"text": "INTRODUCTION\nPreliminary research has indicated that recreational ketamine use may be associated with marked cognitive impairments and elevated psychopathological symptoms, although no study to date has determined how these are affected by differing frequencies of use or whether they are reversible on cessation of use. In this study we aimed to determine how variations in ketamine use and abstention from prior use affect neurocognitive function and psychological wellbeing.\n\n\nMETHOD\nWe assessed a total of 150 individuals: 30 frequent ketamine users, 30 infrequent ketamine users, 30 ex-ketamine users, 30 polydrug users and 30 controls who did not use illicit drugs. Cognitive tasks included spatial working memory, pattern recognition memory, the Stockings of Cambridge (a variant of the Tower of London task), simple vigilance and verbal and category fluency. Standardized questionnaires were used to assess psychological wellbeing. Hair analysis was used to verify group membership.\n\n\nRESULTS\nFrequent ketamine users were impaired on spatial working memory, pattern recognition memory, Stockings of Cambridge and category fluency but exhibited preserved verbal fluency and prose recall. There were no differences in the performance of the infrequent ketamine users or ex-users compared to the other groups. Frequent users showed increased delusional, dissociative and schizotypal symptoms which were also evident to a lesser extent in infrequent and ex-users. Delusional symptoms correlated positively with the amount of ketamine used currently by the frequent users.\n\n\nCONCLUSIONS\nFrequent ketamine use is associated with impairments in working memory, episodic memory and aspects of executive function as well as reduced psychological wellbeing. 'Recreational' ketamine use does not appear to be associated with distinct cognitive impairments although increased levels of delusional and dissociative symptoms were observed. As no performance decrements were observed in the ex-ketamine users, it is possible that the cognitive impairments observed in the frequent ketamine group are reversible upon cessation of ketamine use, although delusional symptoms persist.",
"title": ""
},
{
"docid": "611f7b5564c9168f73f778e7466d1709",
"text": "A fold-back current-limit circuit, with load-insensitive quiescent current characteristic for CMOS low dropout regulator (LDO), is proposed in this paper. This method has been designed in 0.35 µm CMOS technology and verified by Hspice simulation. The quiescent current of the LDO is 5.7 µA at 100-mA load condition. It is only 2.2% more than it in no-load condition, 5.58 µA. The maximum current limit is set to be 197 mA, and the short-current limit is 77 mA. Thus, the power consumption can be saved up to 61% at the short-circuit condition, which also decreases the risk of damaging the power transistor. Moreover, the thermal protection can be simplified and the LDO will be more reliable.",
"title": ""
},
{
"docid": "9dc4da444e4df3f63f37b1928e36464c",
"text": "This paper presents and studies various selected literature primarily from conference proceedings, journals and clinical tests of the robotic, mechatronics, neurology and biomedical engineering of rehabilitation robotic systems. The present paper focuses of three main categories: types of rehabilitation robots, key technologies with current issues and future challenges. Literature on fundamental research with some examples from commercialized robots and new robot development projects related to rehabilitation are introduced. Most of the commercialized robots presented in this paper are well known especially to robotics engineers and scholars in the robotic field, but are less known to humanities scholars. The field of rehabilitation robot research is expanding; in light of this, some of the current issues and future challenges in rehabilitation robot engineering are recalled, examined and clarified with future directions. This paper is concluded with some recommendations with respect to rehabilitation robots.",
"title": ""
},
{
"docid": "43cf9c485c541afa84e3ee5ce4d39376",
"text": "With the tremendous popularity of PDF format, recognizing mathematical formulas in PDF documents becomes a new and important problem in document analysis field. In this paper, we present a method of embedded mathematical formula identification in PDF documents, based on Support Vector Machine (SVM). The method first segments text lines into words, and then classifies each word into two classes, namely formula or ordinary text. Various features of embedded formulas, including geometric layout, character and context content, are utilized to build a robust and adaptable SVM classifier. Embedded formulas are then extracted through merging the words labeled as formulas. Experimental results show good performance of the proposed method. Furthermore, the method has been successfully incorporated into a commercial software package for large-scale e-Book production.",
"title": ""
},
{
"docid": "cfa58ab168beb2d52fe6c2c47488e93a",
"text": "In this paper we present our approach to automatically identify the subjectivity, polarity and irony of Italian Tweets. Our system which reaches and outperforms the state of the art in Italian is well adapted for different domains since it uses abstract word features instead of bag of words. We also present experiments carried out to study how Italian Sentiment Analysis systems react to domain changes. We show that bag of words approaches commonly used in Sentiment Analysis do not adapt well to domain changes.",
"title": ""
},
{
"docid": "2215fd5b4f1e884a66b62675c8c92d33",
"text": "In the context of structural optimization we propose a new numerical method based on a combination of the classical shape derivative and of the level-set method for front propagation. We implement this method in two and three space dimensions for a model of linear or nonlinear elasticity. We consider various objective functions with weight and perimeter constraints. The shape derivative is computed by an adjoint method. The cost of our numerical algorithm is moderate since the shape is captured on a fixed Eulerian mesh. Although this method is not specifically designed for topology optimization, it can easily handle topology changes. However, the resulting optimal shape is strongly dependent on the initial guess. 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a7dddda96d65147c6d3e47df2757e329",
"text": "Today, a large number of audio features exists in audio retrieval for different purposes, such as automatic speech recognition, music information retrieval, audio segmentation, and environmental sound retrieval. The goal of this paper is to review latest research in the context of audio feature extraction and to give an application-independent overview of the most important existing techniques. We survey state-of-the-art features from various domains and propose a novel taxonomy for the organization of audio features. Additionally, we identify the building blocks of audio features and propose a scheme that allows for the description of arbitrary features. We present an extensive literature survey and provide more than 200 references to relevant high quality publications.",
"title": ""
},
{
"docid": "c5cb4f6b5bc524bad610e855105c1b99",
"text": "The authors examined how an applicant's handshake influences hiring recommendations formed during the employment interview. A sample of 98 undergraduate students provided personality measures and participated in mock interviews during which the students received ratings of employment suitability. Five trained raters independently evaluated the quality of the handshake for each participant. Quality of handshake was related to interviewer hiring recommendations. Path analysis supported the handshake as mediating the effect of applicant extraversion on interviewer hiring recommendations, even after controlling for differences in candidate physical appearance and dress. Although women received lower ratings for the handshake, they did not on average receive lower assessments of employment suitability. Exploratory analysis suggested that the relationship between a firm handshake and interview ratings may be stronger for women than for men.",
"title": ""
},
{
"docid": "d15e27ef0225d1f178b034534b57856b",
"text": "We introduce a novel joint sparse representation based multi-view automatic target recognition (ATR) method, which can not only handle multi-view ATR without knowing the pose but also has the advantage of exploiting the correlations among the multiple views of the same physical target for a single joint recognition decision. Extensive experiments have been carried out on moving and stationary target acquisition and recognition (MSTAR) public database to evaluate the proposed method compared with several state-of-the-art methods such as linear support vector machine (SVM), kernel SVM, as well as a sparse representation based classifier (SRC). Experimental results demonstrate that the proposed joint sparse representation ATR method is very effective and performs robustly under variations such as multiple joint views, depression, azimuth angles, target articulations, as well as configurations.",
"title": ""
},
{
"docid": "2b7a8590fe5e73d254a5be2ba3c1ee5b",
"text": "High resolution magnetic resonance (MR) imaging is desirable in many clinical applications due to its contribution to more accurate subsequent analyses and early clinical diagnoses. Single image super resolution (SISR) is an effective and cost efficient alternative technique to improve the spatial resolution of MR images. In the past few years, SISR methods based on deep learning techniques, especially convolutional neural networks (CNNs), have achieved state-of-the-art performance on natural images. However, the information is gradually weakened and training becomes increasingly difficult as the network deepens. The problem is more serious for medical images because lacking high quality and effective training samples makes deep models prone to underfitting or overfitting. Nevertheless, many current models treat the hierarchical features on different channels equivalently, which is not helpful for the models to deal with the hierarchical features discriminatively and targetedly. To this end, we present a novel channel splitting network (CSN) to ease the representational burden of deep models. The proposed CSN model divides the hierarchical features into two branches, i.e., residual branch and dense branch, with different information transmissions. The residual branch is able to promote feature reuse, while the dense branch is beneficial to the exploration of new features. Besides, we also adopt the merge-and-run mapping to facilitate information integration between different branches. Extensive experiments on various MR images, including proton density (PD), T1 and T2 images, show that the proposed CSN model achieves superior performance over other state-of-the-art SISR methods.",
"title": ""
}
] | scidocsrr |
507b1e11ba8732940248cb59695056c6 | Dimensions of peri-implant mucosa: an evaluation of maxillary anterior single implants in humans. | [
{
"docid": "42faf2c0053c9f6a0147fc66c8e4c122",
"text": "IN 1921, Gottlieb's discovery of the epithelial attachment of the gingiva opened new horizons which served as the basis for a better understanding of the biology of the dental supporting tissues in health and disease. Three years later his pupils, Orban and Kohler (1924), undertook the task of measuring the epithelial attachment as well as the surrounding tissue relations during the four phases of passive eruption of the tooth. Gottlieb and Orban's descriptions of the epithelial attachment unveiled the exact morphology of this epithelial structure, and clarified the relation of this",
"title": ""
}
] | [
{
"docid": "9308c1dfdf313f6268db9481723f533d",
"text": "We report the discovery of a highly active Ni-Co alloy electrocatalyst for the oxidation of hydrazine (N(2)H(4)) and provide evidence for competing electrochemical (faradaic) and chemical (nonfaradaic) reaction pathways. The electrochemical conversion of hydrazine on catalytic surfaces in fuel cells is of great scientific and technological interest, because it offers multiple redox states, complex reaction pathways, and significantly more favorable energy and power densities compared to hydrogen fuel. Structure-reactivity relations of a Ni(60)Co(40) alloy electrocatalyst are presented with a 6-fold increase in catalytic N(2)H(4) oxidation activity over today's benchmark catalysts. We further study the mechanistic pathways of the catalytic N(2)H(4) conversion as function of the applied electrode potential using differentially pumped electrochemical mass spectrometry (DEMS). At positive overpotentials, N(2)H(4) is electrooxidized into nitrogen consuming hydroxide ions, which is the fuel cell-relevant faradaic reaction pathway. In parallel, N(2)H(4) decomposes chemically into molecular nitrogen and hydrogen over a broad range of electrode potentials. The electroless chemical decomposition rate was controlled by the electrode potential, suggesting a rare example of a liquid-phase electrochemical promotion effect of a chemical catalytic reaction (\"EPOC\"). The coexisting electrocatalytic (faradaic) and heterogeneous catalytic (electroless, nonfaradaic) reaction pathways have important implications for the efficiency of hydrazine fuel cells.",
"title": ""
},
{
"docid": "bdb41d1633c603f4b68dfe0191eb822b",
"text": "Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts-even concepts about action and perception-are symbolic and abstract, and therefore must be implemented outside the brain's sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which, brain structures in the sensory-motor regions are exploited to characterise the so-called \"abstract\" concepts that constitute the meanings of grammatical constructions and general inference patterns.",
"title": ""
},
{
"docid": "72196b0a2eed5e9747d90593cdd0684d",
"text": "Advanced silicon (Si) node technology development is moving to 10/7nm technology and pursuing die size reduction, efficiency enhancement and lower power consumption for mobile applications in the semiconductor industry. The flip chip chip scale package (fcCSP) has been viewed as an attractive solution to achieve the miniaturization of die size, finer bump pitch, finer line width and spacing (LW/LS) substrate requirements, and is widely adopted in mobile devices to satisfy the increasing demands of higher performance, higher bandwidth, and lower power consumption as well as multiple functions. The utilization of mass reflow (MR) chip attach process in a fcCSP with copper (Cu) pillar bumps, embedded trace substrate (ETS) technology and molded underfill (MUF) is usually viewed as the cost-efficient solution. However, when finer bump pitch and LW/LS with an escaped trace are designed in flip chip MR process, a higher risk of a bump to trace short can occur. In order to reduce the risk of bump to trace short as well as extremely low-k (ELK) damage in a fcCSP with advanced Si node, the thermo-compression bonding (TCB) and TCB with non-conductive paste (TCNCP) have been adopted, although both methodologies will cause a higher assembly cost due to the lower units per hour (UPH) assembly process. For the purpose of delivering a cost-effective chip attach process as compared to TCB/TCNCP methodologies as well as reducing the risk of bump to trace as compared to the MR process, laser assisted bonding (LAB) chip attach methodology was studied in a 15x15mm fcCSP with 10nm backend process daisy-chain die for this paper. Using LAB chip attach technology can increase the UPH by more than 2-times over TCB and increase the UPH 5-times compared to TCNCP. To realize the ELK performance of a 10nm fcCSP with fine bump pitch of $60 \\mu \\mathrm{m}$ and $90 \\mu \\mathrm{m}$ as well as 2-layer ETS with two escaped traces design, the quick temperature cycling (QTC) test was performed after the LAB chip attach process. The comparison of polyimide (PI) layer Cu pillar bumps to non-PI Cu pillar bumps (without a PI layer) will be discussed to estimate the 10nm ELK performance. The evaluated result shows that the utilization of LAB can not only achieve a bump pitch reduction with a finer LW/LS substrate with escaped traces in the design, but it also validates ELK performance and Si node reduction. Therefore, the illustrated LAB chip attach processes examined here can guarantee the assembly yield with less ELK damage risk in a 10nm fcCSP with finer bump pitch and substrate finer LW/LS design in the future.",
"title": ""
},
{
"docid": "bc8950644ded24618a65c4fcef302044",
"text": "Child maltreatment is a pervasive problem in our society that has long-term detrimental consequences to the development of the affected child such as future brain growth and functioning. In this paper, we surveyed empirical evidence on the neuropsychological effects of child maltreatment, with a special emphasis on emotional, behavioral, and cognitive process–response difficulties experienced by maltreated children. The alteration of the biochemical stress response system in the brain that changes an individual’s ability to respond efficiently and efficaciously to future stressors is conceptualized as the traumatic stress response. Vulnerable brain regions include the hypothalamic–pituitary–adrenal axis, the amygdala, the hippocampus, and prefrontal cortex and are linked to children’s compromised ability to process both emotionally-laden and neutral stimuli in the future. It is suggested that information must be garnered from varied literatures to conceptualize a research framework for the traumatic stress response in maltreated children. This research framework suggests an altered developmental trajectory of information processing and emotional dysregulation, though much debate still exists surrounding the correlational nature of empirical studies, the potential of resiliency following childhood trauma, and the extent to which early interventions may facilitate recovery.",
"title": ""
},
{
"docid": "8d070d8506d8a83ce78bde0e19f28031",
"text": "Although amyotrophic lateral sclerosis and its variants are readily recognised by neurologists, about 10% of patients are misdiagnosed, and delays in diagnosis are common. Prompt diagnosis, sensitive communication of the diagnosis, the involvement of the patient and their family, and a positive care plan are prerequisites for good clinical management. A multidisciplinary, palliative approach can prolong survival and maintain quality of life. Treatment with riluzole improves survival but has a marginal effect on the rate of functional deterioration, whereas non-invasive ventilation prolongs survival and improves or maintains quality of life. In this Review, we discuss the diagnosis, management, and how to cope with impaired function and end of life on the basis of our experience, the opinions of experts, existing guidelines, and clinical trials. We highlight the need for research on the effectiveness of gastrostomy, access to non-invasive ventilation and palliative care, communication between the care team, the patient and his or her family, and recognition of the clinical and social effects of cognitive impairment. We recommend that the plethora of evidence-based guidelines should be compiled into an internationally agreed guideline of best practice.",
"title": ""
},
{
"docid": "c3365370cdbf4afe955667f575d1fbb6",
"text": "One of the overriding interests of the literature on health care economics is to discover where personal choice in market economies end and corrective government intervention should begin. Our study addresses this question in the context of John Stuart Mill's utilitarian principle of harm. Our primary objective is to determine whether public policy interventions concerning more than 35,000 online pharmacies worldwide are necessary and efficient compared to traditional market-oriented approaches. Secondly, we seek to determine whether government interference could enhance personal utility maximization, despite its direct and indirect (unintended) costs on medical e-commerce. This study finds that containing the negative externalities of medical e-commerce provides the most compelling raison d'etre of government interference. It asserts that autonomy and paternalism need not be mutually exclusive, despite their direct and indirect consequences on individual choice and decision-making processes. Valuable insights derived from Mill's principle should enrich theory-building in health care economics and policy.",
"title": ""
},
{
"docid": "72eceddfa08e73739022df7c0dc89a3a",
"text": "The empirical mode decomposition (EMD) proposed by Huang et al. in 1998 shows remarkably effective in analyzing nonlinear signals. It adaptively represents nonstationary signals as sums of zero-mean amplitude modulation-frequency modulation (AM-FM) components by iteratively conducting the sifting process. How to determine the boundary conditions of the cubic spline when constructing the envelopes of data is the critical issue of the sifting process. A simple bound hit process technique is presented in this paper which constructs two periodic series from the original data by even and odd extension and then builds the envelopes using cubic spline with periodic boundary condition. The EMD is conducted fluently without any assumptions of the processed data by this approach. An example is presented to pick out the weak modulation of internal waves from an Envisat ASAR image by EMD with the boundary process technique",
"title": ""
},
{
"docid": "e2ea8ec9139837feb95ac432a63afe88",
"text": "Augmented and virtual reality have the potential of being indistinguishable from the real world. Holographic displays, including head mounted units, support this vision by creating rich stereoscopic scenes, with objects that appear to float in thin air - often within arm's reach. However, one has but to reach out and grasp nothing but air to destroy the suspension of disbelief. Snake-charmer is an attempt to provide physical form to virtual objects by revisiting the concept of Robotic Graphics or Encountered-type Haptic interfaces with current commodity hardware. By means of a robotic arm, Snake-charmer brings physicality to a virtual scene and explores what it means to truly interact with an object. We go beyond texture and position simulation and explore what it means to have a physical presence inside a virtual scene. We demonstrate how to render surface characteristics beyond texture and position, including temperature; how to physically move objects; and how objects can physically interact with the user's hand. We analyze our implementation, present the performance characteristics, and provide guidance for the construction of future physical renderers.",
"title": ""
},
{
"docid": "03368de546daf96d5111325f3d08fd3d",
"text": "Despite the widespread use of social media by students and its increased use by instructors, very little empirical evidence is available concerning the impact of social media use on student learning and engagement. This paper describes our semester-long experimental study to determine if using Twitter – the microblogging and social networking platform most amenable to ongoing, public dialogue – for educationally relevant purposes can impact college student engagement and grades. A total of 125 students taking a first year seminar course for pre-health professional majors participated in this study (70 in the experimental group and 55 in the control group). With the experimental group, Twitter was used for various types of academic and co-curricular discussions. Engagement was quantified by using a 19-item scale based on the National Survey of Student Engagement. To assess differences in engagement and grades, we used mixed effects analysis of variance (ANOVA) models, with class sections nested within treatment groups. We also conducted content analyses of samples of Twitter exchanges. The ANOVA results showed that the experimental group had a significantly greater increase in engagement than the control group, as well as higher semester grade point averages. Analyses of Twitter communications showed that students and faculty were both highly engaged in the learning process in ways that transcended traditional classroom activities. This study provides experimental evidence that Twitter can be used as an educational tool to help engage students and to mobilize faculty into a more active and participatory role.",
"title": ""
},
{
"docid": "ed3b4ace00c68e9ad2abe6d4dbdadfcb",
"text": "With decreasing costs of high-quality surveillance systems, human activity detection and tracking has become increasingly practical. Accordingly, automated systems have been designed for numerous detection tasks, but the task of detecting illegally parked vehicles has been left largely to the human operators of surveillance systems. We propose a methodology for detecting this event in real time by applying a novel image projection that reduces the dimensionality of the data and, thus, reduces the computational complexity of the segmentation and tracking processes. After event detection, we invert the transformation to recover the original appearance of the vehicle and to allow for further processing that may require 2-D data. We evaluate the performance of our algorithm using the i-LIDS vehicle detection challenge datasets as well as videos we have taken ourselves. These videos test the algorithm in a variety of outdoor conditions, including nighttime video and instances of sudden changes in weather.",
"title": ""
},
{
"docid": "14cc3608216dd17e7bcbc3e6acba66db",
"text": "Fluorescamine is a new reagent for the detection of primary amines in the picomole range. Its reaction with amines is almost instantaneous at room temperature in aqueous media. The products are highly fluorescent, whereas the reagent and its degradation products are nonfluorescent. Applications are discussed.",
"title": ""
},
{
"docid": "5e6994d8e9cc3af1371a24ac73058a82",
"text": "The first method that was developed to deal with the SLAM problem is based on the extended Kalman filter, EKF SLAM. However this approach cannot be applied to a large environments because of the quadratic complexity and data association problem. The second approach to address the SLAM problem is based on the Rao-Blackwellized Particle filter FastSLAM, which follows a large number of hypotheses that represent the different possible trajectories, each trajectory carries its own map, its complexity increase logarithmically with the number of landmarks in the map. In this paper we will present the result of an implementation of the FastSLAM 2.0 on an open multimedia applications processor, based on a monocular camera as an exteroceptive sensor. A parallel implementation of this algorithm was achieved. Results aim to demonstrate that an optimized algorithm implemented on a low cost architecture is suitable to design an embedded system for SLAM applications.",
"title": ""
},
{
"docid": "ad0a69f92d511e02a24b8d77d3a17641",
"text": "Requirement engineering is an integral part of the software development lifecycle since the basis for developing successful software depends on comprehending its requirements in the first place. Requirement engineering involves a number of processes for gathering requirements in accordance with the needs and demands of users and stakeholders of the software product. In this paper, we have reviewed the prominent processes, tools and technologies used in the requirement gathering phase. The study is useful to perceive the current state of the affairs pertaining to the requirement engineering research and to understand the strengths and limitations of the existing requirement engineering techniques. The study also summarizes the best practices and how to use a blend of the requirement engineering techniques as an effective methodology to successfully conduct the requirement engineering task. The study also highlights the importance of security requirements as though they are part of the nonfunctional requirement, yet are naturally considered fundamental to secure software development.",
"title": ""
},
{
"docid": "04fc127c1b6e915060c2f3035aa5067b",
"text": "Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing–emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user’s emotional and attentional expressions. We present the basic research in the field and the recent advances into the emotion recognition from facial, voice, and pshysiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.",
"title": ""
},
{
"docid": "d603e92c3f3c8ab6a235631ee3a55d52",
"text": "This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. We rst show how to extend the standard notion of classiication by allowing each instance to be associated with multiple labels. We then discuss our approach for multiclass multi-label text categorization which is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identiication from unconstrained spoken customer responses.",
"title": ""
},
{
"docid": "cc5d670e751090b29ee9365e840a70c2",
"text": "The Web today provides a corpus of design examples unparalleled in human history. However, leveraging existing designs to produce new pages is currently difficult. This paper introduces the Bricolage algorithm for automatically transferring design and content between Web pages. Bricolage introduces a novel structuredprediction technique that learns to create coherent mappings between pages by training on human-generated exemplars. The produced mappings can then be used to automatically transfer the content from one page into the style and layout of another. We show that Bricolage can learn to accurately reproduce human page mappings, and that it provides a general, efficient, and automatic technique for retargeting content between a variety of real Web pages.",
"title": ""
},
{
"docid": "6c72d16c788509264f573a322c9ebaf6",
"text": "A 5-year clinical and laboratory study of Nigerian children with renal failure (RF) was performed to determine the factors that limited their access to dialysis treatment and what could be done to improve access. There were 48 boys and 33 girls (aged 20 days to 15 years). Of 81 RF patients, 55 were eligible for dialysis; 33 indicated ability to afford dialysis, but only 6 were dialyzed, thus giving a dialysis access rate of 10.90% (6/55). Ability to bear dialysis cost/dialysis accessibility ratio was 5.5:1 (33/6). Factors that limited access to dialysis treatment in our patients included financial restrictions from parents (33%), no parental consent for dialysis (6%), lack or failure of dialysis equipment (45%), shortage of dialysis personnel (6%), reluctance of renal staff to dialyze (6%), and late presentation in hospital (4%). More deaths were recorded among undialyzed than dialyzed patients (P<0.01); similarly, undialyzed patients had more deaths compared with RF patients who required no dialysis (P<0.025). Since most of our patients could not be dialyzed owing to a range of factors, preventive nephrology is advocated to reduce the morbidity and mortality from RF due to preventable diseases.",
"title": ""
},
{
"docid": "22cc9e5487975f8b7ca400ad69504107",
"text": "IMSI Catchers are tracking devices that break the privacy of the subscribers of mobile access networks, with disruptive effects to both the communication services and the trust and credibility of mobile network operators. Recently, we verified that IMSI Catcher attacks are really practical for the state-of-the-art 4G/LTE mobile systems too. Our IMSI Catcher device acquires subscription identities (IMSIs) within an area or location within a few seconds of operation and then denies access of subscribers to the commercial network. Moreover, we demonstrate that these attack devices can be easily built and operated using readily available tools and equipment, and without any programming. We describe our experiments and procedures that are based on commercially available hardware and unmodified open source software.",
"title": ""
},
{
"docid": "dbd504abdff9b5bd80a88f19c3cd7715",
"text": "L'hamartome lipomateux superficiel de Hoffmann-Zurhelle est une tumeur bénigne souvent congénitale. Histologiquement, il est caractérisé par la présence hétérotopique de cellules adipeuses quelquefois lipoblastiques autour des trajets vasculaires dermiques. Nous rapportons une nouvelle observation de forme multiple à révélation tardive chez une femme âgée de 31 ans sans antécédents pathologiques notables qui a été adressée à la consultation pour des papules et tumeurs asymptomatiques de couleur chaire se regroupent en placards à disposition linéaire et zostèriforme au niveau de la face externe de la cuisse droite depuis l'âge de 13 ans, augmentant progressivement de taille. L'étude histologique d'un fragment biopsique avait montré un épiderme régulier, plicaturé et kératinisant, soulevé par un tissu fibro-adipeux abondant incluant quelques vaisseaux sanguins aux dépens du derme moyen. Ces données cliniques et histologiques ont permis de retenir le diagnostic d'hamartome lipomateux superficiel. Une exérèse chirurgicale des tumeurs de grande taille a été proposée complété par le laser CO2 pour le reste de lésions cutanées. L'hamartome lipomateux superficiel est une lésion bénigne sans potentiel de malignité. L'exérèse chirurgicale peut être proposée si la lésion est gênante ou dans un but essentiellement esthétique. Pan African Medical Journal. 2015; 21:31 doi:10.11604/pamj.2015.21.31.4773 This article is available online at: http://www.panafrican-med-journal.com/content/article/21/31/full/ © Sanaa Krich et al. The Pan African Medical Journal ISSN 1937-8688. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Pan African Medical Journal – ISSN: 19378688 (www.panafrican-med-journal.com) Published in partnership with the African Field Epidemiology Network (AFENET). (www.afenet.net) Case report Open Access",
"title": ""
},
{
"docid": "058a89e44689faa0a2545b5b75fd8cb9",
"text": "cplint on SWISH is a web application that allows users to perform reasoning tasks on probabilistic logic programs. Both inference and learning systems can be performed: conditional probabilities with exact, rejection sampling and Metropolis-Hasting methods. Moreover, the system now allows hybrid programs, i.e., programs where some of the random variables are continuous. To perform inference on such programs likelihood weighting and particle filtering are used. cplint on SWISH is also able to sample goals’ arguments and to graph the results. This paper reports on advances and new features of cplint on SWISH, including the capability of drawing the binary decision diagrams created during the inference processes.",
"title": ""
}
] | scidocsrr |
31917eed92437862154233d7239c1af1 | 3D Point Cloud Semantic Modelling: Integrated Framework for Indoor Spaces and Furniture | [
{
"docid": "1dcae3f9b4680725d2c7f5aa1736967c",
"text": "Two less addressed issues of deep reinforcement learning are (1) lack of generalization capability to new goals, and (2) data inefficiency, i.e., the model requires several (and often costly) episodes of trial and error to converge, which makes it impractical to be applied to real-world scenarios. In this paper, we address these two issues and apply our model to target-driven visual navigation. To address the first issue, we propose an actor-critic model whose policy is a function of the goal as well as the current state, which allows better generalization. To address the second issue, we propose the AI2-THOR framework, which provides an environment with high-quality 3D scenes and a physics engine. Our framework enables agents to take actions and interact with objects. Hence, we can collect a huge number of training samples efficiently. We show that our proposed method (1) converges faster than the state-of-the-art deep reinforcement learning methods, (2) generalizes across targets and scenes, (3) generalizes to a real robot scenario with a small amount of fine-tuning (although the model is trained in simulation), (4) is end-to-end trainable and does not need feature engineering, feature matching between frames or 3D reconstruction of the environment.",
"title": ""
}
] | [
{
"docid": "72b25e72706720f71ebd6fe8cf769df5",
"text": "This paper reports our recent result in designing a function for autonomous APs to estimate throughput and delay of its clients in 2.4GHz WiFi channels to support those APs' dynamic channel selection. Our function takes as inputs the traffic volume and strength of signals emitted from nearby interference APs as well as the target AP's traffic volume. By this function, the target AP can estimate throughput and delay of its clients without actually moving to each channel, it is just required to monitor IEEE802.11 MAC frames sent or received by the interference APs. The function is composed of an SVM-based classifier to estimate capacity saturation and a regression function to estimate both throughput and delay in case of saturation in the target channel. The training dataset for the machine learning is created by a highly-precise network simulator. We have conducted over 10,000 simulations to train the model, and evaluated using additional 2,000 simulation results. The result shows that the estimated throughput error is less than 10%.",
"title": ""
},
{
"docid": "b50c010e8606de8efb7a9e861ca31059",
"text": "A Software Defined Network (SDN) is a new network architecture that provides central control over the network. Although central control is the major advantage of SDN, it is also a single point of failure if it is made unreachable by a Distributed Denial of Service (DDoS) Attack. To mitigate this threat, this paper proposes to use the central control of SDN for attack detection and introduces a solution that is effective and lightweight in terms of the resources that it uses. More precisely, this paper shows how DDoS attacks can exhaust controller resources and provides a solution to detect such attacks based on the entropy variation of the destination IP address. This method is able to detect DDoS within the first five hundred packets of the attack traffic.",
"title": ""
},
{
"docid": "bf2746e237446a477919b3d6c2940237",
"text": "In this paper, we first introduce the RF performance of Globalfoundries 45RFSOI process. NFET Ft > 290GHz and Fmax >380GHz. Then we present several mm-Wave circuit block designs, i.e., Switch, Power Amplifier, and LNA, based on 45RFSOI process for 5G Front End Module (FEM) applications. For the SPDT switch, insertion loss (IL) < 1dB at 30GHz with 32dBm P1dB and > 25dBm Pmax. For the PA, with a 2.9V power supply, the PA achieves 13.1dB power gain and a saturated output power (Psat) of 16.2dBm with maximum power-added efficiency (PAE) of 41.5% at 24Ghz continuous-wave (CW). With 960Mb/s 64QAM signal, 22.5% average PAE, −29.6dB EVM, and −30.5dBc ACLR are achieved with 9.5dBm average output power.",
"title": ""
},
{
"docid": "c00a29466c82f972a662b0e41b724928",
"text": "We introduce the type theory ¿µv, a call-by-value variant of Parigot's ¿µ-calculus, as a Curry-Howard representation theory of classical propositional proofs. The associated rewrite system is Church-Rosser and strongly normalizing, and definitional equality of the type theory is consistent, compatible with cut, congruent and decidable. The attendant call-by-value programming language µPCFv is obtained from ¿µv by augmenting it by basic arithmetic, conditionals and fixpoints. We study the behavioural properties of µPCFv and show that, though simple, it is a very general language for functional computation with control: it can express all the main control constructs such as exceptions and first-class continuations. Proof-theoretically the dual ¿µv-constructs of naming and µ-abstraction witness the introduction and elimination rules of absurdity respectively. Computationally they give succinct expression to a kind of generic (forward) \"jump\" operator, which may be regarded as a unifying control construct for functional computation. Our goal is that ¿µv and µPCFv respectively should be to functional computation with first-class access to the flow of control what ¿-calculus and PCF respectively are to pure functional programming: ¿µv gives the logical basis via the Curry-Howard correspondence, and µPCFv is a prototypical language albeit in purified form.",
"title": ""
},
{
"docid": "f52cde20377d4b8b7554f9973c220d0a",
"text": "A typical method to obtain valuable information is to extract the sentiment or opinion from a message. Machine learning technologies are widely used in sentiment classification because of their ability to “learn” from the training dataset to predict or support decision making with relatively high accuracy. However, when the dataset is large, some algorithms might not scale up well. In this paper, we aim to evaluate the scalability of Naïve Bayes classifier (NBC) in large datasets. Instead of using a standard library (e.g., Mahout), we implemented NBC to achieve fine-grain control of the analysis procedure. A Big Data analyzing system is also design for this study. The result is encouraging in that the accuracy of NBC is improved and approaches 82% when the dataset size increases. We have demonstrated that NBC is able to scale up to analyze the sentiment of millions movie reviews with increasing throughput.",
"title": ""
},
{
"docid": "281323234970e764eff59579220be9b4",
"text": "Methods based on kernel density estimation have been successfully applied for various data mining tasks. Their natural interpretation together with suitable properties make them an attractive tool among others in clustering problems. In this paper, the Complete Gradient Clustering Algorithm has been used to investigate a real data set of grains. The wheat varieties, Kama, Rosa and Canadian, characterized by measurements of main grain geometric features obtained by X-ray technique, have been analyzed. The proposed algorithm is expected to be an effective tool for recognizing wheat varieties. A comparison between the clustering results obtained from this method and the classical k-means clustering algorithm shows positive practical features of the Complete Gradient Clustering Algorithm.",
"title": ""
},
{
"docid": "e872a91433539301a857eab518cacb38",
"text": "Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present Arnold, a completely autonomous agent to play First-Person Shooter Games using only screen pixel data and demonstrate its effectiveness on Doom, a classical firstperson shooter game. Arnold is trained with deep reinforcement learning using a recent Action-Navigation architecture, which uses separate deep neural networks for exploring the map and fighting enemies. Furthermore, it utilizes a lot of techniques such as augmenting high-level game features, reward shaping and sequential updates for efficient training and effective performance. Arnold outperforms average humans as well as in-built game bots on different variations of the deathmatch. It also obtained the highest kill-to-death ratio in both the tracks of the Visual Doom AI Competition and placed second in terms of the number of frags.",
"title": ""
},
{
"docid": "5374ed153eb37e5680f1500fea5b9dbe",
"text": "Social media have become dominant in everyday life during the last few years where users share their thoughts and experiences about their enjoyable events in posts. Most of these posts are related to different categories related to: activities, such as dancing, landscapes, such as beach, people, such as a selfie, and animals such as pets. While some of these posts become popular and get more attention, others are completely ignored. In order to address the desire of users to create popular posts, several researches have studied post popularity prediction. Existing works focus on predicting the popularity without considering the category type of the post. In this paper we propose category specific post popularity prediction using visual and textual content for action, scene, people and animal categories. In this way we aim to answer the question What makes a post belonging to a specific action, scene, people or animal category popular? To answer to this question we perform several experiments on a collection of 65K posts crawled from Instagram.",
"title": ""
},
{
"docid": "1af028a0cf88d0ac5c52e84019554d51",
"text": "Robots exhibit life-like behavior by performing intelligent actions. To enhance human-robot interaction it is necessary to investigate and understand how end-users perceive such animate behavior. In this paper, we report an experiment to investigate how people perceived different robot embodiments in terms of animacy and intelligence. iCat and Robovie II were used as the two embodiments in this experiment. We conducted a between-subject experiment where robot type was the independent variable, and perceived animacy and intelligence of the robot were the dependent variables. Our findings suggest that a robots perceived intelligence is significantly correlated with animacy. The correlation between the intelligence and the animacy of a robot was observed to be stronger in the case of the iCat embodiment. Our results also indicate that the more animated the face of the robot, the more likely it is to attract the attention of a user. We also discuss the possible and probable explanations of the results obtained.",
"title": ""
},
{
"docid": "c2fc81074ceed3d7c3690a4b23f7624e",
"text": "The diffusion model for 2-choice decisions (R. Ratcliff, 1978) was applied to data from lexical decision experiments in which word frequency, proportion of high- versus low-frequency words, and type of nonword were manipulated. The model gave a good account of all of the dependent variables--accuracy, correct and error response times, and their distributions--and provided a description of how the component processes involved in the lexical decision task were affected by experimental variables. All of the variables investigated affected the rate at which information was accumulated from the stimuli--called drift rate in the model. The different drift rates observed for the various classes of stimuli can all be explained by a 2-dimensional signal-detection representation of stimulus information. The authors discuss how this representation and the diffusion model's decision process might be integrated with current models of lexical access.",
"title": ""
},
{
"docid": "e3a2b7d38a777c0e7e06d2dc443774d5",
"text": "The area under the ROC (Receiver Operating Characteristic) curve, or simply AUC, has been widely used to measure model performance for binary classification tasks. It can be estimated under parametric, semiparametric and nonparametric assumptions. The non-parametric estimate of the AUC, which is calculated from the ranks of predicted scores of instances, does not always sufficiently take advantage of the predicted scores. This problem is tackled in this paper. On the basis of the ranks and the original values of the predicted scores, we introduce a new metric, called a scored AUC or sAUC. Experimental results on 20 UCI data sets empirically demonstrate the validity of the new metric for classifier evaluation and selection.",
"title": ""
},
{
"docid": "fb1d1c291b175c1fc788832fec008664",
"text": "In Vehicular Ad Hoc Networks (VANETs), anonymity of the nodes sending messages should be preserved, while at the same time the law enforcement agencies should be able to trace the messages to the senders when necessary. It is also necessary that the messages sent are authenticated and delivered to the vehicles in the relevant areas quickly. In this paper, we present an efficient protocol for fast dissemination of authenticated messages in VANETs. It ensures the anonymity of the senders and also provides mechanism for law enforcement agencies to trace the messages to their senders, when necessary.",
"title": ""
},
{
"docid": "45940a48b86645041726120fb066a1fa",
"text": "For large state-space Markovian Decision Problems MonteCarlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.",
"title": ""
},
{
"docid": "1e6c2319e7c9e51cd4e31107d56bce91",
"text": "Marketing has been criticised from all spheres today since the real worth of all the marketing efforts can hardly be precisely determined. Today consumers are better informed and also misinformed at times due to the bombardment of various pieces of information through a new type of interactive media, i.e., social media (SM). In SM, communication is through dialogue channels wherein consumers pay more attention to SM buzz rather than promotions of marketers. The various forms of SM create a complex set of online social networks (OSN), through which word-of-mouth (WOM) propagates and influence consumer decisions. With the growth of OSN and user generated contents (UGC), WOM metamorphoses to electronic word-of-mouth (eWOM), which spreads in astronomical proportions. Previous works study the effect of external and internal influences in affecting consumer behaviour. However, today the need is to resort to multidisciplinary approaches to find out how SM influence consumers with eWOM and online reviews. This paper reviews the emerging trend of how multiple disciplines viz. Statistics, Data Mining techniques, Network Analysis, etc. are being integrated by marketers today to analyse eWOM and derive actionable intelligence.",
"title": ""
},
{
"docid": "b9a214ad1b6a97eccf6c14d3d778b2ff",
"text": "In this paper a morphological tagging approach for document image invoice analysis is described. Tokens close by their morphology and confirmed in their location within different similar contexts make apparent some parts of speech representative of the structure elements. This bottom up approach avoids the use of an priori knowledge provided that there are redundant and frequent contexts in the text. The approach is applied on the invoice body text roughly recognized by OCR and automatically segmented. The method makes possible the detection of the invoice articles and their different fields. The regularity of the article composition and its redundancy in the invoice is a good help for its structure. The recognition rate of 276 invoices and 1704 articles, is over than 91.02% for articles and 92.56% for fields.",
"title": ""
},
{
"docid": "caf1a9d9b00e7d2c79a2869b17aa7292",
"text": "Human activity recognition using mobile device sensors is an active area of research in pervasive computing. In our work, we aim at implementing activity recognition approaches that are suitable for real life situations. This paper focuses on the problem of recognizing the on-body position of the mobile device which in a real world setting is not known a priori. We present a new real world data set that has been collected from 15 participants for 8 common activities were they carried 7 wearable devices in different positions. Further, we introduce a device localization method that uses random forest classifiers to predict the device position based on acceleration data. We perform the most complete experiment in on-body device location that includes all relevant device positions for the recognition of a variety of different activities. We show that the method outperforms other approaches achieving an F-Measure of 89% across different positions. We also show that the detection of the device position consistently improves the result of activity recognition for common activities.",
"title": ""
},
{
"docid": "52e0f106480635b84339c21d1a24dcde",
"text": "We propose a fast, parallel, maximum clique algorithm for large, sparse graphs that is designed to exploit characteristics of social and information networks. We observe roughly linear runtime scaling over graphs between 1000 vertices and 100M vertices. In a test with a 1.8 billion-edge social network, the algorithm finds the largest clique in about 20 minutes. For social networks, in particular, we found that using the core number of a vertex in combination with a good heuristic clique finder efficiently removes the vast majority of the search space. In addition, we parallelize the exploration of the search tree. In the algorithm, processes immediately communicate changes to upper and lower bounds on the size of maximum clique, which occasionally results in a super-linear speedup because vertices with especially large search spaces can be pruned by other processes. We use this clique finder to investigate the size of the largest temporal strong components in dynamic networks, which requires finding the largest clique in a particular temporal reachability graph.",
"title": ""
},
{
"docid": "673cf83a9e08ed4e70b6cb706e0ffc5b",
"text": "Conversation systems are of growing importance since they enable an easy interaction interface between humans and computers: using natural languages. To build a conversation system with adequate intelligence is challenging, and requires abundant resources including an acquisition of big data and interdisciplinary techniques, such as information retrieval and natural language processing. Along with the prosperity of Web 2.0, the massive data available greatly facilitate data-driven methods such as deep learning for human-computer conversation systems. Owing to the diversity of Web resources, a retrieval-based conversation system will come up with at least some results from the immense repository for any user inputs. Given a human issued message, i.e., query, a traditional conversation system would provide a response after adequate training and learning of how to respond. In this paper, we propose a new task for conversation systems: joint learning of response ranking featured with next utterance suggestion. We assume that the new conversation mode is more proactive and keeps user engaging. We examine the assumption in experiments. Besides, to address the joint learning task, we propose a novel Dual-LSTM Chain Model to couple response ranking and next utterance suggestion simultaneously. From the experimental results, we demonstrate the usefulness of the proposed task and the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "cef4c47b512eb4be7dcadcee35f0b2ca",
"text": "This paper presents a project that allows the Baxter humanoid robot to play chess against human players autonomously. The complete solution uses three main subsystems: computer vision based on a single camera embedded in Baxter's arm to perceive the game state, an open-source chess engine to compute the next move, and a mechatronics subsystem with a 7-DOF arm to manipulate the pieces. Baxter can play chess successfully in unconstrained environments by dynamically responding to changes in the environment. This implementation demonstrates Baxter's capabilities of vision-based adaptive control and small-scale manipulation, which can be applicable to numerous applications, while also contributing to the computer vision chess analysis literature.",
"title": ""
},
{
"docid": "f74ccd06a302b70980d7b3ba2ee76cfb",
"text": "As the world becomes more connected to the cyber world, attackers and hackers are becoming increasingly sophisticated to penetrate computer systems and networks. Intrusion Detection System (IDS) plays a vital role in defending a network against intrusion. Many commercial IDSs are available in marketplace but with high cost. At the same time open source IDSs are also available with continuous support and upgradation from large user community. Each of these IDSs adopts a different approaches thus may target different applications. This paper provides a quick review of six Open Source IDS tools so that one can choose the appropriate Open Source IDS tool as per their organization requirements.",
"title": ""
}
] | scidocsrr |
94fbd608b3c21fe9a47e5c6c42ad18ad | Recorded Behavior as a Valuable Resource for Diagnostics in Mobile Phone Addiction: Evidence from Psychoinformatics | [
{
"docid": "2acbfab9d69f3615930c1960a2e6dda9",
"text": "OBJECTIVE\nThe aim of this study was to develop a self-diagnostic scale that could distinguish smartphone addicts based on the Korean self-diagnostic program for Internet addiction (K-scale) and the smartphone's own features. In addition, the reliability and validity of the smartphone addiction scale (SAS) was demonstrated.\n\n\nMETHODS\nA total of 197 participants were selected from Nov. 2011 to Jan. 2012 to accomplish a set of questionnaires, including SAS, K-scale, modified Kimberly Young Internet addiction test (Y-scale), visual analogue scale (VAS), and substance dependence and abuse diagnosis of DSM-IV. There were 64 males and 133 females, with ages ranging from 18 to 53 years (M = 26.06; SD = 5.96). Factor analysis, internal-consistency test, t-test, ANOVA, and correlation analysis were conducted to verify the reliability and validity of SAS.\n\n\nRESULTS\nBased on the factor analysis results, the subscale \"disturbance of reality testing\" was removed, and six factors were left. The internal consistency and concurrent validity of SAS were verified (Cronbach's alpha = 0.967). SAS and its subscales were significantly correlated with K-scale and Y-scale. The VAS of each factor also showed a significant correlation with each subscale. In addition, differences were found in the job (p<0.05), education (p<0.05), and self-reported smartphone addiction scores (p<0.001) in SAS.\n\n\nCONCLUSIONS\nThis study developed the first scale of the smartphone addiction aspect of the diagnostic manual. This scale was proven to be relatively reliable and valid.",
"title": ""
},
{
"docid": "2fe2f83fa9a0dca9f01fd9e5e80ca515",
"text": "For the first time in history, it is possible to study human behavior on great scale and in fine detail simultaneously. Online services and ubiquitous computational devices, such as smartphones and modern cars, record our everyday activity. The resulting Big Data offers unprecedented opportunities for tracking and analyzing behavior. This paper hypothesizes the applicability and impact of Big Data technologies in the context of psychometrics both for research and clinical applications. It first outlines the state of the art, including the severe shortcomings with respect to quality and quantity of the resulting data. It then presents a technological vision, comprised of (i) numerous data sources such as mobile devices and sensors, (ii) a central data store, and (iii) an analytical platform, employing techniques from data mining and machine learning. To further illustrate the dramatic benefits of the proposed methodologies, the paper then outlines two current projects, logging and analyzing smartphone usage. One such study attempts to thereby quantify severity of major depression dynamically; the other investigates (mobile) Internet Addiction. Finally, the paper addresses some of the ethical issues inherent to Big Data technologies. In summary, the proposed approach is about to induce the single biggest methodological shift since the beginning of psychology or psychiatry. The resulting range of applications will dramatically shape the daily routines of researches and medical practitioners alike. Indeed, transferring techniques from computer science to psychiatry and psychology is about to establish Psycho-Informatics, an entire research direction of its own.",
"title": ""
}
] | [
{
"docid": "192c4c695f543e79f0d3e41f5920f637",
"text": "A boosted convolutional neural network (BCNN) system is proposed to enhance the pedestrian detection performance in this work. Being inspired by the classic boosting idea, we develop a weighted loss function that emphasizes challenging samples in training a convolutional neural network (CNN). Two types of samples are considered challenging: 1) samples with detection scores falling in the decision boundary, and 2) temporally associated samples with inconsistent scores. A weighting scheme is designed for each of them. Finally, we train a boosted fusion layer to benefit from the integration of these two weighting schemes. We use the Fast-RCNN as the baseline, and test the corresponding BCNN on the Caltech pedestrian dataset in the experiment, and show a significant performance gain of the BCNN over its baseline.",
"title": ""
},
{
"docid": "f8275a80021312a58c9cd52bbcd4c431",
"text": "Mobile online social networks (OSNs) are emerging as the popular mainstream platform for information and content sharing among people. In order to provide Quality of Experience (QoE) support for mobile OSN services, in this paper we propose a socially-driven learning-based framework, namely Spice, for media content prefetching to reduce the access delay and enhance mobile user's satisfaction. Through a large-scale data-driven analysis over real-life mobile Twitter traces from over 17,000 users during a period of five months, we reveal that the social friendship has a great impact on user's media content click behavior. To capture this effect, we conduct social friendship clustering over the set of user's friends, and then develop a cluster-based Latent Bias Model for socially-driven learning-based prefetching prediction. We then propose a usage-adaptive prefetching scheduling scheme by taking into account that different users may possess heterogeneous patterns in the mobile OSN app usage. We comprehensively evaluate the performance of Spice framework using trace-driven emulations on smartphones. Evaluation results corroborate that the Spice can achieve superior performance, with an average 67.2% access delay reduction at the low cost of cellular data and energy consumption. Furthermore, by enabling users to offload their machine learning procedures to a cloud server, our design can achieve speed-up of a factor of 1000 over the local data training execution on smartphones.",
"title": ""
},
{
"docid": "a6b4ee8a6da7ba240b7365cf1a70669d",
"text": "Received: 2013-04-15 Accepted: 2013-05-13 Accepted after one revision by Prof. Dr. Sinz. Published online: 2013-06-14 This article is also available in German in print and via http://www. wirtschaftsinformatik.de: Blohm I, Leimeister JM (2013) Gamification. Gestaltung IT-basierter Zusatzdienstleistungen zur Motivationsunterstützung und Verhaltensänderung. WIRTSCHAFTSINFORMATIK. doi: 10.1007/s11576-013-0368-0.",
"title": ""
},
{
"docid": "f92f0a3d46eaf14e478a41f87b8ad369",
"text": "The agricultural productivity of India is gradually declining due to destruction of crops by various natural calamities and the crop rotation process being affected by irregular climate patterns. Also, the interest and efforts put by farmers lessen as they grow old which forces them to sell their agricultural lands, which automatically affects the production of agricultural crops and dairy products. This paper mainly focuses on the ways by which we can protect the crops during an unavoidable natural disaster and implement technology induced smart agro-environment, which can help the farmer manage large fields with less effort. Three common issues faced during agricultural practice are shearing furrows in case of excess rain or flood, manual watering of plants and security against animal grazing. This paper provides a solution for these problems by helping farmer monitor and control various activities through his mobile via GSM and DTMF technology in which data is transmitted from various sensors placed in the agricultural field to the controller and the status of the agricultural parameters are notified to the farmer using which he can take decisions accordingly. The main advantage of this system is that it is semi-automated i.e. the decision is made by the farmer instead of fully automated decision that results in precision agriculture. It also overcomes the existing traditional practices that require high money investment, energy, labour and time.",
"title": ""
},
{
"docid": "0b51889817aca2afd7c1c754aa47f7de",
"text": "OBJECTIVE\nThis study aims to compare how national guidelines approach the management of obesity in reproductive age women.\n\n\nSTUDY DESIGN\nWe conducted a search for national guidelines in the English language on the topic of obesity surrounding the time of a pregnancy. We identified six primary source documents and several secondary source documents from five countries. Each document was then reviewed to identify: (1) statements acknowledging increased health risks related to obesity and reproductive outcomes, (2) recommendations for the management of obesity before, during, or after pregnancy.\n\n\nRESULTS\nAll guidelines cited an increased risk for miscarriage, birth defects, gestational diabetes, hypertension, fetal growth abnormalities, cesarean sections, difficulty with anesthesia, postpartum hemorrhage, and obesity in offspring. Counseling on the risks of obesity and weight loss before pregnancy were universal recommendations. There were substantial differences in the recommendations pertaining to gestational weight gain goals, nutrient and vitamin supplements, screening for gestational diabetes, and thromboprophylaxis among the guidelines.\n\n\nCONCLUSION\nStronger evidence from randomized trials is needed to devise consistent recommendations for obese reproductive age women. This research may also assist clinicians in overcoming one of the many obstacles they encounter when providing care to obese women.",
"title": ""
},
{
"docid": "625177f221163e38ecf91b884cf4bcd2",
"text": "Equivalent time oscilloscopes are widely used as an alternative to real-time oscilloscopes when high timing resolution is needed. For their correct operation, they need the trigger signal to be accurately aligned to the incoming data, which is achieved by the use of a clock and data recovery circuit (CDR). In this paper, a new multilevel bang-bang phase detector (BBPD) for CDRs is presented; the proposed phase detection scheme disregards samples taken close to the data transitions for the calculation of the phase difference between the inputs, thus eliminating metastability, one of the main issues hindering the performance of BBPDs.",
"title": ""
},
{
"docid": "bb72e4d6f967fb88473756cdcbb04252",
"text": "GF (Grammatical Framework) is a grammar formalism based on the distinction between abstract and concrete syntax. An abstract syntax is a free algebra of trees, and a concrete syntax is a mapping from trees to nested records of strings and features. These mappings are naturally defined as functions in a functional programming language; the GF language provides the customary functional programming constructs such as algebraic data types, pattern matching, and higher-order functions, which enable productive grammar writing and linguistic generalizations. Given the seemingly transformational power of the GF language, its computational properties are not obvious. However, all grammars written in GF can be compiled into a simple and austere core language, Canonical GF (CGF). CGF is well suited for implementing parsing and generation with grammars, as well as for proving properties of GF. This paper gives a concise description of both the core and the source language, the algorithm used in compiling GF to CGF, and some back-end optimizations on CGF.",
"title": ""
},
{
"docid": "667bca62dd6a9e755b4bae25e2670bb8",
"text": "This paper presents a Phantom Go program. It is based on a MonteCarlo approach. The program plays Phantom Go at an intermediate level.",
"title": ""
},
{
"docid": "c3eca8a83161a19c77406dc6393aa5b0",
"text": "Cell division in eukaryotes requires extensive architectural changes of the nuclear envelope (NE) to ensure that segregated DNA is finally enclosed in a single cell nucleus in each daughter cell. Higher eukaryotic cells have evolved 'open' mitosis, the most extreme mechanism to solve the problem of nuclear division, in which the NE is initially completely disassembled and then reassembled in coordination with DNA segregation. Recent progress in the field has now started to uncover mechanistic and molecular details that underlie the changes in NE reorganization during open mitosis. These studies reveal a tight interplay between NE components and the mitotic machinery.",
"title": ""
},
{
"docid": "dc71b53847d33e82c53f0b288da89bfa",
"text": "We explore the use of convolutional neural networks for the semantic classification of remote sensing scenes. Two recently proposed architectures, CaffeNet and GoogLeNet, are adopted, with three different learning modalities. Besides conventional training from scratch, we resort to pre-trained networks that are only fine-tuned on the target data, so as to avoid overfitting problems and reduce design time. Experiments on two remote sensing datasets, with markedly different characteristics, testify on the effectiveness and wide applicability of the proposed solution, which guarantees a significant performance improvement over all state-of-the-art references.",
"title": ""
},
{
"docid": "72c4c247c1314ebcbbec4f43becd46f0",
"text": "The evolutionary origin of the eukaryotic cell represents an enigmatic, yet largely incomplete, puzzle. Several mutually incompatible scenarios have been proposed to explain how the eukaryotic domain of life could have emerged. To date, convincing evidence for these scenarios in the form of intermediate stages of the proposed eukaryogenesis trajectories is lacking, presenting the emergence of the complex features of the eukaryotic cell as an evolutionary deus ex machina. However, recent advances in the field of phylogenomics have started to lend support for a model that places a cellular fusion event at the basis of the origin of eukaryotes (symbiogenesis), involving the merger of an as yet unknown archaeal lineage that most probably belongs to the recently proposed 'TACK superphylum' (comprising Thaumarchaeota, Aigarchaeota, Crenarchaeota and Korarchaeota) with an alphaproteobacterium (the protomitochondrion). Interestingly, an increasing number of so-called ESPs (eukaryotic signature proteins) is being discovered in recently sequenced archaeal genomes, indicating that the archaeal ancestor of the eukaryotic cell might have been more eukaryotic in nature than presumed previously, and might, for example, have comprised primitive phagocytotic capabilities. In the present paper, we review the evolutionary transition from archaeon to eukaryote, and propose a new model for the emergence of the eukaryotic cell, the 'PhAT (phagocytosing archaeon theory)', which explains the emergence of the cellular and genomic features of eukaryotes in the light of a transiently complex phagocytosing archaeon.",
"title": ""
},
{
"docid": "a14afa0d14a0fcfb890c8f2944750230",
"text": "RNA turnover is an integral part of cellular RNA homeostasis and gene expression regulation. Whereas the cytoplasmic control of protein-coding mRNA is often the focus of study, we discuss here the less appreciated role of nuclear RNA decay systems in controlling RNA polymerase II (RNAPII)-derived transcripts. Historically, nuclear RNA degradation was found to be essential for the functionalization of transcripts through their proper maturation. Later, it was discovered to also be an important caretaker of nuclear hygiene by removing aberrant and unwanted transcripts. Recent years have now seen a set of new protein complexes handling a variety of new substrates, revealing functions beyond RNA processing and the decay of non-functional transcripts. This includes an active contribution of nuclear RNA metabolism to the overall cellular control of RNA levels, with mechanistic implications during cellular transitions. RNA is controlled at various stages of transcription and processing to achieve appropriate gene regulation. Whereas much research has focused on the cytoplasmic control of RNA levels, this Review discusses our emerging appreciation of the importance of nuclear RNA regulation, including the molecular machinery involved in nuclear RNA decay, how functional RNAs bypass degradation and roles for nuclear RNA decay in physiology and disease.",
"title": ""
},
{
"docid": "90acdc98c332de55e790d20d48dfde5e",
"text": "PURPOSE AND DESIGN\nSnack and Relax® (S&R), a program providing healthy snacks and holistic relaxation modalities to hospital employees, was evaluated for immediate impact. A cross-sectional survey was then conducted to assess the professional quality of life (ProQOL) in registered nurses (RNs); compare S&R participants/nonparticipants on compassion satisfaction (CS), burnout, and secondary traumatic stress (STS); and identify situations in which RNs experienced compassion fatigue or burnout and the strategies used to address these situations.\n\n\nMETHOD\nPre- and post vital signs and self-reported stress were obtained from S&R attendees (N = 210). RNs completed the ProQOL Scale measuring CS, burnout, and STS (N = 158).\n\n\nFINDINGS\nSignificant decreases in self-reported stress, respirations, and heart rate were found immediately after S&R. Low CS was noted in 28.5% of participants, 25.3% had high burnout, and 23.4% had high STS. S&R participants and nonparticipants did not differ on any of the ProQOL scales. Situations in which participants experienced compassion fatigue/burnout were categorized as patient-related, work-related, and personal/family-related. Strategies to address these situations were holistic and stress reducing.\n\n\nCONCLUSION\nProviding holistic interventions such as S&R for nurses in the workplace may alleviate immediate feelings of stress and provide a moment of relaxation in the workday.",
"title": ""
},
{
"docid": "e131e4d4bb59b4d0b513cc7c5dd017f2",
"text": "Although touch is one of the most neglected modalities of communication, several lines of research bear on the important communicative functions served by the modality. The authors highlighted the importance of touch by reviewing and synthesizing the literatures pertaining to the communicative functions served by touch among humans, nonhuman primates, and rats. In humans, the authors focused on the role that touch plays in emotional communication, attachment, bonding, compliance, power, intimacy, hedonics, and liking. In nonhuman primates, the authors examined the relations among touch and status, stress, reconciliation, sexual relations, and attachment. In rats, the authors focused on the role that touch plays in emotion, learning and memory, novelty seeking, stress, and attachment. The authors also highlighted the potential phylogenetic and ontogenetic continuities and discussed suggestions for future research.",
"title": ""
},
{
"docid": "a37493c6cde320091c1baf7eaa57b982",
"text": "The pervasiveness of cell phones and mobile social media applications is generating vast amounts of geolocalized user-generated content. Since the addition of geotagging information, Twitter has become a valuable source for the study of human dynamics. Its analysis is shedding new light not only on understanding human behavior but also on modeling the way people live and interact in their urban environments. In this paper, we evaluate the use of geolocated tweets as a complementary source of information for urban planning applications. Our contributions are focussed in two urban planing areas: (1) a technique to automatically determine land uses in a specific urban area based on tweeting patterns, and (2) a technique to automatically identify urban points of interest as places with high activity of tweets. We apply our techniques in Manhattan (NYC) using 49 days of geolocated tweets and validate them using land use and landmark information provided by various NYC departments. Our results indicate that geolocated tweets are a powerful and dynamic data source to characterize urban environments.",
"title": ""
},
{
"docid": "124cc672103959685cdcb3e98ae33d93",
"text": "With the rise of social media and advancements in AI technology, human-bot interaction will soon be commonplace. In this paper we explore human-bot interaction in STACK OVERFLOW, a question and answer website for developers. For this purpose, we built a bot emulating an ordinary user answering questions concerning the resolution of git error messages. In a first run this bot impersonated a human, while in a second run the same bot revealed its machine identity. Despite being functionally identical, the two bot variants elicited quite different reactions.",
"title": ""
},
{
"docid": "436900539406faa9ff34c1af12b6348d",
"text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.",
"title": ""
},
{
"docid": "61c4146ac8b55167746d3f2b9c8b64e8",
"text": "In a variety of Network-based Intrusion Detection System (NIDS) applications, one desires to detect groups of unknown attack (e.g., botnet) packet-flows, with a group potentially manifesting its atypicality (relative to a known reference “normal”/null model) on a low-dimensional subset of the full measured set of features used by the IDS. What makes this anomaly detection problem quite challenging is that it is a priori unknown which (possibly sparse) subset of features jointly characterizes a particular application, especially one that has not been seen before, which thus represents an unknown behavioral class (zero-day threat). Moreover, nowadays botnets have become evasive, evolving their behavior to avoid signature-based IDSes. In this work, we apply a novel active learning (AL) framework for botnet detection, facilitating detection of unknown botnets (assuming no ground truth examples of same). We propose a new anomaly-based feature set that captures the informative features and exploits the sequence of packet directions in a given flow. Experiments on real world network traffic data, including several common Zeus botnet instances, demonstrate the advantage of our proposed features and AL system.",
"title": ""
},
{
"docid": "8dab17013cd7753706d818b492a5eb15",
"text": "The paper analyses potentials, challenges and problems of the rural tourism from the point of view of its impact on sustainable rural development. It explores alternative sources of income for rural people by means of tourism and investigates effects of the rural tourism on agricultural production in local rural communities. The aim is to identify the existing and potential tourist attractions within the rural areas in Southern Russia and to provide solutions to be introduced in particular rural settlements in order to make them attractive for tourists. The paper includes the elaboration and testing of a methodology for evaluating the rural tourism potentials using the case of rural settlements of Stavropol Krai, Russia. The paper concludes with a ranking of the selected rural settlements according to their rural tourist capacity and substantiation of the tourism models to be implemented to ensure a sustainable development of the considered rural areas.",
"title": ""
}
] | scidocsrr |
4b6a80b9010fe9aec4ba329c8d7f4be5 | Bioinformatics - an introduction for computer scientists | [
{
"docid": "d6abc85e62c28755ed6118257d9c25c3",
"text": "MOTIVATION\nIn a previous paper, we presented a polynomial time dynamic programming algorithm for predicting optimal RNA secondary structure including pseudoknots. However, a formal grammatical representation for RNA secondary structure with pseudoknots was still lacking.\n\n\nRESULTS\nHere we show a one-to-one correspondence between that algorithm and a formal transformational grammar. This grammar class encompasses the context-free grammars and goes beyond to generate pseudoknotted structures. The pseudoknot grammar avoids the use of general context-sensitive rules by introducing a small number of auxiliary symbols used to reorder the strings generated by an otherwise context-free grammar. This formal representation of the residue correlations in RNA structure is important because it means we can build full probabilistic models of RNA secondary structure, including pseudoknots, and use them to optimally parse sequences in polynomial time.",
"title": ""
}
] | [
{
"docid": "eb6572344dbaf8e209388f888fba1c10",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "c9a4aff9871fa2f10c61bfb05b820141",
"text": "With single computer's computation power not sufficing, need for sharing resources to manipulate and manage data through clouds is increasing rapidly. Hence, it is favorable to delegate computations or store data with a third party, the cloud provider. However, delegating data to third party poses the risks of data disclosure during computation. The problem can be addressed by carrying out computation without decrypting the encrypted data. The results are also obtained encrypted and can be decrypted at the user side. This requires modifying functions in such a way that they are still executable while privacy is ensured or to search an encrypted database. Homomorphic encryption provides security to cloud consumer data while preserving system usability. We propose a symmetric key homomorphic encryption scheme based on matrix operations with primitives that make it easily adaptable for different needs in various cloud computing scenarios.",
"title": ""
},
{
"docid": "117c66505964344d9c350a4e57a4a936",
"text": "Sorting is a key kernel in numerous big data application including database operations, graphs and text analytics. Due to low control overhead, parallel bitonic sorting networks are usually employed for hardware implementations to accelerate sorting. Although a typical implementation of merge sort network can lead to low latency and small memory usage, it suffers from low throughput due to the lack of parallelism in the final stage. We analyze a pipelined merge sort network, showing its theoretical limits in terms of latency, memory and, throughput. To increase the throughput, we propose a merge sort based hybrid design where the final few stages in the merge sort network are replaced with “folded” bitonic merge networks. In these “folded” networks, all the interconnection patterns are realized by streaming permutation networks (SPN). We present a theoretical analysis to quantify latency, memory and throughput of our proposed design. Performance evaluations are performed by experiments on Xilinx Virtex-7 FPGA with post place-androute results. We demonstrate that our implementation achieves a throughput close to 10 GBps, outperforming state-of-the-art implementation of sorting on the same hardware by 1.2x, while preserving lower latency and higher memory efficiency.",
"title": ""
},
{
"docid": "2f471c24ccb38e70627eba6383c003e0",
"text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.",
"title": ""
},
{
"docid": "fb0ccc3d3ce018c413b20db0bb55fef0",
"text": "In many applications, the training data, from which one need s to learn a classifier, is corrupted with label noise. Many st andard algorithms such as SVM perform poorly in presence of label no ise. In this paper we investigate the robustness of risk mini ization to label noise. We prove a sufficient condition on a loss funct io for the risk minimization under that loss to be tolerant t o uniform label noise. We show that the 0 − 1 loss, sigmoid loss, ramp loss and probit loss satisfy this c ondition though none of the standard convex loss functions satisfy it. We also prove that, by choo sing a sufficiently large value of a parameter in the loss func tio , the sigmoid loss, ramp loss and probit loss can be made tolerant t o non-uniform label noise also if we can assume the classes to be separable under noise-free data distribution. Through ext ensive empirical studies, we show that risk minimization un der the 0− 1 loss, the sigmoid loss and the ramp loss has much better robus tness to label noise when compared to the SVM algorithm.",
"title": ""
},
{
"docid": "99e89314a069a059e1f7214148b150e4",
"text": "Wegener’s granulomatosis (WG) is an autoimmune disease, which particularly affects the upper respiratory pathways, lungs and kidney. Oral mucosal involvement presents in around 5%--10% of cases and may be the first disease symptom. Predominant manifestation is granulomatous gingivitis erythematous papules; mucosal necrosis and non-specific ulcers with or without impact on adjacent structures. Clinically speaking, the most characteristic lesion presents as a gingival hyperplasia of the gum, with hyperaemia and petechias on its surface which bleed when touched. Due to its appearance, it has been called ‘‘Strawberry gingiva’’. The following is a clinical case in which the granulomatous strawberry gingivitis was the first sign of WG.",
"title": ""
},
{
"docid": "4050f76539d79edff962963625298ae2",
"text": "An economic evaluation of a hybrid wind/photovoltaic/fuel cell generation system for a typical home in the Pacific Northwest is performed. In this configuration the combination of a fuel cell stack, an electrolyzer, and a hydrogen storage tank is used for the energy storage system. This system is compared to a traditional hybrid energy system with battery storage. A computer program has been developed to size system components in order to match the load of the site in the most cost effective way. A cost of electricity and an overall system cost are also calculated for each configuration. The study was performed using a graphical user interface programmed in MATLAB.",
"title": ""
},
{
"docid": "ba314edceb1b8ac00f94ad0037bd5b8e",
"text": "AMS subject classifications: primary 62G10 secondary 62H20 Keywords: dCor dCov Multivariate independence Distance covariance Distance correlation High dimension a b s t r a c t Distance correlation is extended to the problem of testing the independence of random vectors in high dimension. Distance correlation characterizes independence and determines a test of multivariate independence for random vectors in arbitrary dimension. In this work, a modified distance correlation statistic is proposed, such that under independence the distribution of a transformation of the statistic converges to Student t, as dimension tends to infinity. Thus we obtain a distance correlation t-test for independence of random vectors in arbitrarily high dimension, applicable under standard conditions on the coordinates that ensure the validity of certain limit theorems. This new test is based on an unbiased es-timator of distance covariance, and the resulting t-test is unbiased for every sample size greater than three and all significance levels. The transformed statistic is approximately normal under independence for sample size greater than nine, providing an informative sample coefficient that is easily interpretable for high dimensional data. 1. Introduction Many applications in genomics, medicine, engineering, etc. require analysis of high dimensional data. Time series data can also be viewed as high dimensional data. Objects can be represented by their characteristics or features as vectors p. In this work, we consider the extension of distance correlation to the problem of testing independence of random vectors in arbitrarily high, not necessarily equal dimensions, so the dimension p of the feature space of a random vector is typically large. measure all types of dependence between random vectors in arbitrary, not necessarily equal dimensions. (See Section 2 for definitions.) Distance correlation takes values in [0, 1] and is equal to zero if and only if independence holds. It is more general than the classical Pearson product moment correlation, providing a scalar measure of multivariate independence that characterizes independence of random vectors. The distance covariance test of independence is consistent against all dependent alternatives with finite second moments. In practice, however, researchers are often interested in interpreting the numerical value of distance correlation, without a formal test. For example, given an array of distance correlation statistics, what can one learn about the strength of dependence relations from the dCor statistics without a formal test? This is in fact, a difficult question, but a solution is finally available for a large class of problems. The …",
"title": ""
},
{
"docid": "24dda2b2334810b375f7771685669177",
"text": "This paper presents a 64-times interleaved 2.6 GS/s 10b successive-approximation-register (SAR) ADC in 65 nm CMOS. The ADC combines interleaving hierarchy with an open-loop buffer array operated in feedforward-sampling and feedback-SAR mode. The sampling front-end consists of four interleaved T/Hs at 650 MS/s that are optimized for timing accuracy and sampling linearity, while the back-end consists of four ADC arrays, each consisting of 16 10b current-mode non-binary SAR ADCs. The interleaving hierarchy allows for many ADCs to be used per T/H and eliminates distortion stemming from open loop buffers interfacing between the front-end and back-end. Startup on-chip calibration deals with offset and gain mismatches as well as DAC linearity. Measurements show that the prototype ADC achieves an SNDR of 48.5 dB and a THD of less than 58 dB at Nyquist with an input signal of 1.4 . An estimated sampling clock skew spread of 400 fs is achieved by careful design and layout. Up to 4 GHz an SNR of more than 49 dB has been measured, enabled by the less than 110 fs rms clock jitter. The ADC consumes 480 mW from 1.2/1.3/1.6 V supplies and occupies an area of 5.1 mm.",
"title": ""
},
{
"docid": "1ee679d237c54dd8aaaeb2383d6b49fa",
"text": "Bike sharing systems (BSSs) have become common in many cities worldwide, providing a new transportation mode for residents' commutes. However, the management of these systems gives rise to many problems. As the bike pick-up demands at different places are unbalanced at times, the systems have to be rebalanced frequently. Rebalancing the bike availability effectively, however, is very challenging as it demands accurate prediction for inventory target level determination. In this work, we propose two types of regression models using multi-source data to predict the hourly bike pick-up demand at cluster level: Similarity Weighted K-Nearest-Neighbor (SWK) based regression and Artificial Neural Network (ANN). SWK-based regression models learn the weights of several meteorological factors and/or taxi usage and use the correlation between consecutive time slots to predict the bike pick-up demand. The ANN is trained by using historical trip records of BSS, meteorological data, and taxi trip records. Our proposed methods are tested with real data from a New York City BSS: Citi Bike NYC. Performance comparison between SWK-based and ANN-based methods is provided. Experimental results indicate the high accuracy of ANN-based prediction for bike pick-up demand using multisource data.",
"title": ""
},
{
"docid": "af56806a30f708cb0909998266b4d8c1",
"text": "There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-m l is an open source library, targeted at both engineers and research scientists, which aims to pro vide a similarly rich environment for developing machine learning software in the C++ language. T owards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS supp ort. It also houses implementations of algorithms for performing inference in Bayesian networks a nd kernel-based methods for classification, regression, clustering, anomaly detection, and fe atur ranking. To enable easy use of these tools, the entire library has been developed with contract p rogramming, which provides complete and precise documentation as well as powerful debugging too ls.",
"title": ""
},
{
"docid": "5b97d597534e65bf5d00f89d8df97767",
"text": "Research into online gaming has steadily increased over the last decade, although relatively little research has examined the relationship between online gaming addiction and personality factors. This study examined the relationship between a number of personality traits (sensation seeking, self-control, aggression, neuroticism, state anxiety, and trait anxiety) and online gaming addiction. Data were collected over a 1-month period using an opportunity sample of 123 university students at an East Midlands university in the United Kingdom. Gamers completed all the online questionnaires. Results of a multiple linear regression indicated that five traits (neuroticism, sensation seeking, trait anxiety, state anxiety, and aggression) displayed significant associations with online gaming addiction. The study suggests that certain personality traits may be important in the acquisition, development, and maintenance of online gaming addiction, although further research is needed to replicate the findings of the present study.",
"title": ""
},
{
"docid": "907883af0e81f4157e81facd4ff4344c",
"text": "This work presents a low-power low-cost CDR design for RapidIO SerDes. The design is based on phase interpolator, which is controlled by a synthesized standard cell digital block. Half-rate architecture is adopted to lessen the problems in routing high speed clocks and reduce power. An improved half-rate bang-bang phase detector is presented to assure the stability of the system. Moreover, the paper proposes a simplified control scheme for the phase interpolator to further reduce power and cost. The CDR takes an area of less than 0.05mm2, and post simulation shows that the CDR has a RMS jitter of UIpp/32 ([email protected]) and consumes 9.5mW at 3.125GBaud.",
"title": ""
},
{
"docid": "00509d9e0ab2d8dad6bfd20cd264f555",
"text": "A prototype campus bus tacking system is designed and implemented for helping UiTM Student to pinpoint the location and estimate arrival time of their respective desired bus via their smartphone application. This project comprises integration between hardware and software. An Arduino UNO is used to control the GPS module to get the geographic coordinates. An android smartphone application using App Inventor is also developed for the user not only to determine the time for the campus bus to arrive and also will be able to get the bus information. This friendly user system is named as \"UiTM Bus Checker\" application. The user also will be able to view position of the bus on a digital mapping from Google Maps using their smartphone application and webpage. In order to show the effectiveness of this UiTM campus bus tracking system, the practical implementations have been presented and recorded.",
"title": ""
},
{
"docid": "7e91dd40445de51570a8c77cf50f7211",
"text": "Based on phasor measurement units (PMUs), a synchronphasor system is widely recognized as a promising smart grid measurement system. It is able to provide high-frequency, high-accuracy phasor measurements sampling for Wide Area Monitoring and Control (WAMC) applications.However,the high sampling frequency of measurement data under strict latency constraints introduces new challenges for real time communication. It would be very helpful if the collected data can be prioritized according to its importance such that the existing quality of service (QoS) mechanisms in the communication networks can be leveraged. To achieve this goal, certain anomaly detection functions should be conducted by the PMUs. Inspired by the recent emerging edge-fog-cloud computing hierarchical architecture, which allows computing tasks to be conducted at the network edge, a novel PMU fog is proposed in this paper. Two anomaly detection approaches, Singular Spectrum Analysis (SSA) and K-Nearest Neighbors (KNN), are evaluated in the PMU fog using the IEEE 16-machine 68-bus system. The simulation experiments based on Riverbed Modeler demonstrate that the proposed PMU fog can effectively reduce the data flow end-to-end (ETE) delay without sacrificing data completeness.",
"title": ""
},
{
"docid": "fc167904e713a2b4c48fd50b7efa5332",
"text": "Correlated topic modeling has been limited to small model and problem sizes due to their high computational cost and poor scaling. In this paper, we propose a new model which learns compact topic embeddings and captures topic correlations through the closeness between the topic vectors. Our method enables efficient inference in the low-dimensional embedding space, reducing previous cubic or quadratic time complexity to linear w.r.t the topic size. We further speedup variational inference with a fast sampler to exploit sparsity of topic occurrence. Extensive experiments show that our approach is capable of handling model and data scales which are several orders of magnitude larger than existing correlation results, without sacrificing modeling quality by providing competitive or superior performance in document classification and retrieval.",
"title": ""
},
{
"docid": "dc54b73eb740bc1bbdf1b834a7c40127",
"text": "This paper discusses the design and evaluation of an online social network used within twenty-two established after school programs across three major urban areas in the Northeastern United States. The overall goal of this initiative is to empower students in grades K-8 to prevent obesity through healthy eating and exercise. The online social network was designed to support communication between program participants. Results from the related evaluation indicate that the online social network has potential for advancing awareness and community action around health related issues; however, greater attention is needed to professional development programs for program facilitators, and design features could better support critical thinking, social presence, and social activity.",
"title": ""
},
{
"docid": "fa888e57652804e86c900c8e1041d399",
"text": "BACKGROUND\nJehovah's Witness patients (Witnesses) who undergo cardiac surgery provide a unique natural experiment in severe blood conservation because anemia, transfusion, erythropoietin, and antifibrinolytics have attendant risks. Our objective was to compare morbidity and long-term survival of Witnesses undergoing cardiac surgery with a similarly matched group of patients who received transfusions.\n\n\nMETHODS\nA total of 322 Witnesses and 87 453 non-Witnesses underwent cardiac surgery at our center from January 1, 1983, to January 1, 2011. All Witnesses prospectively refused blood transfusions. Among non-Witnesses, 38 467 did not receive blood transfusions and 48 986 did. We used propensity methods to match patient groups and parametric multiphase hazard methods to assess long-term survival. Our main outcome measures were postoperative morbidity complications, in-hospital mortality, and long-term survival.\n\n\nRESULTS\nWitnesses had fewer acute complications and shorter length of stay than matched patients who received transfusions: myocardial infarction, 0.31% vs 2.8% (P = . 01); additional operation for bleeding, 3.7% vs 7.1% (P = . 03); prolonged ventilation, 6% vs 16% (P < . 001); intensive care unit length of stay (15th, 50th, and 85th percentiles), 24, 25, and 72 vs 24, 48, and 162 hours (P < . 001); and hospital length of stay (15th, 50th, and 85th percentiles), 5, 7, and 11 vs 6, 8, and 16 days (P < . 001). Witnesses had better 1-year survival (95%; 95% CI, 93%-96%; vs 89%; 95% CI, 87%-90%; P = . 007) but similar 20-year survival (34%; 95% CI, 31%-38%; vs 32% 95% CI, 28%-35%; P = . 90).\n\n\nCONCLUSIONS\nWitnesses do not appear to be at increased risk for surgical complications or long-term mortality when comparisons are properly made by transfusion status. Thus, current extreme blood management strategies do not appear to place patients at heightened risk for reduced long-term survival.",
"title": ""
},
{
"docid": "5e31d7ff393d69faa25cb6dea5917a0e",
"text": "In this paper we aim to formally explain the phenomenon of fast convergence of Stochastic Gradient Descent (SGD) observed in modern machine learning. The key observation is that most modern learning architectures are over-parametrized and are trained to interpolate the data by driving the empirical loss (classification and regression) close to zero. While it is still unclear why these interpolated solutions perform well on test data, we show that these regimes allow for fast convergence of SGD, comparable in number of iterations to full gradient descent. For convex loss functions we obtain an exponential convergence bound for mini-batch SGD parallel to that for full gradient descent. We show that there is a critical batch size m∗ such that: (a) SGD iteration with mini-batch sizem ≤ m∗ is nearly equivalent to m iterations of mini-batch size 1 (linear scaling regime). (b) SGD iteration with mini-batch m > m∗ is nearly equivalent to a full gradient descent iteration (saturation regime). Moreover, for the quadratic loss, we derive explicit expressions for the optimal mini-batch and step size and explicitly characterize the two regimes above. The critical mini-batch size can be viewed as the limit for effective mini-batch parallelization. It is also nearly independent of the data size, implying O(n) acceleration over GD per unit of computation. We give experimental evidence on real data which closely follows our theoretical analyses. Finally, we show how our results fit in the recent developments in training deep neural networks and discuss connections to adaptive rates for SGD and variance reduction. † See full version of this paper at arxiv.org/abs/1712.06559. Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA. Correspondence to: Siyuan Ma <[email protected]>, Raef Bassily <[email protected]>, Mikhail Belkin <[email protected]>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
},
{
"docid": "74f02207a48019fa5bc4736ce66e4a0c",
"text": "In this paper we present an effective method for developing realistic numerical three-dimensional (3-D) microwave breast models of different shape, size, and tissue density. These models are especially convenient for microwave breast cancer imaging applications and numerical analysis of human breast-microwave interactions. As in the recent studies on this area, anatomical information of the breast tissue is collected from T1-weighted 3-D MRI data of different patients' in prone position. The method presented in this paper offers significant improvements including efficient noise reduction and tissue segmentation, nonlinear mapping of electromagnetic properties, realistically asymmetric phantom shape, and a realistic classification of breast phantoms. Our method contains a five-step approach where each MRI voxel is classified and mapped to the appropriate dielectric properties. In the first step, the MRI data are denoised by estimating and removing the bias field from each slice, after which the voxels are segmented into two main tissues as fibro-glandular and adipose. Using the distribution of the voxel intensities in MRI histogram, two nonlinear mapping functions are generated for dielectric permittivity and conductivity profiles, which allow each MRI voxel to map to its proper dielectric properties. Obtained dielectric profiles are then converted into 3-D numerical breast phantoms using several image processing techniques, including morphologic operations, filtering. Resultant phantoms are classified according to their adipose content, which is a critical parameter that affects penetration depth during microwave breast imaging.",
"title": ""
}
] | scidocsrr |
ab61ccde29cca0905bc0758058266af8 | Performance of a Precoding MIMO System for Decentralized Multiuser Indoor Visible Light Communications | [
{
"docid": "4583555a91527244488b9658288f4dc2",
"text": "The use of space-division multiple access (SDMA) in the downlink of a multiuser multiple-input, multiple-output (MIMO) wireless communications network can provide a substantial gain in system throughput. The challenge in such multiuser systems is designing transmit vectors while considering the co-channel interference of other users. Typical optimization problems of interest include the capacity problem - maximizing the sum information rate subject to a power constraint-or the power control problem-minimizing transmitted power such that a certain quality-of-service metric for each user is met. Neither of these problems possess closed-form solutions for the general multiuser MIMO channel, but the imposition of certain constraints can lead to closed-form solutions. This paper presents two such constrained solutions. The first, referred to as \"block-diagonalization,\" is a generalization of channel inversion when there are multiple antennas at each receiver. It is easily adapted to optimize for either maximum transmission rate or minimum power and approaches the optimal solution at high SNR. The second, known as \"successive optimization,\" is an alternative method for solving the power minimization problem one user at a time, and it yields superior results in some (e.g., low SNR) situations. Both of these algorithms are limited to cases where the transmitter has more antennas than all receive antennas combined. In order to accommodate more general scenarios, we also propose a framework for coordinated transmitter-receiver processing that generalizes the two algorithms to cases involving more receive than transmit antennas. While the proposed algorithms are suboptimal, they lead to simpler transmitter and receiver structures and allow for a reasonable tradeoff between performance and complexity.",
"title": ""
}
] | [
{
"docid": "a5ac7aa3606ebb683d4d9de5dcd89856",
"text": "Advanced persistent threats (APTs) pose a significant risk to nearly every infrastructure. Due to the sophistication of these attacks, they are able to bypass existing security systems and largely infiltrate the target network. The prevention and detection of APT campaigns is also challenging, because of the fact that the attackers constantly change and evolve their advanced techniques and methods to stay undetected. In this paper we analyze 22 different APT reports and give an overview of the used techniques and methods. The analysis is focused on the three main phases of APT campaigns that allow to identify the relevant characteristics of such attacks. For each phase we describe the most commonly used techniques and methods. Through this analysis we could reveal different relevant characteristics of APT campaigns, for example that the usage of 0-day exploit is not common for APT attacks. Furthermore, the analysis shows that the dumping of credentials is a relevant step in the lateral movement phase for most APT campaigns. Based on the identified characteristics, we also propose concrete prevention and detection approaches that make it possible to identify crucial malicious activities that are performed during APT campaigns.",
"title": ""
},
{
"docid": "e8a01490bc3407a2f8e204408e34c5b3",
"text": "This paper presents the design and implementation of a Class EF2 inverter and Class EF2 rectifier for two -W wireless power transfer (WPT) systems, one operating at 6.78 MHz and the other at 27.12 MHz. It will be shown that the Class EF2 circuits can be designed to have beneficial features for WPT applications such as reduced second-harmonic component and lower total harmonic distortion, higher power-output capability, reduction in magnetic core requirements and operation at higher frequencies in rectification compared to other circuit topologies. A model will first be presented to analyze the circuits and to derive values of its components to achieve optimum switching operation. Additional analysis regarding harmonic content, magnetic core requirements and open-circuit protection will also be performed. The design and implementation process of the two Class-EF2-based WPT systems will be discussed and compared to an equivalent Class-E-based WPT system. Experimental results will be provided to confirm validity of the analysis. A dc-dc efficiency of 75% was achieved with Class-EF2-based systems.",
"title": ""
},
{
"docid": "d62c2e7ca3040900d04f83ef4f99de4f",
"text": "Manual classification of brain tumor is time devastating and bestows ambiguous results. Automatic image classification is emergent thriving research area in medical field. In the proposed methodology, features are extracted from raw images which are then fed to ANFIS (Artificial neural fuzzy inference system).ANFIS being neuro-fuzzy system harness power of both hence it proves to be a sophisticated framework for multiobject classification. A comprehensive feature set and fuzzy rules are selected to classify an abnormal image to the corresponding tumor type. This proposed technique is fast in execution, efficient in classification and easy in implementation.",
"title": ""
},
{
"docid": "46c4b4a68e0be453148779529f235e98",
"text": "Received Feb 14, 2017 Revised Apr 14, 2017 Accepted Apr 28, 2017 This paper proposes maximum boost control for 7-level z-source cascaded h-bridge inverter and their affiliation between voltage boost gain and modulation index. Z-source network avoids the usage of external dc-dc boost converter and improves output voltage with minimised harmonic content. Z-source network utilises distinctive LC impedance combination with 7-level cascaded inverter and it conquers the conventional voltage source inverter. The maximum boost controller furnishes voltage boost and maintain constant voltage stress across power switches, which provides better output voltage with variation of duty cycles. Single phase 7-level z-source cascaded inverter simulated using matlab/simulink. Keyword:",
"title": ""
},
{
"docid": "c337226d663e69ecde67ff6f35ba7654",
"text": "In this paper, we presented a new model for cyber crime investigation procedure which is as follows: readiness phase, consulting with profiler, cyber crime classification and investigation priority decision, damaged cyber crime scene investigation, analysis by crime profiler, suspects tracking, injurer cyber crime scene investigation, suspect summon, cyber crime logical reconstruction, writing report.",
"title": ""
},
{
"docid": "80ece123483d6de02c4e621bdb8eb0fc",
"text": "Resistive-switching memory (RRAM) based on transition metal oxides is a potential candidate for replacing Flash and dynamic random access memory in future generation nodes. Although very promising from the standpoints of scalability and technology, RRAM still has severe drawbacks in terms of understanding and modeling of the resistive-switching mechanism. This paper addresses the modeling of resistive switching in bipolar metal-oxide RRAMs. Reset and set processes are described in terms of voltage-driven ion migration within a conductive filament generated by electroforming. Ion migration is modeled by drift–diffusion equations with Arrhenius-activated diffusivity and mobility. The local temperature and field are derived from the self-consistent solution of carrier and heat conduction equations in a 3-D axis-symmetric geometry. The model accounts for set–reset characteristics, correctly describing the abrupt set and gradual reset transitions and allowing scaling projections for metal-oxide RRAM.",
"title": ""
},
{
"docid": "a60752274fdae6687c713538215d0269",
"text": "Some soluble phosphate salts, heavily used in agriculture as highly effective phosphorus (P) fertilizers, cause surface water eutrophication, while solid phosphates are less effective in supplying the nutrient P. In contrast, synthetic apatite nanoparticles could hypothetically supply sufficient P nutrients to crops but with less mobility in the environment and with less bioavailable P to algae in comparison to the soluble counterparts. Thus, a greenhouse experiment was conducted to assess the fertilizing effect of synthetic apatite nanoparticles on soybean (Glycine max). The particles, prepared using one-step wet chemical method, were spherical in shape with diameters of 15.8 ± 7.4 nm and the chemical composition was pure hydroxyapatite. The data show that application of the nanoparticles increased the growth rate and seed yield by 32.6% and 20.4%, respectively, compared to those of soybeans treated with a regular P fertilizer (Ca(H2PO4)2). Biomass productions were enhanced by 18.2% (above-ground) and 41.2% (below-ground). Using apatite nanoparticles as a new class of P fertilizer can potentially enhance agronomical yield and reduce risks of water eutrophication.",
"title": ""
},
{
"docid": "da4b86329c12b0747c2df55f5a6f6cdb",
"text": "As modern societies become more dependent on IT services, the potential impact both of adversarial cyberattacks and non-adversarial service management mistakes grows. This calls for better cyber situational awareness-decision-makers need to know what is going on. The main focus of this paper is to examine the information elements that need to be collected and included in a common operational picture in order for stakeholders to acquire cyber situational awareness. This problem is addressed through a survey conducted among the participants of a national information assurance exercise conducted in Sweden. Most participants were government officials and employees of commercial companies that operate critical infrastructure. The results give insight into information elements that are perceived as useful, that can be contributed to and required from other organizations, which roles and stakeholders would benefit from certain information, and how the organizations work with creating cyber common operational pictures today. Among findings, it is noteworthy that adversarial behavior is not perceived as interesting, and that the respondents in general focus solely on their own organization.",
"title": ""
},
{
"docid": "d56e3d58fdc0ca09fe7f708c7d12122e",
"text": "About nine billion people in the world are deaf and dumb. The communication between a deaf and hearing person poses to be a serious problem compared to communication between blind and normal visual people. This creates a very little room for them with communication being a fundamental aspect of human life. The blind people can talk freely by means of normal language whereas the deaf-dumb have their own manual-visual language known as sign language. Sign language is a non-verbal form of intercourse which is found amongst deaf communities in world. The languages do not have a common origin and hence difficult to interpret. The project aims to facilitate people by means of a glove based communication interpreter system. The glove is internally equipped with five flex sensors. For each specific gesture, the flex sensor produces a proportional change in resistance. The output from the sensor is analog values it is converted to digital. The processing of these hand gestures is in Arduino Duemilanove Board which is an advance version of the microcontroller. It compares the input signal with predefined voltage levels stored in memory. According to that required output displays on the LCD in the form of text & sound is produced which is stored is memory with the help of speaker. In such a way it is easy for deaf and dumb to communicate with normal people. This system can also be use for the woman security since we are sending a message to authority with the help of smart phone.",
"title": ""
},
{
"docid": "8abbd5e2ab4f419a4ca05277a8b1b6a5",
"text": "This paper presents an innovative broadband millimeter-wave single balanced diode mixer that makes use of a substrate integrated waveguide (SIW)-based 180 hybrid. It has low conversion loss of less than 10 dB, excellent linearity, and high port-to-port isolations over a wide frequency range of 20 to 26 GHz. The proposed mixer has advantages over previously reported millimeter-wave mixer structures judging from a series of aspects such as cost, ease of fabrication, planar construction, and broadband performance. Furthermore, a receiver front-end that integrates a high-performance SIW slot-array antenna and our proposed mixer is introduced. Based on our proposed receiver front-end structure, a K-band wireless communication system with M-ary quadrature amplitude modulation is developed and demonstrated for line-of-sight channels. Excellent overall error vector magnitude performance has been obtained.",
"title": ""
},
{
"docid": "9ed3b0144df3dfa88b9bfa61ee31f40a",
"text": "OBJECTIVE\nTo determine the frequency of early relapse after achieving good initial correction in children who were on clubfoot abduction brace.\n\n\nMETHODS\nThe cross-sectional study was conducted at the Jinnah Postgraduate Medical Centre, Karachi, and included parents of children of either gender in the age range of 6 months to 3years with idiopathic clubfoot deformities who had undergone Ponseti treatment between September 2012 and June 2013, and who were on maintenance brace when the data was collected from December 2013 to March 2014. Parents of patients with follow-up duration in brace less than six months and those with syndromic clubfoot deformity were excluded. The interviews were taken through a purposive designed questionnaire. SPSS 16 was used for data analysis.\n\n\nRESULTS\nThe study included parents of 120 patients. Of them, 95(79.2%) behaved with good compliance on Denis Browne Splint, 10(8.3%) were fair and 15(12.5%)showed poor compliance. Major reason for poor and non-compliance was unaffordability of time and cost for regular follow-up. Besides, 20(16.67%) had inconsistent use due to delay inre-procurement of Foot Abduction Braceonce the child had outgrown the shoe. Only 4(3.33%) talked of cultural barriers and conflict of interest between the parents. Early relapse was observed in 23(19.16%) patients and 6(5%) of them responded to additional treatment and were put back on brace treatment; 13(10.83%) had minor relapse with forefoot varus, without functional disability, and the remaining 4(3.33%) had major relapse requiring extensive surgery. Overall success was recorded in 116(96.67%) cases.\n\n\nCONCLUSIONS\nThe positioning of shoes on abduction brace bar, comfort in shoes, affordability, initial and subsequent delay in procurement of new shoes once the child's feet overgrew the shoe, were the four containable factors on the part of Ponseti practitioner.",
"title": ""
},
{
"docid": "b6da9901abb01572b631085f97fdd1d4",
"text": "Protection against high voltage-standing-wave-ratios (VSWR) is of great importance in many power amplifier applications. Despite excellent thermal and voltage breakdown properties even gallium nitride devices may need such measures. This work focuses on the timing aspect when using barium-strontium-titanate (BST) varactors to limit power dissipation and gate current. A power amplifier was designed and fabricated, implementing a varactor and a GaN-based voltage switch as varactor modulator for VSWR protection. The response time until the protection is effective was measured by switching the voltages at varactor, gate and drain of the transistor, respectively. It was found that it takes a minimum of 50 μs for the power amplifier to reach a safe condition. Pure gate pinch-off or drain voltage reduction solutions were slower and bias-network dependent. For a thick-film BST MIM varactor, optimized for speed and power, a switching time of 160 ns was achieved.",
"title": ""
},
{
"docid": "afa7d0e5c19fea77e1bcb4fce39fbc93",
"text": "Highly Autonomous Driving (HAD) systems rely on deep neural networks for the visual perception of the driving environment. Such networks are train on large manually annotated databases. In this work, a semi-parametric approach to one-shot learning is proposed, with the aim of bypassing the manual annotation step required for training perceptions systems used in autonomous driving. The proposed generative framework, coined Generative One-Shot Learning (GOL), takes as input single one-shot objects, or generic patterns, and a small set of so-called regularization samples used to drive the generative process. New synthetic data is generated as Pareto optimal solutions from one-shot objects using a set of generalization functions built into a generalization generator. GOL has been evaluated on environment perception challenges encountered in autonomous vision.",
"title": ""
},
{
"docid": "1e32662301070a085ce4d3244673c2cd",
"text": "Conventional automatic speech recognition (ASR) based on a hidden Markov model (HMM)/deep neural network (DNN) is a very complicated system consisting of various modules such as acoustic, lexicon, and language models. It also requires linguistic resources, such as a pronunciation dictionary, tokenization, and phonetic context-dependency trees. On the other hand, end-to-end ASR has become a popular alternative to greatly simplify the model-building process of conventional ASR systems by representing complicated modules with a single deep network architecture, and by replacing the use of linguistic resources with a data-driven learning method. There are two major types of end-to-end architectures for ASR; attention-based methods use an attention mechanism to perform alignment between acoustic frames and recognized symbols, and connectionist temporal classification (CTC) uses Markov assumptions to efficiently solve sequential problems by dynamic programming. This paper proposes hybrid CTC/attention end-to-end ASR, which effectively utilizes the advantages of both architectures in training and decoding. During training, we employ the multiobjective learning framework to improve robustness and achieve fast convergence. During decoding, we perform joint decoding by combining both attention-based and CTC scores in a one-pass beam search algorithm to further eliminate irregular alignments. Experiments with English (WSJ and CHiME-4) tasks demonstrate the effectiveness of the proposed multiobjective learning over both the CTC and attention-based encoder–decoder baselines. Moreover, the proposed method is applied to two large-scale ASR benchmarks (spontaneous Japanese and Mandarin Chinese), and exhibits performance that is comparable to conventional DNN/HMM ASR systems based on the advantages of both multiobjective learning and joint decoding without linguistic resources.",
"title": ""
},
{
"docid": "4ef9dbd33461abe61f0ebeee29b462b4",
"text": "A comparison between corporate fed microstrip antenna array (MSAA) and an electromagnetically coupled microstrip antenna array (EMCP-MSAA) at Ka-band is presented. A low loss feed network is proposed based on the analysis of different line widths used in the feed network. Gain improvement of 25% (1.5 dB) is achieved using the proposed feed network in 2×2 EMCP-MSAA. A 8×8 MSAA has been designed and fabricated at Ka-band. The measured bandwidth is 4.3% with gain of 24dB. Bandwidth enhancement is done by designing and fabricating EMCP-MSAA to give bandwidth of 17% for 8×8 array.",
"title": ""
},
{
"docid": "412278d78888fc4ee28c666133c9bd24",
"text": "A future Internet of Things (IoT) system will connect the physical world into cyberspace everywhere and everything via billions of smart objects. On the one hand, IoT devices are physically connected via communication networks. The service oriented architecture (SOA) can provide interoperability among heterogeneous IoT devices in physical networks. On the other hand, IoT devices are virtually connected via social networks. In this paper we propose adaptive and scalable trust management to support service composition applications in SOA-based IoT systems. We develop a technique based on distributed collaborative filtering to select feedback using similarity rating of friendship, social contact, and community of interest relationships as the filter. Further we develop a novel adaptive filtering technique to determine the best way to combine direct trust and indirect trust dynamically to minimize convergence time and trust estimation bias in the presence of malicious nodes performing opportunistic service and collusion attacks. For scalability, we consider a design by which a capacity-limited node only keeps trust information of a subset of nodes of interest and performs minimum computation to update trust. We demonstrate the effectiveness of our proposed trust management through service composition application scenarios with a comparative performance analysis against EigenTrust and PeerTrust.",
"title": ""
},
{
"docid": "d57bd5c6426ce818328096c26f06b901",
"text": "Introduction Reflexivity is a curious term with various meanings. Finding a definition of reflexivity that demonstrates what it means and how it is achieved is difficult (Colbourne and Sque 2004). Moreover, writings on reflexivity have not been transparent in terms of the difficulties, practicalities and methods of the process (Mauthner and Doucet 2003). Nevertheless, it is argued that an attempt be made to gain ‘some kind of intellectual handle’ on reflexivity in order to make use of it as a guiding standard (Freshwater and Rolfe 2001). The role of reflexivity in the many and varied qualitative methodologies is significant. It is therefore a concept of particular relevance to nursing as qualitative methodologies play a principal function in nursing enquiry. Reflexivity assumes a pivotal role in feminist research (King 1994). It is also paramount in participatory action research (Robertson 2000), ethnographies, and hermeneutic and post-structural approaches (Koch and Harrington 1998). Furthermore, it plays an integral part in medical case study research reflexivity epistemological critical feminist ▲ ▲ ▲ ▲ k e y w o rd s",
"title": ""
},
{
"docid": "73a656b220c8f91ad1b2e2b4dbd691a9",
"text": "Music recommendation systems are well explored and commonly used but are normally based on manually tagged parameters and simple similarity calculation. Our project proposes a recommendation system based on emotional computing, automatic classification and feature extraction, which recommends music based on the emotion expressed by the song.\n To achieve this goal a set of features is extracted from the song, including the MFCC (mel-frequency cepstral coefficients) following the works of McKinney et al. [6] and a machine learning system is trained on a set of 424 songs, which are categorized by emotion. The categorization of the song is performed manually by multiple persons to avoid error. The emotional categorization is performed using a modified version of the Tellegen-Watson-Clark emotion model [7], as proposed by Trohidis et al. [8]. The System is intended as desktop application that can reliably determine similarities between the main emotion in multiple pieces of music, allowing the user to choose music by emotion. We report our findings below.",
"title": ""
},
{
"docid": "a9a8baf6dfb2526d75b0d7e49bb9b138",
"text": "Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach – a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidates set. Some theoretical and computational properties of the model are discussed and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment in partof-speech tagging.",
"title": ""
},
{
"docid": "dba3434c600ed7ddbb944f0a3adb1ba0",
"text": "Although acoustic waves are the most versatile and widely used physical layer technology for underwater wireless communication networks (UWCNs), they are adversely affected by ambient noise, multipath propagation, and fading. The large propagation delays, low bandwidth, and high bit error rates of the underwater acoustic channel hinder communication as well. These operational limits call for complementary technologies or communication alternatives when the acoustic channel is severely degraded. Magnetic induction (MI) is a promising technique for UWCNs that is not affected by large propagation delays, multipath propagation, and fading. In this paper, the MI communication channel has been modeled. Its propagation characteristics have been compared to the electromagnetic and acoustic communication systems through theoretical analysis and numerical evaluations. The results prove the feasibility of MI communication in underwater environments. The MI waveguide technique is developed to reduce path loss. The communication range between source and destination is considerably extended to hundreds of meters in fresh water due to its superior bit error rate performance.",
"title": ""
}
] | scidocsrr |
ec7e6e749851018e569eb28bb6ac9dab | Adaptability of Neural Networks on Varying Granularity IR Tasks | [
{
"docid": "79ad27cffbbcbe3a49124abd82c6e477",
"text": "In this paper we address the following problem in web document and information retrieval (IR): How can we use long-term context information to gain better IR performance? Unlike common IR methods that use bag of words representation for queries and documents, we treat them as a sequence of words and use long short term memory (LSTM) to capture contextual dependencies. To the best of our knowledge, this is the first time that LSTM is applied to information retrieval tasks. Unlike training traditional LSTMs, the training strategy is different due to the special nature of information retrieval problem. Experimental evaluation on an IR task derived from the Bing web search demonstrates the ability of the proposed method in addressing both lexical mismatch and long-term context modelling issues, thereby, significantly outperforming existing state of the art methods for web document retrieval task.",
"title": ""
},
{
"docid": "f7a21cf633a5b0d76d7ae09e6d3e8822",
"text": "We apply a general deep learning framework to address the non-factoid question answering task. Our approach does not rely on any linguistic tools and can be applied to different languages or domains. Various architectures are presented and compared. We create and release a QA corpus and setup a new QA task in the insurance domain. Experimental results demonstrate superior performance compared to the baseline methods and various technologies give further improvements. For this highly challenging task, the top-1 accuracy can reach up to 65.3% on a test set, which indicates a great potential for practical use.",
"title": ""
},
{
"docid": "121daac04555fd294eef0af9d0fb2185",
"text": "In this paper, we apply a general deep learning (DL) framework for the answer selection task, which does not depend on manually defined features or linguistic tools. The basic framework is to build the embeddings of questions and answers based on bidirectional long short-term memory (biLSTM) models, and measure their closeness by cosine similarity. We further extend this basic model in two directions. One direction is to define a more composite representation for questions and answers by combining convolutional neural network with the basic framework. The other direction is to utilize a simple but efficient attention mechanism in order to generate the answer representation according to the question context. Several variations of models are provided. The models are examined by two datasets, including TREC-QA and InsuranceQA. Experimental results demonstrate that the proposed models substantially outperform several strong baselines.",
"title": ""
},
{
"docid": "1a6ece40fa87e787f218902eba9b89f7",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
}
] | [
{
"docid": "e4ac08754a0c31881364f0603f24b0df",
"text": "Software Defined Network (SDN) facilitates network programmers with easier network monitoring, identification of anomalies, instant implementation of changes, central control to the whole network in a cost effective and efficient manner. These features could be beneficial for securing and maintaining entire network. Being a promising network paradigm, it draws a lot of attention from researchers in security domain. But it's logically centralized control tends to single point of failure, increasing the risk of attacks such as Distributed Denial of Service (DDoS) attack. In this paper, we have tried to identify various possibilities of DDoS attacks in SDN environment with the help of attack tree and an attack model. Further, an attempt to analyze the impact of various traditional DDoS attacks on SDN components is done. Such analysis helps in identifying the type of DDoS attacks that impose bigger threat on SDN architecture and also the features that could play important role in identification of these attacks are deduced.",
"title": ""
},
{
"docid": "486c00c39bee410113dc2caf7e98a6bc",
"text": "We investigated teacher versus student seat selection in the context of group and individual seating arrangements. Disruptive behavior during group seating occurred at twice the rate when students chose their seats than when the teacher chose. During individual seating, disruptive behavior occurred more than three times as often when the students chose their seats. The results are discussed in relation to choice and the matching law.",
"title": ""
},
{
"docid": "6bc942f7f78c8549d60cc4be5e0b467a",
"text": "In this study, we propose a novel, lightweight approach to real-time detection of vehicles using parts at intersections. Intersections feature oncoming, preceding, and cross traffic, which presents challenges for vision-based vehicle detection. Ubiquitous partial occlusions further complicate the vehicle detection task, and occur when vehicles enter and leave the camera's field of view. To confront these issues, we independently detect vehicle parts using strong classifiers trained with active learning. We match part responses using a learned matching classification. The learning process for part configurations leverages user input regarding full vehicle configurations. Part configurations are evaluated using Support Vector Machine classification. We present a comparison of detection results using geometric image features and appearance-based features. The full vehicle detection by parts has been evaluated on real-world data, runs in real time, and shows promise for future work in urban driver assistance.",
"title": ""
},
{
"docid": "e1c5199830d2de7c7f8f2ae28d84090b",
"text": "Once generated, neurons are thought to permanently exit the cell cycle and become irreversibly differentiated. However, neither the precise point at which this post-mitotic state is attained nor the extent of its irreversibility is clearly defined. Here we report that newly born neurons from the upper layers of the mouse cortex, despite initiating axon and dendrite elongation, continue to drive gene expression from the neural progenitor tubulin α1 promoter (Tα1p). These observations suggest an ambiguous post-mitotic neuronal state. Whole transcriptome analysis of sorted upper cortical neurons further revealed that neurons continue to express genes related to cell cycle progression long after mitotic exit until at least post-natal day 3 (P3). These genes are however down-regulated thereafter, associated with a concomitant up-regulation of tumor suppressors at P5. Interestingly, newly born neurons located in the cortical plate (CP) at embryonic day 18-19 (E18-E19) and P3 challenged with calcium influx are found in S/G2/M phases of the cell cycle, and still able to undergo division at E18-E19 but not at P3. At P5 however, calcium influx becomes neurotoxic and leads instead to neuronal loss. Our data delineate an unexpected flexibility of cell cycle control in early born neurons, and describe how neurons transit to a post-mitotic state.",
"title": ""
},
{
"docid": "4254ad134a2359d42dea2bcf64d6bdce",
"text": "Radio Frequency Identification (RFID) systems aim to identify objects in open environments with neither physical nor visual contact. They consist of transponders inserted into objects, of readers, and usually of a database which contains information about the objects. The key point is that authorised readers must be able to identify tags without an adversary being able to trace them. Traceability is often underestimated by advocates of the technology and sometimes exaggerated by its detractors. Whatever the true picture, this problem is a reality when it blocks the deployment of this technology and some companies, faced with being boycotted, have already abandoned its use. Using cryptographic primitives to thwart the traceability issues is an approach which has been explored for several years. However, the research carried out up to now has not provided satisfactory results as no universal formalism has been defined. In this paper, we propose an adversarial model suitable for RFID environments. We define the notions of existential and universal untraceability and we model the access to the communication channels from a set of oracles. We show that our formalisation fits the problem being considered and allows a formal analysis of the protocols in terms of traceability. We use our model on several well-known RFID protocols and we show that most of them have weaknesses and are vulnerable to traceability.",
"title": ""
},
{
"docid": "4f5272a35c9991227a6d098209de8d6c",
"text": "This is an investigation of \" Online Creativity. \" I will present a new account of the cognitive and social mechanisms underlying complex thinking of creative scientists as they work on significant problems in contemporary science. I will lay out an innovative methodology that I have developed for investigating creative and complex thinking in a real-world context. Using this method, I have discovered that there are a number of strategies that are used in contemporary science that increase the likelihood of scientists making discoveries. The findings reported in this chapter provide new insights into complex scientific thinking and will dispel many of the myths surrounding the generation of new concepts and scientific discoveries. InVivo cognition: A new way of investigating cognition There is a large background in cognitive research on thinking, reasoning and problem solving processes that form the foundation for creative cognition (see Dunbar, in press, Holyoak 1996 for recent reviews). However, to a large extent, research on reasoning has demonstrated that subjects in psychology experiments make vast numbers of thinking and reasoning errors even in the most simple problems. How is creative thought even possible if people make so many reasoning errors? One problem with research on reasoning is that the concepts and stimuli that the subjects are asked to use are often arbitrary and involve no background knowledge (cf. Dunbar, 1995; Klahr & Dunbar, 1988). I have proposed that one way of determining what reasoning errors are specific and which are general is to investigate cognition in the cognitive laboratory and the real world (Dunbar, 1995). Psychologists should conduct both InVitro and InVivo research to understand thinking. InVitro research is the standard psychological experiment where subjects are brought into the laboratory and controlled experiments are conducted. As can be seen from the research reported in this volume, this approach yields many insights into the psychological mechanisms underlying complex thinking. The use of an InVivo methodology in which online thinking and reasoning are investigated in a real-world context yields fundamental insights into the basic cognitive mechanisms underlying complex cognition and creativity. The results of InVivo cognitive research can then be used as a basis for further InVitro work in which controlled experiments are conducted. In this chapter, I will outline some of the results of my ongoing InVivo research on creative scientific thinking and relate this research back to the more common InVitro research and show that the …",
"title": ""
},
{
"docid": "4efa56d9c2c387608fe9ddfdafca0f9a",
"text": "Accurate cardinality estimates are essential for a successful query optimization. This is not only true for relational DBMSs but also for RDF stores. An RDF database consists of a set of triples and, hence, can be seen as a relational database with a single table with three attributes. This makes RDF rather special in that queries typically contain many self joins. We show that relational DBMSs are not well-prepared to perform cardinality estimation in this context. Further, there are hardly any special cardinality estimation methods for RDF databases. To overcome this lack of appropriate cardinality estimation methods, we introduce characteristic sets together with new cardinality estimation methods based upon them. We then show experimentally that the new methods are-in the RDF context-highly superior to the estimation methods employed by commercial DBMSs and by the open-source RDF store RDF-3X.",
"title": ""
},
{
"docid": "496175f20823fa42c852060cf41f5095",
"text": "Currently, the use of virtual reality (VR) is being widely applied in different fields, especially in computer science, engineering, and medicine. Concretely, the engineering applications based on VR cover approximately one half of the total number of VR resources (considering the research works published up to last year, 2016). In this paper, the capabilities of different computational software for designing VR applications in engineering education are discussed. As a result, a general flowchart is proposed as a guide for designing VR resources in any application. It is worth highlighting that, rather than this study being based on the applications used in the engineering field, the obtained results can be easily extrapolated to other knowledge areas without any loss of generality. This way, this paper can serve as a guide for creating a VR application.",
"title": ""
},
{
"docid": "32afde90b1bf577aa07135db66250b38",
"text": "We present a generic method for augmenting unsupervised query segmentation by incorporating Parts-of-Speech (POS) sequence information to detect meaningful but rare n-grams. Our initial experiments with an existing English POS tagger employing two different POS tagsets and an unsupervised POS induction technique specifically adapted for queries show that POS information can significantly improve query segmentation performance in all these cases.",
"title": ""
},
{
"docid": "d4acd79e2fdbc9b87b2dbc6ebfa2dd43",
"text": "Airbnb, an online marketplace for accommodations, has experienced a staggering growth accompanied by intense debates and scattered regulations around the world. Current discourses, however, are largely focused on opinions rather than empirical evidences. Here, we aim to bridge this gap by presenting the first large-scale measurement study on Airbnb, using a crawled data set containing 2.3 million listings, 1.3 million hosts, and 19.3 million reviews. We measure several key characteristics at the heart of the ongoing debate and the sharing economy. Among others, we find that Airbnb has reached a global yet heterogeneous coverage. The majority of its listings across many countries are entire homes, suggesting that Airbnb is actually more like a rental marketplace rather than a spare-room sharing platform. Analysis on star-ratings reveals that there is a bias toward positive ratings, amplified by a bias toward using positive words in reviews. The extent of such bias is greater than Yelp reviews, which were already shown to exhibit a positive bias. We investigate a key issue - commercial hosts who own multiple listings on Airbnb - repeatedly discussed in the current debate. We find that their existence is prevalent, they are early movers towards joining Airbnb, and their listings are disproportionately entire homes and located in the US. Our work advances the current understanding of how Airbnb is being used and may serve as an independent and empirical reference to inform the debate.",
"title": ""
},
{
"docid": "80ccc8b5f9e68b5130a24fe3519b9b62",
"text": "A MIMO antenna of size 40mm × 40mm × 1.6mm is proposed for WLAN applications. Antenna consists of four mushroom shaped Apollonian fractal planar monopoles having micro strip feed lines with edge feeding. It uses defective ground structure (DGS) to achieve good isolation. To achieve more isolation, the antenna elements are placed orthogonal to each other. Further, isolation can be increased using parasitic elements between the elements of antenna. Simulation is done to study reflection coefficient as well as coupling between input ports, directivity, peak gain, efficiency, impedance and VSWR. Results show that MIMO antenna has a bandwidth of 1.9GHZ ranging from 5 to 6.9 GHz, and mutual coupling of less than -20dB.",
"title": ""
},
{
"docid": "5a8729b6b08e79e7c27ddf779b0a5267",
"text": "Electric solid propellants are an attractive option for space propulsion because they are ignited by applied electric power only. In this work, the behavior of pulsed microthruster devices utilizing such a material is investigated. These devices are similar in function and operation to the pulsed plasma thruster, which typically uses Teflon as propellant. A Faraday probe, Langmuir triple probe, residual gas analyzer, pendulum thrust stand and high speed camera are utilized as diagnostic devices. These thrusters are made in batches, of which a few devices were tested experimentally in vacuum environments. Results indicate a plume electron temperature of about 1.7 eV, with an electron density between 10 and 10 cm. According to thermal equilibrium and adiabatic expansion calculations, these relatively hot electrons are mixed with ~2000 K neutral and ion species, forming a non-equilibrium gas. From time-of-flight analysis, this gas mixture plume has an effective velocity of 1500-1650 m/s on centerline. The ablated mass of this plume is 215 μg on average, of which an estimated 0.3% is ionized species while 45±11% is ablated at negligible relative speed. This late-time ablation occurs on a time scale three times that of the 0.5 ms pulse discharge, and does not contribute to the measured 0.21 mN-s impulse per pulse. Similar values have previously been measured in pulsed plasma thrusters. These observations indicate the electric solid propellant material in this configuration behaves similar to Teflon in an electrothermal pulsed plasma",
"title": ""
},
{
"docid": "eaf1fbcc93c2330e56335f9df14513e3",
"text": "Virtual machine placement (VMP) and energy efficiency are significant topics in cloud computing research. In this paper, evolutionary computing is applied to VMP to minimize the number of active physical servers, so as to schedule underutilized servers to save energy. Inspired by the promising performance of the ant colony system (ACS) algorithm for combinatorial problems, an ACS-based approach is developed to achieve the VMP goal. Coupled with order exchange and migration (OEM) local search techniques, the resultant algorithm is termed an OEMACS. It effectively minimizes the number of active servers used for the assignment of virtual machines (VMs) from a global optimization perspective through a novel strategy for pheromone deposition which guides the artificial ants toward promising solutions that group candidate VMs together. The OEMACS is applied to a variety of VMP problems with differing VM sizes in cloud environments of homogenous and heterogeneous servers. The results show that the OEMACS generally outperforms conventional heuristic and other evolutionary-based approaches, especially on VMP with bottleneck resource characteristics, and offers significant savings of energy and more efficient use of different resources.",
"title": ""
},
{
"docid": "3e18a760083cd3ed169ed8dae36156b9",
"text": "n engl j med 368;26 nejm.org june 27, 2013 2445 correct diagnoses as often as we think: the diagnostic failure rate is estimated to be 10 to 15%. The rate is highest among specialties in which patients are diagnostically undifferentiated, such as emergency medicine, family medicine, and internal medicine. Error in the visual specialties, such as radiology and pathology, is considerably lower, probably around 2%.1 Diagnostic error has multiple causes, but principal among them are cognitive errors. Usually, it’s not a lack of knowledge that leads to failure, but problems with the clinician’s thinking. Esoteric diagnoses are occasionally missed, but common illnesses are commonly misdiagnosed. For example, physicians know the pathophysiology of pulmonary embolus in excruciating detail, yet because its signs and symptoms are notoriously variable and overlap with those of numerous other diseases, this important diagnosis was missed a staggering 55% of the time in a series of fatal cases.2 Over the past 40 years, work by cognitive psychologists and others has pointed to the human mind’s vulnerability to cognitive biases, logical fallacies, false assumptions, and other reasoning failures. It seems that much of our everyday thinking is f lawed, and clinicians are not immune to the problem (see box). More than 100 biases affecting clinical decision making have been described, and many medical disciplines now acknowledge their pervasive influence on our thinking. Cognitive failures are best understood in the context of how our brains manage and process information. The two principal modes, automatic and controlled, are colloquially referred to as “intuitive” and “analytic”; psychologists know them as Type 1 and Type 2 processes. Various conceptualizations of the reasoning process have been proposed, but most can be incorporated into this dual-process system. This system is more than a model: it is accepted that the two processes involve different cortical mechanisms with associated neurophysiologic and neuroanatomical From Mindless to Mindful Practice — Cognitive Bias and Clinical Decision Making",
"title": ""
},
{
"docid": "51c1a1257b5223401e1465579d75bff2",
"text": "This work describes the statistical machine translation (SMT) systems of RWTH Aachen University developed for the evaluation campaign International Workshop on Spoken Language Translation (IWSLT) 2013. We participated in the English→French, English↔German, Arabic→English, Chinese→English and Slovenian↔English MT tracks and the English→French and English→German SLT tracks. We apply phrase-based and hierarchical SMT decoders, which are augmented by state-of-the-art extensions. The novel techniques we experimentally evaluate include discriminative phrase training, a continuous space language model, a hierarchical reordering model, a word class language model, domain adaptation via data selection and system combination of standard and reverse order models. By application of these methods we can show considerable improvements over the respective baseline systems.",
"title": ""
},
{
"docid": "35f439b86c07f426fd127823a45ffacf",
"text": "The paper concentrates on the fundamental coordination problem that requires a network of agents to achieve a specific but arbitrary formation shape. A new technique based on complex Laplacian is introduced to address the problems of which formation shapes specified by inter-agent relative positions can be formed and how they can be achieved with distributed control ensuring global stability. Concerning the first question, we show that all similar formations subject to only shape constraints are those that lie in the null space of a complex Laplacian satisfying certain rank condition and that a formation shape can be realized almost surely if and only if the graph modeling the inter-agent specification of the formation shape is 2-rooted. Concerning the second question, a distributed and linear control law is developed based on the complex Laplacian specifying the target formation shape, and provable existence conditions of stabilizing gains to assign the eigenvalues of the closed-loop system at desired locations are given. Moreover, we show how the formation shape control law is extended to achieve a rigid formation if a subset of knowledgable agents knowing the desired formation size scales the formation while the rest agents do not need to re-design and change their control laws.",
"title": ""
},
{
"docid": "caa10e745374970796bdd0039416a29d",
"text": "s: Feature selection methods try to find a subset of the available features to improve the application of a learning algorithm. Many methods are based on searching a feature set that optimizes some evaluation function. On the other side, feature set estimators evaluate features individually. Relief is a well known and good feature set estimator. While being usually faster feature estimators have some disadvantages. Based on Relief ideas, we propose a feature set measure that can be used to evaluate the feature sets in a search process. We show how the proposed measure can help guiding the search process, as well as selecting the most appropriate feature set. The new measure is compared with a consistency measure, and the highly reputed wrapper approach.",
"title": ""
},
{
"docid": "58d7e76a4b960e33fc7b541d04825dc9",
"text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.",
"title": ""
},
{
"docid": "b9300a58c4b55bfb0f57b36e5054e5c6",
"text": "The problem of designing, coordinating, and managing complex systems has been central to the management and organizations literature. Recent writings have tended to offer modularity as, at least, a partial solution to this design problem. However, little attention has been paid to the problem of identifying what constitutes an appropriate modularization of a complex system. We develop a formal simulation model that allows us to carefully examine the dynamics of innovation and performance in complex systems. The model points to the trade-off between the destabilizing effects of overly refined modularization and the modest levels of search and a premature fixation on inferior designs that can result from excessive levels of integration. The analysis highlights an asymmetry in this trade-off, with excessively refined modules leading to cycling behavior and a lack of performance improvement. We discuss the implications of these arguments for product and organization design.",
"title": ""
},
{
"docid": "0a8c2b600b7392b94677d4ae9d7eae74",
"text": "We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase we choose (recognizing up to 50 characters per second of audio). We apply our white-box iterative optimization-based attack to Mozilla's implementation DeepSpeech end-to-end, and show it has a 100% success rate. The feasibility of this attack introduce a new domain to study adversarial examples.",
"title": ""
}
] | scidocsrr |
c46ef737772868f2a42597ffa10ec0c8 | Crowdsourcing and language studies: the new generation of linguistic data | [
{
"docid": "c6ad70b8b213239b0dd424854af194e2",
"text": "The neural mechanisms underlying the processing of conventional and novel conceptual metaphorical sentences were examined with event-related potentials (ERPs). Conventional metaphors were created based on the Contemporary Theory of Metaphor and were operationally defined as familiar and readily interpretable. Novel metaphors were unfamiliar and harder to interpret. Using a sensicality judgment task, we compared ERPs elicited by the same target word when it was used to end anomalous, novel metaphorical, conventional metaphorical and literal sentences. Amplitudes of the N400 ERP component (320-440 ms) were more negative for anomalous sentences, novel metaphors, and conventional metaphors compared with literal sentences. Within a later window (440-560 ms), ERPs associated with conventional metaphors converged to the same level as literal sentences while the novel metaphors stayed anomalous throughout. The reported results were compatible with models assuming an initial stage for metaphor mappings from one concept to another and that these mappings are cognitively taxing.",
"title": ""
},
{
"docid": "37913e0bfe44ab63c0c229c20b53c779",
"text": "The authors present several versions of a general model, titled the E-Z Reader model, of eye movement control in reading. The major goal of the modeling is to relate cognitive processing (specifically aspects of lexical access) to eye movements in reading. The earliest and simplest versions of the model (E-Z Readers 1 and 2) merely attempt to explain the total time spent on a word before moving forward (the gaze duration) and the probability of fixating a word; later versions (E-Z Readers 3-5) also attempt to explain the durations of individual fixations on individual words and the number of fixations on individual words. The final version (E-Z Reader 5) appears to be psychologically plausible and gives a good account of many phenomena in reading. It is also a good tool for analyzing eye movement data in reading. Limitations of the model and directions for future research are also discussed.",
"title": ""
},
{
"docid": "f66854fd8e3f29ae8de75fc83d6e41f5",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
] | [
{
"docid": "6c93139f503e8a88fcc8292d64d5b5fb",
"text": "Chatbots use a database of responses often culled from a corpus of text generated for a different purpose, for example film scripts or interviews. One consequence of this approach is a mismatch between the data and the inputs generated by participants. We describe an approach that while starting from an existing corpus (of interviews) makes use of crowdsourced data to augment the response database, focusing on responses that people judge as inappropriate. The long term goal is to create a data set of more appropriate chat responses; the short term consequence appears to be the identification and replacement of particularly inappropriate responses. We found the version with the expanded database was rated significantly better in terms of the response level appropriateness and the overall ability to engage users. We also describe strategies we developed that target certain breakdowns discovered during data collection. Both the source code of the chatbot, TickTock, and the data collected are publicly available.",
"title": ""
},
{
"docid": "5c6401477feb7336d9e9eaf491fd5549",
"text": "Responses to domestic violence have focused, to date, primarily on intervention after the problem has already been identified and harm has occurred. There are, however, new domestic violence prevention strategies emerging, and prevention approaches from the public health field can serve as models for further development of these strategies. This article describes two such models. The first involves public health campaigns that identify and address the underlying causes of a problem. Although identifying the underlying causes of domestic violence is difficult--experts do not agree on causation, and several different theories exist--these theories share some common beliefs that can serve as a foundation for prevention strategies. The second public health model can be used to identify opportunities for domestic violence prevention along a continuum of possible harm: (1) primary prevention to reduce the incidence of the problem before it occurs; (2) secondary prevention to decrease the prevalence after early signs of the problem; and (3) tertiary prevention to intervene once the problem is already clearly evident and causing harm. Examples of primary prevention include school-based programs that teach students about domestic violence and alternative conflict-resolution skills, and public education campaigns to increase awareness of the harms of domestic violence and of services available to victims. Secondary prevention programs could include home visiting for high-risk families and community-based programs on dating violence for adolescents referred through child protective services (CPS). Tertiary prevention includes the many targeted intervention programs already in place (and described in other articles in this journal issue). Early evaluations of existing prevention programs show promise, but results are still preliminary and programs remain small, locally based, and scattered throughout the United States and Canada. What is needed is a broadly based, comprehensive prevention strategy that is supported by sound research and evaluation, receives adequate public backing, and is based on a policy of zero tolerance for domestic violence.",
"title": ""
},
{
"docid": "0b5ca91480dfff52de5c1d65c3b32f3d",
"text": "Spotting anomalies in large multi-dimensional databases is a crucial task with many applications in finance, health care, security, etc. We introduce COMPREX, a new approach for identifying anomalies using pattern-based compression. Informally, our method finds a collection of dictionaries that describe the norm of a database succinctly, and subsequently flags those points dissimilar to the norm---with high compression cost---as anomalies.\n Our approach exhibits four key features: 1) it is parameter-free; it builds dictionaries directly from data, and requires no user-specified parameters such as distance functions or density and similarity thresholds, 2) it is general; we show it works for a broad range of complex databases, including graph, image and relational databases that may contain both categorical and numerical features, 3) it is scalable; its running time grows linearly with respect to both database size as well as number of dimensions, and 4) it is effective; experiments on a broad range of datasets show large improvements in both compression, as well as precision in anomaly detection, outperforming its state-of-the-art competitors.",
"title": ""
},
{
"docid": "9bbc279974aaa899d12fee26948ce029",
"text": "Data-flow testing (DFT) is a family of testing strategies designed to verify the interactions between each program variable’s definition and its uses. Such a test objective of interest is referred to as a def-use pair. DFT selects test data with respect to various test adequacy criteria (i.e., data-flow coverage criteria) to exercise each pair. The original conception of DFT was introduced by Herman in 1976. Since then, a number of studies have been conducted, both theoretically and empirically, to analyze DFT’s complexity and effectiveness. In the past four decades, DFT has been continuously concerned, and various approaches from different aspects are proposed to pursue automatic and efficient data-flow testing. This survey presents a detailed overview of data-flow testing, including challenges and approaches in enforcing and automating it: (1) it introduces the data-flow analysis techniques that are used to identify def-use pairs; (2) it classifies and discusses techniques for data-flow-based test data generation, such as search-based testing, random testing, collateral-coverage-based testing, symbolic-execution-based testing, and model-checking-based testing; (3) it discusses techniques for tracking data-flow coverage; (4) it presents several DFT applications, including software fault localization, web security testing, and specification consistency checking; and (5) it summarizes recent advances and discusses future research directions toward more practical data-flow testing.",
"title": ""
},
{
"docid": "f9cf436f8b5c40598b2c24930c735c1b",
"text": "We present a joint theoretical and experimental investigation of the absorption spectra of silver clusters Ag(n) (4<or=n<or=22). The experimental spectra of clusters isolated in an Ar matrix are compared with the calculated ones in the framework of the time-dependent density functional theory. The analysis of the molecular transitions indicates that the s-electrons are responsible for the optical response of small clusters (n<or=8) while the d-electrons play a crucial role in the optical excitations for larger n values.",
"title": ""
},
{
"docid": "1014a33211c9ca3448fa02cf734a5775",
"text": "We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: 1. The degree of sparsity is continuous a parameter controls the rate of sparsi cation from no sparsi cation to total sparsi cation. 2. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsi cation result in only small additional regret with respect to typical online learning guarantees. 3. The approach works well empirically. We apply the approach to several datasets and nd that for datasets with large numbers of features, substantial sparsity is discoverable.",
"title": ""
},
{
"docid": "264338f11dbd4d883e791af8c15aeb0d",
"text": "With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learningbased 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose occupancy networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.",
"title": ""
},
{
"docid": "9c82588d5e82df20e2156ca1bda91f09",
"text": "Lean and simulation analysis are driven by the same objective, how to better design and improve processes making the companies more competitive. The adoption of lean has been widely spread in companies from public to private sectors and simulation is nowadays becoming more and more popular. Several authors have pointed out the benefits of combining simulation and lean, however, they are still rarely used together in practice. Optimization as an additional technique to this combination is even a more powerful approach especially when designing and improving complex processes with multiple conflicting objectives. This paper presents the mutual benefits that are gained when combining lean, simulation and optimization and how they overcome each other's limitations. A framework including the three concepts, some of the barriers for its implementation and a real-world industrial example are also described.",
"title": ""
},
{
"docid": "6cd317113158241a98517ad5a8247174",
"text": "Feature Oriented Programming (FOP) is an emerging paradigmfor application synthesis, analysis, and optimization. Atarget application is specified declaratively as a set of features,like many consumer products (e.g., personal computers,automobiles). FOP technology translates suchdeclarative specifications into efficient programs.",
"title": ""
},
{
"docid": "31338a16eca7c0f60b789c38f2774816",
"text": "As a promising area in artificial intelligence, a new learning paradigm, called Small Sample Learning (SSL), has been attracting prominent research attention in the recent years. In this paper, we aim to present a survey to comprehensively introduce the current techniques proposed on this topic. Specifically, current SSL techniques can be mainly divided into two categories. The first category of SSL approaches can be called “concept learning”, which emphasizes learning new concepts from only few related observations. The purpose is mainly to simulate human learning behaviors like recognition, generation, imagination, synthesis and analysis. The second category is called “experience learning”, which usually co-exists with the large sample learning manner of conventional machine learning. This category mainly focuses on learning with insufficient samples, and can also be called small data learning in some literatures. More extensive surveys on both categories of SSL techniques are introduced and some neuroscience evidences are provided to clarify the rationality of the entire SSL regime, and the relationship with human learning process. Some discussions on the main challenges and possible future research directions along this line are also presented.",
"title": ""
},
{
"docid": "7f5d032cc176ae27a5bcd9c601e3b9bd",
"text": "The grand challenge of neuromorphic computation is to develop a flexible brain-inspired architecture capable of a wide array of real-time applications, while striving towards the ultra-low power consumption and compact size of biological neural systems. Toward this end, we fabricated a building block of a modular neuromorphic architecture, a neurosynaptic core. Our implementation consists of 256 integrate-and-fire neurons and a 1,024×256 SRAM crossbar memory for synapses that fits in 4.2mm2 using a 45nm SOI process and consumes just 45pJ per spike. The core is fully configurable in terms of neuron parameters, axon types, and synapse states and its fully digital implementation achieves one-to-one correspondence with software simulation models. One-to-one correspondence allows us to introduce an abstract neural programming model for our chip, a contract guaranteeing that any application developed in software functions identically in hardware. This contract allows us to rapidly test and map applications from control, machine vision, and classification. To demonstrate, we present four test cases (i) a robot driving in a virtual environment, (ii) the classic game of pong, (iii) visual digit recognition and (iv) an autoassociative memory.",
"title": ""
},
{
"docid": "bafdfa2ecaeb18890ab8207ef1bc4f82",
"text": "This content analytic study investigated the approaches of two mainstream newspapers—The New York Times and the Chicago Tribune—to cover the gay marriage issue. The study used the Massachusetts legitimization of gay marriage as a dividing point to look at what kinds of specific political or social topics related to gay marriage were highlighted in the news media. The study examined how news sources were framed in the coverage of gay marriage, based upon the newspapers’ perspectives and ideologies. The results indicated that The New York Times was inclined to emphasize the topic of human equality related to the legitimization of gay marriage. After the legitimization, The New York Times became an activist for gay marriage. Alternatively, the Chicago Tribune highlighted the importance of human morality associated with the gay marriage debate. The perspective of the Chicago Tribune was not dramatically influenced by the legitimization. It reported on gay marriage in terms of defending American traditions and family values both before and after the gay marriage legitimization. Published by Elsevier Inc on behalf of Western Social Science Association. Gay marriage has been a controversial issue in the United States, especially since the Massachusetts Supreme Judicial Court officially authorized it. Although the practice has been widely discussed for several years, the acceptance of gay marriage does not seem to be concordant with mainstream American values. This is in part because gay marriage challenges the traditional value of the family institution. In the United States, people’s perspectives of and attitudes toward gay marriage have been mostly polarized. Many people optimistically ∗ Corresponding author. E-mail addresses: [email protected], [email protected] (P.-L. Pan). 0362-3319/$ – see front matter. Published by Elsevier Inc on behalf of Western Social Science Association. doi:10.1016/j.soscij.2010.02.002 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 631 support gay legal rights and attempt to legalize it in as many states as possible, while others believe legalizing homosexuality may endanger American society and moral values. A number of forces and factors may expand this divergence between the two polarized perspectives, including family, religion and social influences. Mass media have a significant influence on socialization that cultivates individual’s belief about the world as well as affects individual’s values on social issues (Comstock & Paik, 1991). Moreover, news media outlets become a strong factor in influencing people’s perceptions of and attitudes toward gay men and lesbians because the news is one of the most powerful media to influence people’s attitudes toward gay marriage (Anderson, Fakhfakh, & Kondylis, 1999). Some mainstream newspapers are considered as media elites (Lichter, Rothman, & Lichter, 1986). Furthermore, numerous studies have demonstrated that mainstream newspapers would produce more powerful influences on people’s perceptions of public policies and political issues than television news (e.g., Brians & Wattenberg, 1996; Druckman, 2005; Eveland, Seo, & Marton, 2002) Gay marriage legitimization, a specific, divisive issue in the political and social dimensions, is concerned with several political and social issues that have raised fundamental questions about Constitutional amendments, equal rights, and American family values. 
The role of news media becomes relatively important while reporting these public debates over gay marriage, because not only do the news media affect people’s attitudes toward gays and lesbians by positively or negatively reporting the gay and lesbian issue, but also shape people’s perspectives of the same-sex marriage policy by framing the recognition of gay marriage in the news coverage. The purpose of this study is designed to examine how gay marriage news is described in the news coverage of The New York Times and the Chicago Tribune based upon their divisive ideological framings. 1. Literature review 1.1. Homosexual news coverage over time Until the 1940s, news media basically ignored the homosexual issue in the United States (Alwood, 1996; Bennett, 1998). According to Bennett (1998), of the 356 news stories about gays and lesbians that appeared in Time and Newsweek from 1947 to 1997, the Kinsey report on male sexuality published in 1948 was the first to draw reporters to the subject of homosexuality. From the 1940s to 1950s, the homosexual issue was reported as a social problem. Approximately 60% of the articles described homosexuals as a direct threat to the strength of the U.S. military, the security of the U.S. government, and the safety of ordinary Americans during this period. By the 1960s, the gay and lesbian issue began to be discussed openly in the news media. However, these portrayals were covered in the context of crime stories and brief items that ridiculed effeminate men or masculine women (Miller, 1991; Streitmatter, 1993). In 1963, a cover story, “Let’s Push Homophile Marriage,” was the first to treat gay marriage as a matter of winning legal recognition (Stewart-Winter, 2006). However, this cover story did not cause people to pay positive attention to gay marriage, but raised national debates between punishment and pity of homosexuals. Specifically speaking, although numerous arti632 P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 cles reported before the 1960s provided growing visibility for homosexuals, they were still highly critical of them (Bennett, 1998). In September 1967, the first hard-hitting gay newspaper—the Los Angeles Advocate—began publication. Different from other earlier gay and lesbian publications, its editorial mix consisted entirely of non-fiction materials, including news stories, editorials, and columns (Cruikshank, 1992; Streitmatter, 1993). The Advocate was the first gay publication to operate as an independent business financed entirely by advertising and circulation, rather than by subsidies from a membership organization (Streitmatter, 1995a, 1995b). After the Stonewall Rebellion in June 1969 in New York City ignited the modern phase of the gay and lesbian liberation movement, the number and circulation of the gay and lesbian press exploded (Streitmatter, 1998). Therefore, gay rights were discussed in the news media during the early 1970s. Homosexuals began to organize a series of political actions associated with gay rights, which was widely covered by the news media, while a backlash also appeared against the gay-rights movements, particularly among fundamentalist Christians (Alwood, 1996; Bennett, 1998). Later in the 1970s, the genre entered a less political phrase by exploring the dimensions of the developing culture of gay and lesbian. 
The news media plumbed the breadth and depth of topics ranging from the gay and lesbian sensibility in art and literature to sex, spirituality, personal appearance, dyke separatism, lesbian mothers, drag queen, leather men, and gay bathhouses (Streitmatter, 1995b). In the 1980s, the gay and lesbian issue confronted a most formidable enemy when AIDS/HIV, one of the most devastating diseases in the history of medicine, began killing gay men at an alarming rate. Accordingly, AIDS/HIV became the biggest gay story reported by the news media. Numerous news media outlets linked the AIDS/HIV epidemic with homosexuals, which implied the notion of the promiscuous gay and lesbian lifestyle. The gays and lesbians, therefore, were described as a dangerous minority in the news media during the 1980s (Altman, 1986; Cassidy, 2000). In the 1990s, issues about the growing visibility of gays and lesbians and their campaign for equal rights were frequently covered in the news media, primarily because of AIDS and the debate over whether the ban on gays in the military should be lifted. The increasing visibility of gay people resulted in the emergence of lifestyle magazines (Bennett, 1998; Streitmatter, 1998). The Out, a lifestyle magazine based in New York City but circulated nationally, led the new phase, since its upscale design and fashion helped attract mainstream advertisers. This magazine, which devalued news in favor of stories on entertainment and fashions, became the first gay and lesbian publication sold in mainstream bookstores and featured on the front page of The New York Times (Streitmatter, 1998). From the late 1990s to the first few years of the 2000s, homosexuals were described as a threat to children’s development as well as a danger to family values in the news media. The legitimacy of same-sex marriage began to be discussed, because news coverage dominated the issue of same-sex marriage more frequently than before (Bennett, 1998). According to Gibson (2004), The New York Times first announced in August 2002 that its Sunday Styles section would begin publishing reports of same-sex commitment ceremonies along with the traditional heterosexual wedding announcements. Moreover, many newspapers joined this trend. Gibson (2004) found that not only the national newspapers, such as The New York Times, but also other regional newspapers, such as the Houston Chronicle and the Seattle Times, reported surprisingly large P.-L. Pan et al. / The Social Science Journal 47 (2010) 630–645 633 number of news stories about the everyday lives of gays and lesbians, especially since the Massachusetts Supreme Judicial Court ruled in November 2003 that same-sex couples had the same right to marry as heterosexuals. Previous studies investigated the increased amount of news coverage of gay and lesbian issues in the past six decades, but they did not analyze how homosexuals are framed in the news media in terms of public debates on the gay marriage issue. These studies failed to examine how newspapers report this national debate on gay marriage as well as what kinds of news frames are used in reporting this controversial issue. 1.2. Framing gay and lesbian partnersh",
"title": ""
},
{
"docid": "fa3587a9f152db21ec7fe5e935ebf8ba",
"text": "Person re-identification has been usually solved as either the matching of single-image representation (SIR) or the classification of cross-image representation (CIR). In this work, we exploit the connection between these two categories of methods, and propose a joint learning frame-work to unify SIR and CIR using convolutional neural network (CNN). Specifically, our deep architecture contains one shared sub-network together with two sub-networks that extract the SIRs of given images and the CIRs of given image pairs, respectively. The SIR sub-network is required to be computed once for each image (in both the probe and gallery sets), and the depth of the CIR sub-network is required to be minimal to reduce computational burden. Therefore, the two types of representation can be jointly optimized for pursuing better matching accuracy with moderate computational cost. Furthermore, the representations learned with pairwise comparison and triplet comparison objectives can be combined to improve matching performance. Experiments on the CUHK03, CUHK01 and VIPeR datasets show that the proposed method can achieve favorable accuracy while compared with state-of-the-arts.",
"title": ""
},
{
"docid": "70038e828b49a4093f4375084c248fd6",
"text": "Use of reporter genes provides a convenient way to study the activity and regulation of promoters and examine the rate and control of gene transcription. Many reporter genes and transfection methods can be efficiently used for this purpose. To investigate gene regulation and signaling pathway interactions during ovarian follicle development, we have examined promoter activities of several key follicle-regulating genes in the mouse ovary. In this chapter, we describe use of luciferase and beta-galactosidase genes as reporters and a cationic liposome mediated cell transfection method for studying regulation of activin subunit- and estrogen receptor alpha (ERalpha)-promoter activities. We have demonstrated that estrogen suppresses activin subunit gene promoter activity while activin increases ERalpha promoter activity and increases functional ER activity, suggesting a reciprocal regulation between activin and estrogen signaling in the ovary. We also discuss more broadly some key considerations in the use of reporter genes and cell-based transfection assays in endocrine research.",
"title": ""
},
{
"docid": "56a490b515dc9be979a54f62db5d5bca",
"text": "We searched for quantitative trait loci (QTL) associated with the palm oil fatty acid composition of mature fruits of the oil palm E. guineensis Jacq. in comparison with its wild relative E. oleifera (H.B.K) Cortés. The oil palm cross LM2T x DA10D between two heterozygous parents was considered in our experiment as an intraspecific representative of E. guineensis. Its QTLs were compared to QTLs published for the same traits in an interspecific Elaeis pseudo-backcross used as an indirect representative of E. oleifera. Few correlations were found in E. guineensis between pulp fatty acid proportions and yield traits, allowing for the rather independent selection of both types of traits. Sixteen QTLs affecting palm oil fatty acid proportions and iodine value were identified in oil palm. The phenotypic variation explained by the detected QTLs was low to medium in E. guineensis, ranging between 10% and 36%. The explained cumulative variation was 29% for palmitic acid C16:0 (one QTL), 68% for stearic acid C18:0 (two QTLs), 50% for oleic acid C18:1 (three QTLs), 25% for linoleic acid C18:2 (one QTL), and 40% (two QTLs) for the iodine value. Good marker co-linearity was observed between the intraspecific and interspecific Simple Sequence Repeat (SSR) linkage maps. Specific QTL regions for several traits were found in each mapping population. Our comparative QTL results in both E. guineensis and interspecific materials strongly suggest that, apart from two common QTL zones, there are two specific QTL regions with major effects, which might be one in E. guineensis, the other in E. oleifera, which are independent of each other and harbor QTLs for several traits, indicating either pleiotropic effects or linkage. Using QTL maps connected by highly transferable SSR markers, our study established a good basis to decipher in the future such hypothesis at the Elaeis genus level.",
"title": ""
},
{
"docid": "1ce09062b1ced2cd643c04f7c075c4f1",
"text": "We propose a new approach to the task of fine grained entity type classifications based on label embeddings that allows for information sharing among related labels. Specifically, we learn an embedding for each label and each feature such that labels which frequently co-occur are close in the embedded space. We show that it outperforms state-of-the-art methods on two fine grained entity-classification benchmarks and that the model can exploit the finer-grained labels to improve classification of standard coarse types.",
"title": ""
},
{
"docid": "cba787c228bba0a0b94faa52e94ec3dc",
"text": "Purpose. The aim of the present prospective study was to investigate correlations between 3D facial soft tissue scan and lateral cephalometric radiography measurements. Materials and Methods. The study sample comprised 312 subjects of Caucasian ethnic origin. Exclusion criteria were all the craniofacial anomalies, noticeable asymmetries, and previous or current orthodontic treatment. A cephalometric analysis was developed employing 11 soft tissue landmarks and 14 sagittal and 14 vertical angular measurements corresponding to skeletal cephalometric variables. Cephalometric analyses on lateral cephalometric radiographies were performed for all subjects. The measurements were analysed in terms of their reliability and gender-age specific differences. Then, the soft tissue values were analysed for any correlations with lateral cephalometric radiography variables using Pearson correlation coefficient analysis. Results. Low, medium, and high correlations were found for sagittal and vertical measurements. Sagittal measurements seemed to be more reliable in providing a soft tissue diagnosis than vertical measurements. Conclusions. Sagittal parameters seemed to be more reliable in providing a soft tissue diagnosis similar to lateral cephalometric radiography. Vertical soft tissue measurements meanwhile showed a little less correlation with the corresponding cephalometric values perhaps due to the low reproducibility of cranial base and mandibular landmarks.",
"title": ""
},
{
"docid": "f5519eff0c13e0ee42245fdf2627b8ae",
"text": "An efficient vehicle tracking system is designed and implemented for tracking the movement of any equipped vehicle from any location at any time. The proposed system made good use of a popular technology that combines a Smartphone application with a microcontroller. This will be easy to make and inexpensive compared to others. The designed in-vehicle device works using Global Positioning System (GPS) and Global system for mobile communication / General Packet Radio Service (GSM/GPRS) technology that is one of the most common ways for vehicle tracking. The device is embedded inside a vehicle whose position is to be determined and tracked in real-time. A microcontroller is used to control the GPS and GSM/GPRS modules. The vehicle tracking system uses the GPS module to get geographic coordinates at regular time intervals. The GSM/GPRS module is used to transmit and update the vehicle location to a database. A Smartphone application is also developed for continuously monitoring the vehicle location. The Google Maps API is used to display the vehicle on the map in the Smartphone application. Thus, users will be able to continuously monitor a moving vehicle on demand using the Smartphone application and determine the estimated distance and time for the vehicle to arrive at a given destination. In order to show the feasibility and effectiveness of the system, this paper presents experimental results of the vehicle tracking system and some experiences on practical implementations.",
"title": ""
},
{
"docid": "95ff1a86eedad42b0d869cca0d7d6e33",
"text": "360° videos give viewers a spherical view and immersive experience of surroundings. However, one challenge of watching 360° videos is continuously focusing and re-focusing intended targets. To address this challenge, we developed two Focus Assistance techniques: Auto Pilot (directly bringing viewers to the target), and Visual Guidance (indicating the direction of the target). We conducted an experiment to measure viewers' video-watching experience and discomfort using these techniques and obtained their qualitative feedback. We showed that: 1) Focus Assistance improved ease of focus. 2) Focus Assistance techniques have specificity to video content. 3) Participants' preference of and experience with Focus Assistance depended not only on individual difference but also on their goal of watching the video. 4) Factors such as view-moving-distance, salience of the intended target and guidance, and language comprehension affected participants' video-watching experience. Based on these findings, we provide design implications for better 360° video focus assistance.",
"title": ""
}
] | scidocsrr |
f0949771ac6d5ad74ddfdb859ab79076 | Evaluating Student Satisfaction with Blended Learning in a Gender-Segregated Environment | [
{
"docid": "6ccfe86f2a07dc01f87907855f6cb337",
"text": "H istorically, retention of distance learners has been problematic with dropout rates disproportionably high compared to traditional course settings (Richards & Ridley, 1997; Wetzel, Radtke, & Stern, 1994). Dropout rates of 30 to 50% have been common (Moore & Kearsley, 1996). Students may experience feelings of isolation in distance courses compared to prior faceto-face educational experiences (Shaw & Polovina, 1999). If the distance courses feature limited contact with instructors and fellow students, the result of this isolation can be unfinished courses or degrees (Keegan, 1990). Student satisfaction in traditional learning environments has been overlooked in the past (Astin, 1993; DeBourgh, 1999; Navarro & Shoemaker, 2000). Student satisfaction has also not been given the proper attention in distance learning environments (Biner, Dean, & Mellinger, 1994). Richards and Ridley (1997) suggested further research is necessary to study factors affecting student enrollment and satisfaction. Prior studies in classroom-based courses have shown there is a high correlation between student satisfaction and retention (Astin, 1993; Edwards & Waters, 1982). This high correlation has also been found in studies in which distance learners were the target population (Bailey, Bauman, & Lata, 1998). The purpose of this study was to identify factors influencing student satisfaction in online courses, and to create and validate an instrument to measure student satisfaction in online courses.",
"title": ""
}
] | [
{
"docid": "8850b66d131088dbf99430d2c76f5bca",
"text": "The richness of visual details in most computer graphics images nowadays is largely due to the extensive use of texture mapping techniques. Texture mapping is the main tool in computer graphics to integrate a given shape to a given pattern. Despite its power it has problems and limitations. Current solutions cannot handle complex shapes properly. The de nition of the mapping function and problems like distortions can turn the process into a very cumbersome one for the application programmer and consequently for the nal user. An associated problem is the synthesis of patterns which are used as texture. The available options are usually limited to scanning in real pictures. This document is a PhD proposal to investigate techniques to integrate complex shapes and patterns which will not only overcome problems usually associated with texture mapping but also give us more control and make less ad hoc the task of combining shape and pattern. We break the problem into three parts: modeling of patterns, modeling of shape and integration. The integration step will use common information to drive both the modeling of patterns and shape in an integrated manner. Our approach is inspired by observations on how these processes happen in real life, where there is no pattern without a shape associated with it. The proposed solutions will hopefully extent the generality, applicability and exibility of existing integration methods in computer graphics. iii Table of",
"title": ""
},
{
"docid": "b0ea2ca170a8d0bcf4bd5dc8311c6201",
"text": "A cascade of sigma-delta modulator stages that employ a feedforward architecture to reduce the signal ranges required at the integrator inputs and outputs has been used to implement a broadband, high-resolution oversampling CMOS analog-to-digital converter capable of operating from low-supply voltages. An experimental prototype of the proposed architecture has been integrated in a 0.25-/spl mu/m CMOS technology and operates from an analog supply of only 1.2 V. At a sampling rate of 40 MSamples/sec, it achieves a dynamic range of 96 dB for a 1.25-MHz signal bandwidth. The analog power dissipation is 44 mW.",
"title": ""
},
{
"docid": "2e812c0a44832721fcbd7272f9f6a465",
"text": "Previous research has shown that people differ in their implicit theories about the essential characteristics of intelligence and emotions. Some people believe these characteristics to be predetermined and immutable (entity theorists), whereas others believe that these characteristics can be changed through learning and behavior training (incremental theorists). The present study provides evidence that in healthy adults (N = 688), implicit beliefs about emotions and emotional intelligence (EI) may influence performance on the ability-based Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Adults in our sample with incremental theories about emotions and EI scored higher on the MSCEIT than entity theorists, with implicit theories about EI showing a stronger relationship to scores than theories about emotions. Although our participants perceived both emotion and EI as malleable, they viewed emotions as more malleable than EI. Women and young adults in general were more likely to be incremental theorists than men and older adults. Furthermore, we found that emotion and EI theories mediated the relationship of gender and age with ability EI. Our findings suggest that people's implicit theories about EI may influence their emotional abilities, which may have important consequences for personal and professional EI training.",
"title": ""
},
{
"docid": "2cb1c713b8e75e7f2e38be90c1b5a9e6",
"text": "Frequent action video game players often outperform non-gamers on measures of perception and cognition, and some studies find that video game practice enhances those abilities. The possibility that video game training transfers broadly to other aspects of cognition is exciting because training on one task rarely improves performance on others. At first glance, the cumulative evidence suggests a strong relationship between gaming experience and other cognitive abilities, but methodological shortcomings call that conclusion into question. We discuss these pitfalls, identify how existing studies succeed or fail in overcoming them, and provide guidelines for more definitive tests of the effects of gaming on cognition.",
"title": ""
},
{
"docid": "2a34800bc275f062f820c0eb4597d297",
"text": "Construction sites are dynamic and complicated systems. The movement and interaction of people, goods and energy make construction safety management extremely difficult. Due to the ever-increasing amount of information, traditional construction safety management has operated under difficult circumstances. As an effective way to collect, identify and process information, sensor-based technology is deemed to provide new generation of methods for advancing construction safety management. It makes the real-time construction safety management with high efficiency and accuracy a reality and provides a solid foundation for facilitating its modernization, and informatization. Nowadays, various sensor-based technologies have been adopted for construction safety management, including locating sensor-based technology, vision-based sensing and wireless sensor networks. This paper provides a systematic and comprehensive review of previous studies in this field to acknowledge useful findings, identify the research gaps and point out future research directions.",
"title": ""
},
{
"docid": "6974bf94292b51fc4efd699c28c90003",
"text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.",
"title": ""
},
{
"docid": "187595fb12a5ca3bd665ffbbc9f47465",
"text": "In order to acquire a lexicon, young children must segment speech into words, even though most words are unfamiliar to them. This is a non-trivial task because speech lacks any acoustic analog of the blank spaces between printed words. Two sources of information that might be useful for this task are distributional regularity and phonotactic constraints. Informally, distributional regularity refers to the intuition that sound sequences that occur frequently and in a variety of contexts are better candidates for the lexicon than those that occur rarely or in few contexts. We express that intuition formally by a class of functions called DR functions. We then put forth three hypotheses: First, that children segment using DR functions. Second, that they exploit phonotactic constraints on the possible pronunciations of words in their language. Specifically, they exploit both the requirement that every word must have a vowel and the constraints that languages impose on word-initial and word-final consonant clusters. Third, that children learn which word-boundary clusters are permitted in their language by assuming that all permissible word-boundary clusters will eventually occur at utterance boundaries. Using computational simulation, we investigate the effectiveness of these strategies for segmenting broad phonetic transcripts of child-directed English. The results show that DR functions and phonotactic constraints can be used to significantly improve segmentation. Further, the contributions of DR functions and phonotactic constraints are largely independent, so using both yields better segmentation than using either one alone. Finally, learning the permissible word-boundary clusters from utterance boundaries does not degrade segmentation performance.",
"title": ""
},
{
"docid": "a74ccbf1f9280806a3f21f7ce468a4c7",
"text": "The professional norms of good journalism include in particular the following: truthfulness, objectivity, neutrality and detachment. For Public Relations these norms are at best irrelevant. The only thing that matters is success. And this success is measured in terms ofachieving specific communication aims which are \"externally defined by a client, host organization or particular groups ofstakeholders\" (Hanitzsch, 2007, p. 2). Typical aims are, e.g., to convince the public of the attractiveness of a product, of the justice of one's own political goals or also of the wrongfulness of a political opponent.",
"title": ""
},
{
"docid": "c023633ca0fe1cfc78b1d579d1ae157b",
"text": "A model is proposed that specifies the conditions under which individuals will become internally motivated to perform effectively on their jobs. The model focuses on the interaction among three classes of variables: (a) the psychological states of employees that must be present for internally motivated work behavior to develop; (b) the characteristics of jobs that can create these psychological states; and (c) the attributes of individuals that determine how positively a person will respond to a complex and challenging job. The model was tested for 658 employees who work on 62 different jobs in seven organizations, and results support its validity. A number of special features of the model are discussed (including its use as a basis for the diagnosis of jobs and the evaluation of job redesign projects), and the model is compared to other theories of job design.",
"title": ""
},
{
"docid": "69e0179971396fcaf09c9507735a8d5b",
"text": "In this paper, we describe a statistical approach to both an articulatory-to-acoustic mapping and an acoustic-to-articulatory inversion mapping without using phonetic information. The joint probability density of an articulatory parameter and an acoustic parameter is modeled using a Gaussian mixture model (GMM) based on a parallel acoustic-articulatory speech database. We apply the GMM-based mapping using the minimum mean-square error (MMSE) criterion, which has been proposed for voice conversion, to the two mappings. Moreover, to improve the mapping performance, we apply maximum likelihood estimation (MLE) to the GMM-based mapping method. The determination of a target parameter trajectory having appropriate static and dynamic properties is obtained by imposing an explicit relationship between static and dynamic features in the MLE-based mapping. Experimental results demonstrate that the MLE-based mapping with dynamic features can significantly improve the mapping performance compared with the MMSE-based mapping in both the articulatory-to-acoustic mapping and the inversion mapping.",
"title": ""
},
{
"docid": "8e44d0e60c6460a07d66ba9a90741b86",
"text": "Although graph embedding has been a powerful tool for modeling data intrinsic structures, simply employing all features for data structure discovery may result in noise amplification. This is particularly severe for high dimensional data with small samples. To meet this challenge, this paper proposes a novel efficient framework to perform feature selection for graph embedding, in which a category of graph embedding methods is cast as a least squares regression problem. In this framework, a binary feature selector is introduced to naturally handle the feature cardinality in the least squares formulation. The resultant integral programming problem is then relaxed into a convex Quadratically Constrained Quadratic Program (QCQP) learning problem, which can be efficiently solved via a sequence of accelerated proximal gradient (APG) methods. Since each APG optimization is w.r.t. only a subset of features, the proposed method is fast and memory efficient. The proposed framework is applied to several graph embedding learning problems, including supervised, unsupervised, and semi-supervised graph embedding. Experimental results on several high dimensional data demonstrated that the proposed method outperformed the considered state-of-the-art methods.",
"title": ""
},
{
"docid": "a49abd0b1c03e39c83d9809fc344ba93",
"text": "Controller Area Network (CAN) is the leading serial bus system for embedded control. More than two billion CAN nodes have been sold since the protocol's development in the early 1980s. CAN is a mainstream network and was internationally standardized (ISO 11898–1) in 1993. This paper describes an approach to implementing security services on top of a higher level Controller Area Network (CAN) protocol, in particular, CANopen. Since the CAN network is an open, unsecured network, every node has access to all data on the bus. A system which produces and consumes sensitive data is not well suited for this environment. Therefore, a general-purpose security solution is needed which will allow secure nodes access to the basic security services such as authentication, integrity, and confidentiality.",
"title": ""
},
{
"docid": "18b7dadfec8b02624b6adeb2a65d7223",
"text": "This paper provides a brief introduction to recent work in st atistical parsing and its applications. We highlight succes ses to date, remaining challenges, and promising future work.",
"title": ""
},
{
"docid": "570eca9884edb7e4a03ed95763be20aa",
"text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.",
"title": ""
},
{
"docid": "1bdb24fb4c85b3aaf8a8e5d71328a920",
"text": "BACKGROUND\nHigh-grade intraepithelial neoplasia is known to progress to invasive squamous-cell carcinoma of the anus. There are limited reports on the rate of progression from high-grade intraepithelial neoplasia to anal cancer in HIV-positive men who have sex with men.\n\n\nOBJECTIVES\nThe purpose of this study was to describe in HIV-positive men who have sex with men with perianal high-grade intraepithelial neoplasia the rate of progression to anal cancer and the factors associated with that progression.\n\n\nDESIGN\nThis was a prospective cohort study.\n\n\nSETTINGS\nThe study was conducted at an outpatient clinic at a tertiary care center in Toronto.\n\n\nPATIENTS\nThirty-eight patients with perianal high-grade anal intraepithelial neoplasia were identified among 550 HIV-positive men who have sex with men.\n\n\nINTERVENTION\nAll of the patients had high-resolution anoscopy for symptoms, screening, or surveillance with follow-up monitoring/treatment.\n\n\nMAIN OUTCOME MEASURES\nWe measured the incidence of anal cancer per 100 person-years of follow-up.\n\n\nRESULTS\nSeven (of 38) patients (18.4%) with perianal high-grade intraepithelial neoplasia developed anal cancer. The rate of progression was 6.9 (95% CI, 2.8-14.2) cases of anal cancer per 100 person-years of follow-up. A diagnosis of AIDS, previously treated anal cancer, and loss of integrity of the lesion were associated with progression. Anal bleeding was more than twice as common in patients who progressed to anal cancer.\n\n\nLIMITATIONS\nThere was the potential for selection bias and patients were offered treatment, which may have affected incidence estimates.\n\n\nCONCLUSIONS\nHIV-positive men who have sex with men should be monitored for perianal high-grade intraepithelial neoplasia. Those with high-risk features for the development of anal cancer may need more aggressive therapy.",
"title": ""
},
{
"docid": "2ccae5b48fc5ac10f948b79fc4fb6ff3",
"text": "Hierarchical attention networks have recently achieved remarkable performance for document classification in a given language. However, when multilingual document collections are considered, training such models separately for each language entails linear parameter growth and lack of cross-language transfer. Learning a single multilingual model with fewer parameters is therefore a challenging but potentially beneficial objective. To this end, we propose multilingual hierarchical attention networks for learning document structures, with shared encoders and/or shared attention mechanisms across languages, using multi-task learning and an aligned semantic space as input. We evaluate the proposed models on multilingual document classification with disjoint label sets, on a large dataset which we provide, with 600k news documents in 8 languages, and 5k labels. The multilingual models outperform monolingual ones in low-resource as well as full-resource settings, and use fewer parameters, thus confirming their computational efficiency and the utility of cross-language transfer.",
"title": ""
},
{
"docid": "56f7c98c85eeb519f80966db3ac26dc6",
"text": "Automatic analysis of human facial expression is a challenging problem with many applications. Most of the existing automated systems for facial expression analysis attempt to recognize a few prototypic emotional expressions, such as anger and happiness. Instead of representing another approach to machine analysis of prototypic facial expressions of emotion, the method presented in this paper attempts to handle a large range of human facial behavior by recognizing facial muscle actions that produce expressions. Virtually all of the existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle temporal dynamics of facial actions. In this paper, we present a system for automatic recognition of facial action units (AUs) and their temporal models from long, profile-view face image sequences. We exploit particle filtering to track 15 facial points in an input face-profile sequence, and we introduce facial-action-dynamics recognition from continuous video input using temporal rules. The algorithm performs both automatic segmentation of an input video into facial expressions pictured and recognition of temporal segments (i.e., onset, apex, offset) of 27 AUs occurring alone or in a combination in the input face-profile video. A recognition rate of 87% is achieved.",
"title": ""
},
{
"docid": "e9f6b48b367b4a182ce7fb42cbb59b79",
"text": "We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder for shape generation, aimed at improving the visual quality of the generated shapes. An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface. Our implicit field decoder is trained to perform this assignment by means of a binary classifier. Specifically, it takes a point coordinate, along with a feature vector encoding a shape, and outputs a value which indicates whether the point is outside the shape or not. By replacing conventional decoders by our decoder for representation learning and generative modeling of shapes, we demonstrate superior results for tasks such as shape autoencoding, generation, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.",
"title": ""
},
{
"docid": "850a7daa56011e6c53b5f2f3e33d4c49",
"text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.",
"title": ""
},
{
"docid": "83041d4927d8bff8acd2524441dbd227",
"text": "In this paper, we introduce a novel stereo-monocular fusion approach to on-road localization and tracking of vehicles. Utilizing a calibrated stereo-vision rig, the proposed approach combines monocular detection with stereo-vision for on-road vehicle localization and tracking for driver assistance. The system initially acquires synchronized monocular frames and calculates depth maps from the stereo rig. The system then detects vehicles in the image plane using an active learning-based monocular vision approach. Using the image coordinates of detected vehicles, the system then localizes the vehicles in real-world coordinates using the calculated depth map. The vehicles are tracked both in the image plane, and in real-world coordinates, fusing information from both the monocular and stereo modalities. Vehicles' states are estimated and tracked using Kalman filtering. Quantitative analysis of tracks is provided. The full system takes 46ms to process a single frame.",
"title": ""
}
] | scidocsrr |
a72d8fd6002882fe7456554484225884 | Fine-Grained Control-Flow Integrity Through Binary Hardening | [
{
"docid": "094f784bfb5ad7cfeb52891242dfc38b",
"text": "Code diversification has been proposed as a technique to mitigate code reuse attacks, which have recently become the predominant way for attackers to exploit memory corruption vulnerabilities. As code reuse attacks require detailed knowledge of where code is in memory, diversification techniques attempt to mitigate these attacks by randomizing what instructions are executed and where code is located in memory. As an attacker cannot read the diversified code, it is assumed he cannot reliably exploit the code.\n In this paper, we show that the fundamental assumption behind code diversity can be broken, as executing the code reveals information about the code. Thus, we can leak information without needing to read the code. We demonstrate how an attacker can utilize a memory corruption vulnerability to create side channels that leak information in novel ways, removing the need for a memory disclosure vulnerability. We introduce seven new classes of attacks that involve fault analysis and timing side channels, where each allows a remote attacker to learn how code has been diversified.",
"title": ""
},
{
"docid": "82e6533bf92395a008a024e880ef61b1",
"text": "A new binary software randomization and ControlFlow Integrity (CFI) enforcement system is presented, which is the first to efficiently resist code-reuse attacks launched by informed adversaries who possess full knowledge of the inmemory code layout of victim programs. The defense mitigates a recent wave of implementation disclosure attacks, by which adversaries can exfiltrate in-memory code details in order to prepare code-reuse attacks (e.g., Return-Oriented Programming (ROP) attacks) that bypass fine-grained randomization defenses. Such implementation-aware attacks defeat traditional fine-grained randomization by undermining its assumption that the randomized locations of abusable code gadgets remain secret. Opaque CFI (O-CFI) overcomes this weakness through a novel combination of fine-grained code-randomization and coarsegrained control-flow integrity checking. It conceals the graph of hijackable control-flow edges even from attackers who can view the complete stack, heap, and binary code of the victim process. For maximal efficiency, the integrity checks are implemented using instructions that will soon be hardware-accelerated on commodity x86-x64 processors. The approach is highly practical since it does not require a modified compiler and can protect legacy binaries without access to source code. Experiments using our fully functional prototype implementation show that O-CFI provides significant probabilistic protection against ROP attacks launched by adversaries with complete code layout knowledge, and exhibits only 4.7% mean performance overhead on current hardware (with further overhead reductions to follow on forthcoming Intel processors). I. MOTIVATION Code-reuse attacks (cf., [5]) have become a mainstay of software exploitation over the past several years, due to the rise of data execution protections that nullify traditional codeinjection attacks. Rather than injecting malicious payload code directly onto the stack or heap, where modern data execution protections block it from being executed, attackers now ingeniously inject addresses of existing in-memory code fragments (gadgets) onto victim stacks, causing the victim process to execute its own binary code in an unanticipated order [38]. With a sufficiently large victim code section, the pool of exploitable gadgets becomes arbitrarily expressive (e.g., Turing-complete) [20], facilitating the construction of arbitrary attack payloads without the need for code-injection. Such payload construction has even been automated [34]. As a result, code-reuse has largely replaced code-injection as one of the top software security threats. Permission to freely reproduce all or part of this paper for noncommercial purposes is granted provided that copies bear this notice and the full citation on the first page. Reproduction for commercial purposes is strictly prohibited without the prior written consent of the Internet Society, the first-named author (for reproduction of an entire paper only), and the author’s employer if the paper was prepared within the scope of employment. NDSS ’15, 8–11 February 2015, San Diego, CA, USA Copyright 2015 Internet Society, ISBN 1-891562-38-X http://dx.doi.org/10.14722/ndss.2015.23271 This has motivated copious work on defenses against codereuse threats. Prior defenses can generally be categorized into: CFI [1] and artificial software diversity [8]. CFI restricts all of a program’s runtime control-flows to a graph of whitelisted control-flow edges. 
Usually the graph is derived from the semantics of the program source code or a conservative disassembly of its binary code. As a result, CFIprotected programs reject control-flow hijacks that attempt to traverse edges not supported by the original program’s semantics. Fine-grained CFI monitors indirect control-flows precisely; for example, function callees must return to their exact callers. Although such precision provides the highest security, it also tends to incur high performance overheads (e.g., 21% for precise caller-callee return-matching [1]). Because this overhead is often too high for industry adoption, researchers have proposed many optimized, coarser-grained variants of CFI. Coarse-grained CFI trades some security for better performance by reducing the precision of the checks. For example, functions must return to valid call sites (but not necessarily to the particular site that invoked the callee). Unfortunately, such relaxations have proved dangerous—a number of recent proof-of-concept exploits have shown how even minor relaxations of the control-flow policy can be exploited to effect attacks [6, 11, 18, 19]. Table I summarizes the impact of several of these recent exploits. Artificial software diversity offers a different but complementary approach that randomizes programs in such a way that attacks succeeding against one program instance have a very low probability of success against other (independently randomized) instances of the same program. Probabilistic defenses rely on memory secrecy—i.e., the effects of randomization must remain hidden from attackers. One of the simplest and most widely adopted forms of artificial diversity is Address Space Layout Randomization (ASLR), which randomizes the base addresses of program segments at loadtime. Unfortunately, merely randomizing the base addresses does not yield sufficient entropy to preserve memory secrecy in many cases; there are numerous successful derandomization attacks against ASLR [13, 26, 36, 37, 39, 42]. Finer-grained diversity techniques obtain exponentially higher entropy by randomizing the relative distances between all code points. For example, binary-level Self-Transforming Instruction Relocation (STIR) [45] and compilers with randomized code-generation (e.g., [22]) have both realized fine-grained artificial diversity for production-level software at very low overheads. Recently, a new wave of implementation disclosure attacks [4, 10, 35, 40] have threatened to undermine fine-grained artificial diversity defenses. Implementation disclosure attacks exploit information leak vulnerabilities to read memory pages of victim processes at the discretion of the attacker. By reading the TABLE I. OVERVIEW OF CONTROL-FLOW INTEGRITY BYPASSES CFI [1] bin-CFI [50] CCFIR [49] kBouncer [33] ROPecker [7] ROPGuard [16] EMET [30] DeMott [12] Feb 2014 / Göktaş et al. [18] May 2014 / / / Davi et al. [11] Aug 2014 / / / / / Göktaş et al. [19] Aug 2014 / / Carlini and Wagner [6] Aug 2014 / / in-memory code sections, attackers violate the memory secrecy assumptions of artificial diversity, rendering their defenses ineffective. Since finding and closing all information leaks is well known to be prohibitively difficult and often intractable for many large software products, these attacks constitute a very dangerous development in the cyber-threat landscape; there is currently no well-established, practical defense. 
This paper presents Opaque CFI (O-CFI): a new approach to coarse-grained CFI that strengthens fine-grained artificial diversity to withstand implementation disclosure attacks. The heart of O-CFI is a new form of control-flow check that conceals the graph of abusable control-flow edges even from attackers who have complete read-access to the randomized binary code, the stack, and the heap of victim processes. Such access only affords attackers knowledge of the intended (and therefore nonabusable) edges of the control-flow graph, not the edges left unprotected by the coarse-grained CFI implementation. Artificial diversification is employed to vary the set of unprotected edges between program instances, maintaining the probabilistic guarantees of fine-grained diversity. Experiments show that O-CFI enjoys performance overheads comparable to standard fine-grained diversity and non-opaque, coarse-grained CFI. Moreover, O-CFI’s control-flow checking logic is implemented using Intel x86/x64 memory-protection extensions (MPX) that are expected to be hardware-accelerated in commodity CPUs from 2015 onwards. We therefore expect even better performance for O-CFI in the near future. Our contributions are as follows: • We introduce O-CFI, the first low-overhead code-reuse defense that tolerates implementation disclosures. • We describe our implementation of a fully functional prototype that protects stripped, x86 legacy binaries without source code. • Analysis shows that O-CFI provides quantifiable security against state-of-the-art exploits—including JITROP [40] and Blind-ROP [4]. • Performance evaluation yields competitive overheads of just 4.7% for computation-intensive programs. II. THREAT MODEL Our work is motivated by the emergence of attacks against fine-grained diversity and coarse-grained control-flow integrity. We therefore introduce these attacks and distill them into a single, unified threat model. A. Bypassing Coarse-Grained CFI Ideally, CFI permits only programmer-intended control-flow transfers during a program’s execution. The typical approach is to assign a unique ID to each permissible indirect controlflow target, and check the IDs at runtime. Unfortunately, this introduces performance overhead proportional to the degree of the graph—the more overlaps between valid target sets of indirect branch instructions, the more IDs must be stored and checked at each branch. Moreover, perfect CFI cannot be realized with a purely static control-flow graph; for example, the permissible destinations of function returns depend on the calling context, which is only known at runtime. Fine-grained CFI therefore implements a dynamically computed shadow stack, incurring high overheads [1]. To avoid this, coarse-grained CFI implementations resort to a reduced-degree, static approximation of the control-flow graph, and merge identifiers at the cost of reduced security. For example, bin-CFI [49] and CCFIR [50] use at most three IDs per branch, and omit shadow stacks. Recent work has demonstrated that these optimizations open exploitable",
"title": ""
}
] | [
{
"docid": "4fc356024295824f6c68360bf2fcb860",
"text": "Detecting depression is a key public health challenge, as almost 12% of all disabilities can be attributed to depression. Computational models for depression detection must prove not only that can they detect depression, but that they can do it early enough for an intervention to be plausible. However, current evaluations of depression detection are poor at measuring model latency. We identify several issues with the currently popular ERDE metric, and propose a latency-weighted F1 metric that addresses these concerns. We then apply this evaluation to several models from the recent eRisk 2017 shared task on depression detection, and show how our proposed measure can better capture system differences.",
"title": ""
},
{
"docid": "d1cde8ce9934723224ecf21c3cab6615",
"text": "Deep Neural Networks (DNNs) denote multilayer artificial neural networks with more than one hidden layer and millions of free parameters. We propose a Generalized Discriminant Analysis (GerDA) based on DNNs to learn discriminative features of low dimension optimized with respect to a fast classification from a large set of acoustic features for emotion recognition. On nine frequently used emotional speech corpora, we compare the performance of GerDA features and their subsequent linear classification with previously reported benchmarks obtained using the same set of acoustic features classified by Support Vector Machines (SVMs). Our results impressively show that low-dimensional GerDA features capture hidden information from the acoustic features leading to a significantly raised unweighted average recall and considerably raised weighted average recall.",
"title": ""
},
{
"docid": "659b1c167f0778c825788710237da569",
"text": "Voice conversion methods based on frequency warping followed by amplitude scaling have been recently proposed. These methods modify the frequency axis of the source spectrum in such manner that some significant parts of it, usually the formants, are moved towards their image in the target speaker's spectrum. Amplitude scaling is then applied to compensate for the differences between warped source spectra and target spectra. This article presents a fully parametric formulation of a frequency warping plus amplitude scaling method in which bilinear frequency warping functions are used. Introducing this constraint allows for the conversion error to be described in the cepstral domain and to minimize it with respect to the parameters of the transformation through an iterative algorithm, even when multiple overlapping conversion classes are considered. The paper explores the advantages and limitations of this approach when applied to a cepstral representation of speech. We show that it achieves significant improvements in quality with respect to traditional methods based on Gaussian mixture models, with no loss in average conversion accuracy. Despite its relative simplicity, it achieves similar performance scores to state-of-the-art statistical methods involving dynamic features and global variance.",
"title": ""
},
{
"docid": "40495cc96353f56481ed30f7f5709756",
"text": "This paper reported the construction of partial discharge measurement system under influence of cylindrical metal particle in transformer oil. The partial discharge of free cylindrical metal particle in the uniform electric field under AC applied voltage was studied in this paper. The partial discharge inception voltage (PDIV) for the single particle was measure to be 11kV. The typical waveform of positive PD and negative PD was also obtained. The result shows that the magnitude of negative PD is higher compared to positive PD. The observation on cylindrical metal particle movement revealed that there were a few stages of motion process involved.",
"title": ""
},
{
"docid": "06f4ec7c6425164ee7fc38a8b26b8437",
"text": "In this paper we present a decomposition strategy for solving large scheduling problems using mathematical programming methods. Instead of formulating one huge and unsolvable MILP problem, we propose a decomposition scheme that generates smaller programs that can often be solved to global optimality. The original problem is split into subproblems in a natural way using the special features of steel making and avoiding the need for expressing the highly complex rules as explicit constraints. We present a small illustrative example problem, and several real-world problems to demonstrate the capabilities of the proposed strategy, and the fact that the solutions typically lie within 1-3% of the global optimum.",
"title": ""
},
{
"docid": "1348ee3316643f4269311b602b71d499",
"text": "This paper describes our proposed solution for SemEval 2017 Task 1: Semantic Textual Similarity (Daniel Cer and Specia, 2017). The task aims at measuring the degree of equivalence between sentences given in English. Performance is evaluated by computing Pearson Correlation scores between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. The two subsystems are designed to learn Paraphrase and Event Embeddings that can take the consideration of paraphrasing characteristics and sentence structures into our system. The regression model associates these embeddings to make the final predictions. The experimental result shows that our system acquires 0.8 of Pearson Correlation Scores in this task.",
"title": ""
},
{
"docid": "7aefad1e65b946a3149897c65b9c3fad",
"text": "A touch-less interaction technology on vision based wearable device is designed and evaluated. Users interact with the application with dynamic hands/feet gestures in front of the camera. Several proof-of-concept prototypes with eleven dynamic gestures are developed based on the touch-less interaction. At last, a comparing user study evaluation is proposed to demonstrate the usability of the touch-less approach, as well as the impact on user's emotion, running on a wearable framework or Google Glass.",
"title": ""
},
{
"docid": "dacb4491a0cf1e05a2972cc1a82a6c62",
"text": "Human parechovirus type 3 (HPeV3) can cause serious conditions in neonates, such as sepsis and encephalitis, but data for adults are lacking. The case of a pregnant woman with HPeV3 infection is reported herein. A 28-year-old woman at 36 weeks of pregnancy was admitted because of myalgia and muscle weakness. Her grip strength was 6.0kg for her right hand and 2.5kg for her left hand. The patient's symptoms, probably due to fasciitis and not myositis, improved gradually with conservative treatment, however labor pains with genital bleeding developed unexpectedly 3 days after admission. An obstetric consultation was obtained and a cesarean section was performed, with no complications. A real-time PCR assay for the detection of viral genomic ribonucleic acid against HPeV showed positive results for pharyngeal swabs, feces, and blood, and negative results for the placenta, umbilical cord, umbilical cord blood, amniotic fluid, and breast milk. The HPeV3 was genotyped by sequencing of the VP1 region. The woman made a full recovery and was discharged with her infant in a stable condition.",
"title": ""
},
{
"docid": "715de052c6a603e3c8a572531920ecfa",
"text": "Muscle samples were obtained from the gastrocnemius of 17 female and 23 male track athletes, 10 untrained women, and 11 untrained men. Portions of the specimen were analyzed for total phosphorylase, lactic dehydrogenase (LDH), and succinate dehydrogenase (SDH) activities. Sections of the muscle were stained for myosin adenosine triphosphatase, NADH2 tetrazolium reductase, and alpha-glycerophosphate dehydrogenase. Maximal oxygen uptake (VO2max) was measured on a treadmill for 23 of the volunteers (6 female athletes, 11 male athletes, 10 untrained women, and 6 untrained men). These measurements confirm earlier reports which suggest that the athlete's preference for strength, speed, and/or endurance events is in part a matter of genetic endowment. Aside from differences in fiber composition and enzymes among middle-distance runners, the only distinction between the sexes was the larger fiber areas of the male athletes. SDH activity was found to correlate 0.79 with VO2max, while muscle LDH appeared to be a function of muscle fiber composition. While sprint- and endurance-trained athletes are characterized by distinct fiber compositions and enzyme activities, participants in strength events (e.g., shot-put) have relatively low muscle enzyme activities and a variety of fiber compositions.",
"title": ""
},
{
"docid": "466b1e13c9c94f83bbacb740def7416b",
"text": "High service quality is imperative and important for competitiveness of service industry. In order to provide much quality service, a deeper research on service quality models is necessary. There are plenty of service quality models which enable managers and practitioners to identify quality problems and improve the efficiency and profitability of overall performance. One of the most influential models in the service quality literature is the model of service quality gaps. In this paper, the model of service quality gaps has been critically reviewed and developed in order to make it more comprehensive. The developed model has been verified based using a survey on 16 experts. Compared to the traditional models, the proposed model involves five additional components and eight additional gaps.",
"title": ""
},
{
"docid": "641754ee9332e1032838d0dba7712607",
"text": "Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, new medical technology and numerous administration policies and procedures. Adverse events initiated by medication error are a crucial area to improve patient safety. This project looked at the complexity of the medication administration process at a regional hospital and the effect of two medication distribution systems. A reduction in work complexity and time spent gathering medication and supplies, was a goal of this work; but more importantly was determining what barriers to safety and efficiency exist in the medication administration process and the impact of barcode scanning and other technologies. The concept of mobile medication units is attractive to both managers and clinicians; however it is only one solution to the problems with medication administration. Introduction and Background Medication administration is an increasingly complex process, influenced by the number of medications on the market, the number of medications prescribed for each patient, and the numerous policies and procedures created for their administration. Mayo and Duncan (2004) found that a “single [hospital] patient can receive up to 18 medications per day, and a nurse can administer as many as 50 medications per shift” (p. 209). While some researchers indicated that the solution is more nurse education or training (e.g. see Mayo & Duncan, 2004; and Tang, Sheu, Yu, Wei, & Chen, 2007), it does not appear that they have determined the feasibility of this solution and the increased time necessary to look up every unfamiliar medication. Most of the research which focuses on the causes of medication errors does not examine the processes involved in the administration of the medication. And yet, understanding the complexity in the nurses’ processes and workflow is necessary to develop safeguards and create more robust systems that reduce the probability of errors and adverse events. Current medication administration processes include many \\ tasks, including but not limited to, assessing the patient to obtain pertinent data, gathering medications, confirming the five rights (right dose, patient, route, medication, and time), administering the medications, documenting administration, and observing for therapeutic and untoward effects. In studies of the delivery of nursing care in acute care settings, Potter et al. (2005) found that nurses spent 16% their time preparing or administering medication. In addition to the amount of time that the nurses spent in preparing and administering medication, Potter et al found that a significant number of interruptions occurred during this critical process. Interruptions impact the cognitive workload of the nurse, and create an environment where medication errors are more likely to occur. A second environmental factor that affects the nurses’ workflow, is the distance traveled to administer care during a shift. Welker, Decker, Adam, & Zone-Smith (2006) found that on average, ward nurses who were assigned three patients walked just over 4.1 miles per shift while a nurse assigned to six patients walked over 4.8 miles. 
As a large number of interruptions (22%) occurred within the medication rooms, which were highly visible and in high traffic locations (Potter et al., 2005), and while collecting supplies or traveling to and from patient rooms (Ebright, Patterson, Chalko, & Render, 2003), reducing the distances and frequency of repeated travel could have the ability to decrease the number of interruptions and possibly errors in medication administration. Adding new technology, revising policies and procedures, and providing more education have often been the approaches taken to reduce medication errors. Unfortunately these new technologies, such as computerized order entry and electronic medical records / charting, and new procedures, for instance bar code scanning both the medicine and the patient, can add complexity to the nurse’s taskload. The added complexity in correspondence with the additional time necessary to complete the additional steps can lead to workarounds and variations in care. Given the problems in the current medication administration processes, this work focused on facilitating the nurse’s role in the medication administration process. This study expands on the Braswell and Duggar (2006) investigation and compares processes at baseline and postintroduction of a new mobile medication system. To do this, the current medication administration and distribution process was fully documented to determine a baseline in workload complexity. Then a new mobile medication center was installed to allow nurses easier access to patient medications while traveling on the floor, and the medication administration and distribution process was remapped to demonstrate where process complexities were reduced and nurse workflow is more efficient. A similar study showed that the time nurses spend gathering medications and supplies can be dramatically reduced through this type of system (see Braswell & Duggar, 2006); however, they did not directly investigate the impact on the nursing process. Thus, this research is presented to document the impact of this technology on the nursing workflow at a regional hospital, and as an expansion on the work begun by Braswell and Duggar.",
"title": ""
},
{
"docid": "6e7d629c5dd111df1064b969755863ef",
"text": "Recently proposed universal filtered multicarrier (UFMC) system is not an orthogonal system in multipath channel environments and might cause significant performance loss. In this paper, the authors propose a cyclic prefix (CP) based UFMC system and first analyze the conditions for interference-free one-tap equalization in the absence of transceiver imperfections. Then the corresponding signal model and output signal-to-noise ratio expression are derived. In the presence of carrier frequency offset, timing offset, and insufficient CP length, the authors establish an analytical system model as a summation of desired signal, intersymbol interference, intercarrier interference, and noise. New channel equalization algorithms are proposed based on the derived analytical signal model. Numerical results show that the derived model matches the simulation results precisely, and the proposed equalization algorithms improve the UFMC system performance in terms of bit error rate.",
"title": ""
},
{
"docid": "0c57dd3ce1f122d3eb11a98649880475",
"text": "Insulin resistance plays a major role in the pathogenesis of the metabolic syndrome and type 2 diabetes, and yet the mechanisms responsible for it remain poorly understood. Magnetic resonance spectroscopy studies in humans suggest that a defect in insulin-stimulated glucose transport in skeletal muscle is the primary metabolic abnormality in insulin-resistant patients with type 2 diabetes. Fatty acids appear to cause this defect in glucose transport by inhibiting insulin-stimulated tyrosine phosphorylation of insulin receptor substrate-1 (IRS-1) and IRS-1-associated phosphatidylinositol 3-kinase activity. A number of different metabolic abnormalities may increase intramyocellular and intrahepatic fatty acid metabolites; these include increased fat delivery to muscle and liver as a consequence of either excess energy intake or defects in adipocyte fat metabolism, and acquired or inherited defects in mitochondrial fatty acid oxidation. Understanding the molecular and biochemical defects responsible for insulin resistance is beginning to unveil novel therapeutic targets for the treatment of the metabolic syndrome and type 2 diabetes.",
"title": ""
},
{
"docid": "e1651c1f329b8caa53e5322be5bf700b",
"text": "Personalized curriculum sequencing is an important research issue for web-based learning systems because no fixed learning paths will be appropriate for all learners. Therefore, many researchers focused on developing e-learning systems with personalized learning mechanisms to assist on-line web-based learning and adaptively provide learning paths in order to promote the learning performance of individual learners. However, most personalized e-learning systems usually neglect to consider if learner ability and the difficulty level of the recommended courseware are matched to each other while performing personalized learning services. Moreover, the problem of concept continuity of learning paths also needs to be considered while implementing personalized curriculum sequencing because smooth learning paths enhance the linked strength between learning concepts. Generally, inappropriate courseware leads to learner cognitive overload or disorientation during learning processes, thus reducing learning performance. Therefore, compared to the freely browsing learning mode without any personalized learning path guidance used in most web-based learning systems, this paper assesses whether the proposed genetic-based personalized e-learning system, which can generate appropriate learning paths according to the incorrect testing responses of an individual learner in a pre-test, provides benefits in terms of learning performance promotion while learning. Based on the results of pre-test, the proposed genetic-based personalized e-learning system can conduct personalized curriculum sequencing through simultaneously considering courseware difficulty level and the concept continuity of learning paths to support web-based learning. Experimental results indicated that applying the proposed genetic-based personalized e-learning system for web-based learning is superior to the freely browsing learning mode because of high quality and concise learning path for individual learners. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "51505087f5ae1a9f57fe04f5e9ad241e",
"text": "Microblogs have recently received widespread interest from NLP researchers. However, current tools for Japanese word segmentation and POS tagging still perform poorly on microblog texts. We developed an annotated corpus and proposed a joint model for overcoming this situation. Our annotated corpus of microblog texts enables not only training of accurate statistical models but also quantitative evaluation of their performance. Our joint model with lexical normalization handles the orthographic diversity of microblog texts. We conducted an experiment to demonstrate that the corpus and model substantially contribute to boosting accuracy.",
"title": ""
},
{
"docid": "5a0cfbd3d8401d4d8e437ec1a1e9458f",
"text": "Ehlers-Danlos syndrome is an inherited heterogeneous group of connective tissue disorders, characterized by abnormal collagen synthesis, affecting skin, ligaments, joints, blood vessels and other organs. It is one of the oldest known causes of bruising and bleeding and was first described by Hipprocrates in 400 BC. Edvard Ehlers, in 1901, recognized the condition as a distinct entity. In 1908, Henri-Alexandre Danlos suggested that skin extensibility and fragility were the cardinal features of the syndrome. In 1998, Beighton published the classification of Ehlers-Danlos syndrome according to the Villefranche nosology. From the 1960s the genetic make up was identified. Management of bleeding problems associated with Ehlers-Danlos has been slow to progress.",
"title": ""
},
{
"docid": "b9cf32ef9364f55c5f59b4c6a9626656",
"text": "Graph-based methods have gained attention in many areas of Natural Language Processing (NLP) including Word Sense Disambiguation (WSD), text summarization, keyword extraction and others. Most of the work in these areas formulate their problem in a graph-based setting and apply unsupervised graph clustering to obtain a set of clusters. Recent studies suggest that graphs often exhibit a hierarchical structure that goes beyond simple flat clustering. This paper presents an unsupervised method for inferring the hierarchical grouping of the senses of a polysemous word. The inferred hierarchical structures are applied to the problem of word sense disambiguation, where we show that our method performs significantly better than traditional graph-based methods and agglomerative clustering yielding improvements over state-of-the-art WSD systems based on sense induction.",
"title": ""
},
{
"docid": "afe2bc204458117fb278ef500b485ea1",
"text": "PURPOSE\nTitanium based implant systems, though considered as the gold standard for rehabilitation of edentulous spaces, have been criticized for many inherent flaws. The onset of hypersensitivity reactions, biocompatibility issues, and an unaesthetic gray hue have raised demands for more aesthetic and tissue compatible material for implant fabrication. Zirconia is emerging as a promising alternative to conventional Titanium based implant systems for oral rehabilitation with superior biological, aesthetics, mechanical and optical properties. This review aims to critically analyze and review the credibility of Zirconia implants as an alternative to Titanium for prosthetic rehabilitation.\n\n\nSTUDY SELECTION\nThe literature search for articles written in the English language in PubMed and Cochrane Library database from 1990 till December 2016. The following search terms were utilized for data search: \"zirconia implants\" NOT \"abutment\", \"zirconia implants\" AND \"titanium implants\" AND \"osseointegration\", \"zirconia implants\" AND compatibility.\n\n\nRESULTS\nThe number of potential relevant articles selected were 47. All the human in vivo clinical, in vitro, animals' studies were included and discussed under the following subheadings: Chemical composition, structure and phases; Physical and mechanical properties; Aesthetic and optical properties; Osseointegration and biocompatibility; Surface modifications; Peri-implant tissue compatibility, inflammation and soft tissue healing, and long-term prognosis.\n\n\nCONCLUSIONS\nZirconia implants are a promising alternative to titanium with a superior soft-tissue response, biocompatibility, and aesthetics with comparable osseointegration. However, further long-term longitudinal and comparative clinical trials are required to validate zirconia as a viable alternative to the titanium implant.",
"title": ""
},
{
"docid": "2aa5f065e63a9bc0e24f74d4a37a7ea6",
"text": "Dataflow programming models are suitable to express multi-core streaming applications. The design of high-quality embedded systems in that context requires static analysis to ensure the liveness and bounded memory of the application. However, many streaming applications have a dynamic behavior. The previously proposed dataflow models for dynamic applications do not provide any static guarantees or only in exchange of significant restrictions in expressive power or automation. To overcome these restrictions, we propose the schedulable parametric dataflow (SPDF) model. We present static analyses and a quasi-static scheduling algorithm. We demonstrate our approach using a video decoder case study.",
"title": ""
},
{
"docid": "cc08e377d924f86fb6ceace022ad8db2",
"text": "Homomorphic cryptography has been one of the most interesting topics of mathematics and computer security since Gentry presented the first construction of a fully homomorphic encryption (FHE) scheme in 2009. Since then, a number of different schemes have been found, that follow the approach of bootstrapping a fully homomorphic scheme from a somewhat homomorphic foundation. All existing implementations of these systems clearly proved, that fully homomorphic encryption is not yet practical, due to significant performance limitations. However, there are many applications in the area of secure methods for cloud computing, distributed computing and delegation of computation in general, that can be implemented with homomorphic encryption schemes of limited depth. We discuss a simple algebraically homomorphic scheme over the integers that is based on the factorization of an approximate semiprime integer. We analyze the properties of the scheme and provide a couple of known protocols that can be implemented with it. We also provide a detailed discussion on searching with encrypted search terms and present implementations and performance figures for the solutions discussed in this paper.",
"title": ""
}
] | scidocsrr |
6eba4926c68232b11cea5f89f5dbf693 | Towards Bayesian Deep Learning: A Survey | [
{
"docid": "46cd71806e85374c36bc77ea28293ecb",
"text": "In this paper we introduce a novel collapsed Gibbs sampling method for the widely used latent Dirichlet allocation (LDA) model. Our new method results in significant speedups on real world text corpora. Conventional Gibbs sampling schemes for LDA require O(K) operations per sample where K is the number of topics in the model. Our proposed method draws equivalent samples but requires on average significantly less then K operations per sample. On real-word corpora FastLDA can be as much as 8 times faster than the standard collapsed Gibbs sampler for LDA. No approximations are necessary, and we show that our fast sampling scheme produces exactly the same results as the standard (but slower) sampling scheme. Experiments on four real world data sets demonstrate speedups for a wide range of collection sizes. For the PubMed collection of over 8 million documents with a required computation time of 6 CPU months for LDA, our speedup of 5.7 can save 5 CPU months of computation.",
"title": ""
},
{
"docid": "9e45bc3ac789fd1343e4e400b7f0218e",
"text": "Due to its successful application in recommender systems, collaborative filtering (CF) has become a hot research topic in data mining and information retrieval. In traditional CF methods, only the feedback matrix, which contains either explicit feedback (also called ratings) or implicit feedback on the items given by users, is used for training and prediction. Typically, the feedback matrix is sparse, which means that most users interact with few items. Due to this sparsity problem, traditional CF with only feedback information will suffer from unsatisfactory performance. Recently, many researchers have proposed to utilize auxiliary information, such as item content (attributes), to alleviate the data sparsity problem in CF. Collaborative topic regression (CTR) is one of these methods which has achieved promising performance by successfully integrating both feedback information and item content information. In many real applications, besides the feedback and item content information, there may exist relations (also known as networks) among the items which can be helpful for recommendation. In this paper, we develop a novel hierarchical Bayesian model called Relational Collaborative Topic Regression (RCTR), which extends CTR by seamlessly integrating the user-item feedback information, item content information, and network structure among items into the same model. Experiments on real-world datasets show that our model can achieve better prediction accuracy than the state-of-the-art methods with lower empirical training time. Moreover, RCTR can learn good interpretable latent structures which are useful for recommendation.",
"title": ""
}
] | [
{
"docid": "eeba7960e52f351405b4be37a0c9174a",
"text": "While vehicle license plate recognition (VLPR) is usually done with a sliding window approach, it can have limited performance on datasets with characters that are of variable width. This can be solved by hand-crafting algorithms to prescale the characters. While this approach can work fairly well, the recognizer is only aware of the pixels within each detector window, and fails to account for other contextual information that might be present in other parts of the image. A sliding window approach also requires training data in the form of presegmented characters, which can be more difficult to obtain. In this paper, we propose a unified ConvNet-RNN model to recognize real-world captured license plate photographs. By using a Convolutional Neural Network (ConvNet) to perform feature extraction and using a Recurrent Neural Network (RNN) for sequencing, we address the problem of sliding window approaches being unable to access the context of the entire image by feeding the entire image as input to the ConvNet. This has the added benefit of being able to perform end-to-end training of the entire model on labelled, full license plate images. Experimental results comparing the ConvNet-RNN architecture to a sliding window-based approach shows that the ConvNet-RNN architecture performs significantly better. Keywords—Vehicle license plate recognition, end-to-end recognition, ConvNet-RNN, segmentation-free recognition",
"title": ""
},
{
"docid": "c88f5359fc6dc0cac2c0bd53cea989ee",
"text": "Automatic detection and monitoring of oil spills and illegal oil discharges is of fundamental importance in ensuring compliance with marine legislation and protection of the coastal environments, which are under considerable threat from intentional or accidental oil spills, uncontrolled sewage and wastewater discharged. In this paper the level set based image segmentation was evaluated for the real-time detection and tracking of oil spills from SAR imagery. The developed processing scheme consists of a preprocessing step, in which an advanced image simplification is taking place, followed by a geometric level set segmentation for the detection of the possible oil spills. Finally a classification was performed, for the separation of lookalikes, leading to oil spill extraction. Experimental results demonstrate that the level set segmentation is a robust tool for the detection of possible oil spills, copes well with abrupt shape deformations and splits and outperforms earlier efforts which were based on different types of threshold or edge detection techniques. The developed algorithm’s efficiency for real-time oil spill detection and monitoring was also tested.",
"title": ""
},
{
"docid": "8dd540b33035904f63c67b57d4c97aa3",
"text": "Wireless local area networks (WLANs) based on the IEEE 802.11 standards are one of today’s fastest growing technologies in businesses, schools, and homes, for good reasons. As WLAN deployments increase, so does the challenge to provide these networks with security. Security risks can originate either due to technical lapse in the security mechanisms or due to defects in software implementations. Standard Bodies and researchers have mainly used UML state machines to address the implementation issues. In this paper we propose the use of GSE methodology to analyse the incompleteness and uncertainties in specifications. The IEEE 802.11i security protocol is used as an example to compare the effectiveness of the GSE and UML models. The GSE methodology was found to be more effective in identifying ambiguities in specifications and inconsistencies between the specification and the state machines. Resolving all issues, we represent the robust security network (RSN) proposed in the IEEE 802.11i standard using different GSE models.",
"title": ""
},
{
"docid": "338dcbb45ff0c1752eeb34ec1be1babe",
"text": "I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural",
"title": ""
},
{
"docid": "53a1d344a6e38dd790e58c6952e51cdb",
"text": "The thermal conductivities of individual single crystalline intrinsic Si nanowires with diameters of 22, 37, 56, and 115 nm were measured using a microfabricated suspended device over a temperature range of 20–320 K. Although the nanowires had well-defined crystalline order, the thermal conductivity observed was more than two orders of magnitude lower than the bulk value. The strong diameter dependence of thermal conductivity in nanowires was ascribed to the increased phonon-boundary scattering and possible phonon spectrum modification. © 2003 American Institute of Physics.@DOI: 10.1063/1.1616981 #",
"title": ""
},
{
"docid": "826e54e8e46dcea0451b53645e679d55",
"text": "Microtia is a congenital disease with various degrees of severity, ranging from the presence of rudimentary and malformed vestigial structures to the total absence of the ear (anotia). The complex anatomy of the external ear and the necessity to provide good projection and symmetry make this reconstruction particularly difficult. The aim of this work is to report our surgical technique of microtic ear correction and to analyse the short and long term results. From 2000 to 2013, 210 patients affected by microtia were treated at the Maxillo-Facial Surgery Division, Head and Neck Department, University Hospital of Parma. The patient population consisted of 95 women and 115 men, aged from 7 to 49 years. A total of 225 reconstructions have been performed in two surgical stages basing of Firmin's technique with some modifications and refinements. The first stage consists in fabrication and grafting of a three-dimensional costal cartilage framework. The second stage is performed 5-6 months later: the reconstructed ear is raised up and an additional cartilaginous graft is used to increase its projection. A mastoid fascial flap together with a skin graft are then used to protect the cartilage graft. All reconstructions were performed without any major complication. The results have been considered satisfactory by all patients starting from the first surgical step. Low morbidity, the good results obtained and a high rate of patient satisfaction make our protocol an optimal choice for treatment of microtia. The surgeon's experience and postoperative patient care must be considered as essential aspects of treatment.",
"title": ""
},
{
"docid": "ccedb6cff054254f3427ab0d45017d2a",
"text": "Traffic and power generation are the main sources of urban air pollution. The idea that outdoor air pollution can cause exacerbations of pre-existing asthma is supported by an evidence base that has been accumulating for several decades, with several studies suggesting a contribution to new-onset asthma as well. In this Series paper, we discuss the effects of particulate matter (PM), gaseous pollutants (ozone, nitrogen dioxide, and sulphur dioxide), and mixed traffic-related air pollution. We focus on clinical studies, both epidemiological and experimental, published in the previous 5 years. From a mechanistic perspective, air pollutants probably cause oxidative injury to the airways, leading to inflammation, remodelling, and increased risk of sensitisation. Although several pollutants have been linked to new-onset asthma, the strength of the evidence is variable. We also discuss clinical implications, policy issues, and research gaps relevant to air pollution and asthma.",
"title": ""
},
{
"docid": "a461592a276b13a6a25c25ab64c23d61",
"text": "To maintain the integrity of an organism constantly challenged by pathogens, the immune system is endowed with a variety of cell types. B lymphocytes were initially thought to only play a role in the adaptive branch of immunity. However, a number of converging observations revealed that two B-cell subsets, marginal zone (MZ) and B1 cells, exhibit unique developmental and functional characteristics, and can contribute to innate immune responses. In addition to their capacity to mount a local antibody response against type-2 T-cell-independent (TI-2) antigens, MZ B-cells can participate to T-cell-dependent (TD) immune responses through the capture and import of blood-borne antigens to follicular areas of the spleen. Here, we discuss the multiple roles of MZ B-cells in humans, non-human primates, and rodents. We also summarize studies - performed in transgenic mice expressing fully human antibodies on their B-cells and in macaques whose infection with Simian immunodeficiency virus (SIV) represents a suitable model for HIV-1 infection in humans - showing that infectious agents have developed strategies to subvert MZ B-cell functions. In these two experimental models, we observed that two microbial superantigens for B-cells (protein A from Staphylococcus aureus and protein L from Peptostreptococcus magnus) as well as inactivated AT-2 virions of HIV-1 and infectious SIV preferentially deplete innate-like B-cells - MZ B-cells and/or B1 B-cells - with different consequences on TI and TD antibody responses. These data revealed that viruses and bacteria have developed strategies to deplete innate-like B-cells during the acute phase of infection and to impair the antibody response. Unraveling the intimate mechanisms responsible for targeting MZ B-cells in humans will be important for understanding disease pathogenesis and for designing novel vaccine strategies.",
"title": ""
},
{
"docid": "9563b47a73e41292599c368e1dfcd40a",
"text": "Non-functional requirements are an important, and often critical, aspect of any software system. However, determining the degree to which any particular software system meets such requirements and incorporating such considerations into the software design process is a difficult challenge. This paper presents a modification of the NFR framework that allows for the discovery of a set of system functionalities that optimally satisfice a given set of non-functional requirements. This new technique introduces an adaptation of softgoal interdependency graphs, denoted softgoal interdependency ruleset graphs, in which label propagation can be done consistently. This facilitates the use of optimisation algorithms to determine the best set of bottom-level operationalizing softgoals that optimally satisfice the highest-level NFR softgoals. The proposed method also introduces the capacity to incorporate both qualitative and quantitative information.",
"title": ""
},
{
"docid": "69b0c5a4a3d5fceda5e902ec8e0479bb",
"text": "Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users in executing computation-intensive tasks via task offloading. The problem of joint task offloading and resource allocation is studied in order to maximize the users’ task offloading gains, which is measured by a weighted sum of reductions in task completion time and energy consumption. The considered problem is formulated as a mixed integer nonlinear program (MINLP) that involves jointly optimizing the task offloading decision, uplink transmission power of mobile users, and computing resource allocation at the MEC servers. Due to the combinatorial nature of this problem, solving for optimal solution is difficult and impractical for a large-scale network. To overcome this drawback, we propose to decompose the original problem into a resource allocation (RA) problem with fixed task offloading decision and a task offloading (TO) problem that optimizes the optimal-value function corresponding to the RA problem. We address the RA problem using convex and quasi-convex optimization techniques, and propose a novel heuristic algorithm to the TO problem that achieves a suboptimal solution in polynomial time. Simulation results show that our algorithm performs closely to the optimal solution and that it significantly improves the users’ offloading utility over traditional approaches.",
"title": ""
},
{
"docid": "696fd5b7e7bff90432f8c219230ebc7c",
"text": "This paper proposes a simple, cost-effective, and efficient brushless dc (BLDC) motor drive for solar photovoltaic (SPV) array-fed water pumping system. A zeta converter is utilized to extract the maximum available power from the SPV array. The proposed control algorithm eliminates phase current sensors and adapts a fundamental frequency switching of the voltage source inverter (VSI), thus avoiding the power losses due to high frequency switching. No additional control or circuitry is used for speed control of the BLDC motor. The speed is controlled through a variable dc link voltage of VSI. An appropriate control of zeta converter through the incremental conductance maximum power point tracking (INC-MPPT) algorithm offers soft starting of the BLDC motor. The proposed water pumping system is designed and modeled such that the performance is not affected under dynamic conditions. The suitability of proposed system at practical operating conditions is demonstrated through simulation results using MATLAB/Simulink followed by an experimental validation.",
"title": ""
},
{
"docid": "9c5711c68c7a9c7a4a8fc4d9dbcf145d",
"text": "Approximate set membership data structures (ASMDSs) are ubiquitous in computing. They trade a tunable, often small, error rate ( ) for large space savings. The canonical ASMDS is the Bloom filter, which supports lookups and insertions but not deletions in its simplest form. Cuckoo filters (CFs), a recently proposed class of ASMDSs, add deletion support and often use fewer bits per item for equal . This work introduces the Morton filter (MF), a novel ASMDS that introduces several key improvements to CFs. Like CFs, MFs support lookups, insertions, and deletions, but improve their respective throughputs by 1.3× to 2.5×, 0.9× to 15.5×, and 1.3× to 1.6×. MFs achieve these improvements by (1) introducing a compressed format that permits a logically sparse filter to be stored compactly in memory, (2) leveraging succinct embedded metadata to prune unnecessary memory accesses, and (3) heavily biasing insertions to use a single hash function. With these optimizations, lookups, insertions, and deletions often only require accessing a single hardware cache line from the filter. These improvements are not at a loss in space efficiency, as MFs typically use comparable to slightly less space than CFs for the same . PVLDB Reference Format: Alex D. Breslow and Nuwan S. Jayasena. Morton Filters: Faster, Space-Efficient Cuckoo Filters via Biasing, Compression, and Decoupled Logical Sparsity. PVLDB, 11(9): 1041-1055, 2018. DOI: https://doi.org/10.14778/3213880.3213884",
"title": ""
},
{
"docid": "76156cea2ef1d49179d35fd8f333b011",
"text": "Climate change, pollution, and energy insecurity are among the greatest problems of our time. Addressing them requires major changes in our energy infrastructure. Here, we analyze the feasibility of providing worldwide energy for all purposes (electric power, transportation, heating/cooling, etc.) from wind, water, and sunlight (WWS). In Part I, we discuss WWS energy system characteristics, current and future energy demand, availability of WWS resources, numbers of WWS devices, and area and material requirements. In Part II, we address variability, economics, and policy of WWS energy. We estimate that !3,800,000 5 MW wind turbines, !49,000 300 MW concentrated solar plants, !40,000 300 MW solar PV power plants, !1.7 billion 3 kW rooftop PV systems, !5350 100 MWgeothermal power plants, !270 new 1300 MW hydroelectric power plants, !720,000 0.75 MWwave devices, and !490,000 1 MW tidal turbines can power a 2030 WWS world that uses electricity and electrolytic hydrogen for all purposes. Such a WWS infrastructure reduces world power demand by 30% and requires only !0.41% and !0.59% more of the world’s land for footprint and spacing, respectively. We suggest producing all new energy withWWSby 2030 and replacing the pre-existing energy by 2050. Barriers to the plan are primarily social and political, not technological or economic. The energy cost in a WWS world should be similar to",
"title": ""
},
{
"docid": "7c5ce3005c4529e0c34220c538412a26",
"text": "Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.",
"title": ""
},
{
"docid": "3fec27391057a4c14f2df5933c4847d8",
"text": "This article explains how entrepreneurship can help resolve the environmental problems of global socio-economic systems. Environmental economics concludes that environmental degradation results from the failure of markets, whereas the entrepreneurship literature argues that opportunities are inherent in market failure. A synthesis of these literatures suggests that environmentally relevant market failures represent opportunities for achieving profitability while simultaneously reducing environmentally degrading economic behaviors. It also implies conceptualizations of sustainable and environmental entrepreneurship which detail how entrepreneurs seize the opportunities that are inherent in environmentally relevant market failures. Finally, the article examines the ability of the proposed theoretical framework to transcend its environmental context and provide insight into expanding the domain of the study of entrepreneurship. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "e107c2f396299e79f9b3db29ae43c943",
"text": "To achieve the concept of smart roads, intelligent sensors are being placed on the roadways to collect real-time traffic streams. Traditional method is not a real-time response, and incurs high communication and storage costs. Existing distributed stream mining algorithms do not consider the resource limitation on the lightweight devices such as sensors. In this paper, we propose a distributed traffic stream mining system. The central server performs various data mining tasks only in the training and updating stage and sends the interesting patterns to the sensors. The sensors monitor and predict the coming traffic or raise alarms independently by comparing with the patterns observed in the historical streams. The sensors provide real-time response with less wireless communication and small resource requirement, and the computation burden on the central server is reduced. We evaluate our system on the real highway traffic streams in the GCM Transportation Corridor in Chicagoland.",
"title": ""
},
{
"docid": "69f95ac2ca7b32677151de88b9d95d4c",
"text": "Gunaratna, Kalpa. PhD, Department of Computer Science and Engineering, Wright State University, 2017. Semantics-based Summarization of Entities in Knowledge Graphs. The processing of structured and semi-structured content on the Web has been gaining attention with the rapid progress in the Linking Open Data project and the development of commercial knowledge graphs. Knowledge graphs capture domain-specific or encyclopedic knowledge in the form of a data layer and add rich and explicit semantics on top of the data layer to infer additional knowledge. The data layer of a knowledge graph represents entities and their descriptions. The semantic layer on top of the data layer is called the schema (ontology), where relationships of the entity descriptions, their classes, and the hierarchy of the relationships and classes are defined. Today, there exist large knowledge graphs in the research community (e.g., encyclopedic datasets like DBpedia and Yago) and corporate world (e.g., Google knowledge graph) that encapsulate a large amount of knowledge for human and machine consumption. Typically, they consist of millions of entities and billions of facts describing these entities. While it is good to have this much knowledge available on the Web for consumption, it leads to information overload, and hence proper summarization (and presentation) techniques need to be explored. In this dissertation, we focus on creating both comprehensive and concise entity summaries at: (i) the single entity level and (ii) the multiple entity level. To summarize a single entity, we propose a novel approach called FACeted Entity Summarization (FACES) that considers importance, which is computed by combining popularity and uniqueness, and diversity of facts getting selected for the summary. We first conceptually group facts using semantic expansion and hierarchical incremental clustering techniques and form facets (i.e., groupings) that go beyond syntactic similarity. Then we rank both the facts and facets using Information Retrieval (IR) ranking techniques to pick the",
"title": ""
},
{
"docid": "b769f7b96b9613132790a73752c2a08f",
"text": "ITIL is the most widely used IT framework in majority of organizations in the world now. However, implementing such best practice experiences in an organization comes with some implementation challenges such as staff resistance, task conflicts and ambiguous orders. It means that implementing such framework is not easy and it can be caused of the organization destruction. This paper tries to describe overall view of ITIL framework and address major reasons on the failure of this framework’s implementation in the organizations",
"title": ""
},
{
"docid": "bab949abe2d00567853504e38c84a1c9",
"text": "7SK RNA is a key player in the regulation of polymerase II transcription. 7SK RNA was considered as a highly conserved vertebrate innovation. The discovery of poorly conserved homologs in several insects and lophotrochozoans, however, implies a much earlier evolutionary origin. The mechanism of 7SK function requires interaction with the proteins HEXIM and La-related protein 7. Here, we present a comprehensive computational analysis of these two proteins in metazoa, and we extend the collection of 7SK RNAs by several additional candidates. In particular, we describe 7SK homologs in Caenorhabditis species. Furthermore, we derive an improved secondary structure model of 7SK RNA, which shows that the structure is quite well-conserved across animal phyla despite the extreme divergence at sequence level.",
"title": ""
}
] | scidocsrr |
b026c22ef03caa1381fa639d5de6c8ba | Going Spear Phishing: Exploring Embedded Training and Awareness | [
{
"docid": "40fbee18e4b0eca3f2b9ad69119fec5d",
"text": "Phishing attacks, in which criminals lure Internet users to websites that impersonate legitimate sites, are occurring with increasing frequency and are causing considerable harm to victims. In this paper we describe the design and evaluation of an embedded training email system that teaches people about phishing during their normal use of email. We conducted lab experiments contrasting the effectiveness of standard security notices about phishing with two embedded training designs we developed. We found that embedded training works better than the current practice of sending security notices. We also derived sound design principles for embedded training systems.",
"title": ""
}
] | [
{
"docid": "95b48a41d796aec0a1f23b3fc0879ed9",
"text": "Action anticipation aims to detect an action before it happens. Many real world applications in robotics and surveillance are related to this predictive capability. Current methods address this problem by first anticipating visual representations of future frames and then categorizing the anticipated representations to actions. However, anticipation is based on a single past frame’s representation, which ignores the history trend. Besides, it can only anticipate a fixed future time. We propose a Reinforced Encoder-Decoder (RED) network for action anticipation. RED takes multiple history representations as input and learns to anticipate a sequence of future representations. One salient aspect of RED is that a reinforcement module is adopted to provide sequence-level supervision; the reward function is designed to encourage the system to make correct predictions as early as possible. We test RED on TVSeries, THUMOS-14 and TV-Human-Interaction datasets for action anticipation and achieve state-of-the-art performance on all datasets.",
"title": ""
},
{
"docid": "30fa14e4cfa8e33d863295c4f14ee671",
"text": "Approximate computing can decrease the design complexity with an increase in performance and power efficiency for error resilient applications. This brief deals with a new design approach for approximation of multipliers. The partial products of the multiplier are altered to introduce varying probability terms. Logic complexity of approximation is varied for the accumulation of altered partial products based on their probability. The proposed approximation is utilized in two variants of 16-bit multipliers. Synthesis results reveal that two proposed multipliers achieve power savings of 72% and 38%, respectively, compared to an exact multiplier. They have better precision when compared to existing approximate multipliers. Mean relative error figures are as low as 7.6% and 0.02% for the proposed approximate multipliers, which are better than the previous works. Performance of the proposed multipliers is evaluated with an image processing application, where one of the proposed models achieves the highest peak signal to noise ratio.",
"title": ""
},
{
"docid": "3c79c23036ed7c9a5542670264310141",
"text": "This paper investigates possible improvements in grid voltage stability and transient stability with wind energy converter units using modified P/Q control. The voltage source converter (VSC) in modern variable speed wind turbines is utilized to achieve this enhancement. The findings show that using only available hardware for variable-speed turbines improvements could be obtained in all cases. Moreover, it was found that power system stability improvement is often larger when the control is modified for a given variable speed wind turbine rather than when standard variable speed turbines are used instead of fixed speed turbines. To demonstrate that the suggested modifications can be incorporated in real installations, a real situation is presented where short-term voltage stability is improved as an additional feature of an existing VSC high voltage direct current (HVDC) installation",
"title": ""
},
{
"docid": "112026af056b3350eceed0c6d0035260",
"text": "This paper presents a short-baseline real-time stereo vision system that is capable of the simultaneous and robust estimation of the ego-motion and of the 3D structure and the independent motion of thousands of points of the environment. Kalman filters estimate the position and velocity of world points in 3D Euclidean space. The six degrees of freedom of the ego-motion are obtained by minimizing the projection error of the current and previous clouds of static points. Experimental results with real data in indoor and outdoor environments demonstrate the robustness, accuracy and efficiency of our approach. Since the baseline is as short as 13cm, the device is head-mountable, and can be used by a visually impaired person. Our proposed system can be used to augment the perception of the user in complex dynamic environments.",
"title": ""
},
{
"docid": "687dbb03f675f0bf70e6defa9588ae23",
"text": "This paper presents a novel method for discovering causal relations between events encoded in text. In order to determine if two events from the same sentence are in a causal relation or not, we first build a graph representation of the sentence that encodes lexical, syntactic, and semantic information. In a second step, we automatically extract multiple graph patterns (or subgraphs) from such graph representations and sort them according to their relevance in determining the causality between two events from the same sentence. Finally, in order to decide if these events are causal or not, we train a binary classifier based on what graph patterns can be mapped to the graph representation associated with the two events. Our experimental results show that capturing the feature dependencies of causal event relations using a graph representation significantly outperforms an existing method that uses a flat representation of features.",
"title": ""
},
{
"docid": "baaff0e771e784304202ad7a0c987ef8",
"text": "This paper presents a simple end-to-end model for speech recognition, combining a convolutional network based acoustic model and a graph decoding. It is trained to output letters, with transcribed speech, without the need for force alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC [6] while being simpler. We show competitive results in word error rate on the Librispeech corpus [18] with MFCC features, and promising results from raw waveform.",
"title": ""
},
{
"docid": "e0580a51b7991f86559a7a3aa8b26204",
"text": "A new ultra-wideband monocycle pulse generator with good performance is designed and demonstrated. The pulse generator circuits employ SRD(step recovery diode), Schottky diode, and simple RC coupling and decoupling circuit, and are completely fabricated on the planar microstrip structure, which have the characteristic of low cost and small size. Through SRD modeling, the accuracy of the simulation is improved, which save the design period greatly. The generated monocycle pulse has the peak-to-peak amplitude 1.3V, pulse width 370ps and pulse repetition rate of 10MHz, whose waveform features are symmetric well and low ringing level. Good agreement between the measured and calculated results is achieved.",
"title": ""
},
{
"docid": "2525c33c5b06a2864eb44e390ce802d8",
"text": "The energy landscape theory of protein folding is a statistical description of a protein's potential surface. It assumes that folding occurs through organizing an ensemble of structures rather than through only a few uniquely defined structural intermediates. It suggests that the most realistic model of a protein is a minimally frustrated heteropolymer with a rugged funnel-like landscape biased toward the native structure. This statistical description has been developed using tools from the statistical mechanics of disordered systems, polymers, and phase transitions of finite systems. We review here its analytical background and contrast the phenomena in homopolymers, random heteropolymers, and protein-like heteropolymers that are kinetically and thermodynamically capable of folding. The connection between these statistical concepts and the results of minimalist models used in computer simulations is discussed. The review concludes with a brief discussion of how the theory helps in the interpretation of results from fast folding experiments and in the practical task of protein structure prediction.",
"title": ""
},
{
"docid": "1490331d46b8c19fce0a94e072bff502",
"text": "We explore the reliability and validity of a self-report measure of procrastination and conscientiousness designed for use with thirdto fifth-grade students. The responses of 120 students are compared with teacher and parent ratings of the student. Confirmatory and exploratory factor analyses were also used to examine the structure of the scale. Procrastination and conscientiousness are highly correlated (inversely); evidence suggests that procrastination and conscientiousness are aspects of the same construct. Procrastination and conscientiousness are correlated with the Physiological Anxiety subscale of the Revised Children’s Manifest Anxiety Scale, and with the Task (Mastery) and Avoidance (Task Aversiveness) subscales of Skaalvik’s (1997) Goal Orientation Scales. Both theoretical implications and implications for interventions are discussed. © 2002 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "3823975ea2bcda029c3c3cda2b0472be",
"text": "by Dimitrios Tzionas for the degree of Doctor rerum naturalium Hand motion capture with an RGB-D sensor gained recently a lot of research attention, however, even most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a-priori knowledge of the object’s shape and skeleton. In case of unknown object shape there are existing 3d reconstruction methods that capitalize on distinctive geometric or texture features. These methods though fail for textureless and highly symmetric objects like household articles, mechanical parts or toys. We show that extracting 3d hand motion for in-hand scanning e↵ectively facilitates the reconstruction of such objects and we fuse the rich additional information of hands into a 3d reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow. To my family Maria, Konstantinos, Glyka. In the loving memory of giagià 'Olga & pappo‘c Giànnhc. (Olga Matoula & Ioannis Matoulas) πste ô yuqò πsper ô qe–r ‚stin· ka» gÄr ô qe»r ÓrganÏn ‚stin Êrgànwn, ka» Â no‹c e⁄doc e d¿n ka» ô a“sjhsic e⁄doc a sjht¿n.",
"title": ""
},
{
"docid": "23f2f6e5dd50942809aece136c26e549",
"text": "Paraphrases extracted from parallel corpora by the pivot method (Bannard and Callison-Burch, 2005) constitute a valuable resource for multilingual NLP applications. In this study, we analyse the semantics of unigram pivot paraphrases and use a graph-based sense induction approach to unveil hidden sense distinctions in the paraphrase sets. The comparison of the acquired senses to gold data from the Lexical Substitution shared task (McCarthy and Navigli, 2007) demonstrates that sense distinctions exist in the paraphrase sets and highlights the need for a disambiguation step in applications using this resource.",
"title": ""
},
{
"docid": "44e527e6078a01abd79a5f1f74fa1b78",
"text": "A transformer provides galvanic isolation and grounding of the photovoltaic (PV) array in a PV-fed grid-connected inverter. Inclusion of the transformer, however, may increase the cost and/or bulk of the system. To overcome this drawback, a single-phase, single-stage [no extra converter for voltage boost or maximum power point tracking (MPPT)], doubly grounded, transformer-less PV interface, based on the buck-boost principle, is presented. The configuration is compact and uses lesser components. Only one (undivided) PV source and one buck-boost inductor are used and shared between the two half cycles, which prevents asymmetrical operation and parameter mismatch problems. Total harmonic distortion and DC component of the current supplied to the grid is low, compared to existing topologies and conform to standards like IEEE 1547. A brief review of the existing, transformer-less, grid-connected inverter topologies is also included. It is demonstrated that, as compared to the split PV source topology, the proposed configuration is more effective in MPPT and array utilization. Design and analysis of the inverter in discontinuous conduction mode is carried out. Simulation and experimental results are presented.",
"title": ""
},
{
"docid": "8de25881e8a5f12f891656f271c44d4d",
"text": "Forest fires play a critical role in landscape transformation, vegetation succession, soil degradation and air quality. Improvements in fire risk estimation are vital to reduce the negative impacts of fire, either by lessen burn severity or intensity through fuel management, or by aiding the natural vegetation recovery using post-fire treatments. This paper presents the methods to generate the input variables and the risk integration developed within the Firemap project (funded under the Spanish Ministry of Science and Technology) to map wildland fire risk for several regions of Spain. After defining the conceptual scheme for fire risk assessment, the paper describes the methods used to generate the risk parameters, and presents",
"title": ""
},
{
"docid": "beff5ce5202460e736af0f06d5d75f83",
"text": "MOTIVATION\nDuring the past decade, the new focus on genomics has highlighted a particular challenge: to integrate the different views of the genome that are provided by various types of experimental data.\n\n\nRESULTS\nThis paper describes a computational framework for integrating and drawing inferences from a collection of genome-wide measurements. Each dataset is represented via a kernel function, which defines generalized similarity relationships between pairs of entities, such as genes or proteins. The kernel representation is both flexible and efficient, and can be applied to many different types of data. Furthermore, kernel functions derived from different types of data can be combined in a straightforward fashion. Recent advances in the theory of kernel methods have provided efficient algorithms to perform such combinations in a way that minimizes a statistical loss function. These methods exploit semidefinite programming techniques to reduce the problem of finding optimizing kernel combinations to a convex optimization problem. Computational experiments performed using yeast genome-wide datasets, including amino acid sequences, hydropathy profiles, gene expression data and known protein-protein interactions, demonstrate the utility of this approach. A statistical learning algorithm trained from all of these data to recognize particular classes of proteins--membrane proteins and ribosomal proteins--performs significantly better than the same algorithm trained on any single type of data.\n\n\nAVAILABILITY\nSupplementary data at http://noble.gs.washington.edu/proj/sdp-svm",
"title": ""
},
{
"docid": "d3f35e91d5d022de5fe816cf1234e415",
"text": "Rock mass description and characterisation is a basic task for exploration, mining work-flows and ground-water studies. Rock analysis can be performed using borehole logs that are created using a televiewer. Planar discontinuities in the rock appear as sinusoidal curves in borehole logs. The aim of this project is to develop a fast algorithm to analyse borehole imagery using image processing techniques, to identify and trace the discontinuities, and to perform quantitative analysis on their distribution.",
"title": ""
},
{
"docid": "9a27c676b5d356d5feb91850e975a336",
"text": "Joseph Goldstein has written in this journal that creation (through invention) and revelation (through discovery) are two different routes to advancement in the biomedical sciences1. In my work as a phytochemist, particularly during the period from the late 1960s to the 1980s, I have been fortunate enough to travel both routes. I graduated from the Beijing Medical University School of Pharmacy in 1955. Since then, I have been involved in research on Chinese herbal medicine in the China Academy of Chinese Medical Sciences (previously known as the Academy of Traditional Chinese Medicine). From 1959 to 1962, I was released from work to participate in a training course in Chinese medicine that was especially designed for professionals with backgrounds in Western medicine. The 2.5-year training guided me to the wonderful treasure to be found in Chinese medicine and toward understanding the beauty in the philosophical thinking that underlies a holistic view of human beings and the universe.",
"title": ""
},
{
"docid": "595052e154117ce66202a1a82e0a4072",
"text": "This paper presents the design of a new haptic feedback device for transradial myoelectric upper limb prosthesis that allows the amputee person to perceive the sensation of force-gripping and object-sliding. The system designed has three mechanical-actuator units to convey the sensation of force, and one vibrotactile unit to transmit the sensation of object sliding. The device designed will be placed on the user's amputee forearm. In order to validate the design of the structure, a stress analysis through Finite Element Method (FEM) is conducted.",
"title": ""
},
{
"docid": "ae7fb63bb4a70aa508fab8500e451402",
"text": "Dynamic Optimization Problems (DOPs) have been widely studied using Evolutionary Algorithms (EAs). Yet, a clear and rigorous definition of DOPs is lacking in the Evolutionary Dynamic Optimization (EDO) community. In this paper, we propose a unified definition of DOPs based on the idea of multiple-decision-making discussed in the Reinforcement Learning (RL) community. We draw a connection between EDO and RL by arguing that both of them are studying DOPs according to our definition of DOPs. We point out that existing EDO or RL research has been mainly focused on some types of DOPs. A conceptualized benchmark problem, which is aimed at the systematic study of various DOPs, is then developed. Some interesting experimental studies on the benchmark reveal that EDO and RL methods are specialized in certain types of DOPs and more importantly new algorithms for DOPs can be developed by combining the strength of both EDO and RL methods.",
"title": ""
},
{
"docid": "a41dfbce4138a8422bc7ddfac830e557",
"text": "This paper is the second part in a series that provides a comprehensive survey of the problems and techniques of tracking maneuvering targets in the absence of the so-called measurement-origin uncertainty. It surveys motion models of ballistic targets used for target tracking. Models for all three phases (i.e., boost, coast, and reentry) of motion are covered.",
"title": ""
}
] | scidocsrr |
c74df8599fc83009b02a67d9863e0984 | A subject identification method based on term frequency technique | [
{
"docid": "1b2d7b2895ae4b996797ea64ddbae14e",
"text": "For the past decade, query processing on relational data has been studied extensively, and many theoretical and practical solutions to query processing have been proposed under various scenarios. With the recent popularity of cloud computing, users now have the opportunity to outsource their data as well as the data management tasks to the cloud. However, due to the rise of various privacy issues, sensitive data (e.g., medical records) need to be encrypted before outsourcing to the cloud. In addition, query processing tasks should be handled by the cloud; otherwise, there would be no point to outsource the data at the first place. To process queries over encrypted data without the cloud ever decrypting the data is a very challenging task. In this paper, we focus on solving the k-nearest neighbor (kNN) query problem over encrypted database outsourced to a cloud: a user issues an encrypted query record to the cloud, and the cloud returns the k closest records to the user. We first present a basic scheme and demonstrate that such a naive solution is not secure. To provide better security, we propose a secure kNN protocol that protects the confidentiality of the data, user's input query, and data access patterns. Also, we empirically analyze the efficiency of our protocols through various experiments. These results indicate that our secure protocol is very efficient on the user end, and this lightweight scheme allows a user to use any mobile device to perform the kNN query.",
"title": ""
},
{
"docid": "e659f976983c28631062bb5c8b1c35ab",
"text": "This paper presents the outcomes of research into using lingual parts of music in an automatic mood classification system. Using a collection of lyrics and corresponding user-tagged moods, we build classifiers that classify lyrics of songs into moods. By comparing the performance of different mood frameworks (or dimensions), we examine to what extent the linguistic part of music reveals adequate information for assigning a mood category and which aspects of mood can be classified best. Our results show that word oriented metrics provide a valuable source of information for automatic mood classification of music, based on lyrics only. Metrics such as term frequencies and tf*idf values are used to measure relevance of words to the different mood classes. These metrics are incorporated in a machine learning classifier setup. Different partitions of the mood plane are investigated and we show that there is no large difference in mood prediction based on the mood division. Predictions on the valence, tension and combinations of aspects lead to similar performance.",
"title": ""
}
] | [
{
"docid": "5109892c554f7fed68136f43b8c05bb8",
"text": "Obese white adipose tissue (AT) is characterized by large-scale infiltration of proinflammatory macrophages, in parallel with systemic insulin resistance; however, the cellular stimulus that initiates this signaling cascade and chemokine release is still unknown. The objective of this study was to determine the role of the phosphoinositide 3-kinase (PI3K) regulatory subunits on AT macrophage (ATM) infiltration in obesity. Here, we find that the Pik3r1 regulatory subunits (i.e., p85a/p55a/p50a) are highly induced in AT from high-fat diet–fed obese mice, concurrent with insulin resistance. Global heterozygous deletion of the Pik3r1 regulatory subunits (aHZ), but not knockout of Pik3r2 (p85b), preserves whole-body, AT, and skeletal muscle insulin sensitivity, despite severe obesity. Moreover, ATM accumulation, proinflammatory gene expression, and ex vivo chemokine secretion in obese aHZ mice are markedly reduced despite endoplasmic reticulum (ER) stress, hypoxia, adipocyte hypertrophy, and Jun NH2-terminal kinase activation. Furthermore, bone marrow transplant studies reveal that these improvements in obese aHZ mice are independent of reduced Pik3r1 expression in the hematopoietic compartment. Taken together, these studies demonstrate that Pik3r1 expression plays a critical role in mediating AT insulin sensitivity and, more so, suggest that reduced PI3K activity is a key step in the initiation and propagation of the inflammatory response in obese AT.",
"title": ""
},
{
"docid": "be75b351098bfda2829967a13b89c5fd",
"text": "Human activities such as international trade and travel promote biological invasions by accidentally or deliberately dispersing species outside their native biogeographical ranges (Lockwood, 2005; Alpert, 2006). Invasive species are now viewed as a significant component of global change and have become a serious threat to natural communities (Mack et al., 2000; Pyšek & Richardson, 2010). The ecological impact of invasive species has been observed in all types of ecosystems. Typically, invaders can change the niches of co-occurring species, alter the structure and function of ecosystems by degrading native communities and disrupt evolutionary processes through anthropogenic movement of species across physical and geographical barriers (D’Antonio & Vitousek, 1992; Mack et al., 2000; Richardson et al., 2000; Levine et al., 2003; Vitousek et al., 2011). Concerns for the implications and consequences of successful invasions have stimulated a considerable amount of research. Recent invasion research ranges from the developing testable hypotheses aimed at understanding the mechanisms of invasion to providing guidelines for control and management of invasive species. Several recent studies have used hyperspectral remote sensing (Underwood et al., 2003; Lass et al., 2005; Underwood Department of Biological Sciences, Murray State University, Murray, KY 42071, USA, Fondazione Edmund Mach, Research and Innovation Centre, Department of Biodiversity and Molecular Ecology, GIS and Remote Sensing Unit, Via E. Mach 1, 38010 S. Michele all’Adige, TN, Italy, Center for the Study of Institutions, Population, and Environmental Change, Indiana University, 408 N. Indiana Avenue, Bloomington, IN 47408, USA, Ashoka Trust for Research in Ecology and the Environment (ATREE), Royal Enclave, Srirampura, Jakkur Post, Bangalore 560064, India",
"title": ""
},
{
"docid": "906b6d1ddac67f9303ce86117b88edf2",
"text": "Over the years, we have harnessed the power of computing to improve the speed of operations and increase in productivity. Also, we have witnessed the merging of computing and telecommunications. This excellent combination of two important fields has propelled our capability even further, allowing us to communicate anytime and anywhere, improving our work flow and increasing our quality of life tremendously. The next wave of evolution we foresee is the convergence of telecommunication, computing, wireless, and transportation technologies. Once this happens, our roads and highways will be both our communications and transportation platforms, which will completely revolutionize when and how we access services and entertainment, how we communicate, commute, navigate, etc., in the coming future. This paper presents an overview of the current state-of-the-art, discusses current projects, their goals, and finally highlights how emergency services and road safety will evolve with the blending of vehicular communication networks with road transportation.",
"title": ""
},
{
"docid": "4f89160f87b862fdc471815b026511d1",
"text": "A procedure is described whereby a computer can determine whether two fingerpring impressions were made by the same finger. The procedure used the X and Y coordinates and the individual directions of the minutiae (ridge endings and bifurcations). The identity of two impressions is established by computing the density of clusters of points in AX and AY space where AX and AY are the differences in coordinates that are found in going from one of the fingerpring impressions to the other. Single fingerpring classification is discussed and experimental results using machine-read minutiae data are given. References: J. H. Wegstein, NBS Technical Notes 538 and 730. ~7 Information Processing for Radar Target Detection andClassification. A. KSIENSKI and L. WHITE, Ohio State-Previous research has demonstrated the feasibility of using multiple low-frequency radar returns for target classification. Simple object shapes have been successfully classified by such techniques, but aircraft data poses greater difficulty, as in general such data are not linearly separable. A misclassification error analysis is provided for aircraft data using k-nearest neighbor algorithms. Another recognition scheme involves the use of a bilinear fit of aircraft data; a misclassification error analysis is being prepared for this technique and will be reported. ~ A Parallel Machine for Silhouette Pre-Processing. PAUL NAHIN, Harvey Mudd-The concept of slope density is introduced as a descriptor of silhouettes. The mechanism of a parallel machine that extracts an approximation to the slope denisty is presented. The machine has been built by Aero-Jet, but because of its complexity, a digital simulation program has been developed I. The effect of sample and hold filtering on the machine output has been investigated, both theoretically, and via simulation. The design of a medical cell analyzer (i.e., marrow granulocyte precursor counter) incorporating the slope density machine is given. of Pittsburgh-In studying pictures of impossible objects, D. A. Huffman I described a labeling technique for interpreting a two dimensional line drawing as a picture of a polyhedron (a solid three dimensional object bounded by plane surfaces). Our work extends this method to interpret a set of planes in three dimensions as a \"picture\" of a four dimensional polyhedron. Huffman labeled each line in two dimensions as either i) concave, 2) convex with one visible plane, or 3) convex with two visible planes. A labeled line drawing is a valid interpretation iff the labeled lines intersect in one of twelve legal ways. Our method is …",
"title": ""
},
{
"docid": "fb4630a6b558ac9b8d8444275e1978e3",
"text": "Relational graphs are widely used in modeling large scale networks such as biological networks and social networks. In this kind of graph, connectivity becomes critical in identifying highly associated groups and clusters. In this paper, we investigate the issues of mining closed frequent graphs with connectivity constraints in massive relational graphs where each graph has around 10K nodes and 1M edges. We adopt the concept of edge connectivity and apply the results from graph theory, to speed up the mining process. Two approaches are developed to handle different mining requests: CloseCut, a pattern-growth approach, and splat, a pattern-reduction approach. We have applied these methods in biological datasets and found the discovered patterns interesting.",
"title": ""
},
{
"docid": "405182cedabc0c75c1b79052bd6db5b3",
"text": "Human resource management systems (HRMS) integrate human resource processes and an organization's information systems. An HRMS frequently represents one of the modules of an enterprise resource planning system (ERP). ERPs are information systems that manage the business and consist of integrated software applications such customer relations and supply chain management, manufacturing, finance and human resources. ERP implementation projects frequently have high failure rates; although research has investigated a number of factors for success and failure rates, limited attention has been directed toward the implementation teams, and how to make these more effective. In this paper we argue that shared leadership represents an appropriate approach to improving the functioning of ERP implementation teams. Shared leadership represents a form of team leadership where the team members, rather than only a single team leader, engage in leadership behaviors. While shared leadership has received increased research attention during the past decade, it has not been applied to ERP implementation teams and therefore that is the purpose of this article. Toward this end, we describe issues related to ERP and HRMS implementation, teams, and the concept of shared leadership, review theoretical and empirical literature, present an integrative framework, and describe the application of shared leadership to ERP and HRMS implementation. Published by Elsevier Inc.",
"title": ""
},
{
"docid": "d5c159a759aeace5085a7305609793e5",
"text": "In this paper, a new method is proposed to eliminate electrolytic capacitors in a two-stage ac-dc light-emitting diode (LED) driver. DC-biased sinusoidal or square-wave LED driving-current can help to reduce the power imbalance between ac input and dc output. In doing so, film capacitors can be adopted to improve LED driver's lifetime. The relationship between the peak-to-average ratio of the pulsating current in LEDs and the storage capacitance according to given storage capacitance is derived. Using the proposed “zero-low-level square-wave driving current” scheme, the storage capacitance in the LED driver can be reduced to 52.7% comparing with that in the driver using constant dc driving current. The input power factor is almost unity, which complies with lighting equipment standards such as IEC-1000-3-2 for Class C equipments. The voltage across the storage capacitors is analyzed and verified during the whole pulse width modulation dimming range. For the ease of dimming and implementation, a 50 W LED driver with zero-low-level square-wave driving current is built and the experimental results are presented to verify the proposed methods.",
"title": ""
},
{
"docid": "da7a2d40d2740e52ac7388fa23f1c797",
"text": "The use of business intelligence tools and other means to generate queries has led to great variety in the size of join queries. While most queries are reasonably small, join queries with up to a hundred relations are not that exotic anymore, and the distribution of query sizes has an incredible long tail. The largest real-world query that we are aware of accesses more than 4,000 relations. This large spread makes query optimization very challenging. Join ordering is known to be NP-hard, which means that we cannot hope to solve such large problems exactly. On the other hand most queries are much smaller, and there is no reason to sacrifice optimality there. This paper introduces an adaptive optimization framework that is able to solve most common join queries exactly, while simultaneously scaling to queries with thousands of joins. A key component there is a novel search space linearization technique that leads to near-optimal execution plans for large classes of queries. In addition, we describe implementation techniques that are necessary to scale join ordering algorithms to these extremely large queries. Extensive experiments with over 10 different approaches show that the new adaptive approach proposed here performs excellent over a huge spectrum of query sizes, and produces optimal or near-optimal solutions for most common queries.",
"title": ""
},
{
"docid": "afa0e5c40ed180b797c0e2e3ec7c62cb",
"text": "We present Science Assistments, an interactive environment, which assesses students’ inquiry skills as they engage in inquiry using science microworlds. We frame our variables, tasks, assessments, and methods of analyzing data in terms of evidence-centered design. Specifically, we focus on the student model, the task model, and the evidence model in the conceptual assessment framework. In order to support both assessment and the provision of scaffolding, the environment makes inferences about student inquiry skills using models developed through a combination of text replay tagging [cf. Sao Pedro et al. 2011], a method for rapid manual coding of student log files, and educational data mining. Models were developed for multiple inquiry skills, with particular focus on detecting if students are testing their articulated hypotheses, and if they are designing controlled experiments. Student-level cross-validation was applied to validate that this approach can automatically and accurately identify these inquiry skills for new students. The resulting detectors also can be applied at run-time to drive scaffolding intervention.",
"title": ""
},
{
"docid": "4f2112175c5d8175c5c0f8cb4d9185a2",
"text": "It is difficult to fully assess the quality of software inhouse, outside the actual time and context in which it will execute after deployment. As a result, it is common for software to manifest field failures, failures that occur on user machines due to untested behavior. Field failures are typically difficult to recreate and investigate on developer platforms, and existing techniques based on crash reporting provide only limited support for this task. In this paper, we present a technique for recording, reproducing, and minimizing failing executions that enables and supports inhouse debugging of field failures. We also present a tool that implements our technique and an empirical study that evaluates the technique on a widely used e-mail client.",
"title": ""
},
{
"docid": "cec75ff485e6575fbf58cb5553e1f8e9",
"text": "Preparation for the role of therapist can occur on both professional and personal levels. Research has found that therapists are at risk for occupationally related psychological problems. It follows that self-care may be a useful complement to the professional training of future therapists. The present study examined the effects of one approach to self-care, Mindfulness-Based Stress Reduction (MBSR), for therapists in training. Using a prospective, cohort-controlled design, the study found participants in the MBSR program reported significant declines in stress, negative affect, rumination, state and trait anxiety, and significant increases in positive affect and self-compassion. Further, MBSR participation was associated with increases in mindfulness, and this enhancement was related to several of the beneficial effects of MBSR participation. Discussion highlights the potential for future research addressing the mental health needs of therapists and therapist trainees.",
"title": ""
},
{
"docid": "c32b7f497450d92634ea097bbb062178",
"text": "This work addresses fine-grained image classification. Our work is based on the hypothesis that when dealing with subtle differences among object classes it is critical to identify and only account for a few informative image parts, as the remaining image context may not only be uninformative but may also hurt recognition. This motivates us to formulate our problem as a sequential search for informative parts over a deep feature map produced by a deep Convolutional Neural Network (CNN). A state of this search is a set of proposal bounding boxes in the image, whose informativeness is evaluated by the heuristic function (H), and used for generating new candidate states by the successor function (S). The two functions are unified via a Long Short-Term Memory network (LSTM) into a new deep recurrent architecture, called HSnet. Thus, HSnet (i) generates proposals of informative image parts and (ii) fuses all proposals toward final fine-grained recognition. We specify both supervised and weakly supervised training of HSnet depending on the availability of object part annotations. Evaluation on the benchmark Caltech-UCSD Birds 200-2011 and Cars-196 datasets demonstrate our competitive performance relative to the state of the art.",
"title": ""
},
{
"docid": "4d857311f86baca70700bb78c8771f22",
"text": "Randomization is a key element in sequential and distributed computing. Reasoning about randomized algorithms is highly non-trivial. In the 1980s, this initiated first proof methods, logics, and model-checking algorithms. The field of probabilistic verification has developed considerably since then. This paper surveys the algorithmic verification of probabilistic models, in particular probabilistic model checking. We provide an informal account of the main models, the underlying algorithms, applications from reliability and dependability analysis---and beyond---and describe recent developments towards automated parameter synthesis.",
"title": ""
},
{
"docid": "626470bd5182dd2a6d4e8a09b31731df",
"text": "In this paper, we present a semi-supervised method for automatic speech act recognition in email and forums. The major challenge of this task is due to lack of labeled data in these two genres. Our method leverages labeled data in the SwitchboardDAMSL and the Meeting Recorder Dialog Act database and applies simple domain adaptation techniques over a large amount of unlabeled email and forum data to address this problem. Our method uses automatically extracted features such as phrases and dependency trees, called subtree features, for semi-supervised learning. Empirical results demonstrate that our model is effective in email and forum speech act recognition.",
"title": ""
},
{
"docid": "1d606f39d429c5f344d5d3bc6810f2f9",
"text": "Cryptography is increasingly applied to the E-commerce world, especially to the untraceable payment system and the electronic voting system. Protocols for these systems strongly require the anonymous digital signature property, and thus a blind signature strategy is the answer to it. Chaum stated that every blind signature protocol should hold two fundamental properties, blindness and intractableness. All blind signature schemes proposed previously almost are based on the integer factorization problems, discrete logarithm problems, or the quadratic residues, which are shown by Lee et al. that none of the schemes is able to meet the two fundamental properties above. Therefore, an ECC-based blind signature scheme that possesses both the above properties is proposed in this paper.",
"title": ""
},
{
"docid": "a79f9ad24c4f047d8ace297b681ccf0a",
"text": "BACKGROUND\nLe Fort III distraction advances the Apert midface but leaves the central concavity and vertical compression untreated. The authors propose that Le Fort II distraction and simultaneous zygomatic repositioning as a combined procedure can move the central midface and lateral orbits in independent vectors in order to improve the facial deformity. The purpose of this study was to determine whether this segmental movement results in more normal facial proportions than Le Fort III distraction.\n\n\nMETHODS\nComputed tomographic scan analyses were performed before and after distraction in patients undergoing Le Fort III distraction (n = 5) and Le Fort II distraction with simultaneous zygomatic repositioning (n = 4). The calculated axial facial ratios and vertical facial ratios relative to the skull base were compared to those of unoperated Crouzon (n = 5) and normal (n = 6) controls.\n\n\nRESULTS\nWith Le Fort III distraction, facial ratios did not change with surgery and remained lower (p < 0.01; paired t test comparison) than normal and Crouzon controls. Although the face was advanced, its shape remained abnormal. With the Le Fort II segmental movement procedure, the central face advanced and lengthened more than the lateral orbit. This differential movement changed the abnormal facial ratios that were present before surgery into ratios that were not significantly different from normal controls (p > 0.05).\n\n\nCONCLUSION\nCompared with Le Fort III distraction, Le Fort II distraction with simultaneous zygomatic repositioning normalizes the position and the shape of the Apert face.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, III.",
"title": ""
},
{
"docid": "baa3d41ba1970125301b0fdd9380a966",
"text": "This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International Society for Scientometrics and Informetrics (ISSI) conferences. Findings show that this weighted PageRank algorithm provides reliable results in measuring author impact.",
"title": ""
},
{
"docid": "e2807120a8a04a9c5f5f221e413aec4d",
"text": "Background A military aircraft in a hostile environment may need to use radar jamming in order to avoid being detected or engaged by the enemy. Effective jamming can require knowledge of the number and type of enemy radars; however, the radar receiver on the aircraft will observe a single stream of pulses from all radar emitters combined. It is advantageous to separate this collection of pulses into individual streams each corresponding to a particular emitter in the environment; this process is known as pulse deinterleaving. Pulse deinterleaving is critical for effective electronic warfare (EW) signal processing such as electronic attack (EA) and electronic protection (EP) because it not only aids in the identification of enemy radars but also permits the intelligent allocation of processing resources.",
"title": ""
},
{
"docid": "c6c9643816533237a29dd93fd420018f",
"text": "We present an algorithm for finding a meaningful vertex-to-vertex correspondence between two 3D shapes given as triangle meshes. Our algorithm operates on embeddings of the two shapes in the spectral domain so as to normalize them with respect to uniform scaling and rigid-body transformation. Invariance to shape bending is achieved by relying on geodesic point proximities on a mesh to capture its shape. To deal with stretching, we propose to use non-rigid alignment via thin-plate splines in the spectral domain. This is combined with a refinement step based on the geodesic proximities to improve dense correspondence. We show empirically that our algorithm outperforms previous spectral methods, as well as schemes that compute correspondence in the spatial domain via non-rigid iterative closest points or the use of local shape descriptors, e.g., 3D shape context",
"title": ""
},
{
"docid": "b866fc215dbae6538e998b249563e78d",
"text": "The term `heavy metal' is, in this context, imprecise. It should probably be reserved for those elements with an atomic mass of 200 or greater [e.g., mercury (200), thallium (204), lead (207), bismuth (209) and the thorium series]. In practice, the term has come to embrace any metal, exposure to which is clinically undesirable and which constitutes a potential hazard. Our intention in this review is to provide an overview of some general concepts of metal toxicology and to discuss in detail metals of particular importance, namely, cadmium, lead, mercury, thallium, bismuth, arsenic, antimony and tin. Poisoning from individual metals is rare in the UK, even when there is a known risk of exposure. Table 1 shows that during 1991±92 only 1 ́1% of male lead workers in the UK and 5 ́5% of female workers exceeded the legal limits for blood lead concentration. Collectively, however, poisoning with metals forms an important aspect of toxicology because of their widespread use and availability. Furthermore, hitherto unrecognized hazards and accidents continue to be described. The investigation of metal poisoning forms a distinct specialist area, since most metals are usually measured using atomic absorption techniques. Analyses require considerable expertise and meticulous attention to detail to ensure valid results. Different analytical performance standards may be required of assays used for environmental and occupational monitoring, or for solely toxicological purposes. Because of the high capital cost of good quality instruments, the relatively small numbers of tests required and the variety of metals, it is more cost-effective if such testing is carried out in regional, national or other centres having the necessary experience. Nevertheless, patients are frequently cared for locally, and clinical biochemists play a crucial role in maintaining a high index of suspicion and liaising with clinical colleagues to ensure the provision of correct samples for analysis and timely advice.",
"title": ""
}
] | scidocsrr |
2e33c79d5d3f826ab8c431bf0eba277c | Selfishness, Altruism and Message Spreading in Mobile Social Networks | [
{
"docid": "2af231da02dbfb4db5c44c386870142c",
"text": "Mobile ad hoc routing protocols allow nodes with wireless adaptors to communicate with one another without any pre-existing network infrastructure. Existing ad hoc routing protocols, while robust to rapidly changing network topology, assume the presence of a connected path from source to destination. Given power limitations, the advent of short-range wireless networks, and the wide physical conditions over which ad hoc networks must be deployed, in some scenarios it is likely that this assumption is invalid. In this work, we develop techniques to deliver messages in the case where there is never a connected path from source to destination or when a network partition exists at the time a message is originated. To this end, we introduce Epidemic Routing, where random pair-wise exchanges of messages among mobile hosts ensure eventual message delivery. The goals of Epidemic Routing are to: i) maximize message delivery rate, ii) minimize message latency, and iii) minimize the total resources consumed in message delivery. Through an implementation in the Monarch simulator, we show that Epidemic Routing achieves eventual delivery of 100% of messages with reasonable aggregate resource consumption in a number of interesting scenarios.",
"title": ""
}
] | [
{
"docid": "f8275a80021312a58c9cd52bbcd4c431",
"text": "Mobile online social networks (OSNs) are emerging as the popular mainstream platform for information and content sharing among people. In order to provide Quality of Experience (QoE) support for mobile OSN services, in this paper we propose a socially-driven learning-based framework, namely Spice, for media content prefetching to reduce the access delay and enhance mobile user's satisfaction. Through a large-scale data-driven analysis over real-life mobile Twitter traces from over 17,000 users during a period of five months, we reveal that the social friendship has a great impact on user's media content click behavior. To capture this effect, we conduct social friendship clustering over the set of user's friends, and then develop a cluster-based Latent Bias Model for socially-driven learning-based prefetching prediction. We then propose a usage-adaptive prefetching scheduling scheme by taking into account that different users may possess heterogeneous patterns in the mobile OSN app usage. We comprehensively evaluate the performance of Spice framework using trace-driven emulations on smartphones. Evaluation results corroborate that the Spice can achieve superior performance, with an average 67.2% access delay reduction at the low cost of cellular data and energy consumption. Furthermore, by enabling users to offload their machine learning procedures to a cloud server, our design can achieve speed-up of a factor of 1000 over the local data training execution on smartphones.",
"title": ""
},
{
"docid": "a29ee41e8f46d1feebeb67886b657f70",
"text": "Feeling emotion is a critical characteristic to distinguish people from machines. Among all the multi-modal resources for emotion detection, textual datasets are those containing the least additional information in addition to semantics, and hence are adopted widely for testing the developed systems. However, most of the textual emotional datasets consist of emotion labels of only individual words, sentences or documents, which makes it challenging to discuss the contextual flow of emotions. In this paper, we introduce EmotionLines, the first dataset with emotions labeling on all utterances in each dialogue only based on their textual content. Dialogues in EmotionLines are collected from Friends TV scripts and private Facebook messenger dialogues. Then one of seven emotions, six Ekman’s basic emotions plus the neutral emotion, is labeled on each utterance by 5 Amazon MTurkers. A total of 29,245 utterances from 2,000 dialogues are labeled in EmotionLines. We also provide several strong baselines for emotion detection models on EmotionLines in this paper.",
"title": ""
},
{
"docid": "a3fafe73615c434375cd3f35323c939e",
"text": "In this paper, Magnetic Resonance Images,T2 weighte d modality , have been pre-processed by bilateral filter to reduce th e noise and maintaining edges among the different tissues. Four different t echniques with morphological operations have been applied to extra c the tumor region. These were: Gray level stretching and Sobel edge de tection, K-Means Clustering technique based on location and intensit y, Fuzzy C-Means Clustering, and An Adapted K-Means clustering techn ique and Fuzzy CMeans technique. The area of the extracted tumor re gions has been calculated. The present work showed that the four i mplemented techniques can successfully detect and extract the brain tumor and thereby help doctors in identifying tumor's size and region.",
"title": ""
},
{
"docid": "3205184f918eab105ee17bfb12277696",
"text": "The Trilobita were characterized by a cephalic region in which the biomineralized exoskeleton showed relatively high morphological differentiation among a taxonomically stable set of well defined segments, and an ontogenetically and taxonomically dynamic trunk region in which both exoskeletal segments and ventral appendages were similar in overall form. Ventral appendages were homonomous biramous limbs throughout both the cephalon and trunk, except for the most anterior appendage pair that was antenniform, preoral, and uniramous, and a posteriormost pair of antenniform cerci, known only in one species. In some clades trunk exoskeletal segments were divided into two batches. In some, but not all, of these clades the boundary between batches coincided with the boundary between the thorax and the adult pygidium. The repeated differentiation of the trunk into two batches of segments from the homonomous trunk condition indicates an evolutionary trend in aspects of body patterning regulation that was achieved independently in several trilobite clades. The phylogenetic placement of trilobites and congruence of broad patterns of tagmosis with those seen among extant arthropods suggest that the expression domains of trilobite cephalic Hox genes may have overlapped in a manner similar to that seen among extant arachnates. This, coupled with the fact that trilobites likely possessed ten Hox genes, presents one alternative to a recent model in which Hox gene distribution in trilobites was equated to eight putative divisions of the trilobite body plan.",
"title": ""
},
{
"docid": "3ebe9aecd4c84e9b9ed0837bd294b4ed",
"text": "A bond graph model of a hybrid electric vehicle (HEV) powertrain test cell is proposed. The test cell consists of a motor/generator coupled to a HEV powertrain and powered by a bidirectional power converter. Programmable loading conditions, including positive and negative resistive and inertial loads of any magnitude are modeled, avoiding the use of mechanical inertial loads involved in conventional test cells. The dynamics and control equations of the test cell are derived directly from the bond graph models. The modeling and simulation results of the dynamics of the test cell are validated through experiments carried out on a scaled-down system.",
"title": ""
},
{
"docid": "6620fdf695d7e89a703fc17e007de7e2",
"text": "This paper presents the machine translation system known as TransLI (Translation of Legal Information) developed by the authors for automatic translation of Canadian Court judgments from English to French and from French to English. Normally, a certified translation of a legal judgment takes several months to complete. The authors attempted to shorten this time significantly using a unique statistical machine translation system which has attracted the attention of the federal courts in Canada for its accuracy and speed. This paper also describes the results of a human evaluation of the output of the system in the context of a pilot project in collaboration with the federal courts of Canada. 1. Context of the work NLP Technologies is an enterprise devoted to the use of advanced information technologies in the judicial domain. Its main focus is DecisionExpressTM a service utilizing automatic summarization technology with respect to legal information. DecisionExpress is a weekly bulletin of recent decisions of Canadian federal courts and tribunals. It is an tool that processes judicial decisions automatically and makes the daily information used by jurists more accessible by presenting the legal record of the proceedings of federal courts in Canada as a table-style summary (Farzindar et al., 2004, Chieze et al. 2008). NLP Technologies in collaboration with researchers from the RALI at Université de Montréal have developed TransLI to translate automatically the judgments from the Canadian Federal Courts. As it happens, for the new weekly published judgments, 75% of decisions are originally written in English 1 http://www.nlptechnologies.ca 2 http://rali.iro.umontreal.ca Machine Translation of Legal Information and Its Evaluation 2 and 25% in French. By law, the Federal Courts have to provide a translation in the other official language of Canada. The legal domain has continuous publishing and translation cycles, large volumes of digital content and growing demand to distribute more multilingual information. It is necessary to handle a high volume of translations quickly. Currently, a certified translation of a legal judgment takes several months to complete. Afterwards, there is a significant delay between the publication of a judgment in the original language and the availability of its human translation into the other official language. Initially, the goal of this work was to allow the court, during the few months when the official translation is pending, to publish automatically translated judgments and summaries with the appropriate caveat. Once the official translation would become available, the Court would replace the machine translations by the official ones. However, the high quality of the machine translation system obtained, developed and trained specifically on the Federal Courts corpora, opens further opportunities which are currently being investigated: machine translations could be considered as first drafts for official translations that would only need to be revised before their publication. This procedure would thus reduce the delay between the publication of the decision in the original language and its official translation. It would also provide opportunities for saving on the cost of translation. We evaluated the French and English output and performed a more detailed analysis of the modifications made to the translations by the evaluators in the context of a pilot study to be conducted in cooperation with the Federal Courts. 
This paper describes our statistical machine translation system, whose performance has been assessed with the usual automatic evaluation metrics. We also present the results of a manual evaluation of the translations and the result of a completed translation pilot project in a real context of publication of the federal courts of Canada. To our knowledge, this is the first attempt to build a large-scale translation system of complete judgments for eventual publication.",
"title": ""
},
{
"docid": "a727d23d78f794ce437351c5f603195f",
"text": "We initiate the study of secure multi-party computation (MPC) in a server-aided setting, where the parties have access to a single server that (1) does not have any input to the computation; (2) does not receive any output from the computation; but (3) has a vast (but bounded) amount of computational resources. In this setting, we are concerned with designing protocols that minimize the computation of the parties at the expense of the server. We develop new definitions of security for this server-aided setting that generalize the standard simulation-based definitions for MPC and allow us to formally capture the existence of dishonest but non-colluding participants. This requires us to introduce a formal characterization of non-colluding adversaries that may be of independent interest. We then design general and special-purpose server-aided MPC protocols that are more efficient (in terms of computation and communication) for the parties than the alternative of running a standard MPC protocol (i.e., without the server). Our main general-purpose protocol provides security when there is at least one honest party with input. We also construct a new and efficient server-aided protocol for private set intersection and give a general transformation from any secure delegated computation scheme to a server-aided two-party protocol. ∗Microsoft Research. [email protected]. †University of Calgary. [email protected]. Work done while visiting Microsoft Research. ‡Columbia University. [email protected]. Work done as an intern at Microsoft Research.",
"title": ""
},
{
"docid": "8fcf31f2de602cf10f769c41acccc221",
"text": "This book contains materials that come out of the Artificial General Intelligence Research Institute (AGIRI) Workshop, held in May 20-21, 2006 at Washington DC. The theme of the workshop is “Transitioning from Narrow AI to Artificial General Intelligence.” In this introductory chapter, we will clarify the notion of “Artificial General Intelligence”, briefly survey the past and present situation of the field, analyze and refute some common objections and doubts regarding this area of research, and discuss what we believe needs to be addressed by the field as a whole in the near future. Finally, we will briefly summarize the contents of the other chapters in this collection.",
"title": ""
},
{
"docid": "8b947250873921478dd7798c47314979",
"text": "In this letter, an ultra-wideband (UWB) bandpass filter (BPF) using stepped-impedance stub-loaded resonator (SISLR) is presented. Characterized by theoretical analysis, the proposed SISLR is found to have the advantage of providing more degrees of freedom to adjust the resonant frequencies. Besides, two transmission zeros can be created at both lower and upper sides of the passband. Benefiting from these features, a UWB BPF is then investigated by incorporating this SISLR and two aperture-backed interdigital coupled-lines. Finally, this filter is built and tested. The simulated and measured results are in good agreement with each other, showing good wideband filtering performance with sharp rejection skirts outside the passband.",
"title": ""
},
{
"docid": "b318cfcbe82314cc7fa898f0816dbab8",
"text": "Flow experience is often considered as an important standard of ideal user experience (UX). Till now, flow is mainly measured via self-report questionnaires, which cannot evaluate flow immediately and objectively. In this paper, we constructed a physiological evaluation model to evaluate flow in virtual reality (VR) game. The evaluation model consists of five first-level indicators and their respective second-level indicators. Then, we conducted an empirical experiment to test the effectiveness of partial indicators to predict flow experience. Most results supported the model and revealed that heart rate, interbeat interval, heart rate variability (HRV), low-frequency HRV (LF-HRV), high-frequency HRV (HF-HRV), and respiratory rate are all effective indicators in predicting flow experience. Further research should be conducted to improve the evaluation model and conclude practical implications in UX and VR game design.",
"title": ""
},
{
"docid": "48a3c9d1f41f9b7ed28f8ef46b5c4533",
"text": "We introduce two new methods of deriving the classical PCA in the framework of minimizing the mean square error upon performing a lower-dimensional approximation of the data. These methods are based on two forms of the mean square error function. One of the novelties of the presented methods is that the commonly employed process of subtraction of the mean of the data becomes part of the solution of the optimization problem and not a pre-analysis heuristic. We also derive the optimal basis and the minimum error of approximation in this framework and demonstrate the elegance of our solution in comparison with a recent solution in the framework.",
"title": ""
},
{
"docid": "2f0e767a5d4524ed2fed6b43d4b22a70",
"text": "The cerebellum is involved in learning and memory of sensory motor skills. However, the way this process takes place in local microcircuits is still unclear. The initial proposal, casted into the Motor Learning Theory, suggested that learning had to occur at the parallel fiber–Purkinje cell synapse under supervision of climbing fibers. However, the uniqueness of this mechanism has been questioned, and multiple forms of long-term plasticity have been revealed at various locations in the cerebellar circuit, including synapses and neurons in the granular layer, molecular layer and deep-cerebellar nuclei. At present, more than 15 forms of plasticity have been reported. There has been a long debate on which plasticity is more relevant to specific aspects of learning, but this question turned out to be hard to answer using physiological analysis alone. Recent experiments and models making use of closed-loop robotic simulations are revealing a radically new view: one single form of plasticity is insufficient, while altogether, the different forms of plasticity can explain the multiplicity of properties characterizing cerebellar learning. These include multi-rate acquisition and extinction, reversibility, self-scalability, and generalization. Moreover, when the circuit embeds multiple forms of plasticity, it can easily cope with multiple behaviors endowing therefore the cerebellum with the properties needed to operate as an effective generalized forward controller.",
"title": ""
},
{
"docid": "7120d5acf58f8ec623d65b4f41bef97d",
"text": "BACKGROUND\nThis study analyzes the problems and consequences associated with prolonged use of laparoscopic instruments (dissector and needle holder) and equipments.\n\n\nMETHODS\nA total of 390 questionnaires were sent to the laparoscopic surgeons of the Spanish Health System. Questions were structured on the basis of 4 categories: demographics, assessment of laparoscopic dissector, assessment of needle holder, and other informations.\n\n\nRESULTS\nA response rate of 30.26% was obtained. Among them, handle shape of laparoscopic instruments was identified as the main element that needed to be improved. Furthermore, the type of instrument, electrocautery pedals and height of the operating table were identified as major causes of forced positions during the use of both surgical instruments.\n\n\nCONCLUSIONS\nAs far as we know, this is the largest Spanish survey conducted on this topic. From this survey, some ergonomic drawbacks have been identified in: (a) the instruments' design, (b) the operating tables, and (c) the posture of the surgeons.",
"title": ""
},
{
"docid": "88e72e039de541b00722901a8eff7d19",
"text": "When building agents and synthetic characters, and in order to achieve believability, we must consider the emotional relations established between users and characters, that is, we must consider the issue of \"empathy\". Defined in broad terms as \"An observer reacting emotionally because he perceives that another is experiencing or about to experience an emotion\", empathy is an important element to consider in the creation of relations between humans and agents. In this paper we will focus on the role of empathy in the construction of synthetic characters, providing some requirements for such construction and illustrating the presented concepts with a specific system called FearNot!. FearNot! was developed to address the difficult and often devastating problem of bullying in schools. By using role playing and empathic synthetic characters in a 3D environment, FearNot! allows children from 8 to 12 to experience a virtual scenario where they can witness (in a third-person perspective) bullying situations. To build empathy into FearNot! we have considered the following components: agentýs architecture; the charactersý embodiment and emotional expression; proximity with the user and emotionally charged situations.We will describe how these were implemented in FearNot! and report on the preliminary results we have with it.",
"title": ""
},
{
"docid": "b7bf7d430e4132a4d320df3a155ee74c",
"text": "We present Wave menus, a variant of multi-stroke marking menus designed for improving the novice mode of marking while preserving their efficiency in the expert mode of marking. Focusing on the novice mode, a criteria-based analysis of existing marking menus motivates the design of Wave menus. Moreover a user experiment is presented that compares four hierarchical marking menus in novice mode. Results show that Wave and compound-stroke menus are significantly faster and more accurate than multi-stroke menus in novice mode, while it has been shown that in expert mode the multi-stroke menus and therefore the Wave menus outperform the compound-stroke menus. Wave menus also require significantly less screen space than compound-stroke menus. As a conclusion, Wave menus offer the best performance for both novice and expert modes in comparison with existing multi-level marking menus, while requiring less screen space than compound-stroke menus.",
"title": ""
},
{
"docid": "732aa9623301d4d3cc6fc9d15c6836fe",
"text": "Growing network traffic brings huge pressure to the server cluster. Using load balancing technology in server cluster becomes the choice of most enterprises. Because of many limitations, the development of the traditional load balancing technology has encountered bottlenecks. This has forced companies to find new load balancing method. Software Defined Network (SDN) provides a good method to solve the load balancing problem. In this paper, we implemented two load balancing algorithm that based on the latest SDN network architecture. The first one is a static scheduling algorithm and the second is a dynamic scheduling algorithm. Our experiments show that the performance of the dynamic algorithm is better than the static algorithm.",
"title": ""
},
{
"docid": "f5c60102070450489f7301d089d6fbd4",
"text": "This study presents a new approach to solve the well-known power system Economic Load Dispatch problem (ED) using a hybrid algorithm consisting of Genetic Algorithm (GA), Pattern Search (PS) and Sequential Quadratic Programming (SQP). GA is the main optimizer of this algorithm, whereas PS and SQP are used to fine-tune the results obtained from the GA, thereby increasing solution confidence. To test the effectiveness of this approach it was applied to various test systems. Furthermore, the convergence characteristics and robustness of the proposed method have been explored through comparisons with results reported in literature. The outcome is very encouraging and suggests that the hybrid GA-PS-SQP algorithm is very effective in solving the power system economic load dispatch problem.",
"title": ""
},
{
"docid": "a1306f761e45fdd56ae91d1b48909d74",
"text": "We propose a graphical model for representing networks of stochastic processes, the minimal generative model graph. It is based on reduced factorizations of the joint distribution over time. We show that under appropriate conditions, it is unique and consistent with another type of graphical model, the directed information graph, which is based on a generalization of Granger causality. We demonstrate how directed information quantifies Granger causality in a particular sequential prediction setting. We also develop efficient methods to estimate the topological structure from data that obviate estimating the joint statistics. One algorithm assumes upper bounds on the degrees and uses the minimal dimension statistics necessary. In the event that the upper bounds are not valid, the resulting graph is nonetheless an optimal approximation in terms of Kullback-Leibler (KL) divergence. Another algorithm uses near-minimal dimension statistics when no bounds are known, but the distribution satisfies a certain criterion. Analogous to how structure learning algorithms for undirected graphical models use mutual information estimates, these algorithms use directed information estimates. We characterize the sample-complexity of two plug-in directed information estimators and obtain confidence intervals. For the setting when point estimates are unreliable, we propose an algorithm that uses confidence intervals to identify the best approximation that is robust to estimation error. Last, we demonstrate the effectiveness of the proposed algorithms through the analysis of both synthetic data and real data from the Twitter network. In the latter case, we identify which news sources influence users in the network by merely analyzing tweet times.",
"title": ""
},
{
"docid": "c80dbfc2e1f676a7ffe4a6a4f7460d36",
"text": "Coarse-grained semantic categories such as supersenses have proven useful for a range of downstream tasks such as question answering or machine translation. To date, no effort has been put into integrating the supersenses into distributional word representations. We present a novel joint embedding model of words and supersenses, providing insights into the relationship between words and supersenses in the same vector space. Using these embeddings in a deep neural network model, we demonstrate that the supersense enrichment leads to a significant improvement in a range of downstream classification tasks.",
"title": ""
},
{
"docid": "7363b433f17e1f3dfecc805b58a8706b",
"text": "Mobile Edge Computing (MEC) consists of deploying computing resources (CPU, storage) at the edge of mobile networks; typically near or with eNodeBs. Besides easing the deployment of applications and services requiring low access to the remote server, such as Virtual Reality and Vehicular IoT, MEC will enable the development of context-aware and context-optimized applications, thanks to the Radio API (e.g. information on user channel quality) exposed by eNodeBs. Although ETSI is defining the architecture specifications, solutions to integrate MEC to the current 3GPP architecture are still open. In this paper, we fill this gap by proposing and implementing a Software Defined Networking (SDN)-based MEC framework, compliant with both ETSI and 3GPP architectures. It provides the required data-plane flexibility and programmability, which can on-the-fly improve the latency as a function of the network deployment and conditions. To illustrate the benefit of using SDN concept for the MEC framework, we present the details of software architecture as well as performance evaluations.",
"title": ""
}
] | scidocsrr |
c0224b859e856875fef59a0c77f04b2f | Map-Reduce for Machine Learning on Multicore | [
{
"docid": "6b038c702a3636664a2f7d4e3dcde4ff",
"text": "This article is reprinted from the Internaional Electron Devices Meeting (1975). It discusses the complexity of integrated circuits, identifies their manufacture, production, and deployment, and addresses trends to their future deployment.",
"title": ""
}
] | [
{
"docid": "b9e4a201050b379500e5e8a2bca81025",
"text": "On the basis of a longitudinal field study of domestic communication, we report some essential constituents of the user experience of awareness of others who are distant in space or time, i.e. presence-in-absence. We discuss presence-in-absence in terms of its social (Contact) and informational (Content) facets, and the circumstances of the experience (Context). The field evaluation of a prototype, 'The Cube', designed to support presence-in-absence, threw up issues in the interrelationships between contact, content and context; issues that the designers of similar social artifacts will need to address.",
"title": ""
},
{
"docid": "bc5a3cd619be11132ea39907f732bf4c",
"text": "A burgeoning interest in the intersection of neuroscience and architecture promises to offer biologically inspired insights into the design of spaces. The goal of such interdisciplinary approaches to architecture is to motivate construction of environments that would contribute to peoples' flourishing in behavior, health, and well-being. We suggest that this nascent field of neuroarchitecture is at a pivotal point in which neuroscience and architecture are poised to extend to a neuroscience of architecture. In such a research program, architectural experiences themselves are the target of neuroscientific inquiry. Here, we draw lessons from recent developments in neuroaesthetics to suggest how neuroarchitecture might mature into an experimental science. We review the extant literature and offer an initial framework from which to contextualize such research. Finally, we outline theoretical and technical challenges that lie ahead.",
"title": ""
},
{
"docid": "2a43e164e536600ee6ceaf6a9c1af1be",
"text": "Unsupervised paraphrase acquisition has been an active research field in recent years, but its effective coverage and performance have rarely been evaluated. We propose a generic paraphrase-based approach for Relation Extraction (RE), aiming at a dual goal: obtaining an applicative evaluation scheme for paraphrase acquisition and obtaining a generic and largely unsupervised configuration for RE. We analyze the potential of our approach and evaluate an implemented prototype of it using an RE dataset. Our findings reveal a high potential for unsupervised paraphrase acquisition. We also identify the need for novel robust models for matching paraphrases in texts, which should address syntactic complexity and variability.",
"title": ""
},
{
"docid": "611b985ae194f562e459dc78f7aafdc3",
"text": "In order to understand the formation and subsequent evolution of galaxies one must first distinguish between the two main morphological classes of massive systems: spirals and early-type systems. This paper introduces a project, Galaxy Zoo, which provides visual morphological classifications for nearly one million galaxies, extracted from the Sloan Digital Sky Survey (SDSS). This achievement was made possible by inviting the general public to visually inspect and classify these galaxies via the internet. The project has obtained more than 4 × 107 individual classifications made by ∼105 participants. We discuss the motivation and strategy for this project, and detail how the classifications were performed and processed. We find that Galaxy Zoo results are consistent with those for subsets of SDSS galaxies classified by professional astronomers, thus demonstrating that our data provide a robust morphological catalogue. Obtaining morphologies by direct visual inspection avoids introducing biases associated with proxies for morphology such as colour, concentration or structural parameters. In addition, this catalogue can be used to directly compare SDSS morphologies with older data sets. The colour–magnitude diagrams for each morphological class are shown, and we illustrate how these distributions differ from those inferred using colour alone as a proxy for",
"title": ""
},
{
"docid": "8d07f52f154f81ce9dedd7c5d7e3182d",
"text": "We present a 3D face reconstruction system that takes as input either one single view or several different views. Given a facial image, we first classify the facial pose into one of five predefined poses, then detect two anchor points that are then used to detect a set of predefined facial landmarks. Based on these initial steps, for a single view we apply a warping process using a generic 3D face model to build a 3D face. For multiple views, we apply sparse bundle adjustment to reconstruct 3D landmarks which are used to deform the generic 3D face model. Experimental results on the Color FERET and CMU multi-PIE databases confirm our framework is effective in creating realistic 3D face models that can be used in many computer vision applications, such as 3D face recognition at a distance.",
"title": ""
},
{
"docid": "ac96a4c1644dfbabc1dd02878c43c966",
"text": "A labeled text corpus made up of Turkish papers' titles, abstracts and keywords is collected. The corpus includes 35 number of different disciplines, and 200 documents per subject. This study presents the text corpus' collection and content. The classification performance of Term Frequcney - Inverse Document Frequency (TF-IDF) and topic probabilities of Latent Dirichlet Allocation (LDA) features are compared for the text corpus. The text corpus is shared as open source so that it could be used for natural language processing applications with academic purposes.",
"title": ""
},
{
"docid": "242e78ed606d13502ace6d5eae00b315",
"text": "Use of information technology management framework plays a major influence on organizational success. This article focuses on the field of Internet of Things (IoT) management. In this study, a number of risks in the field of IoT is investigated, then with review of a number of COBIT5 risk management schemes, some associated strategies, objectives and roles are provided. According to the in-depth studies of this area it is expected that using the best practices of COBIT5 can be very effective, while the use of this standard considerably improve some criteria such as performance, cost and time. Finally, the paper proposes a framework which reflects the best practices and achievements in the field of IoT risk management.",
"title": ""
},
{
"docid": "e6c7d1db1e1cfaab5fdba7dd1146bcd2",
"text": "We define the object detection from imagery problem as estimating a very large but extremely sparse bounding box dependent probability distribution. Subsequently we identify a sparse distribution estimation scheme, Directed Sparse Sampling, and employ it in a single end-to-end CNN based detection model. This methodology extends and formalizes previous state-of-the-art detection models with an additional emphasis on high evaluation rates and reduced manual engineering. We introduce two novelties, a corner based region-of-interest estimator and a deconvolution based CNN model. The resulting model is scene adaptive, does not require manually defined reference bounding boxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and Pascal VOC 2012 with real-time evaluation rates. Further analysis suggests our model performs particularly well when finegrained object localization is desirable. We argue that this advantage stems from the significantly larger set of available regions-of-interest relative to other methods. Source-code is available from: https://github.com/lachlants/denet",
"title": ""
},
{
"docid": "77d80da2b0cd3e8598f9c677fc8827a9",
"text": "In this report, our approach to tackling the task of ActivityNet 2018 Kinetics-600 challenge is described in detail. Though spatial-temporal modelling methods, which adopt either such end-to-end framework as I3D [1] or two-stage frameworks (i.e., CNN+RNN), have been proposed in existing state-of-the-arts for this task, video modelling is far from being well solved. In this challenge, we propose spatial-temporal network (StNet) for better joint spatial-temporal modelling and comprehensively video understanding. Besides, given that multimodal information is contained in video source, we manage to integrate both early-fusion and later-fusion strategy of multi-modal information via our proposed improved temporal Xception network (iTXN) for video understanding. Our StNet RGB single model achieves 78.99% top-1 precision in the Kinetics-600 validation set and that of our improved temporal Xception network which integrates RGB, flow and audio modalities is up to 82.35%. After model ensemble, we achieve top-1 precision as high as 85.0% on the validation set and rank No.1 among all submissions.",
"title": ""
},
{
"docid": "e61a0ba24db737d42a730d5738583ffa",
"text": "We present a logical formalism for expressing properties of continuous time Markov chains. The semantics for such properties arise as a natural extension of previous work on discrete time Markov chains to continuous time. The major result is that the veriication problem is decidable; this is shown using results in algebraic and transcendental number theory.",
"title": ""
},
{
"docid": "c227cae0ec847a227945f1dec0b224d2",
"text": "We present a highly flexible and efficient software pipeline for programmable triangle voxelization. The pipeline, entirely written in CUDA, supports both fully conservative and thin voxelizations, multiple boolean, floating point, vector-typed render targets, user-defined vertex and fragment shaders, and a bucketing mode which can be used to generate 3D A-buffers containing the entire list of fragments belonging to each voxel. For maximum efficiency, voxelization is implemented as a sort-middle tile-based rasterizer, while the A-buffer mode, essentially performing 3D binning of triangles over uniform grids, uses a sort-last pipeline. Despite its major flexibility, the performance of our tile-based rasterizer is always competitive with and sometimes more than an order of magnitude superior to that of state-of-the-art binary voxelizers, whereas our bucketing system is up to 4 times faster than previous implementations. In both cases the results have been achieved through the use of careful load-balancing and high performance sorting primitives.",
"title": ""
},
{
"docid": "cf45599aeb22470b7922fc64394f114c",
"text": "This paper addresses the task of assigning multiple labels of fine-grained named entity (NE) types to Wikipedia articles. To address the sparseness of the input feature space, which is salient particularly in fine-grained type classification, we propose to learn article vectors (i.e. entity embeddings) from hypertext structure of Wikipedia using a Skip-gram model and incorporate them into the input feature set. To conduct large-scale practical experiments, we created a new dataset containing over 22,000 manually labeled instances. The results of our experiments show that our idea gained statistically significant improvements in classification results.",
"title": ""
},
{
"docid": "9d19d15b070faf62ecfa99d90e37b908",
"text": "Title of Thesis: SYMBOL-BASED CONTROL OF A BALL-ON-PLATE MECHANICAL SYSTEM Degree candidate: Phillip Yip Degree and year: Master of Science, 2004 Thesis directed by: Assistant Professor Dimitrios Hristu-Varsakelis Department of Mechanical Engineering Modern control systems often consist of networks of components that must share a common communication channel. Not all components of the networked control system can communicate with one another simultaneously at any given time. The “attention” that each component receives is an important factor that affects the system’s overall performance. An effective controller should ensure that sensors and actuators receive sufficient attention. This thesis describes a “ball-on-plate” dynamical system that includes a digital controller, which communicates with a pair of language-driven actuators, and an overhead camera. A control algorithm was developed to restrict the ball to a small region on the plate using a quantized set of language-based commands. The size of this containment region was analytically determined as a function of the communication constraints and other control system parameters. The effectiveness of the proposed control law was evaluated in experiments and mathematical simulations. SYMBOL-BASED CONTROL OF A BALL-ON-PLATE MECHANICAL SYSTEM by Phillip Yip Thesis submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Master of Science 2004 Advisory Commmittee: Assistant Professor Dimitrios Hristu-Varsakelis, Chair/Advisor Professor Balakumar Balachandran Professor Amr Baz c ©Copyright by Phillip T. Yip 2004 DEDICATION: To my family",
"title": ""
},
{
"docid": "3f40b9d1dfff00d8310f08df12096d63",
"text": "This paper explores a monetary policy model with habit formation for consumers, in which consumers’ utility depends in part on current consumption relative to past consumption. The empirical tests developed in the paper show that one can reject the hypothesis of no habit formation with tremendous confidence, largely because the habit formation model captures the gradual hump-shaped response of real spending to various shocks. The paper then embeds the habit consumption specification in a monetary policy model and finds that the responses of both spending and inflation to monetary policy actions are significantly improved by this modification. (JEL D12, E52, E43) Forthcoming, American Economic Review, June 2000. With the resurgence of interest in the effects of monetary policy on the macroeconomy, led by the work of the Christina D. and David H. Romer (1989), Ben S. Bernanke and Alan S. Blinder (1992), Lawrence J. Christiano, Martin S. Eichenbaum, and Charles L. Evans (1996), and others, the need for a structural model that could plausibly be used for monetary policy analysis has become evident. Of course, many extant models have been used for monetary policy analysis, but many of these are perceived as having critical shortcomings. First, some models do not incorporate explicit expectations behavior, so that changes in policy (or private) behavior could cause shifts in reduced-form parameters (i.e., the critique of Robert E. Lucas 1976). Others incorporate expectations, but derive key relationships from ad hoc behavioral assumptions, rather than from explicit optimizing problems for consumers and firms (Fuhrer and George R. Moore 1995b is an example). Explicit expectations and optimizing behavior are both desirable, other things equal, for a model of monetary analysis. First, analyzing potential improvements to monetary policy relative to historical policies requires a model that is stable across alternative policy regimes. This underlines the importance of explicit expectations formation. Second, the “optimal” in optimal monetary policy must ultimately refer to social welfare. Many have approximated social welfare with weighted averages of output and inflation variances, but one cannot know how good these approximations are without more explicit modeling of welfare. This implies that the model be closely tied to the underlying objectives of consumers and firms, hence the emphasis on optimization-based models. A critical test for whether a model reflects underlying objectives is its ability to accurately reflect the dominant dynamic interactions in the data. A number of recent papers (see, for example, Robert G. King and Alexander L. Wolman (1996), Bennett T. McCallum and Edward Nelson (1999a, 1999b); Julio R. Rotemberg and Michael Woodford (1997)) have developed models that incorporate explicit expectations, optimizing behavior, and frictions that allow monetary policy to have real effects. This paper continues in that line of research by documenting the empirical importance of a key feature of aggregate data: the “hump-shaped,” gradual response of spending and inflation to shocks. It then develops a monetary policy model that can capture this feature, as well as all of the features (e.g. the real effects of monetary policy, the persistence of inflation and output) embodied in earlier models. The key to the model’s success on the spending side is the inclusion of habit formation in the consumer’s utility function. This modification",
"title": ""
},
{
"docid": "f709802a6da7db7c71dfa67930111b04",
"text": "Generative adversarial networks (GANs) are a class of unsupervised machine learning algorithms that can produce realistic images from randomly-sampled vectors in a multi-dimensional space. Until recently, it was not possible to generate realistic high-resolution images using GANs, which has limited their applicability to medical images that contain biomarkers only detectable at native resolution. Progressive growing of GANs is an approach wherein an image generator is trained to initially synthesize low resolution synthetic images (8x8 pixels), which are then fed to a discriminator that distinguishes these synthetic images from real downsampled images. Additional convolutional layers are then iteratively introduced to produce images at twice the previous resolution until the desired resolution is reached. In this work, we demonstrate that this approach can produce realistic medical images in two different domains; fundus photographs exhibiting vascular pathology associated with retinopathy of prematurity (ROP), and multi-modal magnetic resonance images of glioma. We also show that fine-grained details associated with pathology, such as retinal vessels or tumor heterogeneity, can be preserved and enhanced by including segmentation maps as additional channels. We envisage several applications of the approach, including image augmentation and unsupervised classification of pathology.",
"title": ""
},
{
"docid": "81243e721527e74f0997d6aeb250cc23",
"text": "This paper compares the attributes of 36 slot, 33 slot and 12 slot brushless interior permanent magnet motor designs, each with an identical 10 pole interior magnet rotor. The aim of the paper is to quantify the trade-offs between alternative distributed and concentrated winding configurations taking into account aspects such as thermal performance, field weakening behaviour, acoustic noise, and efficiency. It is found that the concentrated 12 slot design gives the highest theoretical performance however significant rotor losses are found during testing and a large amount of acoustic noise and vibration is generated. The 33 slot design is found to have marginally better performance than the 36 slot but it also generates some unbalanced magnetic pull on the rotor which may lead to mechanical issues at higher speeds.",
"title": ""
},
{
"docid": "22c6ae71c708d5e2d1bc7e5e085c4842",
"text": "Head pose estimation is a fundamental task for face and social related research. Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they usually require frontal or mid-profile poses which preclude a large set of applications where such conditions can not be garanteed, like monitoring natural interactions from fixed sensors placed in the environment. A major reason is that 3DMM models usually only cover the face region. In this paper, we present a framework which combines the strengths of a 3DMM model fitted online with a prior-free reconstruction of a 3D full head model providing support for pose estimation from any viewpoint. In addition, we also proposes a symmetry regularizer for accurate 3DMM fitting under partial observations, and exploit visual tracking to address natural head dynamics with fast accelerations. Extensive experiments show that our method achieves state-of-the-art performance on the public BIWI dataset, as well as accurate and robust results on UbiPose, an annotated dataset of natural interactions that we make public and where adverse poses, occlusions or fast motions regularly occur.",
"title": ""
},
{
"docid": "31e8d60af8a1f9576d28c4c1e0a3db86",
"text": "Management of bulk sensor data is one of the challenging problems in the development of Internet of Things (IoT) applications. High volume of sensor data induces for optimal implementation of appropriate sensor data compression technique to deal with the problem of energy-efficient transmission, storage space optimization for tiny sensor devices, and cost-effective sensor analytics. The compression performance to realize significant gain in processing high volume sensor data cannot be attained by conventional lossy compression methods, which are less likely to exploit the intrinsic unique contextual characteristics of sensor data. In this paper, we propose SensCompr, a dynamic lossy compression method specific for sensor datasets and it is easily realizable with standard compression methods. Senscompr leverages robust statistical and information theoretic techniques and does not require specific physical modeling. It is an information-centric approach that exhaustively analyzes the inherent properties of sensor data for extracting the embedded useful information content and accordingly adapts the parameters of compression scheme to maximize compression gain while optimizing information loss. Senscompr is successfully applied to compress large sets of heterogeneous real sensor datasets like ECG, EEG, smart meter, accelerometer. To the best of our knowledge, for the first time 'sensor information content'-centric dynamic compression technique is proposed and implemented particularly for IoT-applications and this method is independent to sensor data types.",
"title": ""
},
{
"docid": "fbebf8aaeadbd4816a669bd0b23e0e2b",
"text": "In traditional cloud storage systems, attribute-based encryption (ABE) is regarded as an important technology for solving the problem of data privacy and fine-grained access control. However, in all ABE schemes, the private key generator has the ability to decrypt all data stored in the cloud server, which may bring serious problems such as key abuse and privacy data leakage. Meanwhile, the traditional cloud storage model runs in a centralized storage manner, so single point of failure may leads to the collapse of system. With the development of blockchain technology, decentralized storage mode has entered the public view. The decentralized storage approach can solve the problem of single point of failure in traditional cloud storage systems and enjoy a number of advantages over centralized storage, such as low price and high throughput. In this paper, we study the data storage and sharing scheme for decentralized storage systems and propose a framework that combines the decentralized storage system interplanetary file system, the Ethereum blockchain, and ABE technology. In this framework, the data owner has the ability to distribute secret key for data users and encrypt shared data by specifying access policy, and the scheme achieves fine-grained access control over data. At the same time, based on smart contract on the Ethereum blockchain, the keyword search function on the cipher text of the decentralized storage systems is implemented, which solves the problem that the cloud server may not return all of the results searched or return wrong results in the traditional cloud storage systems. Finally, we simulated the scheme in the Linux system and the Ethereum official test network Rinkeby, and the experimental results show that our scheme is feasible.",
"title": ""
},
{
"docid": "0342f89c44e0b86026953196de34b608",
"text": "In this paper, we introduce an approach for recognizing the absence of opposing arguments in persuasive essays. We model this task as a binary document classification and show that adversative transitions in combination with unigrams and syntactic production rules significantly outperform a challenging heuristic baseline. Our approach yields an accuracy of 75.6% and 84% of human performance in a persuasive essay corpus with various topics.",
"title": ""
}
] | scidocsrr |
1433b929b171815ba51b87a2f3459e9b | Automatic video description generation via LSTM with joint two-stream encoding | [
{
"docid": "4f58d355a60eb61b1c2ee71a457cf5fe",
"text": "Real-world videos often have complex dynamics, methods for generating open-domain video descriptions should be sensitive to temporal structure and allow both input (sequence of frames) and output (sequence of words) of variable length. To approach this problem we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames to a sequence of words in order to generate a description of the event in the video clip. Our model naturally is able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e. a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets (M-VAD and MPII-MD).",
"title": ""
},
{
"docid": "9734f4395c306763e6cc5bf13b0ca961",
"text": "Generating descriptions for videos has many applications including assisting blind people and human-robot interaction. The recent advances in image captioning as well as the release of large-scale movie description datasets such as MPII-MD [28] allow to study this task in more depth. Many of the proposed methods for image captioning rely on pre-trained object classifier CNNs and Long-Short Term Memory recurrent networks (LSTMs) for generating descriptions. While image description focuses on objects, we argue that it is important to distinguish verbs, objects, and places in the challenging setting of movie description. In this work we show how to learn robust visual classifiers from the weak annotations of the sentence descriptions. Based on these visual classifiers we learn how to generate a description using an LSTM. We explore different design choices to build and train the LSTM and achieve the best performance to date on the challenging MPII-MD dataset. We compare and analyze our approach and prior work along various dimensions to better understand the key challenges of the movie description task.",
"title": ""
},
{
"docid": "7ebff2391401cef25b27d510675e9acd",
"text": "We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann’s hierarchical clustering/aspect model, a translation model adapted from statistical machine translation (Brown et al.), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real c ©2003 Kobus Barnard, Pinar Duygulu, David Forsyth, Nando de Freitas, David Blei and Michael Jordan. BARNARD, DUYGULU, FORSYTH, DE FREITAS, BLEI AND JORDAN scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.",
"title": ""
},
{
"docid": "cd45dd9d63c85bb0b23ccb4a8814a159",
"text": "Parameter set learned using all WMT12 data (Callison-Burch et al., 2012): • 100,000 binary rankings covering 8 language directions. •Restrict scoring for all languages to exact and paraphrase matching. Parameters encode human preferences that generalize across languages: •Prefer recall over precision. •Prefer word choice over word order. •Prefer correct translations of content words over function words. •Prefer exact matches over paraphrase matches, while still giving significant credit to paraphrases. Visualization",
"title": ""
}
] | [
{
"docid": "af6b26efef62f3017a0eccc5d2ae3c33",
"text": "Universal, intelligent, and multifunctional devices controlling power distribution and measurement will become the enabling technology of the Smart Grid ICT. In this paper, we report on a novel automation architecture which supports distributed multiagent intelligence, interoperability, and configurability and enables efficient simulation of distributed automation systems. The solution is based on the combination of IEC 61850 object-based modeling and interoperable communication with IEC 61499 function block executable specification. Using the developed simulation environment, we demonstrate the possibility of multiagent control to achieve self-healing grid through collaborative fault location and power restoration.",
"title": ""
},
{
"docid": "4761b8398018e4a15a1d67a127dd657d",
"text": "The increasing popularity of social networks, such as Facebook and Orkut, has raised several privacy concerns. Traditional ways of safeguarding privacy of personal information by hiding sensitive attributes are no longer adequate. Research shows that probabilistic classification techniques can effectively infer such private information. The disclosed sensitive information of friends, group affiliations and even participation in activities, such as tagging and commenting, are considered background knowledge in this process. In this paper, we present a privacy protection tool, called Privometer, that measures the amount of sensitive information leakage in a user profile and suggests self-sanitization actions to regulate the amount of leakage. In contrast to previous research, where inference techniques use publicly available profile information, we consider an augmented model where a potentially malicious application installed in the user's friend profiles can access substantially more information. In our model, merely hiding the sensitive information is not sufficient to protect the user privacy. We present an implementation of Privometer in Facebook.",
"title": ""
},
{
"docid": "f8ecc204d84c239b9f3d544fd8d74a5c",
"text": "Storyline detection from news articles aims at summarizing events described under a certain news topic and revealing how those events evolve over time. It is a difficult task because it requires first the detection of events from news articles published in different time periods and then the construction of storylines by linking events into coherent news stories. Moreover, each storyline has different hierarchical structures which are dependent across epochs. Existing approaches often ignore the dependency of hierarchical structures in storyline generation. In this paper, we propose an unsupervised Bayesian model, called dynamic storyline detection model, to extract structured representations and evolution patterns of storylines. The proposed model is evaluated on a large scale news corpus. Experimental results show that our proposed model outperforms several baseline approaches.",
"title": ""
},
{
"docid": "d8b19c953cc66b6157b87da402dea98a",
"text": "In this paper we propose a new semi-supervised GAN architecture (ss-InfoGAN) for image synthesis that leverages information from few labels (as little as 0.22%, max. 10% of the dataset) to learn semantically meaningful and controllable data representations where latent variables correspond to label categories. The architecture builds on Information Maximizing Generative Adversarial Networks (InfoGAN) and is shown to learn both continuous and categorical codes and achieves higher quality of synthetic samples compared to fully unsupervised settings. Furthermore, we show that using small amounts of labeled data speeds-up training convergence. The architecture maintains the ability to disentangle latent variables for which no labels are available. Finally, we contribute an information-theoretic reasoning on how introducing semi-supervision increases mutual information between synthetic and real data.",
"title": ""
},
{
"docid": "285da3b342a3b3bd14fb14bca73914cd",
"text": "This paper presents expressions for the waveforms and design equations to satisfy the ZVS/ZDS conditions in the class-E power amplifier, taking into account the MOSFET gate-to-drain linear parasitic capacitance and the drain-to-source nonlinear parasitic capacitance. Expressions are given for power output capability and power conversion efficiency. Design examples are presented along with the PSpice-simulation and experimental waveforms at 2.3 W output power and 4 MHz operating frequency. It is shown from the expressions that the slope of the voltage across the MOSFET gate-to-drain parasitic capacitance during the switch-off state affects the switch-voltage waveform. Therefore, it is necessary to consider the MOSFET gate-to-drain capacitance for achieving the class-E ZVS/ZDS conditions. As a result, the power output capability and the power conversion efficiency are also affected by the MOSFET gate-to-drain capacitance. The waveforms obtained from PSpice simulations and circuit experiments showed the quantitative agreements with the theoretical predictions, which verify the expressions given in this paper.",
"title": ""
},
{
"docid": "175551435f1a4c73110b79e01306412f",
"text": "The development of MEMS actuators is rapidly evolving and continuously new progress in terms of efficiency, power and force output is reported. Pneumatic and hydraulic are an interesting class of microactuators that are easily overlooked. Despite the 20 years of research, and hundreds of publications on this topic, these actuators are only popular in microfluidic systems. In other MEMS applications, pneumatic and hydraulic actuators are rare in comparison with electrostatic, thermal or piezo-electric actuators. However, several studies have shown that hydraulic and pneumatic actuators deliver among the highest force and power densities at microscale. It is believed that this asset is particularly important in modern industrial and medical microsystems, and therefore, pneumatic and hydraulic actuators could start playing an increasingly important role. This paper shows an in-depth overview of the developments in this field ranging from the classic inflatable membrane actuators to more complex piston–cylinder and drag-based microdevices. (Some figures in this article are in colour only in the electronic version)",
"title": ""
},
{
"docid": "1675d99203da64eab8f9722b77edaab5",
"text": "Estimation of the semantic relatedness between biomedical concepts has utility for many informatics applications. Automated methods fall into two broad categories: methods based on distributional statistics drawn from text corpora, and methods based on the structure of existing knowledge resources. In the former case, taxonomic structure is disregarded. In the latter, semantically relevant empirical information is not considered. In this paper, we present a method that retrofits the context vector representation of MeSH terms by using additional linkage information from UMLS/MeSH hierarchy such that linked concepts have similar vector representations. We evaluated the method relative to previously published physician and coder’s ratings on sets of MeSH terms. Our experimental results demonstrate that the retrofitted word vector measures obtain a higher correlation with physician judgments. The results also demonstrate a clear improvement on the correlation with experts’ ratings from the retrofitted vector representation in comparison to the vector representation without retrofitting.",
"title": ""
},
{
"docid": "47e84cacb4db05a30bedfc0731dd2717",
"text": "Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127× family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies to support Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility to combine the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.",
"title": ""
},
{
"docid": "c78a4446be38b8fff2a949cba30a8b65",
"text": "This paper will derive the Black-Scholes pricing model of a European option by calculating the expected value of the option. We will assume that the stock price is log-normally distributed and that the universe is riskneutral. Then, using Ito’s Lemma, we will justify the use of the risk-neutral rate in these initial calculations. Finally, we will prove put-call parity in order to price European put options, and extend the concepts of the Black-Scholes formula to value an option with pricing barriers.",
"title": ""
},
{
"docid": "c5443c3bdfed74fd643e7b6c53a70ccc",
"text": "Background\nAbsorbable suture suspension (Silhouette InstaLift, Sinclair Pharma, Irvine, CA) is a novel, minimally invasive system that utilizes a specially manufactured synthetic suture to help address the issues of facial aging, while minimizing the risks associated with historic thread lifting modalities.\n\n\nObjectives\nThe purpose of the study was to assess the safety, efficacy, and patient satisfaction of the absorbable suture suspension system in regards to facial rejuvenation and midface volume enhancement.\n\n\nMethods\nThe first 100 treated patients who underwent absorbable suture suspension, by the senior author, were critically evaluated. Subjects completed anonymous surveys evaluating their experience with the new modality.\n\n\nResults\nSurvey results indicate that absorbable suture suspension is a tolerable (96%) and manageable (89%) treatment that improves age related changes (83%), which was found to be in concordance with our critical review.\n\n\nConclusions\nAbsorbable suture suspension generates high patient satisfaction by nonsurgically lifting mid and lower face and neck skin and has the potential to influence numerous facets of aesthetic medicine. The study provides a greater understanding concerning patient selection, suture trajectory, and possible adjuvant therapies.\n\n\nLevel of Evidence 4",
"title": ""
},
{
"docid": "246866da7509b2a8a2bda734a664de9c",
"text": "In this paper we present an approach of procedural game content generation that focuses on a gameplay loops formal language (GLFL). In fact, during an iterative game design process, game designers suggest modifications that often require high development costs. The proposed language and its operational semantic allow reducing the gap between game designers' requirement and game developers' needs, enhancing therefore video games productivity. Using gameplay loops concept for game content generation offers a low cost solution to adjust game challenges, objectives and rewards in video games. A pilot experiment have been conducted to study the impact of this approach on game development.",
"title": ""
},
{
"docid": "b776b58f6f78e77c81605133c6e4edce",
"text": "The phase response of noisy speech has largely been ignored, but recent research shows the importance of phase for perceptual speech quality. A few phase enhancement approaches have been developed. These systems, however, require a separate algorithm for enhancing the magnitude response. In this paper, we present a novel framework for performing monaural speech separation in the complex domain. We show that much structure is exhibited in the real and imaginary components of the short-time Fourier transform, making the complex domain appropriate for supervised estimation. Consequently, we define the complex ideal ratio mask (cIRM) that jointly enhances the magnitude and phase of noisy speech. We then employ a single deep neural network to estimate both the real and imaginary components of the cIRM. The evaluation results show that complex ratio masking yields high quality speech enhancement, and outperforms related methods that operate in the magnitude domain or separately enhance magnitude and phase.",
"title": ""
},
{
"docid": "4783e35e54d0c7f555015427cbdc011d",
"text": "The language of deaf and dumb which uses body parts to convey the message is known as sign language. Here, we are doing a study to convert speech into sign language used for conversation. In this area we have many developed method to recognize alphabets and numerals of ISL (Indian sign language). There are various approaches for recognition of ISL and we have done a comparative studies between them [1].",
"title": ""
},
{
"docid": "2ed36e909f52e139b5fd907436e80443",
"text": "It is difficult to draw sweeping general conclusions about the blastogenesis of CT, principally because so few thoroughly studied cases are reported. It is to be hoped that methods such as painstaking gross or electronic dissection will increase the number of well-documented cases. Nevertheless, the following conclusions can be proposed: 1. Most CT can be classified into a few main anatomic types (or paradigms), and there are also rare transitional types that show gradation between the main types. 2. Most CT have two full notochordal axes (Fig. 5); the ventral organs induced along these axes may be severely disorientated, malformed, or aplastic in the process of being arranged within one body. Reported anatomic types of CT represent those notochordal arrangements that are compatible with reasonably complete embryogenesis. New ventro-lateral axes are formed in many types of CT because of space constriction in the ventral zones. The new structures represent areas of \"mutual recognition and organization\" rather than \"fusion\" (Fig. 17). 3. Orientations of the pairs of axes in the embryonic disc can be deduced from the resulting anatomy. Except for dicephalus, the axes are not side by side. Notochords are usually \"end-on\" or ventro-ventral in orientation (Fig. 5). 4. A single gastrulation event or only partial duplicated gastrulation event seems to occur in dicephalics, despite a full double notochord. 5. The anatomy of diprosopus requires further clarification, particularly in cases with complete crania rather than anencephaly-equivalent. Diprosopus CT offer the best opportunity to study the effects of true forking of the notochord, if this actually occurs. 6. In cephalothoracopagus, thoracopagus, and ischiopagus, remarkably complete new body forms are constructed at right angles to the notochordal axes. The extent of expression of viscera in these types depends on the degree of noncongruity of their ventro-ventral axes (Figs. 4, 11, 15b). 7. Some organs and tissues fail to develop (interaction aplasia) because of conflicting migrational pathways or abnormal concentrations of morphogens in and around the neoaxes. 8. Where the cardiovascular system is discordantly expressed in dicephalus and thoracopagus twins, the right heart is more severely malformed, depending on the degree of interaction of the two embryonic septa transversa. 9. The septum transversum provides mesenchymal components to the heawrt and liver; the epithelial components (derived fro the foregut[s]) may vary in number from the number of mesenchymal septa transversa contributing to the liver of the CT embryo.(ABSTRACT TRUNCATED AT 400 WORDS)",
"title": ""
},
{
"docid": "33e45b66cca92f15270500c32a1c0b94",
"text": "We study a dataset of billions of program binary files that appeared on 100 million computers over the course of 12 months, discovering that 94% of these files were present on a single machine. Though malware polymorphism is one cause for the large number of singleton files, additional factors also contribute to polymorphism, given that the ratio of benign to malicious singleton files is 80:1. The huge number of benign singletons makes it challenging to reliably identify the minority of malicious singletons. We present a large-scale study of the properties, characteristics, and distribution of benign and malicious singleton files. We leverage the insights from this study to build a classifier based purely on static features to identify 92% of the remaining malicious singletons at a 1.4% percent false positive rate, despite heavy use of obfuscation and packing techniques by most malicious singleton files that we make no attempt to de-obfuscate. Finally, we demonstrate robustness of our classifier to important classes of automated evasion attacks.",
"title": ""
},
{
"docid": "9b17dd1fc2c7082fa8daecd850fab91c",
"text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.",
"title": ""
},
{
"docid": "a02fb872137fe7bc125af746ba814849",
"text": "23% of the total global burden of disease is attributable to disorders in people aged 60 years and older. Although the proportion of the burden arising from older people (≥60 years) is highest in high-income regions, disability-adjusted life years (DALYs) per head are 40% higher in low-income and middle-income regions, accounted for by the increased burden per head of population arising from cardiovascular diseases, and sensory, respiratory, and infectious disorders. The leading contributors to disease burden in older people are cardiovascular diseases (30·3% of the total burden in people aged 60 years and older), malignant neoplasms (15·1%), chronic respiratory diseases (9·5%), musculoskeletal diseases (7·5%), and neurological and mental disorders (6·6%). A substantial and increased proportion of morbidity and mortality due to chronic disease occurs in older people. Primary prevention in adults aged younger than 60 years will improve health in successive cohorts of older people, but much of the potential to reduce disease burden will come from more effective primary, secondary, and tertiary prevention targeting older people. Obstacles include misplaced global health priorities, ageism, the poor preparedness of health systems to deliver age-appropriate care for chronic diseases, and the complexity of integrating care for complex multimorbidities. Although population ageing is driving the worldwide epidemic of chronic diseases, substantial untapped potential exists to modify the relation between chronological age and health. This objective is especially important for the most age-dependent disorders (ie, dementia, stroke, chronic obstructive pulmonary disease, and vision impairment), for which the burden of disease arises more from disability than from mortality, and for which long-term care costs outweigh health expenditure. The societal cost of these disorders is enormous.",
"title": ""
},
{
"docid": "afae66e9ff49274bbb546cd68490e5e4",
"text": "Question-Answering Bulletin Boards (QABB), such as Yahoo! Answers and Windows Live QnA, are gaining popularity recently. Communications on QABB connect users, and the overall connections can be regarded as a social network. If the evolution of social networks can be predicted, it is quite useful for encouraging communications among users. This paper describes an improved method for predicting links based on weighted proximity measures of social networks. The method is based on an assumption that proximities between nodes can be estimated better by using both graph proximity measures and the weights of existing links in a social network. In order to show the effectiveness of our method, the data of Yahoo! Chiebukuro (Japanese Yahoo! Answers) are used for our experiments. The results show that our method outperforms previous approaches, especially when target social networks are sufficiently dense.",
"title": ""
},
{
"docid": "6d13952afa196a6a77f227e1cc9f43bd",
"text": "Spreadsheets contain valuable data on many topics, but they are difficult to integrate with other sources. Converting spreadsheet data to the relational model would allow relational integration tools to be used, but using manual methods to do this requires large amounts of work for each integration candidate. Automatic data extraction would be useful but it is very challenging: spreadsheet designs generally requires human knowledge to understand the metadata being described. Even if it is possible to obtain this metadata information automatically, a single mistake can yield an output relation with a huge number of incorrect tuples. We propose a two-phase semiautomatic system that extracts accurate relational metadata while minimizing user effort. Based on conditional random fields (CRFs), our system enables downstream spreadsheet integration applications. First, the automatic extractor uses hints from spreadsheets’ graphical style and recovered metadata to extract the spreadsheet data as accurately as possible. Second, the interactive repair component identifies similar regions in distinct spreadsheets scattered across large spreadsheet corpora, allowing a user’s single manual repair to be amortized over many possible extraction errors. Through our method of integrating the repair workflow into the extraction system, a human can obtain the accurate extraction with just 31% of the manual operations required by a standard classification based technique. We demonstrate and evaluate our system using two corpora: more than 1,000 spreadsheets published by the US government and more than 400,000 spreadsheets downloaded from the Web.",
"title": ""
},
{
"docid": "1d3b2a5906d7db650db042db9ececed1",
"text": "Music consists of precisely patterned sequences of both movement and sound that engage the mind in a multitude of experiences. We move in response to music and we move in order to make music. Because of the intimate coupling between perception and action, music provides a panoramic window through which we can examine the neural organization of complex behaviors that are at the core of human nature. Although the cognitive neuroscience of music is still in its infancy, a considerable behavioral and neuroimaging literature has amassed that pertains to neural mechanisms that underlie musical experience. Here we review neuroimaging studies of explicit sequence learning and temporal production—findings that ultimately lay the groundwork for understanding how more complex musical sequences are represented and produced by the brain. These studies are also brought into an existing framework concerning the interaction of attention and time-keeping mechanisms in perceiving complex patterns of information that are distributed in time, such as those that occur in music.",
"title": ""
}
] | scidocsrr |
2ded977da124ff15c126f89368f3889b | Hierarchical load forecasting : Gradient boosting machines and Gaussian processes | [
{
"docid": "65c38bb314856c1b5b79ad6473ec9121",
"text": "Despite its importance, choosing the structural form of the kernel in nonparametric regression remains a black art. We define a space of kernel structures which are built compositionally by adding and multiplying a small number of base kernels. We present a method for searching over this space of structures which mirrors the scientific discovery process. The learned structures can often decompose functions into interpretable components and enable long-range extrapolation on time-series datasets. Our structure search method outperforms many widely used kernels and kernel combination methods on a variety of prediction tasks.",
"title": ""
}
] | [
{
"docid": "e3393edb6166e225907f86b5187534ea",
"text": "Th is book supports an emerging trend toward emphasizing the plurality of digital literacy; recognizing the advantages of understanding digital literacy as digital literacies. In the book world this trend is still marginal. In December 2007, Allan Martin and Dan Madigan’s collection Digital Literacies for Learning (2006) was the only English-language book with “digital literacies” in the title to show up in a search on Amazon.com. Th e plural form fares better among English-language journal articles (e.g., Anderson & Henderson, 2004; Ba, Tally, & Tsikalas, 2002; Bawden, 2001; Doering et al., 2007; Myers, 2006; Snyder, 1999; Th omas, 2004) and conference presentations (e.g., Erstad, 2007; Lin & Lo, 2004; Steinkeuhler, 2005), however, and is now reasonably common in talk on blogs and wikis (e.g., Couros, 2007; Davies, 2007). Nonetheless, talk of digital literacy, in the singular, remains the default mode. Th e authors invited to contribute to this book were chosen in light of three reasons we (the editors) identify as important grounds for promoting the idea of digital literacies in the plural. Th is, of course, does not mean the contributing authors would necessarily subscribe to some or all of these reasons. Th at was",
"title": ""
},
{
"docid": "98fec87d72f6247e1a8baa1a07a41c70",
"text": "As multicast applications are deployed for mainstream use, the need to secure multicast communications will become critical. Multicast, however, does not fit the point-to-point model of most network security protocols which were designed with unicast communications in mind. As we will show, securing multicast (or group) communications is fundamentally different from securing unicast (or paired) communications. In turn, these differences can result in scalability problems for many typical applications.In this paper, we examine and model the differences between unicast and multicast security and then propose Iolus: a novel framework for scalable secure multicasting. Protocols based on Iolus can be used to achieve a variety of security objectives and may be used either to directly secure multicast communications or to provide a separate group key management service to other \"security-aware\" applications. We describe the architecture and operation of Iolus in detail and also describe our experience with a protocol based on the Iolus framework.",
"title": ""
},
{
"docid": "f013f58d995693a79cd986a028faff38",
"text": "We present the design and implementation of a system for axiomatic programming, and its application to mathematical software construction. Key novelties include a direct support for user-defined axioms establishing local equalities between types, and overload resolution based on equational theories and user-defined local axioms. We illustrate uses of axioms, and their organization into concepts, in structured generic programming as practiced in computational mathematical systems.",
"title": ""
},
{
"docid": "af3b0fb6b2babe8393b2e715f92a2c97",
"text": "Collaboration is the “mutual engagement of participants in a coordinated effort to solve a problem together.” Collaborative interactions are characterized by shared goals, symmetry of structure, and a high degree of negotiation, interactivity, and interdependence. Interactions producing elaborated explanations are particularly valuable for improving student learning. Nonresponsive feedback, on the other hand, can be detrimental to student learning in collaborative situations. Collaboration can have powerful effects on student learning, particularly for low-achieving students. However, a number of factors may moderate the impact of collaboration on student learning, including student characteristics, group composition, and task characteristics. Although historical frameworks offer some guidance as to when and how children acquire and develop collaboration skills, there is scant empirical evidence to support such predictions. However, because many researchers appear to believe children can be taught to collaborate, they urge educators to provide explicit instruction that encourages development of skills such as coordination, communication, conflict resolution, decision-making, problemsolving, and negotiation. Such training should also emphasize desirable qualities of interaction, such as providing elaborated explanations, asking direct and specific questions, and responding appropriately to the requests of others. Teachers should structure tasks in ways that will support the goals of collaboration, specify “ground rules” for interaction, and regulate such interactions. There are a number of challenges in using group-based tasks to assess collaboration. Several suggestions for assessing collaboration skills are made.",
"title": ""
},
{
"docid": "263f58a9cf856e66a5570e666ad1cec9",
"text": "This paper presents an approach for online estimation of the extrinsic calibration parameters of a multi-camera rig. Given a coarse initial estimate of the parameters, the relative poses between cameras are refined through recursive filtering. The approach is purely vision based and relies on plane induced homographies between successive frames. Overlapping fields of view are not required. Instead, the ground plane serves as a natural reference object. In contrast to other approaches, motion, relative camera poses, and the ground plane are estimated simultaneously using a single iterated extended Kalman filter. This reduces not only the number of parameters but also the computational complexity. Furthermore, an arbitrary number of cameras can be incorporated. Several experiments on synthetic as well as real data were conducted using a setup of four synchronized wide angle fisheye cameras, mounted on a moving platform. Results were obtained, using both, a planar and a general motion model with full six degrees of freedom. Additionally, the effects of uncertain intrinsic parameters and nonplanar ground were evaluated experimentally.",
"title": ""
},
{
"docid": "bfd1ec5a23731185b5ef2d24d3c63d9a",
"text": "Taurine is a natural amino acid present as free form in many mammalian tissues and in particular in skeletal muscle. Taurine exerts many physiological functions, including membrane stabilization, osmoregulation and cytoprotective effects, antioxidant and anti-inflammatory actions as well as modulation of intracellular calcium concentration and ion channel function. In addition taurine may control muscle metabolism and gene expression, through yet unclear mechanisms. This review summarizes the effects of taurine on specific muscle targets and pathways as well as its therapeutic potential to restore skeletal muscle function and performance in various pathological conditions. Evidences support the link between alteration of intracellular taurine level in skeletal muscle and different pathophysiological conditions, such as disuse-induced muscle atrophy, muscular dystrophy and/or senescence, reinforcing the interest towards its exogenous supplementation. In addition, taurine treatment can be beneficial to reduce sarcolemmal hyper-excitability in myotonia-related syndromes. Although further studies are necessary to fill the gaps between animals and humans, the benefit of the amino acid appears to be due to its multiple actions on cellular functions while toxicity seems relatively low. Human clinical trials using taurine in various pathologies such as diabetes, cardiovascular and neurological disorders have been performed and may represent a guide-line for designing specific studies in patients of neuromuscular diseases.",
"title": ""
},
{
"docid": "4e8eed4acd7251432042428054e8fb68",
"text": "Designing a practical test automation architecture provides a solid foundation for a successful automation effort. This paper describes key elements of automated testing that need to be considered, models for testing that can be used for designing a test automation architecture, and considerations for successfully combining the elements to form an automated test environment. The paper first develops a general framework for discussion of software testing and test automation. This includes a definition of test automation, a model for software tests, and a discussion of test oracles. The remainder of the paper focuses on using the framework to plan for a test automation architecture that addresses the requirements for the specific software under test (SUT).",
"title": ""
},
{
"docid": "be20cb4f75ff0d4d1637095d5928b005",
"text": "Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.",
"title": ""
},
{
"docid": "560cadfecdf5207851d333b4a122a06d",
"text": "Over the past years, state-of-the-art information extraction (IE) systems such as NELL [5] and ReVerb [9] have achieved impressive results by producing very large knowledge resources at web scale with minimal supervision. However, these resources lack the schema information, exhibit a high degree of ambiguity, and are difficult even for humans to interpret. Working with such resources becomes easier if there is a structured information base to which the resources can be linked. In this paper, we introduce the integration of open information extraction projects with Wikipedia-based IE projects that maintain a logical schema, as an important challenge for the NLP, semantic web, and machine learning communities. We describe the problem, present a gold-standard benchmark, and take the first steps towards a data-driven solution to the problem. This is especially promising, since NELL and ReVerb typically achieve a very large coverage, but still still lack a fullfledged clean ontological structure which, on the other hand, could be provided by large-scale ontologies like DBpedia [2] or YAGO [13].",
"title": ""
},
{
"docid": "ad23230c4ee2ed2216378a3ab833d3eb",
"text": "We present a framework for precomputed volume radiance transfer that achieves real-time rendering of global illumination effects for volume data sets such as multiple scattering, volumetric shadows, and so on. Our approach incorporates the volumetric photon mapping method into the classical precomputed radiance transfer pipeline. We contribute several techniques for light approximation, radiance transfer precomputation, and real-time radiance estimation, which are essential to make the approach practical and to achieve high frame rates. For light approximation, we propose a new discrete spherical function that has better performance for construction and evaluation when compared with existing rotational invariant spherical functions such as spherical harmonics and spherical radial basis functions. In addition, we present a fast splatting-based radiance transfer precomputation method and an early evaluation technique for real-time radiance estimation in the clustered principal component analysis space. Our techniques are validated through comprehensive evaluations and rendering tests. We also apply our rendering approach to volume visualization.",
"title": ""
},
{
"docid": "dad1c5e4aa43b9fc2b3592799f9a3a69",
"text": "0957-4174/$ see front matter 2012 Elsevier Ltd. A http://dx.doi.org/10.1016/j.eswa.2012.05.068 ⇑ Tel.: +886 7 3814526. E-mail address: [email protected] Due to the explosive growth of social-media applications, enhancing event-awareness by social mining has become extremely important. The contents of microblogs preserve valuable information associated with past disastrous events and stories. To learn the experiences from past events for tackling emerging real-world events, in this work we utilize the social-media messages to characterize real-world events through mining their contents and extracting essential features for relatedness analysis. On one hand, we established an online clustering approach on Twitter microblogs for detecting emerging events, and meanwhile we performed event relatedness evaluation using an unsupervised clustering approach. On the other hand, we developed a supervised learning model to create extensible measure metrics for offline evaluation of event relatedness. By means of supervised learning, our developed measure metrics are able to compute relatedness of various historical events, allowing the event impacts on specified domains to be quantitatively measured for event comparison. By combining the strengths of both methods, the experimental results showed that the combined framework in our system is sensible for discovering more unknown knowledge about event impacts and enhancing event awareness. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "0626c39604a1dde16a5d27de1c4cef24",
"text": "Two dimensional (2D) materials with a monolayer of atoms represent an ultimate control of material dimension in the vertical direction. Molybdenum sulfide (MoS2) monolayers, with a direct bandgap of 1.8 eV, offer an unprecedented prospect of miniaturizing semiconductor science and technology down to a truly atomic scale. Recent studies have indeed demonstrated the promise of 2D MoS2 in fields including field effect transistors, low power switches, optoelectronics, and spintronics. However, device development with 2D MoS2 has been delayed by the lack of capabilities to produce large-area, uniform, and high-quality MoS2 monolayers. Here we present a self-limiting approach that can grow high quality monolayer and few-layer MoS2 films over an area of centimeters with unprecedented uniformity and controllability. This approach is compatible with the standard fabrication process in semiconductor industry. It paves the way for the development of practical devices with 2D MoS2 and opens up new avenues for fundamental research.",
"title": ""
},
{
"docid": "017d1bb9180e5d1f8a01604630ebc40d",
"text": "This paper presents an unsupervised distribution-free change detection approach for synthetic aperture radar (SAR) images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local area energy are chosen to fuse the wavelet coefficients for a low-frequency band and a high-frequency band, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.",
"title": ""
},
{
"docid": "9e3263866208bbc6a9019b3c859d2a66",
"text": "A residual network (or ResNet) is a standard deep neural net architecture, with stateof-the-art performance across numerous applications. The main premise of ResNets is that they allow the training of each layer to focus on fitting just the residual of the previous layer’s output and the target output. Thus, we should expect that the trained network is no worse than what we can obtain if we remove the residual layers and train a shallower network instead. However, due to the non-convexity of the optimization problem, it is not at all clear that ResNets indeed achieve this behavior, rather than getting stuck at some arbitrarily poor local minimum. In this paper, we rigorously prove that arbitrarily deep, nonlinear residual units indeed exhibit this behavior, in the sense that the optimization landscape contains no local minima with value above what can be obtained with a linear predictor (namely a 1-layer network). Notably, we show this under minimal or no assumptions on the precise network architecture, data distribution, or loss function used. We also provide a quantitative analysis of approximate stationary points for this problem. Finally, we show that with a certain tweak to the architecture, training the network with standard stochastic gradient descent achieves an objective value close or better than any linear predictor.",
"title": ""
},
{
"docid": "b311ce7a34d3bdb21678ed765bcd0f0b",
"text": "This paper focuses on the micro-blogging service Twitter, looking at source credibility for information shared in relation to the Fukushima Daiichi nuclear power plant disaster in Japan. We look at the sources, credibility, and between-language differences in information shared in the month following the disaster. Messages were categorized by user, location, language, type, and credibility of information source. Tweets with reference to third-party information made up the bulk of messages sent, and it was also found that a majority of those sources were highly credible, including established institutions, traditional media outlets, and highly credible individuals. In general, profile anonymity proved to be correlated with a higher propensity to share information from low credibility sources. However, Japanese-language tweeters, while more likely to have anonymous profiles, referenced lowcredibility sources less often than non-Japanese tweeters, suggesting proximity to the disaster mediating the degree of credibility of shared content.",
"title": ""
},
{
"docid": "c35619bf5830f6415a1c2f80cbaea31b",
"text": "Thumbnail images provide users of image retrieval and browsing systems with a method for quickly scanning large numbers of images. Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the original image often render objects illegible. We study the ability of computer vision systems to detect key components of images so that automated cropping, prior to shrinking, can render objects more recognizable. We evaluate automatic cropping techniques 1) based on a general method that detects salient portions of images, and 2) based on automatic face detection. Our user study shows that these methods result in small thumbnails that are substantially more recognizable and easier to find in the context of visual search.",
"title": ""
},
{
"docid": "b07ae3888b52faa598893bbfbf04eae2",
"text": "This paper presents a compliant locomotion framework for torque-controlled humanoids using model-based whole-body control. In order to stabilize the centroidal dynamics during locomotion, we compute linear momentum rate of change objectives using a novel time-varying controller for the Divergent Component of Motion (DCM). Task-space objectives, including the desired momentum rate of change, are tracked using an efficient quadratic program formulation that computes optimal joint torque setpoints given frictional contact constraints and joint position / torque limits. In order to validate the effectiveness of the proposed approach, we demonstrate push recovery and compliant walking using THOR, a 34 DOF humanoid with series elastic actuation. We discuss details leading to the successful implementation of optimization-based whole-body control on our hardware platform, including the design of a “simple” joint impedance controller that introduces inner-loop velocity feedback into the actuator force controller.",
"title": ""
},
{
"docid": "155938bc107c7e7cfca22758937f4d32",
"text": "A general theory of addictions is proposed, using the compulsive gambler as the prototype. Addiction is defined as a dependent state acquired over time to relieve stress. Two interrelated sets of factors predispose persons to addictions: an abnormal physiological resting state, and childhood experiences producing a deep sense of inadequacy. All addictions are hypothesized to follow a similar three-stage course. A matrix strategy is outlined to collect similar information from different kinds of addicts and normals. The ultimate objective is to identify high risk youth and prevent the development of addictions.",
"title": ""
},
{
"docid": "de96ac151e5a3a2b38f2fa309862faee",
"text": "Venue recommendation is an important application for Location-Based Social Networks (LBSNs), such as Yelp, and has been extensively studied in recent years. Matrix Factorisation (MF) is a popular Collaborative Filtering (CF) technique that can suggest relevant venues to users based on an assumption that similar users are likely to visit similar venues. In recent years, deep neural networks have been successfully applied to tasks such as speech recognition, computer vision and natural language processing. Building upon this momentum, various approaches for recommendation have been proposed in the literature to enhance the effectiveness of MF-based approaches by exploiting neural network models such as: word embeddings to incorporate auxiliary information (e.g. textual content of comments); and Recurrent Neural Networks (RNN) to capture sequential properties of observed user-venue interactions. However, such approaches rely on the traditional inner product of the latent factors of users and venues to capture the concept of collaborative filtering, which may not be sufficient to capture the complex structure of user-venue interactions. In this paper, we propose a Deep Recurrent Collaborative Filtering framework (DRCF) with a pairwise ranking function that aims to capture user-venue interactions in a CF manner from sequences of observed feedback by leveraging Multi-Layer Perception and Recurrent Neural Network architectures. Our proposed framework consists of two components: namely Generalised Recurrent Matrix Factorisation (GRMF) and Multi-Level Recurrent Perceptron (MLRP) models. In particular, GRMF and MLRP learn to model complex structures of user-venue interactions using element-wise and dot products as well as the concatenation of latent factors. In addition, we propose a novel sequence-based negative sampling approach that accounts for the sequential properties of observed feedback and geographical location of venues to enhance the quality of venue suggestions, as well as alleviate the cold-start users problem. Experiments on three large checkin and rating datasets show the effectiveness of our proposed framework by outperforming various state-of-the-art approaches.",
"title": ""
},
{
"docid": "e0a08bac6769382c3168922bdee1939d",
"text": "This paper presents the state of art research progress on multilingual multi-document summarization. Our method utilizes hLDA (hierarchical Latent Dirichlet Allocation) algorithm to model the documents firstly. A new feature is proposed from the hLDA modeling results, which can reflect semantic information to some extent. Then it combines this new feature with different other features to perform sentence scoring. According to the results of sentence score, it extracts candidate summary sentences from the documents to generate a summary. We have also attempted to verify the effectiveness and robustness of the new feature through experiments. After the comparison with other summarization methods, our method reveals better performance in some respects.",
"title": ""
}
] | scidocsrr |
08e7c93152438f6295877905b1ca7584 | Predicting Bike Usage for New York City's Bike Sharing System | [
{
"docid": "db422d1fcb99b941a43e524f5f2897c2",
"text": "AN INDIVIDUAL CORRELATION is a correlation in which the statistical object or thing described is indivisible. The correlation between color and illiteracy for persons in the United States, shown later in Table I, is an individual correlation, because the kind of thing described is an indivisible unit, a person. In an individual correlation the variables are descriptive properties of individuals, such as height, income, eye color, or race, and not descriptive statistical constants such as rates or means. In an ecological correlation the statistical object is a group of persons. The correlation between the percentage of the population which is Negro and the percentage of the population which is illiterate for the 48 states, shown later as Figure 2, is an ecological correlation. The thing described is the population of a state, and not a single individual. The variables are percentages, descriptive properties of groups, and not descriptive properties of individuals. Ecological correlations are used in an impressive number of quantitative sociological studies, some of which by now have attained the status of classics: Cowles’ ‘‘Statistical Study of Climate in Relation to Pulmonary Tuberculosis’’; Gosnell’s ‘‘Analysis of the 1932 Presidential Vote in Chicago,’’ Factorial and Correlational Analysis of the 1934 Vote in Chicago,’’ and the more elaborate factor analysis in Machine Politics; Ogburn’s ‘‘How women vote,’’ ‘‘Measurement of the Factors in the Presidential Election of 1928,’’ ‘‘Factors in the Variation of Crime Among Cities,’’ and Groves and Ogburn’s correlation analyses in American Marriage and Family Relationships; Ross’ study of school attendance in Texas; Shaw’s Delinquency Areas study of the correlates of delinquency, as well as The more recent analyses in Juvenile Delinquency in Urban Areas; Thompson’s ‘‘Some Factors Influencing the Ratios of Children to Women in American Cities, 1930’’; Whelpton’s study of the correlates of birth rates, in ‘‘Geographic and Economic Differentials in Fertility;’’ and White’s ‘‘The Relation of Felonies to Environmental Factors in Indianapolis.’’ Although these studies and scores like them depend upon ecological correlations, it is not because their authors are interested in correlations between the properties of areas as such. Even out-and-out ecologists, in studying delinquency, for example, rely primarily upon data describing individuals, not areas. In each study which uses ecological correlations, the obvious purpose is to discover something about the behavior of individuals. Ecological correlations are used simply because correlations between the properties of individuals are not available. In each instance, however, the substitution is made tacitly rather than explicitly. The purpose of this paper is to clarify the ecological correlation problem by stating, mathematically, the exact relation between ecological and individual correlations, and by showing the bearing of that relation upon the practice of using ecological correlations as substitutes for individual correlations.",
"title": ""
}
] | [
{
"docid": "3925371ff139ca9cd23222db78f8694a",
"text": "In this paper, we investigate how the Gauss–Newton Hessian matrix affects the basin of convergence in Newton-type methods. Although the Newton algorithm is theoretically superior to the Gauss–Newton algorithm and the Levenberg–Marquardt (LM) method as far as their asymptotic convergence rate is concerned, the LM method is often preferred in nonlinear least squares problems in practice. This paper presents a theoretical analysis of the advantage of the Gauss–Newton Hessian matrix. It is proved that the Gauss–Newton approximation function is the only nonnegative convex quadratic approximation that retains a critical property of the original objective function: taking the minimal value of zero on an (n − 1)-dimensional manifold (or affine subspace). Due to this property, the Gauss–Newton approximation does not change the zero-on-(n − 1)-D “structure” of the original problem, explaining the reason why the Gauss–Newton Hessian matrix is preferred for nonlinear least squares problems, especially when the initial point is far from the solution.",
"title": ""
},
{
"docid": "b917ec2f16939a819625b6750597c40c",
"text": "In an increasing number of scientific disciplines, large data collections are emerging as important community resources. In domains as diverse as global climate change, high energy physics, and computational genomics, the volume of interesting data is already measured in terabytes and will soon total petabytes. The communities of researchers that need to access and analyze this data (often using sophisticated and computationally expensive techniques) are often large and are almost always geographically distributed, as are the computing and storage resources that these communities rely upon to store and analyze their data [17]. This combination of large dataset size, geographic distribution of users and resources, and computationally intensive analysis results in complex and stringent performance demands that are not satisfied by any existing data management infrastructure. A large scientific collaboration may generate many queries, each involving access to—or supercomputer-class computations on—gigabytes or terabytes of data. Efficient and reliable execution of these queries may require careful management of terabyte caches, gigabit/s data transfer over wide area networks, coscheduling of data transfers and supercomputer computation, accurate performance estimations to guide the selection of dataset replicas, and other advanced techniques that collectively maximize use of scarce storage, networking, and computing resources. The literature offers numerous point solutions that address these issues (e.g., see [17, 14, 19, 3]). But no integrating architecture exists that allows us to identify requirements and components common to different systems and hence apply different technologies in a coordinated fashion to a range of dataintensive petabyte-scale application domains. Motivated by these considerations, we have launched a collaborative effort to design and produce such an integrating architecture. We call this architecture the data grid, to emphasize its role as a specialization and extension of the “Grid” that has emerged recently as an integrating infrastructure for distributed computation [10, 20, 15]. Our goal in this effort is to define the requirements that a data grid must satisfy and the components and APIs that will be required in its implementation. We hope that the definition of such an architecture will accelerate progress on petascale data-intensive computing by enabling the integration of currently disjoint approaches, encouraging the deployment of basic enabling technologies, and revealing technology gaps that require further research and development. In addition, we plan to construct a reference implementation for this architecture so as to enable large-scale experimentation.",
"title": ""
},
{
"docid": "14a45e3e7aadee56b7d2e28c692aba9f",
"text": "Radiation therapy as a mode of cancer treatment is well-established. Telecobalt and telecaesium units were used extensively during the early days. Now, medical linacs offer more options for treatment delivery. However, such systems are prohibitively expensive and beyond the reach of majority of the worlds population living in developing and under-developed countries. In India, there is shortage of cancer treatment facilities, mainly due to the high cost of imported machines. Realizing the need of technology for affordable radiation therapy machines, Bhabha Atomic Research Centre (BARC), the premier nuclear research institute of Government of India, started working towards a sophisticated telecobalt machine. The Bhabhatron is the outcome of the concerted efforts of BARC and Panacea Medical Technologies Pvt. Ltd., India. It is not only less expensive, but also has a number of advanced features. It incorporates many safety and automation features hitherto unavailable in the most advanced telecobalt machine presently available. This paper describes various features available in Bhabhatron-II. The authors hope that this machine has the potential to make safe and affordable radiation therapy accessible to the common people in India as well as many other countries.",
"title": ""
},
{
"docid": "15f75935c0a17f52790be930d656d171",
"text": "It is a well-known issue that attack primitives which exploit memory corruption vulnerabilities can abuse the ability of processes to automatically restart upon termination. For example, network services like FTP and HTTP servers are typically restarted in case a crash happens and this can be used to defeat Address Space Layout Randomization (ASLR). Furthermore, recently several techniques evolved that enable complete process memory scanning or code-reuse attacks against diversified and unknown binaries based on automated restarts of server applications. Until now, it is believed that client applications are immune against exploit primitives utilizing crashes. Due to their hard crash policy, such applications do not restart after memory corruption faults, making it impossible to touch memory more than once with wrong permissions. In this paper, we show that certain client application can actually survive crashes and are able to tolerate faults, which are normally critical and force program termination. To this end, we introduce a crash-resistance primitive and develop a novel memory scanning method with memory oracles without the need for control-flow hijacking. We show the practicability of our methods for 32-bit Internet Explorer 11 on Windows 8.1, and Mozilla Firefox 64-bit (Windows 8.1 and Linux 3.17.1). Furthermore, we demonstrate the advantages an attacker gains to overcome recent code-reuse defenses. Latest advances propose fine-grained re-randomization of the address space and code layout, or hide sensitive information such as code pointers to thwart tampering or misuse. We show that these defenses need improvements since crash-resistance weakens their security assumptions. To this end, we introduce the concept of CrashResistant Oriented Programming (CROP). We believe that our results and the implications of memory oracles will contribute to future research on defensive schemes against code-reuse attacks.",
"title": ""
},
{
"docid": "c56d4eff5b23f804834c698e77f3d806",
"text": " In many applications within the engineering world, an isolated generator is needed (e.g. in ships). Diesel units (diesel engine and synchronous generator) are the most common solution. However, the diesel engine can be eliminated if the energy from another source (e.g. the prime mover in a ship) is used to move the generator. This is the case for the Shaft Coupled Generator, where the coupling between the mover and the generator is made via a hydrostatic transmission. So that the mover can have different speeds and the generator is able to keep a constant frequency. The main problem of this system is the design of a speed governor that make possible the desired behaviour. In this paper a simulation model is presented in order to analyse the behaviour of this kind of systems and to help in the speed governor design. The model is achieved with an parameter identification process also depicted in the paper. A comparison between simulation results and measurements is made to shown the model validity. KeywordsModelling, Identification, Hydrostatic Transmission.",
"title": ""
},
{
"docid": "83da776714bf49c3bbb64976d20e26a2",
"text": "Orthogonal frequency division multiplexing (OFDM) has been widely adopted in modern wireless communication systems due to its robustness against the frequency selectivity of wireless channels. For coherent detection, channel estimation is essential for receiver design. Channel estimation is also necessary for diversity combining or interference suppression where there are multiple receive antennas. In this paper, we will present a survey on channel estimation for OFDM. This survey will first review traditional channel estimation approaches based on channel frequency response (CFR). Parametric model (PM)-based channel estimation, which is particularly suitable for sparse channels, will be also investigated in this survey. Following the success of turbo codes and low-density parity check (LDPC) codes, iterative processing has been widely adopted in the design of receivers, and iterative channel estimation has received a lot of attention since that time. Iterative channel estimation will be emphasized in this survey as the emerging iterative receiver improves system performance significantly. The combination of multiple-input multiple-output (MIMO) and OFDM has been widely accepted in modern communication systems, and channel estimation in MIMO-OFDM systems will also be addressed in this survey. Open issues and future work are discussed at the end of this paper.",
"title": ""
},
{
"docid": "9e11005f60aa3f53481ac3543a18f32f",
"text": "Deep residual networks (ResNets) have significantly pushed forward the state-ofthe-art on image classification, increasing in performance as networks grow both deeper and wider. However, memory consumption becomes a bottleneck, as one needs to store the activations in order to calculate gradients using backpropagation. We present the Reversible Residual Network (RevNet), a variant of ResNets where each layer’s activations can be reconstructed exactly from the next layer’s. Therefore, the activations for most layers need not be stored in memory during backpropagation. We demonstrate the effectiveness of RevNets on CIFAR-10, CIFAR-100, and ImageNet, establishing nearly identical classification accuracy to equally-sized ResNets, even though the activation storage requirements are independent of depth.",
"title": ""
},
{
"docid": "9dfef5bc76b78e7577b9eb377b830a9e",
"text": "Patients with Parkinson's disease may have difficulties in speaking because of the reduced coordination of the muscles that control breathing, phonation, articulation and prosody. Symptoms that may occur because of changes are weakening of the volume of the voice, voice monotony, changes in the quality of the voice, speed of speech, uncontrolled repetition of words. The evaluation of some of the disorders mentioned can be achieved through measuring the variation of parameters in an objective manner. It may be done to evaluate the response to the treatments with intra-daily frequency pre / post-treatment, as well as in the long term. Software systems allow these measurements also by recording the patient's voice. This allows to carry out a large number of tests by means of a larger number of patients and a higher frequency of the measurements. The main goal of our work was to design and realize Voxtester, an effective and simple to use software system useful to measure whether changes in voice emission are sensitive to pharmacologic treatments. Doctors and speech therapists can easily use it without going into the technical details, and we think that this goal is reached only by Voxtester, up to date.",
"title": ""
},
{
"docid": "e62ad0c67fa924247f05385bda313a38",
"text": "Artificial neural networks have been recognized as a powerful tool for pattern classification problems, but a number of researchers have also suggested that straightforward neural-network approaches to pattern recognition are largely inadequate for difficult problems such as handwritten numeral recognition. In this paper, we present three sophisticated neural-network classifiers to solve complex pattern recognition problems: multiple multilayer perceptron (MLP) classifier, hidden Markov model (HMM)/MLP hybrid classifier, and structure-adaptive self-organizing map (SOM) classifier. In order to verify the superiority of the proposed classifiers, experiments were performed with the unconstrained handwritten numeral database of Concordia University, Montreal, Canada. The three methods have produced 97.35%, 96.55%, and 96.05% of the recognition rates, respectively, which are better than those of several previous methods reported in the literature on the same database.",
"title": ""
},
{
"docid": "b7bf7d430e4132a4d320df3a155ee74c",
"text": "We present Wave menus, a variant of multi-stroke marking menus designed for improving the novice mode of marking while preserving their efficiency in the expert mode of marking. Focusing on the novice mode, a criteria-based analysis of existing marking menus motivates the design of Wave menus. Moreover a user experiment is presented that compares four hierarchical marking menus in novice mode. Results show that Wave and compound-stroke menus are significantly faster and more accurate than multi-stroke menus in novice mode, while it has been shown that in expert mode the multi-stroke menus and therefore the Wave menus outperform the compound-stroke menus. Wave menus also require significantly less screen space than compound-stroke menus. As a conclusion, Wave menus offer the best performance for both novice and expert modes in comparison with existing multi-level marking menus, while requiring less screen space than compound-stroke menus.",
"title": ""
},
{
"docid": "b2a670d90d53825c53d8ce0082333db6",
"text": "Social media platforms facilitate the emergence of citizen communities that discuss real-world events. Their content reflects a variety of intent ranging from social good (e.g., volunteering to help) to commercial interest (e.g., criticizing product features). Hence, mining intent from social data can aid in filtering social media to support organizations, such as an emergency management unit for resource planning. However, effective intent mining is inherently challenging due to ambiguity in interpretation, and sparsity of relevant behaviors in social data. In this paper, we address the problem of multiclass classification of intent with a use-case of social data generated during crisis events. Our novel method exploits a hybrid feature representation created by combining top-down processing using knowledge-guided patterns with bottom-up processing using a bag-of-tokens model. We employ pattern-set creation from a variety of knowledge sources including psycholinguistics to tackle the ambiguity challenge, social behavior about conversations to enrich context, and contrast patterns to tackle the sparsity challenge. Our results show a significant absolute gain up to 7% in the F1 score relative to a baseline using bottom-up processing alone, within the popular multiclass frameworks of One-vs-One and One-vs-All. Intent mining can help design efficient cooperative information systems between citizens and organizations for serving organizational information needs.",
"title": ""
},
{
"docid": "0d78cb5ff93351db949ffc1c01c3d540",
"text": "Self-Organizing Map is an unsupervised neural network which combines vector quantization and vector projection. This makes it a powerful visualization tool. SOM Toolbox implements the SOM in the Matlab 5 computing environment. In this paper, computational complexity of SOM and the applicability of the Toolbox are investigated. It is seen that the Toolbox is easily applicable to small data sets (under 10000 records) but can also be applied in case of medium sized data sets. The prime limiting factor is map size: the Toolbox is mainly suitable for training maps with 1000 map units or less.",
"title": ""
},
{
"docid": "88b167a7eb0debcd5c5e0f5f5605a14b",
"text": "Understanding language requires both linguistic knowledge and knowledge about how the world works, also known as common-sense knowledge. We attempt to characterize the kinds of common-sense knowledge most often involved in recognizing textual entailments. We identify 20 categories of common-sense knowledge that are prevalent in textual entailment, many of which have received scarce attention from researchers building collections of knowledge.",
"title": ""
},
{
"docid": "dd9e3513c4be6100b5d3b3f25469f028",
"text": "Software testing is the process to uncover requirement, design and coding errors in the program. It is used to identify the correctness, completeness, security and quality of software products against a specification. Software testing is the process used to measure the quality of developed computer software. It exhibits all mistakes, errors and flaws in the developed software. There are many approaches to software testing, but effective testing of complex product is essentially a process of investigation, not merely a matter of creating and following route procedure. It is not possible to find out all the errors in the program. This fundamental problem in testing thus throws an open question, as to what would be the strategy we should adopt for testing. In our paper, we have described and compared the three most prevalent and commonly used software testing techniques for detecting errors, they are: white box testing, black box testing and grey box testing. KeywordsBlack Box; Grey Box; White Box.",
"title": ""
},
{
"docid": "a931f939e2e0c0f2f8940796ee23e957",
"text": "PURPOSE OF REVIEW\nMany patients requiring cardiac arrhythmia device surgery are on chronic oral anticoagulation therapy. The periprocedural management of their anticoagulation presents a dilemma to physicians, particularly in the subset of patients with moderate-to-high risk of arterial thromboembolic events. Physicians have responded by treating patients with bridging anticoagulation while oral anticoagulation is temporarily discontinued. However, there are a number of downsides to bridging anticoagulation around device surgery; there is a substantial risk of significant device pocket hematoma with important clinical sequelae; bridging anticoagulation may lead to more arterial thromboembolic events and bridging anticoagulation is expensive.\n\n\nRECENT FINDINGS\nIn response to these issues, a number of centers have explored the option of performing device surgery without cessation of oral anticoagulation. The observational data suggest a greatly reduced hematoma rate with this strategy. Despite these encouraging results, most physicians are reluctant to move to operating on continued Coumadin in the absence of confirmatory data from a randomized trial.\n\n\nSUMMARY\nWe have designed a prospective, single-blind, randomized, controlled trial to address this clinical question. In the conventional arm, patients will be bridged. In the experimental arm, patients will continue on oral anticoagulation and the primary outcome is clinically significant hematoma. Our study has clinical relevance to at least 70 000 patients per year in North America.",
"title": ""
},
{
"docid": "ecb93affc7c9b0e4bf86949d3f2006d4",
"text": "We present data-dependent learning bounds for the general scenario of non-stationary nonmixing stochastic processes. Our learning guarantees are expressed in terms of a datadependent measure of sequential complexity and a discrepancy measure that can be estimated from data under some mild assumptions. We also also provide novel analysis of stable time series forecasting algorithm using this new notion of discrepancy that we introduce. We use our learning bounds to devise new algorithms for non-stationary time series forecasting for which we report some preliminary experimental results. An extended abstract has appeared in (Kuznetsov and Mohri, 2015).",
"title": ""
},
{
"docid": "d5ca07ff7bf01edcebb81dad6bff3a22",
"text": "The goals of our work are twofold: gain insight into how humans interact with complex data and visualizations thereof in order to make discoveries; and use our findings to develop a dialogue system for exploring data visualizations. Crucial to both goals is understanding and modeling of multimodal referential expressions, in particular those that include deictic gestures. In this paper, we discuss how context information affects the interpretation of requests and their attendant referring expressions in our data. To this end, we have annotated our multimodal dialogue corpus for context and both utterance and gesture information; we have analyzed whether a gesture co-occurs with a specific request or with the context surrounding the request; we have started addressing multimodal co-reference resolution by using Kinect to detect deictic gestures; and we have started identifying themes found in the annotated context, especially in what follows the request.",
"title": ""
},
{
"docid": "8b08fbd7610e68e39026011fec7034ec",
"text": "Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature, indicate the growing threat of cyber-based attacks in numbers and sophistication targeting the nation's electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments.",
"title": ""
},
{
"docid": "b19aab238e0eafef52974a87300750a3",
"text": "This paper introduces a method to detect a fault associated with critical components/subsystems of an engineered system. It is required, in this case, to detect the fault condition as early as possible, with specified degree of confidence and a prescribed false alarm rate. Innovative features of the enabling technologies include a Bayesian estimation algorithm called particle filtering, which employs features or condition indicators derived from sensor data in combination with simple models of the system's degrading state to detect a deviation or discrepancy between a baseline (no-fault) distribution and its current counterpart. The scheme requires a fault progression model describing the degrading state of the system in the operation. A generic model based on fatigue analysis is provided and its parameters adaptation is discussed in detail. The scheme provides the probability of abnormal condition and the presence of a fault is confirmed for a given confidence level. The efficacy of the proposed approach is illustrated with data acquired from bearings typically found on aircraft and monitored via a properly instrumented test rig.",
"title": ""
},
{
"docid": "ed1a3ca3e558eeb33e2841fa4b9c28d2",
"text": "© 2010 ETRI Journal, Volume 32, Number 4, August 2010 In this paper, we present a low-voltage low-dropout voltage regulator (LDO) for a system-on-chip (SoC) application which, exploiting the multiplication of the Miller effect through the use of a current amplifier, is frequency compensated up to 1-nF capacitive load. The topology and the strategy adopted to design the LDO and the related compensation frequency network are described in detail. The LDO works with a supply voltage as low as 1.2 V and provides a maximum load current of 50 mA with a drop-out voltage of 200 mV: the total integrated compensation capacitance is about 40 pF. Measurement results as well as comparison with other SoC LDOs demonstrate the advantage of the proposed topology.",
"title": ""
}
] | scidocsrr |
36f97bf0a09158177f72c49d2613db44 | Automatic Sentiment Analysis for Unstructured Data | [
{
"docid": "a178871cd82edaa05a0b0befacb7fc38",
"text": "The main applications and challenges of one of the hottest research areas in computer science.",
"title": ""
}
] | [
{
"docid": "7e7d4a3ab8fe57c6168835fa1ab3b413",
"text": "Massively parallel architectures such as the GPU are becoming increasingly important due to the recent proliferation of data. In this paper, we propose a key class of hybrid parallel graphlet algorithms that leverages multiple CPUs and GPUs simultaneously for computing k-vertex induced subgraph statistics (called graphlets). In addition to the hybrid multi-core CPU-GPU framework, we also investigate single GPU methods (using multiple cores) and multi-GPU methods that leverage all available GPUs simultaneously for computing induced subgraph statistics. Both methods leverage GPU devices only, whereas the hybrid multicore CPU-GPU framework leverages all available multi-core CPUs and multiple GPUs for computing graphlets in large networks. Compared to recent approaches, our methods are orders of magnitude faster, while also more cost effective enjoying superior performance per capita and per watt. In particular, the methods are up to 300 times faster than a recent state-of-the-art method. To the best of our knowledge, this is the first work to leverage multiple CPUs and GPUs simultaneously for computing induced subgraph statistics.",
"title": ""
},
{
"docid": "1186fa429d435d0e2009e8b155cf92cc",
"text": "Recommender Systems are software tools and techniques for suggesting items to users by considering their preferences in an automated fashion. The suggestions provided are aimed at support users in various decisionmaking processes. Technically, recommender system has their origins in different fields such as Information Retrieval (IR), text classification, machine learning and Decision Support Systems (DSS). Recommender systems are used to address the Information Overload (IO) problem by recommending potentially interesting or useful items to users. They have proven to be worthy tools for online users to deal with the IO and have become one of the most popular and powerful tools in E-commerce. Many existing recommender systems rely on the Collaborative Filtering (CF) and have been extensively used in E-commerce .They have proven to be very effective with powerful techniques in many famous E-commerce companies. This study presents an overview of the field of recommender systems with current generation of recommendation methods and examines comprehensively CF systems with its algorithms.",
"title": ""
},
{
"docid": "06597c7f7d76cb3749d13b597b903570",
"text": "2.1 Summary ............................................... 5 2.2 Definition .............................................. 6 2.3 History ................................................... 6 2.4 Overview of Currently Used Classification Systems and Terminology 7 2.5 Currently Used Terms in Classification of Osteomyelitis of the Jaws .................. 11 2.5.1 Acute/Subacute Osteomyelitis .............. 11 2.5.2 Chronic Osteomyelitis ........................... 11 2.5.3 Chronic Suppurative Osteomyelitis: Secondary Chronic Osteomyelitis .......... 11 2.5.4 Chronic Non-suppurative Osteomyelitis 11 2.5.5 Diffuse Sclerosing Osteomyelitis, Primary Chronic Osteomyelitis, Florid Osseous Dysplasia, Juvenile Chronic Osteomyelitis ............. 11 2.5.6 SAPHO Syndrome, Chronic Recurrent Multifocal Osteomyelitis (CRMO) ........... 13 2.5.7 Periostitis Ossificans, Garrès Osteomyelitis ............................. 13 2.5.8 Other Commonly Used Terms ................ 13 2.6 Osteomyelitis of the Jaws: The Zurich Classification System ........... 16 2.6.1 General Aspects of the Zurich Classification System ............................. 16 2.6.2 Acute Osteomyelitis and Secondary Chronic Osteomyelitis ........................... 17 2.6.3 Clinical Presentation ............................. 26 2.6.4 Primary Chronic Osteomyelitis .............. 34 2.7 Differential Diagnosis ............................ 48 2.7.1 General Considerations ......................... 48 2.7.2 Differential Diagnosis of Acute and Secondary Chronic Osteomyelitis ... 50 2.7.3 Differential Diagnosis of Primary Chronic Osteomyelitis ........................... 50 2.1 Summary",
"title": ""
},
{
"docid": "bb3295be91f0365d0d101e08ca4f5f5f",
"text": "Autonomous driving with high velocity is a research hotspot which challenges the scientists and engineers all over the world. This paper proposes a scheme of indoor autonomous car based on ROS which combines the method of Deep Learning using Convolutional Neural Network (CNN) with statistical approach using liDAR images and achieves a robust obstacle avoidance rate in cruise mode. In addition, the design and implementation of autonomous car are also presented in detail which involves the design of Software Framework, Hector Simultaneously Localization and Mapping (Hector SLAM) by Teleoperation, Autonomous Exploration, Path Plan, Pose Estimation, Command Processing, and Data Recording (Co- collection). what’s more, the schemes of outdoor autonomous car, communication, and security are also discussed. Finally, all functional modules are integrated in nVidia Jetson TX1.",
"title": ""
},
{
"docid": "74f95681ad04646bd5a221870948e43b",
"text": "Crimes will somehow influence organizations and institutions when occurred frequently in a society. Thus, it seems necessary to study reasons, factors and relations between occurrence of different crimes and finding the most appropriate ways to control and avoid more crimes. The main objective of this paper is to classify clustered crimes based on occurrence frequency during different years. Data mining is used extensively in terms of analysis, investigation and discovery of patterns for occurrence of different crimes. We applied a theoretical model based on data mining techniques such as clustering and classification to real crime dataset recorded by police in England and Wales within 1990 to 2011. We assigned weights to the features in order to improve the quality of the model and remove low value of them. The Genetic Algorithm (GA) is used for optimizing of Outlier Detection operator parameters using RapidMiner tool. Keywords—crime; clustering; classification; genetic algorithm; weighting; rapidminer",
"title": ""
},
{
"docid": "009f1283d0bd29d99a2de3695157ffd7",
"text": "Convolutional networks for image classification progressively reduce resolution until the image is represented by tiny feature maps in which the spatial structure of the scene is no longer discernible. Such loss of spatial acuity can limit image classification accuracy and complicate the transfer of the model to downstream applications that require detailed scene understanding. These problems can be alleviated by dilation, which increases the resolution of output feature maps without reducing the receptive field of individual neurons. We show that dilated residual networks (DRNs) outperform their non-dilated counterparts in image classification without increasing the models depth or complexity. We then study gridding artifacts introduced by dilation, develop an approach to removing these artifacts (degridding), and show that this further increases the performance of DRNs. In addition, we show that the accuracy advantage of DRNs is further magnified in downstream applications such as object localization and semantic segmentation.",
"title": ""
},
{
"docid": "099bd9e751b8c1e3a07ee06f1ba4b55b",
"text": "This paper presents a robust stereo-vision-based drivable road detection and tracking system that was designed to navigate an intelligent vehicle through challenging traffic scenarios and increment road safety in such scenarios with advanced driver-assistance systems (ADAS). This system is based on a formulation of stereo with homography as a maximum a posteriori (MAP) problem in a Markov random held (MRF). Under this formulation, we develop an alternating optimization algorithm that alternates between computing the binary labeling for road/nonroad classification and learning the optimal parameters from the current input stereo pair itself. Furthermore, online extrinsic camera parameter reestimation and automatic MRF parameter tuning are performed to enhance the robustness and accuracy of the proposed system. In the experiments, the system was tested on our experimental intelligent vehicles under various real challenging scenarios. The results have substantiated the effectiveness and the robustness of the proposed system with respect to various challenging road scenarios such as heterogeneous road materials/textures, heavy shadows, changing illumination and weather conditions, and dynamic vehicle movements.",
"title": ""
},
{
"docid": "5d1b66986357f2566ac503727a80bb87",
"text": "Natural Language Inference (NLI) task requires an agent to determine the logical relationship between a natural language premise and a natural language hypothesis. We introduce Interactive Inference Network (IIN), a novel class of neural network architectures that is able to achieve high-level understanding of the sentence pair by hierarchically extracting semantic features from interaction space. We show that an interaction tensor (attention weight) contains semantic information to solve natural language inference, and a denser interaction tensor contains richer semantic information. One instance of such architecture, Densely Interactive Inference Network (DIIN), demonstrates the state-of-the-art performance on large scale NLI copora and large-scale NLI alike corpus. It’s noteworthy that DIIN achieve a greater than 20% error reduction on the challenging Multi-Genre NLI (MultiNLI; Williams et al. 2017) dataset with respect to the strongest published system.",
"title": ""
},
{
"docid": "d7e53788cbe072bdf26ea71c0a91c2b3",
"text": "3D mesh segmentation has become a crucial part of many applications in 3D shape analysis. In this paper, a comprehensive survey on 3D mesh segmentation methods is presented. Analysis of the existing methodologies is addressed taking into account a new categorization along with the performance evaluation frameworks which aim to support meaningful benchmarks not only qualitatively but also in a quantitative manner. This survey aims to capture the essence of current trends in 3D mesh segmentation.",
"title": ""
},
{
"docid": "261a35dabf9129c6b9efc5f29540634c",
"text": "To date, the growth of electronic personal data leads to a trend that data owners prefer to remotely outsource their data to clouds for the enjoyment of the high-quality retrieval and storage service without worrying the burden of local data management and maintenance. However, secure share and search for the outsourced data is a formidable task, which may easily incur the leakage of sensitive personal information. Efficient data sharing and searching with security is of critical importance. This paper, for the first time, proposes a searchable attribute-based proxy reencryption system. When compared with the existing systems only supporting either searchable attribute-based functionality or attribute-based proxy reencryption, our new primitive supports both abilities and provides flexible keyword update service. In particular, the system enables a data owner to efficiently share his data to a specified group of users matching a sharing policy and meanwhile, the data will maintain its searchable property but also the corresponding search keyword(s) can be updated after the data sharing. The new mechanism is applicable to many real-world applications, such as electronic health record systems. It is also proved chosen ciphertext secure in the random oracle model.",
"title": ""
},
{
"docid": "18fcdcadc3290f9c8dd09f0aa1a27e8f",
"text": "The Industry 4.0 is a vision that includes connecting more intensively physical systems with their virtual counterparts in computers. This computerization of manufacturing will bring many advantages, including allowing data gathering, integration and analysis in the scale not seen earlier. In this paper we describe our Semantic Big Data Historian that is intended to handle large volumes of heterogeneous data gathered from distributed data sources. We describe the approach and implementation with a special focus on using Semantic Web technologies for integrating the data.",
"title": ""
},
{
"docid": "4b2510dfa7b0d9de17a9a1e43a362e85",
"text": "Stakeholder marketing has established foundational support for redefining and broadening the marketing discipline. An extensive literature review of 58 marketing articles that address six primary stakeholder groups (i.e., customers, suppliers, employees, shareholders, regulators, and the local community) provides evidence of the important role the groups play in stakeholder marketing. Based on this review and in conjunction with established marketing theory, we define stakeholder marketing as “activities and processes within a system of social institutions that facilitate and maintain value through exchange relationships with multiple stakeholders.” In an effort to focus on the stakeholder marketing field of study, we offer both a conceptual framework for understanding the pivotal role of stakeholder marketing and research questions for examining the linkages among stakeholder exchanges, value creation, and marketing outcomes.",
"title": ""
},
{
"docid": "753b167933f5dd92c4b8021f6b448350",
"text": "The advent of social media and microblogging platforms has radically changed the way we consume information and form opinions. In this paper, we explore the anatomy of the information space on Facebook by characterizing on a global scale the news consumption patterns of 376 million users over a time span of 6 y (January 2010 to December 2015). We find that users tend to focus on a limited set of pages, producing a sharp community structure among news outlets. We also find that the preferences of users and news providers differ. By tracking how Facebook pages \"like\" each other and examining their geolocation, we find that news providers are more geographically confined than users. We devise a simple model of selective exposure that reproduces the observed connectivity patterns.",
"title": ""
},
{
"docid": "147b207125fcda1dece25a6c5cd17318",
"text": "In this paper we present a neural network based system for automated e-mail filing into folders and antispam filtering. The experiments show that it is more accurate than several other techniques. We also investigate the effects of various feature selection, weighting and normalization methods, and also the portability of the anti-spam filter across different users.",
"title": ""
},
{
"docid": "8ca60b68f1516d63af36b7ead860686b",
"text": "The automatic patch-based exploit generation problem is: given a program P and a patched version of the program P', automatically generate an exploit for the potentially unknown vulnerability present in P but fixed in P'. In this paper, we propose techniques for automatic patch-based exploit generation, and show that our techniques can automatically generate exploits for 5 Microsoft programs based upon patches provided via Windows Update. Although our techniques may not work in all cases, a fundamental tenant of security is to conservatively estimate the capabilities of attackers. Thus, our results indicate that automatic patch-based exploit generation should be considered practical. One important security implication of our results is that current patch distribution schemes which stagger patch distribution over long time periods, such as Windows Update, may allow attackers who receive the patch first to compromise the significant fraction of vulnerable hosts who have not yet received the patch.",
"title": ""
},
{
"docid": "152182336e620ee94f24e3865b7b377f",
"text": "In Theory III we characterize with a mix of theory and experiments the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate as shown in Theory II and 3) have maximum generalization. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF 123 1216. H.M. is supported in part by ARO Grant W911NF-15-10385.",
"title": ""
},
{
"docid": "9cc997e886bea0ac5006c9ee734b7906",
"text": "Additive manufacturing technology using inkjet offers several improvements to electronics manufacturing compared to current nonadditive masking technologies. Manufacturing processes can be made more efficient, straightforward and flexible compared to subtractive masking processes, several time-consuming and expensive steps can be omitted. Due to the additive process, material loss is minimal, because material is never removed as with etching processes. The amounts of used material and waste are smaller, which is advantageous in both productivity and environmental means. Furthermore, the additive inkjet manufacturing process is flexible allowing fast prototyping, easy design changes and personalization of products. Additive inkjet processing offers new possibilities to electronics integration, by enabling direct writing on various surfaces, and component interconnection without a specific substrate. The design and manufacturing of inkjet printed modules differs notably from the traditional way to manufacture electronics. In this study a multilayer inkjet interconnection process to integrate functional systems was demonstrated, and the issues regarding the design and manufacturing were considered. r 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "dd7ab988d8a40e6181cd37f8a1b1acfa",
"text": "In areas approaching malaria elimination, human mobility patterns are important in determining the proportion of malaria cases that are imported or the result of low-level, endemic transmission. A convenience sample of participants enrolled in a longitudinal cohort study in the catchment area of Macha Hospital in Choma District, Southern Province, Zambia, was selected to carry a GPS data logger for one month from October 2013 to August 2014. Density maps and activity space plots were created to evaluate seasonal movement patterns. Time spent outside the household compound during anopheline biting times, and time spent in malaria high- and low-risk areas, were calculated. There was evidence of seasonal movement patterns, with increased long-distance movement during the dry season. A median of 10.6% (interquartile range (IQR): 5.8-23.8) of time was spent away from the household, which decreased during anopheline biting times to 5.6% (IQR: 1.7-14.9). The per cent of time spent in malaria high-risk areas for participants residing in high-risk areas ranged from 83.2% to 100%, but ranged from only 0.0% to 36.7% for participants residing in low-risk areas. Interventions targeted at the household may be more effective because of restricted movement during the rainy season, with limited movement between high- and low-risk areas.",
"title": ""
},
{
"docid": "cfce53c88e07b9cd837c3182a24d9901",
"text": "The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "8b252e706868440162e50a2c23255cb3",
"text": "Currently, most top-performing text detection networks tend to employ fixed-size anchor boxes to guide the search for text instances. ey usually rely on a large amount of anchors with different scales to discover texts in scene images, thus leading to high computational cost. In this paper, we propose an end-to-end boxbased text detector with scale-adaptive anchors, which can dynamically adjust the scales of anchors according to the sizes of underlying texts by introducing an additional scale regression layer. e proposed scale-adaptive anchors allow us to use a few number of anchors to handle multi-scale texts and therefore significantly improve the computational efficiency. Moreover, compared to discrete scales used in previous methods, the learned continuous scales are more reliable, especially for small texts detection. Additionally, we propose Anchor convolution to beer exploit necessary feature information by dynamically adjusting the sizes of receptive fields according to the learned scales. Extensive experiments demonstrate that the proposed detector is fast, taking only 0.28 second per image, while outperforming most state-of-the-art methods in accuracy.",
"title": ""
}
] | scidocsrr |
945a19b9004f75cd7a1f0e7743ed3221 | Learning Sense-specific Word Embeddings By Exploiting Bilingual Resources | [
{
"docid": "a1a1ba8a6b7515f676ba737434c6d86a",
"text": "Semantic hierarchy construction aims to build structures of concepts linked by hypernym–hyponym (“is-a”) relations. A major challenge for this task is the automatic discovery of such relations. This paper proposes a novel and effective method for the construction of semantic hierarchies based on word embeddings, which can be used to measure the semantic relationship between words. We identify whether a candidate word pair has hypernym–hyponym relation by using the word-embedding-based semantic projections between words and their hypernyms. Our result, an F-score of 73.74%, outperforms the state-of-theart methods on a manually labeled test dataset. Moreover, combining our method with a previous manually-built hierarchy extension method can further improve Fscore to 80.29%.",
"title": ""
}
] | [
{
"docid": "c6e14529a55b0e6da44dd0966896421a",
"text": "Context-based pairing solutions increase the usability of IoT device pairing by eliminating any human involvement in the pairing process. This is possible by utilizing on-board sensors (with same sensing modalities) to capture a common physical context (e.g., ambient sound via each device's microphone). However, in a smart home scenario, it is impractical to assume that all devices will share a common sensing modality. For example, a motion detector is only equipped with an infrared sensor while Amazon Echo only has microphones. In this paper, we develop a new context-based pairing mechanism called Perceptio that uses time as the common factor across differing sensor types. By focusing on the event timing, rather than the specific event sensor data, Perceptio creates event fingerprints that can be matched across a variety of IoT devices. We propose Perceptio based on the idea that devices co-located within a physically secure boundary (e.g., single family house) can observe more events in common over time, as opposed to devices outside. Devices make use of the observed contextual information to provide entropy for Perceptio's pairing protocol. We design and implement Perceptio, and evaluate its effectiveness as an autonomous secure pairing solution. Our implementation demonstrates the ability to sufficiently distinguish between legitimate devices (placed within the boundary) and attacker devices (placed outside) by imposing a threshold on fingerprint similarity. Perceptio demonstrates an average fingerprint similarity of 94.9% between legitimate devices while even a hypothetical impossibly well-performing attacker yields only 68.9% between itself and a valid device.",
"title": ""
},
{
"docid": "68bb5cb195c910e0a52c81a42a9e141c",
"text": "With advances in brain-computer interface (BCI) research, a portable few- or single-channel BCI system has become necessary. Most recent BCI studies have demonstrated that the common spatial pattern (CSP) algorithm is a powerful tool in extracting features for multiple-class motor imagery. However, since the CSP algorithm requires multi-channel information, it is not suitable for a few- or single-channel system. In this study, we applied a short-time Fourier transform to decompose a single-channel electroencephalography signal into the time-frequency domain and construct multi-channel information. Using the reconstructed data, the CSP was combined with a support vector machine to obtain high classification accuracies from channels of both the sensorimotor and forehead areas. These results suggest that motor imagery can be detected with a single channel not only from the traditional sensorimotor area but also from the forehead area.",
"title": ""
},
{
"docid": "ce94ff17f677b6c2c6c81295fa53b8df",
"text": "The Information Artifact Ontology (IAO) was created to serve as a domain‐neutral resource for the representation of types of information content entities (ICEs) such as documents, data‐bases, and digital im‐ ages. We identify a series of problems with the current version of the IAO and suggest solutions designed to advance our understanding of the relations between ICEs and associated cognitive representations in the minds of human subjects. This requires embedding IAO in a larger framework of ontologies, including most importantly the Mental Func‐ tioning Ontology (MFO). It also requires a careful treatment of the aboutness relations between ICEs and associated cognitive representa‐ tions and their targets in reality.",
"title": ""
},
{
"docid": "7664e1bb09bf8547bbc7333a41404f2f",
"text": "A Nyquist ADC with time-based pipelined architecture is proposed. The proposed hybrid pipeline stage, incorporating time-domain amplification based on a charge pump, enables power efficient analog to digital conversion. The proposed ADC also adopts a minimalist switched amplifier with 24dB open-loop dc gain in the first stage MDAC that is based on a new V-T operation, instead of a conventional high gain amplifier. The measured results of the prototype ADC implemented in a 0.13μm CMOS demonstrate peak SNDR of 69.3dB at 6.38mW power, with a near rail-to-rail 1MHz input of 2.4VP-P at 70MHz sampling frequency and 1.3V supply. This results in 38.2fJ/conversion-step FOM.",
"title": ""
},
{
"docid": "e485aca373cf4543e1a8eeadfa0e6772",
"text": "Identifying peer-review helpfulness is an important task for improving the quality of feedback that students receive from their peers. As a first step towards enhancing existing peerreview systems with new functionality based on helpfulness detection, we examine whether standard product review analysis techniques also apply to our new context of peer reviews. In addition, we investigate the utility of incorporating additional specialized features tailored to peer review. Our preliminary results show that the structural features, review unigrams and meta-data combined are useful in modeling the helpfulness of both peer reviews and product reviews, while peer-review specific auxiliary features can further improve helpfulness prediction.",
"title": ""
},
{
"docid": "9bc681a751d8fe9e2c93204ea06786b8",
"text": "In this paper, a complimentary split ring resonator (CSRR) enhanced wideband log-periodic antenna with coupled microstrip line feeding is presented. Here in this work, coupled line feeding to the patches is proposed to avoid individual microstrip feed matching complexities. Three CSRR elements were etched in the ground plane. Individual patches were designed according to the conventional log-periodic design rules. FR4 dielectric substrate is used to design a five-element log-periodic patch with CSRR printed on the ground plane. The result shows a wide operating band ranging from 4.5 GHz to 9 GHz. Surface current distribution of the antenna shows a strong resonance of CSRR's placed in the ground plane. The design approach of the antenna is reported and performance of the proposed antenna has been evaluated through three dimensional electromagnetic simulation validating performance enhancement of the antenna due to presence of CSRRs. Antennas designed in this work may be used in satellite and indoor wireless communication.",
"title": ""
},
{
"docid": "36f2be7a14eeb10ad975aa00cfd30f36",
"text": "Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires Ω(rnK−1) observations. In contrast, a certain (intractable) nonconvex formulation needs only O(r +nrK) observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(rbK/2cndK/2e) observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. l1, nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly.",
"title": ""
},
{
"docid": "eef87d8905b621d2d0bb2b66108a56c1",
"text": "We study deep learning approaches to inferring numerical coordinates for points of interest in an input image. Existing convolutional neural network-based solutions to this problem either take a heatmap matching approach or regress to coordinates with a fully connected output layer. Neither of these approaches is ideal, since the former is not entirely differentiable, and the latter lacks inherent spatial generalization. We propose our differentiable spatial to numerical transform (DSNT) to fill this gap. The DSNT layer adds no trainable parameters, is fully differentiable, and exhibits good spatial generalization. Unlike heatmap matching, DSNT works well with low heatmap resolutions, so it can be dropped in as an output layer for a wide range of existing fully convolutional architectures. Consequently, DSNT offers a better trade-off between inference speed and prediction accuracy compared to existing techniques. When used to replace the popular heatmap matching approach used in almost all state-of-the-art methods for pose estimation, DSNT gives better prediction accuracy for all model architectures tested.",
"title": ""
},
{
"docid": "f0ef9541461cd9d9e42ea355ea31ac41",
"text": "We introduce and create a framework for deriving probabilistic models of Information Retrieval. The models are nonparametric models of IR obtained in the language model approach. We derive term-weighting models by measuring the divergence of the actual term distribution from that obtained under a random process. Among the random processes we study the binomial distribution and Bose--Einstein statistics. We define two types of term frequency normalization for tuning term weights in the document--query matching process. The first normalization assumes that documents have the same length and measures the information gain with the observed term once it has been accepted as a good descriptor of the observed document. The second normalization is related to the document length and to other statistics. These two normalization methods are applied to the basic models in succession to obtain weighting formulae. Results show that our framework produces different nonparametric models forming baseline alternatives to the standard tf-idf model.",
"title": ""
},
{
"docid": "ed929cce16774307d93719f50415e138",
"text": "BACKGROUND\nMore than one in five patients who undergo treatment for breast cancer will develop breast cancer-related lymphedema (BCRL). BCRL can occur as a result of breast cancer surgery and/or radiation therapy. BCRL can negatively impact comfort, function, and quality of life (QoL). Manual lymphatic drainage (MLD), a type of hands-on therapy, is frequently used for BCRL and often as part of complex decongestive therapy (CDT). CDT is a fourfold conservative treatment which includes MLD, compression therapy (consisting of compression bandages, compression sleeves, or other types of compression garments), skin care, and lymph-reducing exercises (LREs). Phase 1 of CDT is to reduce swelling; Phase 2 is to maintain the reduced swelling.\n\n\nOBJECTIVES\nTo assess the efficacy and safety of MLD in treating BCRL.\n\n\nSEARCH METHODS\nWe searched Medline, EMBASE, CENTRAL, WHO ICTRP (World Health Organization's International Clinical Trial Registry Platform), and Cochrane Breast Cancer Group's Specialised Register from root to 24 May 2013. No language restrictions were applied.\n\n\nSELECTION CRITERIA\nWe included randomized controlled trials (RCTs) or quasi-RCTs of women with BCRL. The intervention was MLD. The primary outcomes were (1) volumetric changes, (2) adverse events. Secondary outcomes were (1) function, (2) subjective sensations, (3) QoL, (4) cost of care.\n\n\nDATA COLLECTION AND ANALYSIS\nWe collected data on three volumetric outcomes. (1) LE (lymphedema) volume was defined as the amount of excess fluid left in the arm after treatment, calculated as volume in mL of affected arm post-treatment minus unaffected arm post-treatment. (2) Volume reduction was defined as the amount of fluid reduction in mL from before to after treatment calculated as the pretreatment LE volume of the affected arm minus the post-treatment LE volume of the affected arm. (3) Per cent reduction was defined as the proportion of fluid reduced relative to the baseline excess volume, calculated as volume reduction divided by baseline LE volume multiplied by 100. We entered trial data into Review Manger 5.2 (RevMan), pooled data using a fixed-effect model, and analyzed continuous data as mean differences (MDs) with 95% confidence intervals (CIs). We also explored subgroups to determine whether mild BCRL compared to moderate or severe BCRL, and BCRL less than a year compared to more than a year was associated with a better response to MLD.\n\n\nMAIN RESULTS\nSix trials were included. Based on similar designs, trials clustered in three categories.(1) MLD + standard physiotherapy versus standard physiotherapy (one trial) showed significant improvements in both groups from baseline but no significant between-groups differences for per cent reduction.(2) MLD + compression bandaging versus compression bandaging (two trials) showed significant per cent reductions of 30% to 38.6% for compression bandaging alone, and an additional 7.11% reduction for MLD (MD 7.11%, 95% CI 1.75% to 12.47%; two RCTs; 83 participants). Volume reduction was borderline significant (P = 0.06). LE volume was not significant. Subgroup analyses was significant showing that participants with mild-to-moderate BCRL were better responders to MLD than were moderate-to-severe participants.(3) MLD + compression therapy versus nonMLD treatment + compression therapy (three trials) were too varied to pool. One of the trials compared compression sleeve plus MLD to compression sleeve plus pneumatic pump. 
Volume reduction was statistically significant favoring MLD (MD 47.00 mL, 95% CI 15.25 mL to 78.75 mL; 1 RCT; 24 participants), per cent reduction was borderline significant (P=0.07), and LE volume was not significant. A second trial compared compression sleeve plus MLD to compression sleeve plus self-administered simple lymphatic drainage (SLD), and was significant for MLD for LE volume (MD -230.00 mL, 95% CI -450.84 mL to -9.16 mL; 1 RCT; 31 participants) but not for volume reduction or per cent reduction. A third trial of MLD + compression bandaging versus SLD + compression bandaging was not significant (P = 0.10) for per cent reduction, the only outcome measured (MD 11.80%, 95% CI -2.47% to 26.07%, 28 participants).MLD was well tolerated and safe in all trials.Two trials measured function as range of motion with conflicting results. One trial reported significant within-groups gains for both groups, but no between-groups differences. The other trial reported there were no significant within-groups gains and did not report between-groups results. One trial measured strength and reported no significant changes in either group.Two trials measured QoL, but results were not usable because one trial did not report any results, and the other trial did not report between-groups results.Four trials measured sensations such as pain and heaviness. Overall, the sensations were significantly reduced in both groups over baseline, but with no between-groups differences. No trials reported cost of care.Trials were small ranging from 24 to 45 participants. Most trials appeared to randomize participants adequately. However, in four trials the person measuring the swelling knew what treatment the participants were receiving, and this could have biased results.\n\n\nAUTHORS' CONCLUSIONS\nMLD is safe and may offer additional benefit to compression bandaging for swelling reduction. Compared to individuals with moderate-to-severe BCRL, those with mild-to-moderate BCRL may be the ones who benefit from adding MLD to an intensive course of treatment with compression bandaging. This finding, however, needs to be confirmed by randomized data.In trials where MLD and sleeve were compared with a nonMLD treatment and sleeve, volumetric outcomes were inconsistent within the same trial. Research is needed to identify the most clinically meaningful volumetric measurement, to incorporate newer technologies in LE assessment, and to assess other clinically relevant outcomes such as fibrotic tissue formation.Findings were contradictory for function (range of motion), and inconclusive for quality of life.For symptoms such as pain and heaviness, 60% to 80% of participants reported feeling better regardless of which treatment they received.One-year follow-up suggests that once swelling had been reduced, participants were likely to keep their swelling down if they continued to use a custom-made sleeve.",
"title": ""
},
{
"docid": "63a548ee4f8857823e4bcc7ccbc31d36",
"text": "The growing amounts of textual data require automatic methods for structuring relevant information so that it can be further processed by computers and systematically accessed by humans. The scenario dealt with in this dissertation is known as Knowledge Base Population (KBP), where relational information about entities is retrieved from a large text collection and stored in a database, structured according to a prespecified schema. Most of the research in this dissertation is placed in the context of the KBP benchmark of the Text Analysis Conference (TAC KBP), which provides a test-bed to examine all steps in a complex end-to-end relation extraction setting. In this dissertation a new state of the art for the TAC KBP benchmark was achieved by focussing on the following research problems: (1) The KBP task was broken down into a modular pipeline of sub-problems, and the most pressing issues were identified and quantified at all steps. (2) The quality of semi-automatically generated training data was increased by developing noise-reduction methods, decreasing the influence of false-positive training examples. (3) A focus was laid on fine-grained entity type modelling, entity expansion, entity matching and tagging, to maintain as much recall as possible on the relational argument level. (4) A new set of effective methods for generating training data, encoding features and training relational classifiers was developed and compared with previous state-of-the-art methods.",
"title": ""
},
{
"docid": "8406ce55a8de0995d07896761ac76051",
"text": "The genesis of the internet and web has created huge information on the web, including users’ digital or textual opinions and reviews. This leads to compiling many features in document-level. Consequently, we will have a high-dimensional feature space. In this paper, we propose an algorithm based on standard deviation method to solve the high-dimensional feature space. The algorithm constructs feature subsets based on dispersion of features. In other words, algorithm selects the features with higher value of standard deviation for construction of the subsets. To do this, the paper presents an experiment of performance estimation on sentiment analysis dataset using ensemble of classifiers when dimensionality reduction is performed on the input space using three different methods. Also different types of base classifiers and classifier combination rules were used.",
"title": ""
},
{
"docid": "4b9d994288fc555c89554cc2c7e41712",
"text": "The authors have been developing humanoid robots in order to develop new mechanisms and functions for a humanoid robot that has the ability to communicate naturally with a human by expressing human-like emotion. In 2004, we developed the emotion expression humanoid robot WE-4RII (Waseda Eye No.4 Refined II) by integrating the new humanoid robot hands RCH-I (RoboCasa Hand No.1) into the emotion expression humanoid robot WE-4R. We confirmed that WE-4RII can effectively express its emotion.",
"title": ""
},
{
"docid": "35c8c5f950123154f4445b6c6b2399c2",
"text": "Online social media have democratized the broadcasting of information, encouraging users to view the world through the lens of social networks. The exploitation of this lens, termed social sensing, presents challenges for researchers at the intersection of computer science and the social sciences.",
"title": ""
},
{
"docid": "51a67685249e0108c337d53b5b1c7c92",
"text": "CONTEXT\nEvidence suggests that early adverse experiences play a preeminent role in development of mood and anxiety disorders and that corticotropin-releasing factor (CRF) systems may mediate this association.\n\n\nOBJECTIVE\nTo determine whether early-life stress results in a persistent sensitization of the hypothalamic-pituitary-adrenal axis to mild stress in adulthood, thereby contributing to vulnerability to psychopathological conditions.\n\n\nDESIGN AND SETTING\nProspective controlled study conducted from May 1997 to July 1999 at the General Clinical Research Center of Emory University Hospital, Atlanta, Ga.\n\n\nPARTICIPANTS\nForty-nine healthy women aged 18 to 45 years with regular menses, with no history of mania or psychosis, with no active substance abuse or eating disorder within 6 months, and who were free of hormonal and psychotropic medications were recruited into 4 study groups (n = 12 with no history of childhood abuse or psychiatric disorder [controls]; n = 13 with diagnosis of current major depression who were sexually or physically abused as children; n = 14 without current major depression who were sexually or physically abused as children; and n = 10 with diagnosis of current major depression and no history of childhood abuse).\n\n\nMAIN OUTCOME MEASURES\nAdrenocorticotropic hormone (ACTH) and cortisol levels and heart rate responses to a standardized psychosocial laboratory stressor compared among the 4 study groups.\n\n\nRESULTS\nWomen with a history of childhood abuse exhibited increased pituitary-adrenal and autonomic responses to stress compared with controls. This effect was particularly robust in women with current symptoms of depression and anxiety. Women with a history of childhood abuse and a current major depression diagnosis exhibited a more than 6-fold greater ACTH response to stress than age-matched controls (net peak of 9.0 pmol/L [41.0 pg/mL]; 95% confidence interval [CI], 4.7-13.3 pmol/L [21.6-60. 4 pg/mL]; vs net peak of 1.4 pmol/L [6.19 pg/mL]; 95% CI, 0.2-2.5 pmol/L [1.0-11.4 pg/mL]; difference, 8.6 pmol/L [38.9 pg/mL]; 95% CI, 4.6-12.6 pmol/L [20.8-57.1 pg/mL]; P<.001).\n\n\nCONCLUSIONS\nOur findings suggest that hypothalamic-pituitary-adrenal axis and autonomic nervous system hyperreactivity, presumably due to CRF hypersecretion, is a persistent consequence of childhood abuse that may contribute to the diathesis for adulthood psychopathological conditions. Furthermore, these results imply a role for CRF receptor antagonists in the prevention and treatment of psychopathological conditions related to early-life stress. JAMA. 2000;284:592-597",
"title": ""
},
{
"docid": "3cdab5427efd08edc4f73266b7ed9176",
"text": "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.",
"title": ""
},
{
"docid": "d4b98b3872a94da9c8f7f93ff4f09cf5",
"text": "Hadapt is a start-up company currently commercializing the Yale University research project called HadoopDB. The company focuses on building a platform for Big Data analytics in the cloud by introducing a storage layer optimized for structured data and by providing a framework for executing SQL queries efficiently. This work considers processing data warehousing queries over very large datasets. Our goal is to maximize perfor mance while, at the same time, not giving up fault tolerance and scalability. We analyze the complexity of this problem in the split execution environment of HadoopDB. Here, incoming queries are examined; parts of the query are pushed down and executed inside the higher performing database layer; and the rest of the query is processed in a more generic MapReduce framework.\n In this paper, we discuss in detail performance-oriented query execution strategies for data warehouse queries in split execution environments, with particular focus on join and aggregation operations. The efficiency of our techniques is demonstrated by running experiments using the TPC-H benchmark with 3TB of data. In these experiments we compare our results with a standard commercial parallel database and an open-source MapReduce implementation featuring a SQL interface (Hive). We show that HadoopDB successfully competes with other systems.",
"title": ""
},
{
"docid": "ac2d144c5c06fcfb2d0530b115f613dc",
"text": "In medical imaging, Computer Aided Diagnosis (CAD) is a rapidly growing dynamic area of research. In recent years, significant attempts are made for the enhancement of computer aided diagnosis applications because errors in medical diagnostic systems can result in seriously misleading medical treatments. Machine learning is important in Computer Aided Diagnosis. After using an easy equation, objects such as organs may not be indicated accurately. So, pattern recognition fundamentally involves learning from examples. In the field of bio-medical, pattern recognition and machine learning promise the improved accuracy of perception and diagnosis of disease. They also promote the objectivity of decision-making process. For the analysis of high-dimensional and multimodal bio-medical data, machine learning offers a worthy approach for making classy and automatic algorithms. This survey paper provides the comparative analysis of different machine learning algorithms for diagnosis of different diseases such as heart disease, diabetes disease, liver disease, dengue disease and hepatitis disease. It brings attention towards the suite of machine learning algorithms and tools that are used for the analysis of diseases and decision-making process accordingly.",
"title": ""
},
{
"docid": "88f7c90be37cc4cb863fccbaf3a3a9e0",
"text": "A tensegrity is finite configuration of points in Ed suspended rigidly by inextendable cables and incompressable struts. Here it is explained how a stress-energy function, given by a symmetric stress matrix, can be used to create tensegrities that are globally rigid in the sense that the only configurations that satisfy the cable and strut constraints are congruent copies.",
"title": ""
},
{
"docid": "e85a019405a29e19670c99f9eabfff78",
"text": "Online shopping, different from traditional shopping behavior, is characterized with uncertainty, anonymity, and lack of control and potential opportunism. Therefore, trust is an important factor to facilitate online transactions. The purpose of this study is to explore the role of trust in consumer online purchase behavior. This study undertook a comprehensive survey of online customers having e-shopping experiences in Taiwan and we received 1258 valid questionnaires. The empirical results, using structural equation modeling, indicated that perceived ease of use and perceived usefulness affect have a significant impact on trust in e-commerce. Trust also has a significant influence on attitude towards online purchase. However, there is no significant impact from trust on the intention of online purchase.",
"title": ""
}
] | scidocsrr |
202a652cfa3e199a78fd20234f5c1dd8 | A Sentence Simplification System for Improving Relation Extraction | [
{
"docid": "5aeffba75c1e6d5f0e7bde54662da8e8",
"text": "A large number of Open Relation Extraction approaches have been proposed recently, covering a wide range of NLP machinery, from “shallow” (e.g., part-of-speech tagging) to “deep” (e.g., semantic role labeling–SRL). A natural question then is what is the tradeoff between NLP depth (and associated computational cost) versus effectiveness. This paper presents a fair and objective experimental comparison of 8 state-of-the-art approaches over 5 different datasets, and sheds some light on the issue. The paper also describes a novel method, EXEMPLAR, which adapts ideas from SRL to less costly NLP machinery, resulting in substantial gains both in efficiency and effectiveness, over binary and n-ary relation extraction tasks.",
"title": ""
},
{
"docid": "4261755b137a5cde3d9f33c82bc53cd7",
"text": "We study the problem of automatically extracting information networks formed by recognizable entities as well as relations among them from social media sites. Our approach consists of using state-of-the-art natural language processing tools to identify entities and extract sentences that relate such entities, followed by using text-clustering algorithms to identify the relations within the information network. We propose a new term-weighting scheme that significantly improves on the state-of-the-art in the task of relation extraction, both when used in conjunction with the standard tf ċ idf scheme and also when used as a pruning filter. We describe an effective method for identifying benchmarks for open information extraction that relies on a curated online database that is comparable to the hand-crafted evaluation datasets in the literature. From this benchmark, we derive a much larger dataset which mimics realistic conditions for the task of open information extraction. We report on extensive experiments on both datasets, which not only shed light on the accuracy levels achieved by state-of-the-art open information extraction tools, but also on how to tune such tools for better results.",
"title": ""
},
{
"docid": "5f2818d3a560aa34cc6b3dbfd6b8f2cc",
"text": "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses – (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE.",
"title": ""
},
{
"docid": "40405c31dfd3439252eb1810a373ec0e",
"text": "Traditional relation extraction seeks to identify pre-specified semantic relations within natural language text, while open Information Extraction (Open IE) takes a more general approach, and looks for a variety of relations without restriction to a fixed relation set. With this generalization comes the question, what is a relation? For example, should the more general task be restricted to relations mediated by verbs, nouns, or both? To help answer this question, we propose two levels of subtasks for Open IE. One task is to determine if a sentence potentially contains a relation between two entities? The other task looks to confirm explicit relation words for two entities. We propose multiple SVM models with dependency tree kernels for both tasks. For explicit relation extraction, our system can extract both noun and verb relations. Our results on three datasets show that our system is superior when compared to state-of-the-art systems like REVERB and OLLIE for both tasks. For example, in some experiments our system achieves 33% improvement on nominal relation extraction over OLLIE. In addition we propose an unsupervised rule-based approach which can serve as a strong baseline for Open IE systems.",
"title": ""
},
{
"docid": "ed189b8fa606cc2d86706d199dd71a89",
"text": "This paper presents PATTY: a large resource for textual patterns that denote binary relations between entities. The patterns are semantically typed and organized into a subsumption taxonomy. The PATTY system is based on efficient algorithms for frequent itemset mining and can process Web-scale corpora. It harnesses the rich type system and entity population of large knowledge bases. The PATTY taxonomy comprises 350,569 pattern synsets. Random-sampling-based evaluation shows a pattern accuracy of 84.7%. PATTY has 8,162 subsumptions, with a random-sampling-based precision of 75%. The PATTY resource is freely available for interactive access and download.",
"title": ""
}
] | [
{
"docid": "ffdd14d8d74a996971284a8e5e950996",
"text": "Ten years on from a review in the twentieth issue of this journal, this contribution assess the direction research in the field of glucose sensing for diabetes is headed and various technologies to be seen in the future. The emphasis of this review was placed on the home blood glucose testing market. After an introduction to diabetes and glucose sensing, this review analyses state of the art and pipeline devices; in particular their user friendliness and technological advancement. This review complements conventional reviews based on scholarly published papers in journals.",
"title": ""
},
{
"docid": "39cacae62d16bc187f88884fe72ace59",
"text": "Microplastics are present throughout the marine environment and ingestion of these plastic particles (<1 mm) has been demonstrated in a laboratory setting for a wide array of marine organisms. Here, we investigate the presence of microplastics in two species of commercially grown bivalves: Mytilus edulis and Crassostrea gigas. Microplastics were recovered from the soft tissues of both species. At time of human consumption, M. edulis contains on average 0.36 ± 0.07 particles g(-1) (wet weight), while a plastic load of 0.47 ± 0.16 particles g(-1) ww was detected in C. gigas. As a result, the annual dietary exposure for European shellfish consumers can amount to 11,000 microplastics per year. The presence of marine microplastics in seafood could pose a threat to food safety, however, due to the complexity of estimating microplastic toxicity, estimations of the potential risks for human health posed by microplastics in food stuffs is not (yet) possible.",
"title": ""
},
{
"docid": "66b104459bdfc063cf7559c363c5802f",
"text": "We present a new local strategy to solve incremental learning tasks. Applied to Support Vector Machines based on local kernel, it allows to avoid re-learning of all the parameters by selecting a working subset where the incremental learning is performed. Automatic selection procedure is based on the estimation of generalization error by using theoretical bounds that involve the margin notion. Experimental simulation on three typical datasets of machine learning give promising results.",
"title": ""
},
{
"docid": "93d498adaee9070ffd608c5c1fe8e8c9",
"text": "INTRODUCTION\nFluorescence anisotropy (FA) is one of the major established methods accepted by industry and regulatory agencies for understanding the mechanisms of drug action and selecting drug candidates utilizing a high-throughput format.\n\n\nAREAS COVERED\nThis review covers the basics of FA and complementary methods, such as fluorescence lifetime anisotropy and their roles in the drug discovery process. The authors highlight the factors affecting FA readouts, fluorophore selection and instrumentation. Furthermore, the authors describe the recent development of a successful, commercially valuable FA assay for long QT syndrome drug toxicity to illustrate the role that FA can play in the early stages of drug discovery.\n\n\nEXPERT OPINION\nDespite the success in drug discovery, the FA-based technique experiences competitive pressure from other homogeneous assays. That being said, FA is an established yet rapidly developing technique, recognized by academic institutions, the pharmaceutical industry and regulatory agencies across the globe. The technical problems encountered in working with small molecules in homogeneous assays are largely solved, and new challenges come from more complex biological molecules and nanoparticles. With that, FA will remain one of the major work-horse techniques leading to precision (personalized) medicine.",
"title": ""
},
{
"docid": "6eab3ef8777363641b734ff4eacc90fe",
"text": "Big data, because it can mine new knowledge for economic growth and technical innovation, has recently received considerable attention, and many research efforts have been directed to big data processing due to its high volume, velocity, and variety (referred to as \"3V\") challenges. However, in addition to the 3V challenges, the flourishing of big data also hinges on fully understanding and managing newly arising security and privacy challenges. If data are not authentic, new mined knowledge will be unconvincing; while if privacy is not well addressed, people may be reluctant to share their data. Because security has been investigated as a new dimension, \"veracity,\" in big data, in this article, we aim to exploit new challenges of big data in terms of privacy, and devote our attention toward efficient and privacy-preserving computing in the big data era. Specifically, we first formalize the general architecture of big data analytics, identify the corresponding privacy requirements, and introduce an efficient and privacy-preserving cosine similarity computing protocol as an example in response to data mining's efficiency and privacy requirements in the big data era.",
"title": ""
},
{
"docid": "e6d309d24e7773d7fc78c3ebeb926ba0",
"text": "INTRODUCTION\nLiver disease is the third most common cause of premature mortality in the UK. Liver failure accelerates frailty, resulting in skeletal muscle atrophy, functional decline and an associated risk of liver transplant waiting list mortality. However, there is limited research investigating the impact of exercise on patient outcomes pre and post liver transplantation. The waitlist period for patients listed for liver transplantation provides a unique opportunity to provide and assess interventions such as prehabilitation.\n\n\nMETHODS AND ANALYSIS\nThis study is a phase I observational study evaluating the feasibility of conducting a randomised control trial (RCT) investigating the use of a home-based exercise programme (HBEP) in the management of patients awaiting liver transplantation. Twenty eligible patients will be randomly selected from the Queen Elizabeth University Hospital Birmingham liver transplant waiting list. Participants will be provided with an individually tailored 12-week HBEP, including step targets and resistance exercises. Activity trackers and patient diaries will be provided to support data collection. For the initial 6 weeks, telephone support will be given to discuss compliance with the study intervention, achievement of weekly targets, and to address any queries or concerns regarding the intervention. During weeks 6-12, participants will continue the intervention without telephone support to evaluate longer term adherence to the study intervention. On completing the intervention, all participants will be invited to engage in a focus group to discuss their experiences and the feasibility of an RCT.\n\n\nETHICS AND DISSEMINATION\nThe protocol is approved by the National Research Ethics Service Committee North West - Greater Manchester East and Health Research Authority (REC reference: 17/NW/0120). Recruitment into the study started in April 2017 and ended in July 2017. Follow-up of participants is ongoing and due to finish by the end of 2017. The findings of this study will be disseminated through peer-reviewed publications and international presentations. In addition, the protocol will be placed on the British Liver Trust website for public access.\n\n\nTRIAL REGISTRATION NUMBER\nNCT02949505; Pre-results.",
"title": ""
},
{
"docid": "0195e112c19f512b7de6a7f00e9f1099",
"text": "Medication-related osteonecrosis of the jaw (MRONJ) is a severe adverse drug reaction, consisting of progressive bone destruction in the maxillofacial region of patients. ONJ can be caused by two pharmacological agents: Antiresorptive (including bisphosphonates (BPs) and receptor activator of nuclear factor kappa-B ligand inhibitors) and antiangiogenic. MRONJ pathophysiology is not completely elucidated. There are several suggested hypothesis that could explain its unique localization to the jaws: Inflammation or infection, microtrauma, altered bone remodeling or over suppression of bone resorption, angiogenesis inhibition, soft tissue BPs toxicity, peculiar biofilm of the oral cavity, terminal vascularization of the mandible, suppression of immunity, or Vitamin D deficiency. Dental screening and adequate treatment are fundamental to reduce the risk of osteonecrosis in patients under antiresorptive or antiangiogenic therapy, or before initiating the administration. The treatment of MRONJ is generally difficult and the optimal therapy strategy is still to be established. For this reason, prevention is even more important. It is suggested that a multidisciplinary team approach including a dentist, an oncologist, and a maxillofacial surgeon to evaluate and decide the best therapy for the patient. The choice between a conservative treatment and surgery is not easy, and it should be made on a case by case basis. However, the initial approach should be as conservative as possible. The most important goals of treatment for patients with established MRONJ are primarily the control of infection, bone necrosis progression, and pain. The aim of this paper is to represent the current knowledge about MRONJ, its preventive measures and management strategies.",
"title": ""
},
{
"docid": "4d5bf5e40ca09c6acd5d86e1147ab1d6",
"text": "In the next few decades, the proportion of Americans age 65 or older is expected to increase from 12% (36 million) to 20% (80 million) of the total US population [1]. As life expectancy increases, an even greater need arises for cost-effective interventions to improve function and quality of life among older adults [2-4]. All older adults face numerous health problems that can reduce or limit both the quality and quantity of life they will experience. Some of the main problems faced by older adults include reduced physical function and well-being, challenges with mental and emotional functioning and well-being, and more limited social functioning. Not surprisingly, these factors comprise the primary components of comprehensive health-related quality of life [5,6].",
"title": ""
},
{
"docid": "d17622889db09b8484d94392cadf1d78",
"text": "Software development has always inherently required multitasking: developers switch between coding, reviewing, testing, designing, and meeting with colleagues. The advent of software ecosystems like GitHub has enabled something new: the ability to easily switch between projects. Developers also have social incentives to contribute to many projects; prolific contributors gain social recognition and (eventually) economic rewards. Multitasking, however, comes at a cognitive cost: frequent context-switches can lead to distraction, sub-standard work, and even greater stress. In this paper, we gather ecosystem-level data on a group of programmers working on a large collection of projects. We develop models and methods for measuring the rate and breadth of a developers' context-switching behavior, and we study how context-switching affects their productivity. We also survey developers to understand the reasons for and perceptions of multitasking. We find that the most common reason for multitasking is interrelationships and dependencies between projects. Notably, we find that the rate of switching and breadth (number of projects) of a developer's work matter. Developers who work on many projects have higher productivity if they focus on few projects per day. Developers that switch projects too much during the course of a day have lower productivity as they work on more projects overall. Despite these findings, developers perceptions of the benefits of multitasking are varied.",
"title": ""
},
{
"docid": "546f0d09b23ed639ca78882746331cff",
"text": "This paper deals with the use of Petri nets in modelling railway network and designing appropriate control logic for it to avoid collision. Here, the whole railway network is presented as a combination of the elementary models – tracks, stations and points (switch) within the station including sensors and semaphores. We use generalized mutual exclusion constraints and constraints containing the firing vector to ensure safeness of the railway network. In this research work, we have actually introduced constraints at the points within the station. These constraints ensure that when a track is occupied, we control the switch so that another train will not enter into the same track and thus avoid collision.",
"title": ""
},
{
"docid": "45d72f6c70c034122c86301be9531e97",
"text": "Multiple Classifier Systems (MCS) have been widely studied as an alternative for increasing accuracy in pattern recognition. One of the most promising MCS approaches is Dynamic Selection (DS), in which the base classifiers are selected on the fly, according to each new sample to be classified. This paper provides a review of the DS techniques proposed in the literature from a theoretical and empirical point of view. We propose an updated taxonomy based on the main characteristics found in a dynamic selection system: (1) The methodology used to define a local region for the estimation of the local competence of the base classifiers; (2) The source of information used to estimate the level of competence of the base classifiers, such as local accuracy, oracle, ranking and probabilistic models, and (3) The selection approach, which determines whether a single or an ensemble of classifiers is selected. We categorize the main dynamic selection techniques in the DS literature based on the proposed taxonomy. We also conduct an extensive experimental analysis, considering a total of 18 state-of-the-art dynamic selection techniques, as well as static ensemble combination and single classification models. To date, this is the first analysis comparing all the key DS techniques under the same experimental protocol. Furthermore, we also present several perspectives and open research questions that can be used as a guide for future works in this domain. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f1369ac01e63236d9b6e20bcac25b8b1",
"text": "Traumatic dislocation of the testis is a rare event in which the full extent of the dislocation is present immediately following the initial trauma. We present a case in which the testicular dislocation progressed over a period of four days following the initial scrotal trauma.",
"title": ""
},
{
"docid": "0815549f210c57b28a7e2fc87c20f616",
"text": "Portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts the time–frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with the table-driven-based Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using the two publicly available EEG datasets. Experiment results show that the designed VLSI system improves the detection accuracy and training efficiency.",
"title": ""
},
{
"docid": "30842064b771dd6b47e514574257928f",
"text": "To be successful in financial market trading it is necessary to correctly predict future market trends. Most professional traders use technical analysis to forecast future market prices. In this paper, we present a new hybrid intelligent method to forecast financial time series, especially for the Foreign Exchange Market (FX). To emulate the way real traders make predictions, this method uses both historical market data and chart patterns to forecast market trends. First, wavelet full decomposition of time series analysis was used as an Adaptive Network-based Fuzzy Inference System (ANFIS) input data for forecasting future market prices. Also, Quantum-behaved Particle Swarm Optimization (QPSO) for tuning the ANFIS membership functions has been used. The second part of this paper proposes a novel hybrid Dynamic Time Warping (DTW)-Wavelet Transform (WT) method for automatic pattern extraction. The results indicate that the presented hybrid method is a very useful and effective one for financial price forecasting and financial pattern extraction. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "04b7ad51d2464052ebd3d32baeb5b57b",
"text": "Rob Antrobus Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk [email protected] Sylvain Frey Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk [email protected] Benjamin Green Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk [email protected]",
"title": ""
},
{
"docid": "6dc9ebf5dea1c78e1688a560f241f804",
"text": "This paper reports finding from a study carried out in a remote rural area of Bangladesh during December 2000. Nineteen key informants were interviewed for collecting data on domestic violence against women. Each key informant provided information about 10 closest neighbouring ever-married women covering a total of 190 women. The questionnaire included information about frequency of physical violence, verbal abuse, and other relevant information, including background characteristics of the women and their husbands. 50.5% of the women were reported to be battered by their husbands and 2.1% by other family members. Beating by the husband was negatively related with age of husband: the odds of beating among women with husbands aged less than 30 years were six times of those with husbands aged 50 years or more. Members of micro-credit societies also had higher odds of being beaten than non-members. The paper discusses the possibility of community-centred interventions by raising awareness about the violation of human rights issues and other legal and psychological consequences to prevent domestic violence against women.",
"title": ""
},
{
"docid": "4396d53b9cfeb4997b4e7c7293d67586",
"text": "Title Type cities and complexity understanding cities with cellular automata agent-based models and fractals PDF the complexity of cooperation agent-based models of competition and collaboration PDF party competition an agent-based model princeton studies in complexity PDF sharing cities a case for truly smart and sustainable cities urban and industrial environments PDF global metropolitan globalizing cities in a capitalist world questioning cities PDF state of the worlds cities 201011 cities for all bridging the urban divide PDF new testament cities in western asia minor light from archaeology on cities of paul and the seven churches of revelation PDF",
"title": ""
},
{
"docid": "f83f099437475aebb81fe92be355f331",
"text": "The main receptors for amyloid-beta peptide (Abeta) transport across the blood-brain barrier (BBB) from brain to blood and blood to brain are low-density lipoprotein receptor related protein-1 (LRP1) and receptor for advanced glycation end products (RAGE), respectively. In normal human plasma a soluble form of LRP1 (sLRP1) is a major endogenous brain Abeta 'sinker' that sequesters some 70 to 90 % of plasma Abeta peptides. In Alzheimer's disease (AD), the levels of sLRP1 and its capacity to bind Abeta are reduced which increases free Abeta fraction in plasma. This in turn may increase brain Abeta burden through decreased Abeta efflux and/or increased Abeta influx across the BBB. In Abeta immunotherapy, anti-Abeta antibody sequestration of plasma Abeta enhances the peripheral Abeta 'sink action'. However, in contrast to endogenous sLRP1 which does not penetrate the BBB, some anti-Abeta antibodies may slowly enter the brain which reduces the effectiveness of their sink action and may contribute to neuroinflammation and intracerebral hemorrhage. Anti-Abeta antibody/Abeta immune complexes are rapidly cleared from brain to blood via FcRn (neonatal Fc receptor) across the BBB. In a mouse model of AD, restoring plasma sLRP1 with recombinant LRP-IV cluster reduces brain Abeta burden and improves functional changes in cerebral blood flow (CBF) and behavioral responses, without causing neuroinflammation and/or hemorrhage. The C-terminal sequence of Abeta is required for its direct interaction with sLRP and LRP-IV cluster which is completely blocked by the receptor-associated protein (RAP) that does not directly bind Abeta. Therapies to increase LRP1 expression or reduce RAGE activity at the BBB and/or restore the peripheral Abeta 'sink' action, hold potential to reduce brain Abeta and inflammation, and improve CBF and functional recovery in AD models, and by extension in AD patients.",
"title": ""
},
{
"docid": "5898f4adaf86393972bcbf4c4ab91540",
"text": "This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistant Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials over a third generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future works derived from this study.",
"title": ""
}
] | scidocsrr |
e1a1b3ef0672815c3094694ae21d711a | Densely Connected Convolutional Neural Network for Multi-purpose Image Forensics under Anti-forensic Attacks | [
{
"docid": "8fda8068ce2cc06b3bcdf06b7e761ca0",
"text": "Image forensics has attracted wide attention during the past decade. However, most existing works aim at detecting a certain operation, which means that their proposed features usually depend on the investigated image operation and they consider only binary classification. This usually leads to misleading results if irrelevant features and/or classifiers are used. For instance, a JPEG decompressed image would be classified as an original or median filtered image if it was fed into a median filtering detector. Hence, it is important to develop forensic methods and universal features that can simultaneously identify multiple image operations. Based on extensive experiments and analysis, we find that any image operation, including existing anti-forensics operations, will inevitably modify a large number of pixel values in the original images. Thus, some common inherent statistics such as the correlations among adjacent pixels cannot be preserved well. To detect such modifications, we try to analyze the properties of local pixels within the image in the residual domain rather than the spatial domain considering the complexity of the image contents. Inspired by image steganalytic methods, we propose a very compact universal feature set and then design a multiclass classification scheme for identifying many common image operations. In our experiments, we tested the proposed features as well as several existing features on 11 typical image processing operations and four kinds of anti-forensic methods. The experimental results show that the proposed strategy significantly outperforms the existing forensic methods in terms of both effectiveness and universality.",
"title": ""
},
{
"docid": "ef0d7de77d25cc574fe361178138d310",
"text": "This paper proposes a new, conceptually simple and effective forensic method to address both the generality and the fine-grained tampering localization problems of image forensics. Corresponding to each kind of image operation, a rich GMM (Gaussian Mixture Model) is learned as the image statistical model for small image patches. Thereafter, the binary classification problem, whether a given image block has been previously processed, can be solved by comparing the average patch log-likelihood values calculated on overlapping image patches under different GMMs of original and processed images. With comparisons to a powerful steganalytic feature, experimental results demonstrate the efficiency of the proposed method, for multiple image operations, on whole images and small blocks.",
"title": ""
}
] | [
{
"docid": "1dcbd0c9fad30fcc3c0b6f7c79f5d04c",
"text": "Anvil is a tool for the annotation of audiovisual material containing multimodal dialogue. Annotation takes place on freely definable, multiple layers (tracks) by inserting time-anchored elements that hold a number of typed attribute-value pairs. Higher-level elements (suprasegmental) consist of a sequence of elements. Attributes contain symbols or cross-level links to arbitrary other elements. Anvil is highly generic (usable with different annotation schemes), platform-independent, XMLbased and fitted with an intuitive graphical user interface. For project integration, Anvil offers the import of speech transcription and export of text and table data for further statistical processing.",
"title": ""
},
{
"docid": "aa12fd5752d85d80ff33f620546cc288",
"text": "Sentiment Analysis(SA) is a combination of emotions, opinions and subjectivity of text. Today, social networking sites like Twitter are tremendously used in expressing the opinions about a particular entity in the form of tweets which are limited to 140 characters. Reviews and opinions play a very important role in understanding peoples satisfaction regarding a particular entity. Such opinions have high potential for knowledge discovery. The main target of SA is to find opinions from tweets, extract sentiments from them and then define their polarity, i.e, positive, negative or neutral. Most of the work in this domain has been done for English Language. In this paper, we discuss and propose sentiment analysis using Hindi language. We will discuss an unsupervised lexicon method for classification.",
"title": ""
},
{
"docid": "7ec9f6b40242a732282520f1a4808d49",
"text": "In this paper, a novel technique to enhance the bandwidth of substrate integrated waveguide cavity backed slot antenna is demonstrated. The feeding technique to the cavity backed antenna has been modified by introducing offset feeding of microstrip line along with microstrip to grounded coplanar waveguide transition which helps to excite TE120 mode in the cavity and also to get improvement in impedance matching to the slot antenna simultaneously. The proposed antenna is designed to resonate in X band (8-12 GHz) and shows a resonance at 10.2 GHz with a bandwidth of 4.2% and a gain of 5.6 dBi, 15.6 dB front to back ratio and -30 dB maximum cross polarization level.",
"title": ""
},
{
"docid": "2c328d1dd45733ad8063ea89a6b6df43",
"text": "We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.",
"title": ""
},
{
"docid": "a57e7eae346ee2aa7bbcaf08b8ac3481",
"text": "A large number of problems in AI and other areas of computer science can be viewed as special cases of the constraint-satisfaction problem. Some examples are machine vision, belief maintenance, scheduling, temporal reasoning, graph problems, floor plan design, the planning of genetic experiments, and the satisfiability problem. A number of different approaches have been developed for solving these problems. Some of them use constraint propagation to simplify the original problem. Others use backtracking to directly search for possible solutions. Some are a combination of these two techniques. This article overviews many of these approaches in a tutorial fashion. Articles",
"title": ""
},
{
"docid": "f8ac5a0dbd0bf8228b8304c1576189b9",
"text": "The importance of cost planning for solid waste management (SWM) in industrialising regions (IR) is not well recognised. The approaches used to estimate costs of SWM can broadly be classified into three categories - the unit cost method, benchmarking techniques and developing cost models using sub-approaches such as cost and production function analysis. These methods have been developed into computer programmes with varying functionality and utility. IR mostly use the unit cost and benchmarking approach to estimate their SWM costs. The models for cost estimation, on the other hand, are used at times in industrialised countries, but not in IR. Taken together, these approaches could be viewed as precedents that can be modified appropriately to suit waste management systems in IR. The main challenges (or problems) one might face while attempting to do so are a lack of cost data, and a lack of quality for what data do exist. There are practical benefits to planners in IR where solid waste problems are critical and budgets are limited.",
"title": ""
},
{
"docid": "fabe804ea92785764b9e7e1b0b0fea9c",
"text": "Many emerging applications such as intruder detection and border protection drive the fast increasing development of device-free passive (DfP) localization techniques. In this paper, we present Pilot, a Channel State Information (CSI)-based DfP indoor localization system in WLAN. Pilot design is motivated by the observations that PHY layer CSI is capable of capturing the environment variance due to frequency diversity of wideband channel, such that the position where the entity located can be uniquely identified by monitoring the CSI feature pattern shift. Therefore, a ``passive'' radio map is constructed as prerequisite which include fingerprints for entity located in some crucial reference positions, as well as clear environment. Unlike device-based approaches that directly percepts the current state of entities, the first challenge for DfP localization is to detect their appearance in the area of interest. To this end, we design an essential anomaly detection block as the localization trigger relying on the CSI feature shift when entity emerges. Afterwards, a probabilistic algorithm is proposed to match the abnormal CSI to the fingerprint database to estimate the positions of potential existing entities. Finally, a data fusion block is developed to address the multiple entities localization challenge. We have implemented Pilot system with commercial IEEE 802.11n NICs and evaluated the performance in two typical indoor scenarios. It is shown that our Pilot system can greatly outperform the corresponding best RSS-based scheme in terms of anomaly detection and localization accuracy.",
"title": ""
},
{
"docid": "161643d403819a0b9815da64c9c472ae",
"text": "The Domain Name System (DNS) is an essential network infrastructure component since it supports the operation of the Web, Email, Voice over IP (VoIP) and other business- critical applications running over the network. Events that compromise the security of DNS can have a significant impact on the Internet since they can affect its availability and its intended operation. This paper describes algorithms used to monitor and detect certain types of attacks to the DNS infrastructure using flow data. Our methodology is based on algorithms that do not rely on known signature attack vectors. The effectiveness of our solution is illustrated with real and simulated traffic examples. In one example, we were able to detect a tunneling attack well before the appearance of public reports of it.",
"title": ""
},
{
"docid": "096b09f064643cbd2cd80f310981c5a6",
"text": "A Ku-band 200-W pulsed solid-state power amplifier has been presented and designed by using a hybrid radial-/rectangular-waveguide spatially power-combining technique. The hybrid radial-/rectangular-waveguide power-dividing/power-combining circuit employed in this design provides not only a high power-combining efficiency over a wide bandwidth but also efficient heat sinking for the active power devices. A simple design approach of the presented power-dividing/power-combining structure has been developed. The measured small-signal gain of the pulsed power amplifier is about 51.3 dB over the operating frequency range, while the measured maximum output power at 1-dB compression is 209 W at 13.9 GHz, with an active power-combining efficiency of about 91%. Furthermore, the active power-combining efficiency is greater than 82% from 13.75 to 14.5 GHz.",
"title": ""
},
{
"docid": "8a9680ae0d35a1c53773ccf7dcef4df7",
"text": "Support Vector Machines SVMs have proven to be highly e ective for learning many real world datasets but have failed to establish them selves as common machine learning tools This is partly due to the fact that they are not easy to implement and their standard imple mentation requires the use of optimization packages In this paper we present simple iterative algorithms for training support vector ma chines which are easy to implement and guaranteed to converge to the optimal solution Furthermore we provide a technique for automati cally nding the kernel parameter and best learning rate Extensive experiments with real datasets are provided showing that these al gorithms compare well with standard implementations of SVMs in terms of generalisation accuracy and computational cost while being signi cantly simpler to implement",
"title": ""
},
{
"docid": "8e65630f39f96c281e206bdacf7a1748",
"text": "Precise measurement of the local position of moveable targets in three dimensions is still considered to be a challenge. With the presented local position measurement technology, a novel system, consisting of small and lightweight measurement transponders and a number of fixed base stations, is introduced. The system is operating in the 5.8-GHz industrial-scientific-medical band and can handle up to 1000 measurements per second with accuracies down to a few centimeters. Mathematical evaluation is based on a mechanical equivalent circuit. Measurement results obtained with prototype boards demonstrate the feasibility of the proposed technology in a practical application at a race track.",
"title": ""
},
{
"docid": "df896e48cb4b5a364006b3a8e60a96ac",
"text": "This paper describes a monocular vision based parking-slot-markings recognition algorithm, which is used to automate the target position selection of automatic parking assist system. Peak-pair detection and clustering in Hough space recognize marking lines. Specially, one-dimensional filter in Hough space is designed to utilize a priori knowledge about the characteristics of marking lines in bird's eye view edge image. Modified distance between point and line-segment is used to distinguish guideline from recognized marking line-segments. Once the guideline is successfully recognized, T-shape template matching easily recognizes dividing marking line-segments. Experiments show that proposed algorithm successfully recognizes parking slots even when adjacent vehicles occlude parking-slot-markings severely",
"title": ""
},
{
"docid": "1997a007b2eb9a314c4e9320d22293b4",
"text": "Face detection constitutes a key visual information analysis task in Machine Learning. The rise of Big Data has resulted in the accumulation of a massive volume of visual data which requires proper and fast analysis. Deep Learning methods are powerful approaches towards this task as training with large amounts of data exhibiting high variability has been shown to significantly enhance their effectiveness, but often requires expensive computations and leads to models of high complexity. When the objective is to analyze visual content in massive datasets, the complexity of the model becomes crucial to the success of the model. In this paper, a lightweight deep Convolutional Neural Network (CNN) is introduced for the purpose of face detection, designed with a view to minimize training and testing time, and outperforms previously published deep convolutional networks in this task, in terms of both effectiveness and efficiency. To train this lightweight deep network without compromising its efficiency, a new training method of progressive positive and hard negative sample mining is introduced and shown to drastically improve training speed and accuracy. Additionally, a separate deep network was trained to detect individual facial features and a model that combines the outputs of the two networks was created and evaluated. Both methods are capable of detecting faces under severe occlusion and unconstrained pose variation and meet the difficulties of large scale real-world, real-time face detection, and are suitable for deployment even in mobile environments such as Unmanned Aerial Vehicles (UAVs).",
"title": ""
},
{
"docid": "5203f520e6992ae6eb2e8cb28f523f6a",
"text": "Integrons can insert and excise antibiotic resistance genes on plasmids in bacteria by site-specific recombination. Class 1 integrons code for an integrase, IntI1 (337 amino acids in length), and are generally borne on elements derived from Tn5090, such as that found in the central part of Tn21. A second class of integron is found on transposon Tn7 and its relatives. We have completed the sequence of the Tn7 integrase gene, intI2, which contains an internal stop codon. This codon was found to be conserved among intI2 genes on three other Tn7-like transposons harboring different cassettes. The predicted peptide sequence (IntI2*) is 325 amino acids long and is 46% identical to IntI1. In order to detect recombination activity, the internal stop codon at position 179 in the parental allele was changed to a triplet coding for glutamic acid. The sequences flanking the cassette arrays in the class 1 and 2 integrons are not closely related, but a common pool of mobile cassettes is used by the different integron classes; two of the three antibiotic resistance cassettes on Tn7 and its close relatives are also found in various class 1 integrons. We also observed a fourth excisable cassette downstream of those described previously in Tn7. The fourth cassette encodes a 165-amino-acid protein of unknown function with 6.5 contiguous repeats of a sequence coding for 7 amino acids. IntI2*179E promoted site-specific excision of each of the cassettes in Tn7 at different frequencies. The integrases from Tn21 and Tn7 showed limited cross-specificity in that IntI1 could excise all cassettes from both Tn21 and Tn7. However, we did not observe a corresponding excision of the aadA1 cassette from Tn21 by IntI2*179E.",
"title": ""
},
{
"docid": "a85e4925e82baf96f507494c91126361",
"text": "Contractile myocytes provide a test of the hypothesis that cells sense their mechanical as well as molecular microenvironment, altering expression, organization, and/or morphology accordingly. Here, myoblasts were cultured on collagen strips attached to glass or polymer gels of varied elasticity. Subsequent fusion into myotubes occurs independent of substrate flexibility. However, myosin/actin striations emerge later only on gels with stiffness typical of normal muscle (passive Young's modulus, E approximately 12 kPa). On glass and much softer or stiffer gels, including gels emulating stiff dystrophic muscle, cells do not striate. In addition, myotubes grown on top of a compliant bottom layer of glass-attached myotubes (but not softer fibroblasts) will striate, whereas the bottom cells will only assemble stress fibers and vinculin-rich adhesions. Unlike sarcomere formation, adhesion strength increases monotonically versus substrate stiffness with strongest adhesion on glass. These findings have major implications for in vivo introduction of stem cells into diseased or damaged striated muscle of altered mechanical composition.",
"title": ""
},
{
"docid": "fb43f7e740f4a2cc6c63e3cad9bc3fc7",
"text": "The prediction task in national language processing means to guess the missing letter, word, phrase, or sentence that likely follow in a given segment of a text. Since 1980s many systems with different methods were developed for different languages. In this paper an overview of the existing prediction methods that have been used for more than two decades are described and a general classification of the approaches is presented. The three main categories of the classification are statistical modeling, knowledge-based modeling, and heuristic modeling (adaptive).",
"title": ""
},
{
"docid": "b963250b3fd1cb874c6caa93796ca1e7",
"text": "Context awareness was introduced recently in several fields in quotidian human activities. Among context aware applications, health care systems are the most important ones. Such applications, in order to perceive the context, rely on sensors which may be physical or virtual. However, these applications lack of standardization in handling the context and the perceived sensors data. In this work, we propose a formal context aware application architecture model to deal with the context taking into account the scalability and interoperability as key features towards an abstraction of the context relatively to end user applications. As a proof of concept, we present also a case study and simulation explaining the operational aspect of this architecture in health care systems.",
"title": ""
},
{
"docid": "e8b486ce556a0193148ffd743661bce9",
"text": "This chapter presents the fundamentals and applications of the State Machine Replication (SMR) technique for implementing consistent fault-tolerant services. Our focus here is threefold. First we present some fundamentals about distributed computing and three “practical” SMR protocols for different fault models. Second, we discuss some recent work aiming to improve the performance, modularity and robustness of SMR protocols. Finally, we present some prominent applications for SMR and an example of the real code needed for implementing a dependable service using the BFT-SMART replication library.",
"title": ""
},
{
"docid": "d2b06786b6daa023dfd9f58ac99e8186",
"text": "A systematic method for deriving soft-switching three-port converters (TPCs), which can interface multiple energy, is proposed in this paper. Novel full-bridge (FB) TPCs featuring single-stage power conversion, reduced conduction loss, and low-voltage stress are derived. Two nonisolated bidirectional power ports and one isolated unidirectional load port are provided by integrating an interleaved bidirectional Buck/Boost converter and a bridgeless Boost rectifier via a high-frequency transformer. The switching bridges on the primary side are shared; hence, the number of active switches is reduced. Primary-side pulse width modulation and secondary-side phase shift control strategy are employed to provide two control freedoms. Voltage and power regulations over two of the three power ports are achieved. Furthermore, the current/voltage ripples on the primary-side power ports are reduced due to the interleaving operation. Zero-voltage switching and zero-current switching are realized for the active switches and diodes, respectively. A typical FB-TPC with voltage-doubler rectifier developed by the proposed method is analyzed in detail. Operation principles, control strategy, and characteristics of the FB-TPC are presented. Experiments have been carried out to demonstrate the feasibility and effectiveness of the proposed topology derivation method.",
"title": ""
},
{
"docid": "921f141ac96c707aa2abc0c4071053d5",
"text": "When a mesh of simplicial elements (triangles or tetrahedra) is used to form a piecewise linear approximation of a function, the accuracy of the approximation depends on the sizes and shapes of the elements. In finite element methods, the conditioning of the stiffness matrices also depends on the sizes and shapes of the elements. This paper explains the mathematical connections between mesh geometry, interpolation errors, and stiffness matrix conditioning. These relationships are expressed by error bounds and element quality measures that determine the fitness of a triangle or tetrahedron for interpolation or for achieving low condition numbers. Unfortunately, the quality measures for these two purposes do not agree with each other; for instance, small angles are bad for matrix conditioning but not for interpolation. Several of the upper and lower bounds on interpolation errors and element stiffness matrix conditioning given here are tighter than those that have appeared in the literature before, so the quality measures are likely to be unusually precise indicators of element fitness.",
"title": ""
}
] | scidocsrr |
45152911817d270e1896874a457c297a | Type-Aware Distantly Supervised Relation Extraction with Linked Arguments | [
{
"docid": "afd00b4795637599f357a7018732922c",
"text": "We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation, (ii) broad use of lexical features, including jointly conditioning on multiple consecutive words, (iii) effective use of priors in conditional loglinear models, and (iv) fine-grained modeling of unknown word features. Using these ideas together, the resulting tagger gives a 97.24% accuracy on the Penn Treebank WSJ, an error reduction of 4.4% on the best previous single automatically learned tagging result.",
"title": ""
},
{
"docid": "79ad9125b851b6d2c3ed6fb1c5cf48e1",
"text": "In this paper, we extend distant supervision (DS) based on Wikipedia for Relation Extraction (RE) by considering (i) relations defined in external repositories, e.g. YAGO, and (ii) any subset of Wikipedia documents. We show that training data constituted by sentences containing pairs of named entities in target relations is enough to produce reliable supervision. Our experiments with state-of-the-art relation extraction models, trained on the above data, show a meaningful F1 of 74.29% on a manually annotated test set: this highly improves the state-of-art in RE using DS. Additionally, our end-to-end experiments demonstrated that our extractors can be applied to any general text document.",
"title": ""
},
{
"docid": "c4a925ced6eb9bea9db96136905c3e19",
"text": "Knowledge of objects and their parts, meronym relations, are at the heart of many question-answering systems, but manually encoding these facts is impractical. Past researchers have tried hand-written patterns, supervised learning, and bootstrapped methods, but achieving both high precision and recall has proven elusive. This paper reports on a thorough exploration of distant supervision to learn a meronym extractor for the domain of college biology. We introduce a novel algorithm, generalizing the ``at least one'' assumption of multi-instance learning to handle the case where a fixed (but unknown) percentage of bag members are positive examples. Detailed experiments compare strategies for mention detection, negative example generation, leveraging out-of-domain meronyms, and evaluate the benefit of our multi-instance percentage model.",
"title": ""
},
{
"docid": "44582f087f9bb39d6e542ff7b600d1c7",
"text": "We propose a new deterministic approach to coreference resolution that combines the global information and precise features of modern machine-learning models with the transparency and modularity of deterministic, rule-based systems. Our sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision, where each model builds on the previous model's cluster output. The two stages of our sieve-based architecture, a mention detection stage that heavily favors recall, followed by coreference sieves that are precision-oriented, offer a powerful way to achieve both high precision and high recall. Further, our approach makes use of global information through an entity-centric model that encourages the sharing of features across all mentions that point to the same real-world entity. Despite its simplicity, our approach gives state-of-the-art performance on several corpora and genres, and has also been incorporated into hybrid state-of-the-art coreference systems for Chinese and Arabic. Our system thus offers a new paradigm for combining knowledge in rule-based systems that has implications throughout computational linguistics.",
"title": ""
},
{
"docid": "9c44aba7a9802f1fe95fbeb712c23759",
"text": "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.",
"title": ""
},
{
"docid": "904db9e8b0deb5027d67bffbd345b05f",
"text": "Entity Recognition (ER) is a key component of relation extraction systems and many other natural-language processing applications. Unfortunately, most ER systems are restricted to produce labels from to a small set of entity classes, e.g., person, organization, location or miscellaneous. In order to intelligently understand text and extract a wide range of information, it is useful to more precisely determine the semantic classes of entities mentioned in unstructured text. This paper defines a fine-grained set of 112 tags, formulates the tagging problem as multi-class, multi-label classification, describes an unsupervised method for collecting training data, and presents the FIGER implementation. Experiments show that the system accurately predicts the tags for entities. Moreover, it provides useful information for a relation extraction system, increasing the F1 score by 93%. We make FIGER and its data available as a resource for future work.",
"title": ""
}
] | [
{
"docid": "5e14acfc68e8cb1ae7ea9b34eba420e0",
"text": "Education University of California, Berkeley (2008-2013) Ph.D. in Computer Science Thesis: Surface Web Semantics for Structured Natural Language Processing Advisor: Dan Klein. Committee members: Dan Klein, Marti Hearst, Line Mikkelsen, Nelson Morgan University of California, Berkeley (2012) Master of Science (M.S.) in Computer Science Thesis: An All-Fragments Grammar for Simple and Accurate Parsing Advisor: Dan Klein Indian Institute of Technology, Kanpur (2004-2008) Bachelor of Technology (B.Tech.) in Computer Science and Engineering GPA: 3.96/4.00 (Institute and Department Rank 2) Cornell University (Summer 2007) CS490 (Independent Research and Reading) GPA: 4.00/4.00 Advisors: Lillian Lee, Claire Cardie",
"title": ""
},
{
"docid": "5f94ad6047ec9cf565b9960e89bbc913",
"text": "In this paper, we compare the geometrical performance between the rigorous sensor model (RSM) and rational function model (RFM) in the sensor modeling of FORMOSAT-2 satellite images. For the RSM, we provide a least squares collocation procedure to determine the precise orbits. As for the RFM, we analyze the model errors when a large amount of quasi-control points, which are derived from the satellite ephemeris and attitude data, are employed. The model errors with respect to the length of the image strip are also demonstrated. Experimental results show that the RFM is well behaved, indicating that its positioning errors is similar to that of the RSM. Introduction Sensor orientation modeling is a prerequisite for the georeferencing of satellite images or 3D object reconstruction from satellite stereopairs. Nowadays, most of the high-resolution satellites use linear array pushbroom scanners. Based on the pushbroom scanning geometry, a number of investigations have been reported regarding the geometric accuracy of linear array images (Westin, 1990; Chen and Lee, 1993; Li, 1998; Tao et al., 2000; Toutin, 2003; Grodecki and Dial, 2003). The geometric modeling of the sensor orientation may be divided into two categories, namely, the rigorous sensor model (RSM) and the rational function model (RFM) (Toutin, 2004). Capable of fully delineating the imaging geometry between the image space and object space, the RSM has been recognized in providing the most precise geometrical processing of satellite images. Based on the collinearity condition, an image point corresponds to a ground point using the employment of the orientation parameters, which are expressed as a function of the sampling time. Due to the dynamic sampling, the RSM contains many mathematical calculations, which can cause problems for researchers who are not familiar with the data preprocessing. Moreover, with the increasing number of Earth resource satellites, researchers need to familiarize themselves with the uniqueness and complexity of each sensor model. Therefore, a generic sensor model of the geometrical processing is needed for simplification. (Dowman and Michalis, 2003). The RFM is a generalized sensor model that is used as an alternative for the RSM. The model uses a pair of ratios of two polynomials to approximate the collinearity condition equations. The RFM has been successfully applied to several high-resolution satellite images such as Ikonos (Di et al., 2003; Grodecki and Dial, 2003; Fraser and Hanley, 2003) and QuickBird (Robertson, 2003). Due to its simple impleThe Geometrical Comparisons of RSM and RFM for FORMOSAT-2 Satellite Images Liang-Chien Chen, Tee-Ann Teo, and Chien-Liang Liu mentation and standardization (NIMA, 2000), the approach has been widely used in the remote sensing community. Launched on 20 May 2004, FORMOSAT-2 is operated by the National Space Organization of Taiwan. The satellite operates in a sun-synchronous orbit at an altitude of 891 km and with an inclination of 99.1 degrees. It has a swath width of 24 km and orbits the Earth exactly 14 times per day, which makes daily revisits possible (NSPO, 2005). Its panchromatic images have a resolution of 2 meters, while the multispectral sensor produces 8 meter resolution images covering the blue, green, red, and NIR bands. Its high performance provides an excellent data resource for the remote sensing researchers. 
The major objective of this investigation is to compare the geometrical performances between the RSM and RFM when FORMOSAT-2 images are employed. A least squares collocation-based RSM will also be proposed in the paper. In the reconstruction of the RFM, rational polynomial coefficients are generated by using the on-board ephemeris and attitude data. In addition to the comparison of the two models, the modeling error of the RFM is analyzed when long image strips are used. Rigorous Sensor Models The proposed method comprises essentially of two parts. The first involves the development of the mathematical model for time-dependent orientations. The second performs the least squares collocation to compensate the local systematic errors. Orbit Fitting There are two types of sensor models for pushbroom satellite images, i.e., orbital elements (Westin, 1990) and state vectors (Chen and Chang, 1998). The orbital elements use the Kepler elements as the orbital parameters, while the state vectors calculate the orbital parameters directly by using the position vector. Although both sensor models are robust, the state vector model provides simpler mathematical calculations. For this reason, we select the state vector approach in this investigation. Three steps are included in the orbit modeling: (a) Initialization of the orientation parameters using on-board ephemeris data; (b) Compensation of the systematic errors of the orbital parameters and attitude data via ground control points (GCPs); and (c) Modification of the orbital parameters by using the Least Squares Collocation (Mikhail and Ackermann, 1982) technique. PHOTOGRAMMETRIC ENGINEER ING & REMOTE SENS ING May 2006 573 Center for Space and Remote Sensing Research National Central University, Chung-Li, Taiwan ([email protected]). Photogrammetric Engineering & Remote Sensing Vol. 72, No. 5, May 2006, pp. 573–579. 0099-1112/06/7205–0573/$3.00/0 © 2006 American Society for Photogrammetry and Remote Sensing HR-05-016.qxd 4/10/06 2:55 PM Page 573",
"title": ""
},
{
"docid": "945b2067076bd47485b39c33fb062ec1",
"text": "Computation of floating-point transcendental functions has a relevant importance in a wide variety of scientific applications, where the area cost, error and latency are important requirements to be attended. This paper describes a flexible FPGA implementation of a parameterizable floating-point library for computing sine, cosine, arctangent and exponential functions using the CORDIC algorithm. The novelty of the proposed architecture is that by sharing the same resources the CORDIC algorithm can be used in two operation modes, allowing it to compute the sine, cosine or arctangent functions. Additionally, in case of the exponential function, the architectures change automatically between the CORDIC or a Taylor approach, which helps to improve the precision characteristics of the circuit, specifically for small input values after the argument reduction. Synthesis of the circuits and an experimental analysis of the errors have demonstrated the correctness and effectiveness of the implemented cores and allow the designer to choose, for general-purpose applications, a suitable bit-width representation and number of iterations of the CORDIC algorithm.",
"title": ""
},
{
"docid": "e3e4d19aa9a5db85f30698b7800d2502",
"text": "In this paper we examine the use of a mathematical procedure, called Principal Component Analysis, in Recommender Systems. The resulting filtering algorithm applies PCA on user ratings and demographic data, aiming to improve various aspects of the recommendation process. After a brief introduction to PCA, we provide a discussion of the proposed PCADemog algorithm, along with possible ways of combining it with different sources of filtering data. The experimental part of this work tests distinct parameterizations for PCA-Demog, identifying those with the best performance. Finally, the paper compares their results with those achieved by other filtering approaches, and draws interesting conclusions.",
"title": ""
},
{
"docid": "b4e3d2f5e4bb1238cb6f4dad5c952c4c",
"text": "Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles---under repeated exposure of one style, scenario effects obscure any explanation effects. Our results suggests there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.",
"title": ""
},
{
"docid": "a39364020ec95a3d35dfe929d4a000c0",
"text": "The Internet of Things (IoTs) refers to the inter-connection of billions of smart devices. The steadily increasing number of IoT devices with heterogeneous characteristics requires that future networks evolve to provide a new architecture to cope with the expected increase in data generation. Network function virtualization (NFV) provides the scale and flexibility necessary for IoT services by enabling the automated control, management and orchestration of network resources. In this paper, we present a novel NFV enabled IoT architecture targeted for a state-of-the art operating room environment. We use web services based on the representational state transfer (REST) web architecture as the IoT application's southbound interface and illustrate its applicability via two different scenarios.",
"title": ""
},
{
"docid": "6c5cabfa5ee5b9d67ef25658a4b737af",
"text": "Sentence compression is the task of producing a summary of a single sentence. The compressed sentence should be shorter, contain the important content from the original, and itself be grammatical. The three papers discussed here take different approaches to identifying important content, determining which sentences are grammatical, and jointly optimizing these objectives. One family of approaches we will discuss is those that are tree-based, which create a compressed sentence by making edits to the syntactic tree of the original sentence. A second type of approach is sentence-based, which generates strings directly. Orthogonal to either of these two approaches is whether sentences are treated in isolation or if the surrounding discourse affects compressions. We compare a tree-based, a sentence-based, and a discourse-based approach and conclude with ideas for future work in this area. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MSCIS-10-20. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/929 Methods for Sentence Compression",
"title": ""
},
{
"docid": "684555a1b5eb0370eebee8cbe73a82ff",
"text": "This paper identifies and examines the key principles underlying building a state-of-the-art grammatical error correction system. We do this by analyzing the Illinois system that placed first among seventeen teams in the recent CoNLL-2013 shared task on grammatical error correction. The system focuses on five different types of errors common among non-native English writers. We describe four design principles that are relevant for correcting all of these errors, analyze the system along these dimensions, and show how each of these dimensions contributes to the performance.",
"title": ""
},
{
"docid": "0d2b90dad65e01289008177a4ebbbade",
"text": "A good test suite is one that detects real faults. Because the set of faults in a program is usually unknowable, this definition is not useful to practitioners who are creating test suites, nor to researchers who are creating and evaluating tools that generate test suites. In place of real faults, testing research often uses mutants, which are artificial faults -- each one a simple syntactic variation -- that are systematically seeded throughout the program under test. Mutation analysis is appealing because large numbers of mutants can be automatically-generated and used to compensate for low quantities or the absence of known real faults. Unfortunately, there is little experimental evidence to support the use of mutants as a replacement for real faults. This paper investigates whether mutants are indeed a valid substitute for real faults, i.e., whether a test suite’s ability to detect mutants is correlated with its ability to detect real faults that developers have fixed. Unlike prior studies, these investigations also explicitly consider the conflating effects of code coverage on the mutant detection rate. Our experiments used 357 real faults in 5 open-source applications that comprise a total of 321,000 lines of code. Furthermore, our experiments used both developer-written and automatically-generated test suites. The results show a statistically significant correlation between mutant detection and real fault detection, independently of code coverage. The results also give concrete suggestions on how to improve mutation analysis and reveal some inherent limitations.",
"title": ""
},
{
"docid": "d8567a34caacdb22a0aea281a1dbbccb",
"text": "Traditionally, interference protection is guaranteed through a policy of spectrum licensing, whereby wireless systems get exclusive access to spectrum. This is an effective way to prevent interference, but it leads to highly inefficient use of spectrum. Cognitive radio along with software radio, spectrum sensors, mesh networks, and other emerging technologies can facilitate new forms of spectrum sharing that greatly improve spectral efficiency and alleviate scarcity, if policies are in place that support these forms of sharing. On the other hand, new technology that is inconsistent with spectrum policy will have little impact. This paper discusses policies that can enable or facilitate use of many spectrum-sharing arrangements, where the arrangements are categorized as being based on coexistence or cooperation and as sharing among equals or primary-secondary sharing. A shared spectrum band may be managed directly by the regulator, or this responsibility may be delegated in large part to a license-holder. The type of sharing arrangement and the entity that manages it have a great impact on which technical approaches are viable and effective. The most efficient and cost-effective form of spectrum sharing will depend on the type of systems involved, where systems under current consideration are as diverse as television broadcasters, cellular carriers, public safety systems, point-to-point links, and personal and local-area networks. In addition, while cognitive radio offers policy-makers the opportunity to improve spectral efficiency, cognitive radio also provides new challenges for policy enforcement. A responsible regulator will not allow a device into the marketplace that might harm other systems. Thus, designers must seek innovative ways to assure regulators that new devices will comply with policy requirements and will not cause harmful interference.",
"title": ""
},
{
"docid": "395dcc7c09562f358c07af9c999fbdc7",
"text": "Protecting source code against reverse engineering and theft is an important problem. The goal is to carry out computations using confidential algorithms on an untrusted party while ensuring confidentiality of algorithms. This problem has been addressed for Boolean circuits known as ‘circuit privacy’. Circuits corresponding to real-world programs are impractical. Well-known obfuscation techniques are highly practicable, but provide only limited security, e.g., no piracy protection. In this work, we modify source code yielding programs with adjustable performance and security guarantees ranging from indistinguishability obfuscators to (non-secure) ordinary obfuscation. The idea is to artificially generate ‘misleading’ statements. Their results are combined with the outcome of a confidential statement using encrypted selector variables. Thus, an attacker must ‘guess’ the encrypted selector variables to disguise the confidential source code. We evaluated our method using more than ten programmers as well as pattern mining across open source code repositories to gain insights of (micro-)coding patterns that are relevant for generating misleading statements. The evaluation reveals that our approach is effective in that it successfully preserves source code confidentiality.",
"title": ""
},
{
"docid": "5cdb981566dfd741c9211902c0c59d50",
"text": "Since parental personality traits are assumed to play a role in parenting behaviors, the current study examined the relation between parental personality and parenting style among 688 Dutch parents of adolescents in the SMILE study. The study assessed Big Five personality traits and derived parenting styles (authoritative, authoritarian, indulgent, and uninvolved) from scores on the underlying dimensions of support and strict control. Regression analyses were used to determine which personality traits were associated with parenting dimensions and styles. As regards dimensions, the two aspects of personality reflecting interpersonal interactions (extraversion and agreeableness) were related to supportiveness. Emotional stability was associated with lower strict control. As regards parenting styles, extraverted, agreeable, and less emotionally stable individuals were most likely to be authoritative parents. Conscientiousness and openness did not relate to general parenting, but might be associated with more content-specific acts of parenting.",
"title": ""
},
{
"docid": "2bd3f3e72d99401cdf6f574982bc65ff",
"text": "In the future smart grid, both users and power companies can potentially benefit from the economical and environmental advantages of smart pricing methods to more effectively reflect the fluctuations of the wholesale price into the customer side. In addition, smart pricing can be used to seek social benefits and to implement social objectives. To achieve social objectives, the utility company may need to collect various information about users and their energy consumption behavior, which can be challenging. In this paper, we propose an efficient pricing method to tackle this problem. We assume that each user is equipped with an energy consumption controller (ECC) as part of its smart meter. All smart meters are connected to not only the power grid but also a communication infrastructure. This allows two-way communication among smart meters and the utility company. We analytically model each user's preferences and energy consumption patterns in form of a utility function. Based on this model, we propose a Vickrey-Clarke-Groves (VCG) mechanism which aims to maximize the social welfare, i.e., the aggregate utility functions of all users minus the total energy cost. Our design requires that each user provides some information about its energy demand. In return, the energy provider will determine each user's electricity bill payment. Finally, we verify some important properties of our proposed VCG mechanism for demand side management such as efficiency, user truthfulness, and nonnegative transfer. Simulation results confirm that the proposed pricing method can benefit both users and utility companies.",
"title": ""
},
{
"docid": "4f6979ca99ec7fb0010fd102e7796248",
"text": "Cryptographic systems are essential for computer and communication security, for instance, RSA is used in PGP Email clients and AES is employed in full disk encryption. In practice, the cryptographic keys are loaded and stored in RAM as plain-text, and therefore vulnerable to physical memory attacks (e.g., cold-boot attacks). To tackle this problem, we propose Copker, which implements asymmetric cryptosystems entirely within the CPU, without storing plain-text private keys in the RAM. In its active mode, Copker stores kilobytes of sensitive data, including the private key and the intermediate states, only in onchip CPU caches (and registers). Decryption/signing operations are performed without storing sensitive information in system memory. In the suspend mode, Copker stores symmetrically encrypted private keys in memory, while employs existing solutions to keep the key-encryption key securely in CPU registers. Hence, Copker releases the system resources in the suspend mode. In this paper, we implement Copker with the most common asymmetric cryptosystem, RSA, with the support of multiple private keys. We show that Copker provides decryption/signing services that are secure against physical memory attacks. Meanwhile, with intensive experiments, we demonstrate that our implementation of Copker is secure and requires reasonable overhead. Keywords—Cache-as-RAM; cold-boot attack; key management; asymmetric cryptography implementation.",
"title": ""
},
{
"docid": "5565f51ad8e1aaee43f44917befad58a",
"text": "We explore the application of deep residual learning and dilated convolutions to the keyword spotting task, using the recently-released Google Speech Commands Dataset as our benchmark. Our best residual network (ResNet) implementation significantly outperforms Google's previous convolutional neural networks in terms of accuracy. By varying model depth and width, we can achieve compact models that also outperform previous small-footprint variants. To our knowledge, we are the first to examine these approaches for keyword spotting, and our results establish an open-source state-of-the-art reference to support the development of future speech-based interfaces.",
"title": ""
},
{
"docid": "4daec6170f18cc8896411e808e53355f",
"text": "The goal of this note is to point out that any distributed representation can be turned into a classifier through inversion via Bayes rule. The approach is simple and modular, in that it will work with any language representation whose training can be formulated as optimizing a probability model. In our application to 2 million sentences from Yelp reviews, we also find that it performs as well as or better than complex purpose-built algorithms.",
"title": ""
},
{
"docid": "a53f26ef068d11ea21b9ba8609db6ddf",
"text": "This paper presents a novel approach based on enhanced local directional patterns (ELDP) to face recognition, which adopts local edge gradient information to represent face images. Specially, each pixel of every facial image sub-block gains eight edge response values by convolving the local 3 3 neighborhood with eight Kirsch masks, respectively. ELDP just utilizes the directions of the most encoded into a double-digit octal number to produce the ELDP codes. The ELDP dominant patterns (ELDP) are generated by statistical analysis according to the occurrence rates of the ELDP codes in a mass of facial images. Finally, the face descriptor is represented by using the global concatenated histogram based on ELDP or ELDP extracted from the face image which is divided into several sub-regions. The performances of several single face descriptors not integrated schemes are evaluated in face recognition under different challenges via several experiments. The experimental results demonstrate that the proposed method is more robust to non-monotonic illumination changes and slight noise without any filter. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "77754266da79a87b99e51b0088888550",
"text": "The paper proposed a novel automatic target recognition (ATR) system for classification of three types of ground vehicles in the moving and stationary target acquisition and recognition (MSTAR) public release database. First MSTAR image chips are represented as fine and raw feature vectors, where raw features compensate for the target pose estimation error that corrupts fine image features. Then, the chips are classified by using the adaptive boosting (AdaBoost) algorithm with the radial basis function (RBF) network as the base learner. Since the RBF network is a binary classifier, the multiclass problem was decomposed into a set of binary ones through the error-correcting output codes (ECOC) method, specifying a dictionary of code words for the set of three possible classes. AdaBoost combines the classification results of the RBF network for each binary problem into a code word, which is then \"decoded\" as one of the code words (i.e., ground-vehicle classes) in the specified dictionary. Along with classification, within the AdaBoost framework, we also conduct efficient fusion of the fine and raw image-feature vectors. The results of large-scale experiments demonstrate that our ATR scheme outperforms the state-of-the-art systems reported in the literature",
"title": ""
},
{
"docid": "ba16a6634b415dd2c478c83e1f65cb3c",
"text": "Reasoning and inference are central to human and artificial intelligence. Modeling inference in human language is notoriously challenging but is fundamental to natural language understanding and many applications. With the availability of large annotated data, neural network models have recently advanced the field significantly. In this paper, we present a new state-of-the-art result, achieving the accuracy of 88.3% on the standard benchmark, the Stanford Natural Language Inference dataset. This result is achieved first through our enhanced sequential encoding model, which outperforms the previous best model that employs more complicated network architectures, suggesting that the potential of sequential LSTM-based models have not been fully explored yet in previous work. We further show that by explicitly considering recursive architectures, we achieve additional improvement. Particularly, incorporating syntactic parse information contributes to our best result; it improves the performance even when the parse information is added to an already very strong system.",
"title": ""
}
] | scidocsrr |
e44674f57cf1f061cb1839768d7ad019 | "How Old Do You Think I Am?" A Study of Language and Age in Twitter | [
{
"docid": "16c9b857bbe8d9f13f078ddb193d7483",
"text": "We present TweetMotif, an exploratory search application for Twitter. Unlike traditional approaches to information retrieval, which present a simple list of messages, TweetMotif groups messages by frequent significant terms — a result set’s subtopics — which facilitate navigation and drilldown through a faceted search interface. The topic extraction system is based on syntactic filtering, language modeling, near-duplicate detection, and set cover heuristics. We have used TweetMotif to deflate rumors, uncover scams, summarize sentiment, and track political protests in real-time. A demo of TweetMotif, plus its source code, is available at http://tweetmotif.com. Introduction and Description On the microblogging service Twitter, users post millions of very short messages every day. Organizing and searching through this large corpus is an exciting research problem. Since messages are so small, we believe microblog search requires summarization across many messages at once. Our system, TweetMotif, responds to user queries, first retrieving several hundred recent matching messages from a simple index; we use the Twitter Search API. Instead of simply showing this result set as a list, TweetMotif extracts a set of themes (topics) to group and summarize these messages. A topic is simultaneously characterized by (1) a 1to 3-word textual label, and (2) a set of messages, whose texts must all contain the label. TweetMotif’s user interface is inspired by faceted search, which has been shown to aid Web search tasks (Hearst et al. 2002). The main screen is a two-column layout. The left column is a list of themes that are related to the current search term, while the right column presents actual tweets, grouped by theme. As themes are selected on the left column, a sample of tweets for that theme appears at the top of the right column, pushing down (but not removing) tweet results for any previously selected related themes. This allows users to explore and compare multiple related themes at once. The set of topics is chosen to try to satisfy several criteria, which often conflict: Copyright c © 2010, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Screenshot of TweetMotif. 1. Frequency contrast: Topic label phrases should be frequent in the query subcorpus, but infrequent among general Twitter messages. This ensures relevance to the query while eliminating overly generic terms. 2. Topic diversity: Topics should be chosen such that their messages and label phrases minimally overlap. Overlapping topics repetitively fill the same information niche; only one should be used. 3. Topic size: A topic that includes too few messages is bad; it is overly specific. 4. Small number of topics: Screen real-estate and concomitant user cognitive load are limited resources. The goal is to provide the user a concise summary of themes and variation in the query subcorpus, then allow the user to navigate to individual topics to see their associated messages, and allow recursive drilldown. The approach is related to document clustering (though a message can belong to multiple topics) and text summarization (topic labels are a high-relevance subset of text across messages). We heuristically proceed through several stages of analysis.",
"title": ""
}
] | [
{
"docid": "31ec7ef4e68950919054b59942d4dbfa",
"text": "A promising approach to learn to play board games is to use reinforcement learning algorithms that can learn a game position evaluation function. In this paper we examine and compare three different methods for generating training games: (1) Learning by self-play, (2) Learning by playing against an expert program, and (3) Learning from viewing experts play against themselves. Although the third possibility generates highquality games from the start compared to initial random games generated by self-play, the drawback is that the learning program is never allowed to test moves which it prefers. We compared these three methods using temporal difference methods to learn the game of backgammon. For particular games such as draughts and chess, learning from a large database containing games played by human experts has as a large advantage that during the generation of (useful) training games, no expensive lookahead planning is necessary for move selection. Experimental results in this paper show how useful this method is for learning to play chess and draughts.",
"title": ""
},
{
"docid": "6abb57ab0c62c6a112907f6659864756",
"text": "Rabbani, A, Kargarfard, M, and Twist, C. Reliability and validity of a submaximal warm-up test for monitoring training status in professional soccer players. J Strength Cond Res 32(2): 326-333, 2018-Two studies were conducted to assess the reliability and validity of a submaximal warm-up test (SWT) in professional soccer players. For the reliability study, 12 male players performed an SWT over 3 trials, with 1 week between trials. For the validity study, 14 players of the same team performed an SWT and a 30-15 intermittent fitness test (30-15IFT) 7 days apart. Week-to-week reliability in selected heart rate (HR) responses (exercise heart rate [HRex], heart rate recovery [HRR] expressed as the number of beats recovered within 1 minute [HRR60s], and HRR expressed as the mean HR during 1 minute [HRpost1]) was determined using the intraclass correlation coefficient (ICC) and typical error of measurement expressed as coefficient of variation (CV). The relationships between HR measures derived from the SWT and the maximal speed reached at the 30-15IFT (VIFT) were used to assess validity. The range for ICC and CV values was 0.83-0.95 and 1.4-7.0% in all HR measures, respectively, with the HRex as the most reliable HR measure of the SWT. Inverse large (r = -0.50 and 90% confidence limits [CLs] [-0.78 to -0.06]) and very large (r = -0.76 and CL, -0.90 to -0.45) relationships were observed between HRex and HRpost1 with VIFT in relative (expressed as the % of maximal HR) measures, respectively. The SWT is a reliable and valid submaximal test to monitor high-intensity intermittent running fitness in professional soccer players. In addition, the test's short duration (5 minutes) and simplicity mean that it can be used regularly to assess training status in high-level soccer players.",
"title": ""
},
{
"docid": "7704b6baee77726a546b49bc0376d8cf",
"text": "The increase in high-precision, high-sample-rate telemetry timeseries poses a problem for existing timeseries databases which can neither cope with the throughput demands of these streams nor provide the necessary primitives for effective analysis of them. We present a novel abstraction for telemetry timeseries data and a data structure for providing this abstraction: a timepartitioning version-annotated copy-on-write tree. An implementation in Go is shown to outperform existing solutions, demonstrating a throughput of 53 million inserted values per second and 119 million queried values per second on a four-node cluster. The system achieves a 2.9x compression ratio and satisfies statistical queries spanning a year of data in under 200ms, as demonstrated on a year-long production deployment storing 2.1 trillion data points. The principles and design of this database are generally applicable to a large variety of timeseries types and represent a significant advance in the development of technology for the Internet of Things.",
"title": ""
},
{
"docid": "77d0786af4c5eee510a64790af497e25",
"text": "Mobile computing is a revolutionary technology, born as a result of remarkable advances in computer hardware and wireless communication. Mobile applications have become increasingly popular in recent years. Today, it is not uncommon to see people playing games or reading mails on handphones. With the rapid advances in mobile computing technology, there is an increasing demand for processing realtime transactions in a mobile environment. Hence there is a strong need for efficient transaction management, data access modes and data management, consistency control and other mobile data management issues. This survey paper will cover issues related to concurrency control in mobile database. This paper studies concurrency control problem in mobile database systems, we analyze the features of mobile database and concurrency control techniques. With the increasing number of mobile hosts there are many new solutions and algorithms for concurrency control being proposed and implemented. We wish that our paper has served as a survey of the important solutions in the fields of concurrency control in mobile database. Keywords-component; Distributed Real-time Databases, Mobile Real-time Databases, Concurrency Control, Data Similarity, and Transaction Scheduling.",
"title": ""
},
{
"docid": "d8cd13bcd43052550dbfdc0303ef2bc7",
"text": "We study the Shannon capacity of adaptive transmission techniques in conjunction with diversity combining. This capacity provides an upper bound on spectral efficiency using these techniques. We obtain closed-form solutions for the Rayleigh fading channel capacity under three adaptive policies: optimal power and rate adaptation, constant power with optimal rate adaptation, and channel inversion with fixed rate. Optimal power and rate adaptation yields a small increase in capacity over just rate adaptation, and this increase diminishes as the average received carrier-to-noise ratio (CNR) or the number of diversity branches increases. Channel inversion suffers the largest capacity penalty relative to the optimal technique, however, the penalty diminishes with increased diversity. Although diversity yields large capacity gains for all the techniques, the gain is most pronounced with channel inversion. For example, the capacity using channel inversion with two-branch diversity exceeds that of a single-branch system using optimal rate and power adaptation. Since channel inversion is the least complex scheme to implement, there is a tradeoff between complexity and capacity for the various adaptation methods and diversity-combining techniques.",
"title": ""
},
{
"docid": "b974a8d8b298bfde540abc451f76bf90",
"text": "This chapter provides information on commonly used equipment in industrial mammalian cell culture, with an emphasis on bioreactors. The actual equipment used in the cell culture process can vary from one company to another, but the main steps remain the same. The process involves expansion of cells in seed train and inoculation train processes followed by cultivation of cells in a production bioreactor. Process and equipment options for each stage of the cell culture process are introduced and examples are provided. Finally, the use of disposables during seed train and cell culture production is discussed.",
"title": ""
},
{
"docid": "ad40625ae8500d8724523ae2e663eeae",
"text": "The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by Nature requires a great deal of anatomical detail to be modeled. In this paper, we present a human hand model with underlying anatomical structure. Animation of the hand model is controlled by muscle contraction values. We employ a physically based hybrid muscle model to convert these contraction values into movement of skin and bones. Pseudo muscles directly control the rotation of bones based on anatomical data and mechanical laws, while geometric muscles deform the skin tissue using a mass-spring system. Thus, resulting animations automatically exhibit anatomically and physically correct finger movements and skin deformations. In addition, we present a deformation technique to create individual hand models from photographs. A radial basis warping function is set up from the correspondence of feature points and applied to the complete structure of the reference hand model, making the deformed hand model instantly animatable.",
"title": ""
},
{
"docid": "314ffaaf39e2345f90e85fc5c5fdf354",
"text": "With the fast development pace of deep submicron technology, the size and density of semiconductor memory grows rapidly. However, keeping a high level of yield and reliability for memory products is more and more difficult. Both the redundancy repair and ECC techniques have been widely used for enhancing the yield and reliability of memory chips. Specifically, the redundancy repair and ECC techniques are conventionally used to repair or correct the hard faults and soft errors, respectively. In this paper, we propose an integrated ECC and redundancy repair scheme for memory reliability enhancement. Our approach can identify the hard faults and soft errors during the memory normal operation mode, and repair the hard faults during the memory idle time as long as there are unused redundant elements. We also develop a method for evaluating the memory reliability. Experimental results show that the proposed approach is effective, e.g., the MTTF of a 32K /spl times/ 64 memory is improved by 1.412 hours (7.1%) with our integrated ECC and repair scheme.",
"title": ""
},
{
"docid": "d0bacaa267599486356c175ca5419ede",
"text": "As P4 and its associated compilers move beyond relative immaturity, there is a need for common evaluation criteria. In this paper, we propose Whippersnapper, a set of benchmarks for P4. Rather than simply selecting a set of representative data-plane programs, the benchmark is designed from first principles, identifying and exploring key features and metrics. We believe the benchmark will not only provide a vehicle for comparing implementations and designs, but will also generate discussion within the larger community about the requirements for data-plane languages.",
"title": ""
},
{
"docid": "e8a9dffcb6c061fe720e7536387f5116",
"text": "The diffusion decision model allows detailed explanations of behavior in two-choice discrimination tasks. In this article, the model is reviewed to show how it translates behavioral dataaccuracy, mean response times, and response time distributionsinto components of cognitive processing. Three experiments are used to illustrate experimental manipulations of three components: stimulus difficulty affects the quality of information on which a decision is based; instructions emphasizing either speed or accuracy affect the criterial amounts of information that a subject requires before initiating a response; and the relative proportions of the two stimuli affect biases in drift rate and starting point. The experiments also illustrate the strong constraints that ensure the model is empirically testable and potentially falsifiable. The broad range of applications of the model is also reviewed, including research in the domains of aging and neurophysiology.",
"title": ""
},
{
"docid": "7000ea96562204dfe2c0c23f7cdb6544",
"text": "In this paper, the dynamic modeling of a doubly-fed induction generator-based wind turbine connected to infinite bus (SMIB) system, is carried out in detail. In most of the analysis, the DFIG stator transients and network transients are neglected. In this paper the interfacing problems while considering stator transients and network transients in the modeling of SMIB system are resolved by connecting a resistor across the DFIG terminals. The effect of simplification of shaft system on the controller gains is also discussed. In addition, case studies are presented to demonstrate the effect of mechanical parameters and controller gains on system stability when accounting the two-mass shaft model for the drive train system.",
"title": ""
},
{
"docid": "96a38946e201b7201e874bee0047a34e",
"text": "Nowadays people work on computers for hours and hours they don’t have time to take care of themselves. Due to hectic schedules and consumption of junk food it affects the health of people and mainly heart. So to we are implementing an heart disease prediction system using data mining technique Naïve Bayes and k-means clustering algorithm. It is the combination of both the algorithms. This paper gives an overview for the same. It helps in predicting the heart disease using various attributes and it predicts the output as in the prediction form. For grouping of various attributes it uses k-means algorithm and for predicting it uses naïve bayes algorithm. Index Terms —Data mining, Comma separated files, naïve bayes, k-means algorithm, heart disease.",
"title": ""
},
{
"docid": "66b909528a566662667a3d8c7c749bf4",
"text": "There exists a big demand for innovative secure electronic communications while the expertise level of attackers increases rapidly and that causes even bigger demands and needs for an extreme secure connection. An ideal security protocol should always be protecting the security of connections in many aspects, and leaves no trapdoor for the attackers. Nowadays, one of the popular cryptography protocols is hybrid cryptosystem that uses private and public key cryptography to change secret message. In available cryptography protocol attackers are always aware of transmission of sensitive data. Even non-interested attackers can get interested to break the ciphertext out of curiosity and challenge, when suddenly catches some scrambled data over the network. First of all, we try to explain the roles of innovative approaches in cryptography. After that we discuss about the disadvantages of public key cryptography to exchange secret key. Furthermore, DNA steganography is explained as an innovative paradigm to diminish the usage of public cryptography to exchange session key. In this protocol, session key between a sender and receiver is hidden by novel DNA data hiding technique. Consequently, the attackers are not aware of transmission of session key through unsecure channel. Finally, the strength point of the DNA steganography is discussed.",
"title": ""
},
{
"docid": "ce53aa803d587301a47166c483ecec34",
"text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.",
"title": ""
},
{
"docid": "46ea713c4206d57144350a7871433392",
"text": "In this paper, we use a blog corpus to demonstrate that we can often identify the author of an anonymous text even where there are many thousands of candidate authors. Our approach combines standard information retrieval methods with a text categorization meta-learning scheme that determines when to even venture a guess.",
"title": ""
},
{
"docid": "e75669b68e8736ee6044443108c00eb1",
"text": "UNLABELLED\nThe evolution in adhesive dentistry has broadened the indication of esthetic restorative procedures especially with the use of resin composite material. Depending on the clinical situation, some restorative techniques are best indicated. As an example, indirect adhesive restorations offer many advantages over direct techniques in extended cavities. In general, the indirect technique requires two appointments and a laboratory involvement, or it can be prepared chairside in a single visit either conventionally or by the use of computer-aided design/computer-aided manufacturing systems. In both cases, there will be an extra cost as well as the need of specific materials. This paper describes the clinical procedures for the chairside semidirect technique for composite onlay fabrication without the use of special equipments. The use of this technique combines the advantages of the direct and the indirect restoration.\n\n\nCLINICAL SIGNIFICANCE\nThe semidirect technique for composite onlays offers the advantages of an indirect restoration and low cost, and can be the ideal treatment option for extended cavities in case of financial limitations.",
"title": ""
},
{
"docid": "d2bf01dd261701cae64daa8625f4d2f4",
"text": "Canada has been the world’s leader in e-Government maturity for the last five years. The global average for government website usage by citizens is about 30%. In Canada, this statistic is over 51%. The vast majority of Canadians visit government websites to obtain information, rather than interacting or transacting with the government. It seems that the rate of adoption of e-Government has globally fallen below expectations although some countries are doing better than others. Clearly, a better understanding of why and how citizens use government websites, and their general dispositions towards e-Government is an important research issue. This paper initiates discussion of this issue by proposing a conceptual model of e-Government adoption that places users as the focal point for e-Government adoption strategy.",
"title": ""
},
{
"docid": "6bc5f1f780e96cf19dfd5cdf92b80a36",
"text": "We explore the concept of co-design in the context of neural network verification. Specifically, we aim to train deep neural networks that not only are robust to adversarial perturbations but also whose robustness can be verified more easily. To this end, we identify two properties of network models – weight sparsity and so-called ReLU stability – that turn out to significantly impact the complexity of the corresponding verification task. We demonstrate that improving weight sparsity alone already enables us to turn computationally intractable verification problems into tractable ones. Then, improving ReLU stability leads to an additional 4–13x speedup in verification times. An important feature of our methodology is its “universality,” in the sense that it can be used with a broad range of training procedures and verification approaches.",
"title": ""
},
{
"docid": "5124bfe94345f2abe6f91fe717731945",
"text": "Recently, IT trends such as big data, cloud computing, internet of things (IoT), 3D visualization, network, and so on demand terabyte/s bandwidth computer performance in a graphics card. In order to meet these performance, terabyte/s bandwidth graphics module using 2.5D-IC with high bandwidth memory (HBM) technology has been emerged. Due to the difference in scale of interconnect pitch between GPU or HBM and package substrate, the HBM interposer is certainly required for terabyte/s bandwidth graphics module. In this paper, the electrical performance of the HBM interposer channel in consideration of the manufacturing capabilities is analyzed by simulation both the frequency- and time-domain. Furthermore, although the silicon substrate is most widely employed for the HBM interposer fabrication, the organic and glass substrate are also proposed to replace the high cost and high loss silicon substrate. Therefore, comparison and analysis of the electrical performance of the HBM interposer channel using silicon, organic, and glass substrate are conducted.",
"title": ""
},
{
"docid": "aacfd1e4670044e597f8a321375bdfc1",
"text": "This article presents the main outcome findings from two inter-related randomized trials conducted at four sites to evaluate the effectiveness and cost-effectiveness of five short-term outpatient interventions for adolescents with cannabis use disorders. Trial 1 compared five sessions of Motivational Enhancement Therapy plus Cognitive Behavioral Therapy (MET/CBT) with a 12-session regimen of MET and CBT (MET/CBT12) and another that included family education and therapy components (Family Support Network [FSN]). Trial II compared the five-session MET/CBT with the Adolescent Community Reinforcement Approach (ACRA) and Multidimensional Family Therapy (MDFT). The 600 cannabis users were predominately white males, aged 15-16. All five CYT interventions demonstrated significant pre-post treatment during the 12 months after random assignment to a treatment intervention in the two main outcomes: days of abstinence and the percent of adolescents in recovery (no use or abuse/dependence problems and living in the community). Overall, the clinical outcomes were very similar across sites and conditions; however, after controlling for initial severity, the most cost-effective interventions were MET/CBT5 and MET/CBT12 in Trial 1 and ACRA and MET/CBT5 in Trial 2. It is possible that the similar results occurred because outcomes were driven more by general factors beyond the treatment approaches tested in this study; or because of shared, general helping factors across therapies that help these teens attend to and decrease their connection to cannabis and alcohol.",
"title": ""
}
] | scidocsrr |
b28541811021f530432657261b8fe919 | Real-Time Machine Learning: The Missing Pieces | [
{
"docid": "a06c9d681bb8a8b89a8ee64a53e3b344",
"text": "This paper introduces CIEL, a universal execution engine for distributed data-flow programs. Like previous execution engines, CIEL masks the complexity of distributed programming. Unlike those systems, a CIEL job can make data-dependent control-flow decisions, which enables it to compute iterative and recursive algorithms. We have also developed Skywriting, a Turingcomplete scripting language that runs directly on CIEL. The execution engine provides transparent fault tolerance and distribution to Skywriting scripts and highperformance code written in other programming languages. We have deployed CIEL on a cloud computing platform, and demonstrate that it achieves scalable performance for both iterative and non-iterative algorithms.",
"title": ""
}
] | [
{
"docid": "c0549844f4e8813bd7b839a95c94a13d",
"text": "In this paper, we present a novel method to fuse observations from an inertial measurement unit (IMU) and visual sensors, such that initial conditions of the inertial integration, including gravity estimation, can be recovered quickly and in a linear manner, thus removing any need for special initialization procedures. The algorithm is implemented using a graphical simultaneous localization and mapping like approach that guarantees constant time output. This paper discusses the technical aspects of the work, including observability and the ability for the system to estimate scale in real time. Results are presented of the system, estimating the platforms position, velocity, and attitude, as well as gravity vector and sensor alignment and calibration on-line in a built environment. This paper discusses the system setup, describing the real-time integration of the IMU data with either stereo or monocular vision data. We focus on human motion for the purposes of emulating high-dynamic motion, as well as to provide a localization system for future human-robot interaction.",
"title": ""
},
{
"docid": "ada35607fa56214e5df8928008735353",
"text": "Osseous free flaps have become the preferred method for reconstructing segmental mandibular defects. Of 457 head and neck free flaps, 150 osseous mandible reconstructions were performed over a 10-year period. This experience was retrospectively reviewed to establish an approach to osseous free flap mandible reconstruction. There were 94 male and 56 female patients (mean age, 50 years; range 3 to 79 years); 43 percent had hemimandibular defects, and the rest had central, lateral, or a combination defect. Donor sites included the fibula (90 percent), radius (4 percent), scapula (4 percent), and ilium (2 percent). Rigid fixation (up to five osteotomy sites) was used in 98 percent of patients. Aesthetic and functional results were evaluated a minimum of 6 months postoperatively. The free flap success rate was 100 percent, and bony union was achieved in 97 percent of the osteotomy sites. Osseointegrated dental implants were placed in 20 patients. A return to an unrestricted diet was achieved in 45 percent of patients; 45 percent returned to a soft diet, and 5 percent were on a liquid diet. Five percent of patients required enteral feeding to maintain weight. Speech was assessed as normal (36 percent), near normal (27 percent), intelligible (28 percent), or unintelligible (9 percent). Aesthetic outcome was judged as excellent (32 percent), good (27 percent), fair (27 percent), or poor (14 percent). This study demonstrates a very high success rate, with good-to-excellent functional and aesthetic results using osseous free flaps for primary mandible reconstruction. The fibula donor site should be the first choice for most cases, particularly those with anterior or large bony defects requiring multiple osteotomies. Use of alternative donor sites (i.e., radius and scapula) is best reserved for cases with large soft-tissue and minimal bone requirements. The ilium is recommended only when other options are unavailable. Thoughtful flap selection and design should supplant the need for multiple, simultaneous free flaps and vein grafting in most cases.",
"title": ""
},
{
"docid": "04629c15852f031fcee042577034f78f",
"text": "The mobility of carriers in a silicon surface inversion layer is one of the most important parameters required to accurately model and predict MOSFET device and circuit performance. It has been found that electron mobility follows a universal curve when plotted as a function of an effective normal field regardless of substrate bias, substrate doping (≤ 1017 cm-3) and nominal process variations [1]. Although accurate modeling of p-channel MOS devices has become important due to the prevalence of CMOS technology, the existence of a universal hole mobility-field relationship has not been demonstrated. Furthermore, the effect on mobility of low-temperature and rapid high-temperature processing, which are commonly used in modern VLSI technology to control impurity diffusion, is unknown.",
"title": ""
},
{
"docid": "23052a651887a5a73831b3c8a6571ba0",
"text": "This paper presentes a novel algorithm for the voxelization of surface models of arbitrary topology. Our algorithm uses the depth and stencil buffers, available in most commercial graphics hardware, to achieve high performance. It is suitable for both polygonal meshes and parametric surfaces. Experiments highlight the advantages and limitations of our approach.",
"title": ""
},
{
"docid": "27a8ec0dc0f4ad0ae67c2a75c25c4553",
"text": "Although the concept of industrial cobots dates back to 1999, most present day hybrid human-machine assembly systems are merely weight compensators. Here, we present results on the development of a collaborative human-robot manufacturing cell for homokinetic joint assembly. The robot alternates active and passive behaviours during assembly, to lighten the burden on the operator in the first case, and to comply to his/her needs in the latter. Our approach can successfully manage direct physical contact between robot and human, and between robot and environment. Furthermore, it can be applied to standard position (and not torque) controlled robots, common in the industry. The approach is validated in a series of assembly experiments. The human workload is reduced, diminishing the risk of strain injuries. Besides, a complete risk analysis indicates that the proposed setup is compatible with the safety standards, and could be certified.",
"title": ""
},
{
"docid": "5c056ba2e29e8e33c725c2c9dd12afa8",
"text": "The large amount of text data which are continuously produced over time in a variety of large scale applications such as social networks results in massive streams of data. Typically massive text streams are created by very large scale interactions of individuals, or by structured creations of particular kinds of content by dedicated organizations. An example in the latter category would be the massive text streams created by news-wire services. Such text streams provide unprecedented challenges to data mining algorithms from an efficiency perspective. In this paper, we review text stream mining algorithms for a wide variety of problems in data mining such as clustering, classification and topic modeling. A recent challenge arises in the context of social streams, which are generated by large social networks such as Twitter. We also discuss a number of future challenges in this area of research.",
"title": ""
},
{
"docid": "b0e30f8c95c972d01e342fc30c2a501c",
"text": "PURPOSE\nThe aim of the study was to explore the impact of a permanent stoma on patients' everyday lives and to gain further insight into their need for ostomy-related education.\n\n\nSUBJECTS AND SETTING\nThe sample population comprised 15 persons with permanent ostomies. Stomas were created to manage colorectal cancer or inflammatory bowel disease. The research setting was the surgical department at a hospital in the Capitol Region of Denmark associated with the University of Copenhagen.\n\n\nMETHODS\nFocus group interviews were conducted using a phenomenological hermeneutic approach. Data were collected and analyzed using qualitative content analysis.\n\n\nRESULTS\nStoma creation led to feelings of stigma, worries about disclosure, a need for control and self-imposed limits. Furthermore, patients experienced difficulties identifying their new lives with their lives before surgery. Participants stated they need to be seen as a whole person, to have close contact with health care professionals, and receive trustworthy information about life with an ostomy. Respondents proposed group sessions conducted after hospital discharge. They further recommended that sessions be delivered by lay teachers who had a stoma themselves.\n\n\nCONCLUSIONS\nSelf-imposed isolation was often selected as a strategy for avoiding disclosing the presence of a stoma. Patient education, using health promotional methods, should take the settings into account and patients' possibility of effective knowledge transfer. Respondents recommend involvement of lay teachers, who have a stoma, and group-based learning processes are proposed, when planning and conducting patient education.",
"title": ""
},
{
"docid": "2657e5090896cc7dc01f3b66d2d97a94",
"text": "In this article, we review gas sensor application of one-dimensional (1D) metal-oxide nanostructures with major emphases on the types of device structure and issues for realizing practical sensors. One of the most important steps in fabricating 1D-nanostructure devices is manipulation and making electrical contacts of the nanostructures. Gas sensors based on individual 1D nanostructure, which were usually fabricated using electron-beam lithography, have been a platform technology for fundamental research. Recently, gas sensors with practical applicability were proposed, which were fabricated with an array of 1D nanostructures using scalable micro-fabrication tools. In the second part of the paper, some critical issues are pointed out including long-term stability, gas selectivity, and room-temperature operation of 1D-nanostructure-based metal-oxide gas sensors.",
"title": ""
},
{
"docid": "9b99371de5da25c3e2cc2d8787da7d21",
"text": "lations, is a critical ecological process (Ims and Yoccoz 1997). It can maintain genetic diversity, rescue declining populations, and re-establish extirpated populations. Sufficient movement of individuals between isolated, extinction-prone populations can allow an entire network of populations to persist via metapopulation dynamics (Hanski 1991). As areas of natural habitat are reduced in size and continuity by human activities, the degree to which the remaining fragments are functionally linked by dispersal becomes increasingly important. The strength of those linkages is determined largely by a property known as “connectivity”, which, despite its intuitive appeal, is inconsistently defined. At one extreme, metapopulation ecologists argue for a habitat patch-level definition, while at the other, landscape ecologists insist that connectivity is a landscape-scale property (Merriam 1984; Taylor et al. 1993; Tischendorf and Fahrig 2000; Moilanen and Hanski 2001; Tischendorf 2001a; Moilanen and Nieminen 2002). Differences in perspective notwithstanding, theoreticians do agree that connectivity has undeniable effects on many population processes (Wiens 1997; Moilanen and Hanski 2001). It is therefore desirable to quantify connectivity and use these measurements as a basis for decision making. Currently, many reserve design algorithms factor in some measure of connectivity when weighing alternative plans (Siitonen et al. 2002, 2003; Singleton et al. 2002; Cabeza 2003). Consideration of connectivity during the reserve design process could highlight situations where it really matters. For example, alternative reserve designs that are similar in other factors such as area, habitat quality, and cost may differ greatly in connectivity (Siitonen et al. 2002). This matters because the low-connectivity scenarios may not be able to support viable populations of certain species over long periods of time. Analyses of this sort could also redirect some project resources towards improving the connectivity of a reserve network by building movement corridors or acquiring small, otherwise undesirable habitat patches that act as links between larger patches (Keitt et al. 1997). Reserve designs could therefore include the demographic and genetic benefits of increased connectivity without substantially increasing the cost of the project (eg Siitonen et al. 2002). If connectivity is to serve as a guide, at least in part, for conservation decision-making, it clearly matters how it is measured. Unfortunately, the ecological literature is awash with different connectivity metrics. How are land managers and decision makers to efficiently choose between these alternatives, when ecologists cannot even agree on a basic definition of connectivity, let alone how it is best measured? Aside from the theoretical perspectives to which they are tied, these metrics differ in two important regards: the type of data they require and the level of detail they provide. Here, we attempt to cut through some of the confusion surrounding connectivity by developing a classification scheme based on these key differences between metrics. 529",
"title": ""
},
{
"docid": "f2d2979ca63d47ba33fffb89c16b9499",
"text": "Shor and Grover demonstrated that a quantum computer can outperform any classical computer in factoring numbers and in searching a database by exploiting the parallelism of quantum mechanics. Whereas Shor's algorithm requires both superposition and entanglement of a many-particle system, the superposition of single-particle quantum states is sufficient for Grover's algorithm. Recently, the latter has been successfully implemented using Rydberg atoms. Here we propose an implementation of Grover's algorithm that uses molecular magnets, which are solid-state systems with a large spin; their spin eigenstates make them natural candidates for single-particle systems. We show theoretically that molecular magnets can be used to build dense and efficient memory devices based on the Grover algorithm. In particular, one single crystal can serve as a storage unit of a dynamic random access memory device. Fast electron spin resonance pulses can be used to decode and read out stored numbers of up to 105, with access times as short as 10-10 seconds. We show that our proposal should be feasible using the molecular magnets Fe8 and Mn12.",
"title": ""
},
{
"docid": "c1a44605e8e9b76a76bf5a2dd3539310",
"text": "This paper presents a stereo matching approach for a novel multi-perspective panoramic stereo vision system, making use of asynchronous and non-simultaneous stereo imaging towards real-time 3D 360° vision. The method is designed for events representing the scenes visual contrast as a sparse visual code allowing the stereo reconstruction of high resolution panoramic views. We propose a novel cost measure for the stereo matching, which makes use of a similarity measure based on event distributions. Thus, the robustness to variations in event occurrences was increased. An evaluation of the proposed stereo method is presented using distance estimation of panoramic stereo views and ground truth data. Furthermore, our approach is compared to standard stereo methods applied on event-data. Results show that we obtain 3D reconstructions of 1024 × 3600 round views and outperform depth reconstruction accuracy of state-of-the-art methods on event data.",
"title": ""
},
{
"docid": "5e53a20b6904a9b8765b0384f5d1d692",
"text": "This paper provides a description of the crowdfunding sector, considering investment-based crowdfunding platforms as well as platforms in which funders do not obtain monetary payments. It lays out key features of this quickly developing sector and explores the economic forces at play that can explain the design of these platforms. In particular, it elaborates on cross-group and within-group external e¤ects and asymmetric information on crowdfunding platforms. Keywords: Crowdfunding, Platform markets, Network e¤ects, Asymmetric information, P2P lending JEL-Classi
cation: L13, D62, G24 Université catholique de Louvain, CORE and Louvain School of Management, and CESifo yRITM, University of Paris Sud and Digital Society Institute zUniversity of Mannheim, Mannheim Centre for Competition and Innovation (MaCCI), and CERRE. Email: [email protected]",
"title": ""
},
{
"docid": "ac24229e51822e44cb09baaf44e9623e",
"text": "Detecting representative frames in videos based on human actions is quite challenging because of the combined factors of human pose in action and the background. This paper addresses this problem and formulates the key frame detection as one of finding the video frames that optimally maximally contribute to differentiating the underlying action category from all other categories. To this end, we introduce a deep two-stream ConvNet for key frame detection in videos that learns to directly predict the location of key frames. Our key idea is to automatically generate labeled data for the CNN learning using a supervised linear discriminant method. While the training data is generated taking many different human action videos into account, the trained CNN can predict the importance of frames from a single video. We specify a new ConvNet framework, consisting of a summarizer and discriminator. The summarizer is a two-stream ConvNet aimed at, first, capturing the appearance and motion features of video frames, and then encoding the obtained appearance and motion features for video representation. The discriminator is a fitting function aimed at distinguishing between the key frames and others in the video. We conduct experiments on a challenging human action dataset UCF101 and show that our method can detect key frames with high accuracy.",
"title": ""
},
{
"docid": "883e244ff530bf243daa367bad2c5c99",
"text": "The demand for computing resources in the university is on the increase on daily basis and the traditional method of acquiring computing resources may no longer meet up with the present demand. This is as a result of high level of researches being carried out by the universities. The 21st century universities are now seen as the centre and base of education, research and development for the society. The university community now has to deal with a large number of people including staff, students and researchers working together on voluminous large amount of data. This actually requires very high computing resources that can only be gotten easily through cloud computing. In this paper, we have taken a close look at exploring the benefits of cloud computing and study the adoption and usage of cloud services in the University Enterprise. We establish a theoretical background to cloud computing and its associated services including rigorous analysis of the latest research on Cloud Computing as an alternative to IT provision, management and security and discuss the benefits of cloud computing in the university enterprise. We also assess the trend of adoption and usage of cloud services in the university enterprise.",
"title": ""
},
{
"docid": "11229bf95164064f954c25681c684a16",
"text": "This article proposes integrating the insights generated by framing, priming, and agenda-setting research through a systematic effort to conceptualize and understand their larger implications for political power and democracy. The organizing concept is bias, that curiously undertheorized staple of public discourse about the media. After showing how agenda setting, framing and priming fit together as tools of power, the article connects them to explicit definitions of news slant and the related but distinct phenomenon of bias. The article suggests improved measures of slant and bias. Properly defined and measured, slant and bias provide insight into how the media influence the distribution of power: who gets what, when, and how. Content analysis should be informed by explicit theory linking patterns of framing in the media text to predictable priming and agenda-setting effects on audiences. When unmoored by such underlying theory, measures and conclusions of media bias are suspect.",
"title": ""
},
{
"docid": "b1ee02bfabb08a8a8e32be14553413cb",
"text": "This report describes and analyzes the MD6 hash function and is part of our submission package for MD6 as an entry in the NIST SHA-3 hash function competition. Significant features of MD6 include: • Accepts input messages of any length up to 2 − 1 bits, and produces message digests of any desired size from 1 to 512 bits, inclusive, including the SHA-3 required sizes of 224, 256, 384, and 512 bits. • Security—MD6 is by design very conservative. We aim for provable security whenever possible; we provide reduction proofs for the security of the MD6 mode of operation, and prove that standard differential attacks against the compression function are less efficient than birthday attacks for finding collisions. We also show that when used as a MAC within NIST recommendedations, the keyed version of MD6 is not vulnerable to linear cryptanalysis. The compression function and the mode of operation are each shown to be indifferentiable from a random oracle under reasonable assumptions. • MD6 has good efficiency: 22.4–44.1M bytes/second on a 2.4GHz Core 2 Duo laptop with 32-bit code compiled with Microsoft Visual Studio 2005 for digest sizes in the range 160–512 bits. When compiled for 64-bit operation, it runs at 61.8–120.8M bytes/second, compiled with MS VS, running on a 3.0GHz E6850 Core Duo processor. • MD6 works extremely well for multicore and parallel processors; we have demonstrated hash rates of over 1GB/second on one 16-core system, and over 427MB/sec on an 8-core system, both for 256-bit digests. We have also demonstrated MD6 hashing rates of 375 MB/second on a typical desktop GPU (graphics processing unit) card. We also show that MD6 runs very well on special-purpose hardware. • MD6 uses a single compression function, no matter what the desired digest size, to map input data blocks of 4096 bits to output blocks of 1024 bits— a fourfold reduction. (The number of rounds does, however, increase for larger digest sizes.) The compression function has auxiliary inputs: a “key” (K), a “number of rounds” (r), a “control word” (V ), and a “unique ID” word (U). • The standard mode of operation is tree-based: the data enters at the leaves of a 4-ary tree, and the hash value is computed at the root. See Figure 2.1. This standard mode of operation is highly parallelizable. 1http://www.csrc.nist.gov/pki/HashWorkshop/index.html",
"title": ""
},
{
"docid": "0a17722ba7fbeda51784cdd699f54b3f",
"text": "One of the greatest challenges food research is facing in this century lies in maintaining sustainable food production and at the same time delivering high quality food products with an added functionality to prevent life-style related diseases such as, cancer, obesity, diabetes, heart disease, stroke. Functional foods that contain bioactive components may provide desirable health benefits beyond basic nutrition and play important roles in the prevention of life-style related diseases. Polyphenols and carotenoids are plant secondary metabolites which are well recognized as natural antioxidants linked to the reduction of the development and progression of life-style related diseases. This chapter focuses on healthpromoting food ingredients (polyphenols and carotenoids), food structure and functionality, and bioavailability of these bioactive ingredients, with examples on their commercial applications, namely on functional foods. Thereafter, in order to support successful development of health-promoting food ingredients, this chapter contributes to an understanding of the relationship between food structures, ingredient functionality, in relation to the breakdown of food structures in the gastrointestinal tract and its impact on the bioavailability of bioactive ingredients. The overview on food processing techniques and the processing of functional foods given here will elaborate novel delivery systems for functional food ingredients and their applications in food. Finally, this chapter concludes with microencapsulation techniques and examples of encapsulation of polyphenols and carotenoids; the physical structure of microencapsulated food ingredients and their impacts on food sensorial properties; yielding an outline on the controlled release of encapsulated bioactive compounds in food products.",
"title": ""
},
{
"docid": "85e51ac7980deac92e140d0965a35708",
"text": "Ensuring that autonomous systems work ethically is both complex and difficult. However, the idea of having an additional ‘governor’ that assesses options the system has, and prunes them to select the most ethical choices is well understood. Recent work has produced such a governor consisting of a ‘consequence engine’ that assesses the likely future outcomes of actions then applies a Safety/Ethical logic to select actions. Although this is appealing, it is impossible to be certain that the most ethical options are actually taken. In this paper we extend and apply a well-known agent verification approach to our consequence engine, allowing us to verify the correctness of its ethical decision-making.",
"title": ""
},
{
"docid": "11ed7e0742ddb579efe6e1da258b0d3c",
"text": "Supervisory Control and Data Acquisition(SCADA) systems are deeply ingrained in the fabric of critical infrastructure sectors. These computerized real-time process control systems, over geographically dispersed continuous distribution operations, are increasingly subject to serious damage and disruption by cyber means due to their standardization and connectivity to other networks. However, SCADA systems generally have little protection from the escalating cyber threats. In order to understand the potential danger and to protect SCADA systems, in this paper, we highlight their difference from standard IT systems and present a set of security property goals. Furthermore, we focus on systematically identifying and classifying likely cyber attacks including cyber-induced cyber-physical attack son SCADA systems. Determined by the impact on control performance of SCADA systems, the attack categorization criteria highlights commonalities and important features of such attacks that define unique challenges posed to securing SCADA systems versus traditional Information Technology(IT) systems.",
"title": ""
}
] | scidocsrr |
2f2cfa7b5b5b9381ebd764bc0abe0c10 | E-Counterfeit: A Mobile-Server Platform for Document Counterfeit Detection | [
{
"docid": "097879c593aa68602564c176b806a74b",
"text": "We study the recognition of surfaces made from different materials such as concrete, rug, marble, or leather on the basis of their textural appearance. Such natural textures arise from spatial variation of two surface attributes: (1) reflectance and (2) surface normal. In this paper, we provide a unified model to address both these aspects of natural texture. The main idea is to construct a vocabulary of prototype tiny surface patches with associated local geometric and photometric properties. We call these 3D textons. Examples might be ridges, grooves, spots or stripes or combinations thereof. Associated with each texton is an appearance vector, which characterizes the local irradiance distribution, represented as a set of linear Gaussian derivative filter outputs, under different lighting and viewing conditions. Given a large collection of images of different materials, a clustering approach is used to acquire a small (on the order of 100) 3D texton vocabulary. Given a few (1 to 4) images of any material, it can be characterized using these textons. We demonstrate the application of this representation for recognition of the material viewed under novel lighting and viewing conditions. We also illustrate how the 3D texton model can be used to predict the appearance of materials under novel conditions.",
"title": ""
},
{
"docid": "d0c75242aad1230e168122930b078671",
"text": "Combinatorial graph cut algorithms have been successfully applied to a wide range of problems in vision and graphics. This paper focusses on possibly the simplest application of graph-cuts: segmentation of objects in image data. Despite its simplicity, this application epitomizes the best features of combinatorial graph cuts methods in vision: global optima, practical efficiency, numerical robustness, ability to fuse a wide range of visual cues and constraints, unrestricted topological properties of segments, and applicability to N-D problems. Graph cuts based approaches to object extraction have also been shown to have interesting connections with earlier segmentation methods such as snakes, geodesic active contours, and level-sets. The segmentation energies optimized by graph cuts combine boundary regularization with region-based properties in the same fashion as Mumford-Shah style functionals. We present motivation and detailed technical description of the basic combinatorial optimization framework for image segmentation via s/t graph cuts. After the general concept of using binary graph cut algorithms for object segmentation was first proposed and tested in Boykov and Jolly (2001), this idea was widely studied in computer vision and graphics communities. We provide links to a large number of known extensions based on iterative parameter re-estimation and learning, multi-scale or hierarchical approaches, narrow bands, and other techniques for demanding photo, video, and medical applications.",
"title": ""
}
] | [
{
"docid": "88804f285f4d608b81a1cd741dbf2b7e",
"text": "Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates.\n We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.",
"title": ""
},
{
"docid": "c32af7ce60d3d6eaa09a2876ba5469d3",
"text": "ID: 2423 Y. M. S. Al-Wesabi, Avishek Choudhury, Daehan Won Binghamton University, USA",
"title": ""
},
{
"docid": "13b9fd37b1cf4f15def39175157e12c5",
"text": "Although motorcycle safety helmets are known for preventing head injuries, in many countries, the use of motorcycle helmets is low due to the lack of police power to enforcing helmet laws. This paper presents a system which automatically detect motorcycle riders and determine that they are wearing safety helmets or not. The system extracts moving objects and classifies them as a motorcycle or other moving objects based on features extracted from their region properties using K-Nearest Neighbor (KNN) classifier. The heads of the riders on the recognized motorcycle are then counted and segmented based on projection profiling. The system classifies the head as wearing a helmet or not using KNN based on features derived from 4 sections of segmented head region. Experiment results show an average correct detection rate for near lane, far lane, and both lanes as 84%, 68%, and 74%, respectively.",
"title": ""
},
{
"docid": "7889bd099150ad799461bd0da2896428",
"text": "A systematic method to improve the quality ( ) factor of RF integrated inductors is presented in this paper. The proposed method is based on the layout optimization to minimize the series resistance of the inductor coil, taking into account both ohmic losses, due to conduction currents, and magnetically induced losses, due to Eddy currents. The technique is particularly useful when applied to inductors in which the fabrication process includes integration substrate removal. However, it is also applicable to inductors on low-loss substrates. The method optimizes the width of the metal strip for each turn of the inductor coil, leading to a variable strip-width layout. The optimization procedure has been successfully applied to the design of square spiral inductors in a silicon-based multichip-module technology, complemented with silicon micromachining postprocessing. The obtained experimental results corroborate the validity of the proposed method. A factor of about 17 have been obtained for a 35-nH inductor at 1.5 GHz, with values higher than 40 predicted for a 20-nH inductor working at 3.5 GHz. The latter is up to a 60% better than the best results for a single strip-width inductor working at the same frequency.",
"title": ""
},
{
"docid": "311bccf1c8bf6cbb2c2dbef22a709e8c",
"text": "We present a new video-assisted minimally invasive technique for the treatment of pilonidal disease (E.P.Si.T: endoscopic pilonidal sinus treatment). Between March and November 2012, we operated on 11 patients suffering from pilonidal disease. Surgery is performed under local or spinal anesthesia using the Meinero fistuloscope. The external opening is excised and the fistuloscope is introduced through the small hole. Anatomy is identified, hair and debris are removed and the entire area is ablated under direct vision. There were no significant complications recorded in the patient cohort. The pain experienced during the postoperative period was minimal. At 1 month postoperatively, the external opening(s) were closed in all patients and there were no cases of recurrence at a median follow-up of 6 months. All patients were admitted and discharged on the same day as surgery and commenced work again after a mean time period of 4 days. Aesthetic results were excellent. The key feature of the E.P.Si.T. technique is direct vision, allowing a good definition of the involved area, removal of debris and cauterization of the inflamed tissue.",
"title": ""
},
{
"docid": "23a5152da5142048332c09164bade40f",
"text": "Knowledge bases extracted automatically from the Web present new opportunities for data mining and exploration. Given a large, heterogeneous set of extracted relations, new tools are needed for searching the knowledge and uncovering relationships of interest. We present WikiTables, a Web application that enables users to interactively explore tabular knowledge extracted from Wikipedia.\n In experiments, we show that WikiTables substantially outperforms baselines on the novel task of automatically joining together disparate tables to uncover \"interesting\" relationships between table columns. We find that a \"Semantic Relatedness\" measure that leverages the Wikipedia link structure accounts for a majority of this improvement. Further, on the task of keyword search for tables, we show that WikiTables performs comparably to Google Fusion Tables despite using an order of magnitude fewer tables. Our work also includes the release of a number of public resources, including over 15 million tuples of extracted tabular data, manually annotated evaluation sets, and public APIs.",
"title": ""
},
{
"docid": "5b6a73103e7310de86c37185c729b8d9",
"text": "Motion segmentation is currently an active area of research in computer Vision. The task of comparing different methods of motion segmentation is complicated by the fact that researchers may use subtly different definitions of the problem. Questions such as ”Which objects are moving?”, ”What is background?”, and ”How can we use motion of the camera to segment objects, whether they are static or moving?” are clearly related to each other, but lead to different algorithms, and imply different versions of the ground truth. This report has two goals. The first is to offer a precise definition of motion segmentation so that the intent of an algorithm is as welldefined as possible. The second is to report on new versions of three previously existing data sets that are compatible with this definition. We hope that this more detailed definition, and the three data sets that go with it, will allow more meaningful comparisons of certain motion segmentation methods.",
"title": ""
},
{
"docid": "e54bf7ae1235031c3d62f3206d62a89a",
"text": "The purpose of the study is to explore the factors influencing customer buying decision through Intern et shopping. Several factors such as information quali ty, firm’s reputation, perceived ease of payment, s ites design, benefit of online shopping, and trust that influence customer decision to purchase from e-comm erce sites were analyzed. Factors such as those mention d above, which are commonly considered influencing purhasing decision through online shopping in other countries were hypothesized to be true in the case of Indonesia. A random sample comprised of 171 Indone sia people who have been buying goods/services through e-commerce sites at least once, were collec ted via online questionnaires. To test the hypothes is, the data were examined using Structural Equations Model ing (SEM) which is basically a combination of Confirmatory Factor Analysis (CFA), and linear Regr ession. The results suggest that information qualit y, perceived ease of payment, benefits of online shopp ing, and trust affect online purchase decision significantly. Close attention need to be placed on these factors to increase online sales. The most significant influence comes from trust. Indonesian people still lack of trust toward online commerce, so it is very important to gain customer trust to increase s al s. E-commerce’s business owners are encouraged t o develop sites that can meet the expectation of pote ntial customer, provides ease of payment system, pr ovide detailed and actual information and responsible for customer personal information and transaction reco rds. This paper outlined the key factors influencing onl ine shopping intention in Indonesia and pioneered t he building of an integrated research framework to und erstand how consumers make purchase decision toward online shopping; a relatively new way of shopping i the country.",
"title": ""
},
{
"docid": "0bbb23b9df622f451f7e7f2fd136d9e0",
"text": "The Janus kinase (JAK)-signal transducer of activators of transcription (STAT) pathway is now recognized as an evolutionarily conserved signaling pathway employed by diverse cytokines, interferons, growth factors, and related molecules. This pathway provides an elegant and remarkably straightforward mechanism whereby extracellular factors control gene expression. It thus serves as a fundamental paradigm for how cells sense environmental cues and interpret these signals to regulate cell growth and differentiation. Genetic mutations and polymorphisms are functionally relevant to a variety of human diseases, especially cancer and immune-related conditions. The clinical relevance of the pathway has been confirmed by the emergence of a new class of therapeutics that targets JAKs.",
"title": ""
},
{
"docid": "307dac4f0cc964a539160780abb1c123",
"text": "One of the main current applications of intelligent systems is recommender systems (RS). RS can help users to find relevant items in huge information spaces in a personalized way. Several techniques have been investigated for the development of RS. One of them is evolutionary computational (EC) techniques, which is an emerging trend with various application areas. The increasing interest in using EC for web personalization, information retrieval and RS fostered the publication of survey papers on the subject. However, these surveys have analyzed only a small number of publications, around ten. This study provides a comprehensive review of more than 65 research publications focusing on five aspects we consider relevant for such: the recommendation technique used, the datasets and the evaluation methods adopted in their experimental parts, the baselines employed in the experimental comparison of proposed approaches and the reproducibility of the reported experiments. At the end of this review, we discuss negative and positive aspects of these papers, as well as point out opportunities, challenges and possible future research directions. To the best of our knowledge, this review is the most comprehensive review of various approaches using EC in RS. Thus, we believe this review will be a relevant material for researchers interested in EC and RS.",
"title": ""
},
{
"docid": "4cd0d1040e104b4e317e22760b2ced71",
"text": "Color mapping is an important technique used in visualization to build visual representations of data and information. With output devices such as computer displays providing a large number of colors, developers sometimes tend to build their visualization to be visually appealing, while forgetting the main goal of clear depiction of the underlying data. Visualization researchers have profited from findings in adjoining areas such as human vision and psychophysics which, combined with their own experience, enabled them to establish guidelines that might help practitioners to select appropriate color scales and adjust the associated color maps, for particular applications. This survey presents an overview on the subject of color scales by focusing on important guidelines, experimental research work and tools proposed to help non-expert users. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "105913d67437afafa6147b7c67e8d808",
"text": "This paper proposes to develop an electronic device for obstacle detection in the path of visually impaired people. This device assists a user to walk without colliding with any obstacles in their path. It is a wearable device in the form of a waist belt that has ultrasonic sensors and raspberry pi installed on it. This device detects obstacles around the user up to 500cm in three directions i.e. front, left and right using a network of ultrasonic sensors. These ultrasonic sensors are connected to raspberry pi that receives data signals from these sensors for further data processing. The algorithm running in raspberry pi computes the distance from the obstacle and converts it into text message, which is then converted into speech and conveyed to the user through earphones/speakers. This design is benefitial in terms of it’s portability, low-cost, low power consumption and the fact that neither the user nor the device requires initial training. Keywords—embedded systems; raspberry pi; speech feedback; ultrasonic sensor; visually impaired;",
"title": ""
},
{
"docid": "bee01b9bd3beb41b0ca963c05378a93f",
"text": "Calibration between color camera and 3D Light Detection And Ranging (LIDAR) equipment is an essential process for data fusion. The goal of this paper is to improve the calibration accuracy between a camera and a 3D LIDAR. In particular, we are interested in calibrating a low resolution 3D LIDAR with a relatively small number of vertical sensors. Our goal is achieved by employing a new methodology for the calibration board, which exploits 2D-3D correspondences. The 3D corresponding points are estimated from the scanned laser points on the polygonal planar board with adjacent sides. Since the lengths of adjacent sides are known, we can estimate the vertices of the board as a meeting point of two projected sides of the polygonal board. The estimated vertices from the range data and those detected from the color image serve as the corresponding points for the calibration. Experiments using a low-resolution LIDAR with 32 sensors show robust results.",
"title": ""
},
{
"docid": "31c1427c3682a76528b1cb42036db7c1",
"text": "Fifteen years ago, a panel of experts representing the full spectrum of cardiovascular disease (CVD) research and practice assembled at a workshop to examine the state of knowledge about CVD. The leaders of the workshop generated a hypothesis that framed CVD as a chain of events, initiated by a myriad of related and unrelated risk factors and progressing through numerous physiological pathways and processes to the development of end-stage heart disease (Figure 1).1 They further hypothesized that intervention anywhere along the chain of events leading to CVD could disrupt the pathophysiological process and confer cardioprotection. The workshop participants endorsed this paradigm but also identified the unresolved issues relating to the concept of a CVD continuum. There was limited availability of clinical trial data and pathobiological evidence at that time, and the experts recognized that critical studies at both the mechanistic level and the clinical level were needed to validate the concept of a chain of events leading to end-stage CVD. In the intervening 15 years, new evidence for underlying pathophysiological mechanisms, the development of novel therapeutic agents, and the release of additional landmark clinical trial data have confirmed the concept of a CVD continuum and reinforced the notion that intervention at any point along this chain can modify CVD progression. In addition, the accumulated evidence indicates that the events leading to disease progression overlap and intertwine and do not always occur as a sequence of discrete, tandem incidents. Furthermore, although the original concept focused on risk factors for coronary artery disease (CAD) and its sequelae, the CVD continuum has expanded to include other areas such as cerebrovascular disease, peripheral vascular disease, and renal disease. Since its conception 15 years ago, the CVD continuum has become much in need of an update. Accordingly, this 2-part article will present a critical and comprehensive update of the current evidence for a CVD continuum based on the results of pathophysiological studies and the outcome of a broad range of clinical trials that have been performed in the past 15 years. It is not the intent of the article to include a comprehensive listing of all trials performed as part of the CVD continuum; instead, we have sought to include only those trials that have had the greatest impact. Part I briefly reviews the current understanding of the pathophysiology of CVD and discusses clinical trial data from risk factors for disease through stable CAD. Part II continues the review of clinical trial data beginning with acute coronary syndromes and continuing through extension of the CVD continuum to stroke and renal disease. The article concludes with a discussion of areas in which future research might further clarify our understanding of the CVD continuum.",
"title": ""
},
{
"docid": "458633abcbb030b9e58e432d5b539950",
"text": "In many computer vision tasks, we expect a particular behavior of the output with respect to rotations of the input image. If this relationship is explicitly encoded, instead of treated as any other variation, the complexity of the problem is decreased, leading to a reduction in the size of the required model. In this paper, we propose the Rotation Equivariant Vector Field Networks (RotEqNet), a Convolutional Neural Network (CNN) architecture encoding rotation equivariance, invariance and covariance. Each convolutional filter is applied at multiple orientations and returns a vector field representing magnitude and angle of the highest scoring orientation at every spatial location. We develop a modified convolution operator relying on this representation to obtain deep architectures. We test RotEqNet on several problems requiring different responses with respect to the inputs’ rotation: image classification, biomedical image segmentation, orientation estimation and patch matching. In all cases, we show that RotEqNet offers extremely compact models in terms of number of parameters and provides results in line to those of networks orders of magnitude larger.",
"title": ""
},
{
"docid": "1c1775a64703f7276e4843b8afc26117",
"text": "This paper describes a computer vision based system for real-time robust traffic sign detection, tracking, and recognition. Such a framework is of major interest for driver assistance in an intelligent automotive cockpit environment. The proposed approach consists of two components. First, signs are detected using a set of Haar wavelet features obtained from AdaBoost training. Compared to previously published approaches, our solution offers a generic, joint modeling of color and shape information without the need of tuning free parameters. Once detected, objects are efficiently tracked within a temporal information propagation framework. Second, classification is performed using Bayesian generative modeling. Making use of the tracking information, hypotheses are fused over multiple frames. Experiments show high detection and recognition accuracy and a frame rate of approximately 10 frames per second on a standard PC.",
"title": ""
},
{
"docid": "b6a600ea1c277bc3bf8f2452b8aef3f1",
"text": "Fusion of data from multiple sensors can enable robust navigation in varied environments. However, for optimal performance, the sensors must calibrated relative to one another. Full sensor-to-sensor calibration is a spatiotemporal problem: we require an accurate estimate of the relative timing of measurements for each pair of sensors, in addition to the 6-DOF sensor-to-sensor transform. In this paper, we examine the problem of determining the time delays between multiple proprioceptive and exteroceptive sensor data streams. The primary difficultly is that the correspondences between measurements from different sensors are unknown, and hence the delays cannot be computed directly. We instead formulate temporal calibration as a registration task. Our algorithm operates by aligning curves in a three-dimensional orientation space, and, as such, can be considered as a variant of Iterative Closest Point (ICP). We present results from simulation studies and from experiments with a PR2 robot, which demonstrate accurate calibration of the time delays between measurements from multiple, heterogeneous sensors.",
"title": ""
},
{
"docid": "120e36cc162f4ce602da810c80c18c7d",
"text": "We describe a new model for learning meaningful representations of text documents from an unlabeled collection of documents. This model is inspired by the recently proposed Replicated Softmax, an undirected graphical model of word counts that was shown to learn a better generative model and more meaningful document representations. Specifically, we take inspiration from the conditional mean-field recursive equations of the Replicated Softmax in order to define a neural network architecture that estimates the probability of observing a new word in a given document given the previously observed words. This paradigm also allows us to replace the expensive softmax distribution over words with a hierarchical distribution over paths in a binary tree of words. The end result is a model whose training complexity scales logarithmically with the vocabulary size instead of linearly as in the Replicated Softmax. Our experiments show that our model is competitive both as a generative model of documents and as a document representation learning algorithm.",
"title": ""
},
{
"docid": "032f444d4844c4fa9a3e948cbbc0818a",
"text": "This paper presents a microstrip dual-band bandpass filter (BPF) based on cross-shaped resonator and spurline. It is shown that spurlines added into input/output ports of a cross-shaped resonator generate an additional notch band. Using even and odd-mode analysis the proposed structure is realized and designed. The proposed bandpass filter has dual passband from 1.9 GHz to 2.4 GHz and 9.5 GHz to 11.5 GHz.",
"title": ""
}
] | scidocsrr |
730d66eaef0577d2cd08caf3142db5a3 | Cover Tree Bayesian Reinforcement Learning | [
{
"docid": "4e4560d1434ee05c30168e49ffc3d94a",
"text": "We present a tree data structure for fast nearest neighbor operations in general <i>n</i>-point metric spaces (where the data set consists of <i>n</i> points). The data structure requires <i>O</i>(<i>n</i>) space <i>regardless</i> of the metric's structure yet maintains all performance properties of a navigating net (Krauthgamer & Lee, 2004b). If the point set has a bounded expansion constant <i>c</i>, which is a measure of the intrinsic dimensionality, as defined in (Karger & Ruhl, 2002), the cover tree data structure can be constructed in <i>O</i> (<i>c</i><sup>6</sup><i>n</i> log <i>n</i>) time. Furthermore, nearest neighbor queries require time only logarithmic in <i>n</i>, in particular <i>O</i> (<i>c</i><sup>12</sup> log <i>n</i>) time. Our experimental results show speedups over the brute force search varying between one and several orders of magnitude on natural machine learning datasets.",
"title": ""
}
] | [
{
"docid": "06e8d9c53fe89fbf683920e90bf09731",
"text": "Convolutional neural networks (CNNs) with their ability to learn useful spatial features have revolutionized computer vision. The network topology of CNNs exploits the spatial relationship among the pixels in an image and this is one of the reasons for their success. In other domains deep learning has been less successful because it is not clear how the structure of non-spatial data can constrain network topology. Here, we show how multivariate time series can be interpreted as space-time pictures, thus expanding the applicability of the tricks-of-the-trade for CNNs to this important domain. We demonstrate that our model beats more traditional state-of-the-art models at predicting price development on the European Power Exchange (EPEX). Furthermore, we find that the features discovered by CNNs on raw data beat the features that were hand-designed by an expert.",
"title": ""
},
{
"docid": "5e61c6f1f8b9d63ffd964119c4ae122f",
"text": "In this paper, a novel converter, named as negative-output KY buck-boost converter, is presented herein, which has no bilinear characteristics. First of all, the basic operating principle of the proposed converter is illustrated in detail, and secondly some simulated and experimental results are provided to verify its effectiveness.",
"title": ""
},
{
"docid": "9364e07801fc01e50d0598b61ab642aa",
"text": "Online learning represents a family of machine learning methods, where a learner attempts to tackle some predictive (or any type of decision-making) task by learning from a sequence of data instances one by one at each time. The goal of online learning is to maximize the accuracy/correctness for the sequence of predictions/decisions made by the online learner given the knowledge of correct answers to previous prediction/learning tasks and possibly additional information. This is in contrast to traditional batch or offline machine learning methods that are often designed to learn a model from the entire training data set at once. Online learning has become a promising technique for learning from continuous streams of data in many real-world applications. This survey aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the types of learning tasks and the forms of feedback information, the existing online learning works can be classified into three major categories: (i) online supervised learning where full feedback information is always available, (ii) online learning with limited feedback, and (iii) online unsupervised learning where no feedback is available. Due to space limitation, the survey will be mainly focused on the first category, but also briefly cover some basics of the other two categories. Finally, we also discuss some open issues and attempt to shed light on potential future research directions in this field.",
"title": ""
},
{
"docid": "ffb03136c1f8d690be696f65f832ab11",
"text": "This paper aims to improve the feature learning in Convolutional Networks (Convnet) by capturing the structure of objects. A new sparsity function is imposed on the extracted featuremap to capture the structure and shape of the learned object, extracting interpretable features to improve the prediction performance. The proposed algorithm is based on organizing the activation within and across featuremap by constraining the node activities through `2 and `1 normalization in a structured form.",
"title": ""
},
{
"docid": "a1d9742feb9f2a5dcf2322b00daf4151",
"text": "We tackle the problem of predicting the future popularity level of micro-reviews, focusing on Foursquare tips, whose high degree of informality and briefness offer extra difficulties to the design of effective popularity prediction methods. Such predictions can greatly benefit the future design of content filtering and recommendation methods. Towards our goal, we first propose a rich set of features related to the user who posted the tip, the venue where it was posted, and the tip’s content to capture factors that may impact popularity of a tip. We evaluate different regression and classification based models using this rich set of proposed features as predictors in various scenarios. As fas as we know, this is the first work to investigate the predictability of micro-review popularity (or helpfulness) exploiting spatial, temporal, topical and, social aspects that are rarely exploited conjointly in this domain. © 2015 Published by Elsevier Inc.",
"title": ""
},
{
"docid": "9b34b171858ad3ebda73848b7bb5372d",
"text": "INTRODUCTION\nVulvar and vaginal atrophy (VVA) affects up to two thirds of postmenopausal women, but most symptomatic women do not receive prescription therapy.\n\n\nAIM\nTo evaluate postmenopausal women's perceptions of VVA and treatment options for symptoms in the Women's EMPOWER survey.\n\n\nMETHODS\nThe Rose Research firm conducted an internet survey of female consumers provided by Lightspeed Global Market Insite. Women at least 45 years of age who reported symptoms of VVA and residing in the United States were recruited.\n\n\nMAIN OUTCOME MEASURES\nSurvey results were compiled and analyzed by all women and by treatment subgroups.\n\n\nRESULTS\nRespondents (N = 1,858) had a median age of 58 years (range = 45-90). Only 7% currently used prescribed VVA therapies (local estrogen therapies or oral selective estrogen receptor modulators), whereas 18% were former users of prescribed VVA therapies, 25% used over-the-counter treatments, and 50% had never used any treatment. Many women (81%) were not aware of VVA or that it is a medical condition. Most never users (72%) had never discussed their symptoms with a health care professional (HCP). The main reason for women not to discuss their symptoms with an HCP was that they believed that VVA was just a natural part of aging and something to live with. When women spoke to an HCP about their symptoms, most (85%) initiated the discussion. Preferred sources of information were written material from the HCP's office (46%) or questionnaires to fill out before seeing the HCP (41%).The most negative attributes of hormonal products were perceived risk of systemic absorption, messiness of local creams, and the need to reuse an applicator. Overall, HCPs only recommended vaginal estrogen therapy to 23% and oral hormone therapies to 18% of women. When using vaginal estrogen therapy, less than half of women adhered to and complied with posology; only 33% to 51% of women were very to extremely satisfied with their efficacy.\n\n\nCONCLUSION\nThe Women's EMPOWER survey showed that VVA continues to be an under-recognized and under-treated condition, despite recent educational initiatives. A disconnect in education, communication, and information between HCPs and their menopausal patients remains prevalent. Kingsberg S, Krychman M, Graham S, et al. The Women's EMPOWER Survey: Identifying Women's Perceptions on Vulvar and Vaginal Atrophy and Its Treatment. J Sex Med 2017;14:413-424.",
"title": ""
},
{
"docid": "d150439e46201c3d3979bc243fb38c26",
"text": "Genetic Algorithms and Evolution Strategies represent two of the three major Evolutionary Algorithms. This paper examines the history, theory and mathematical background, applications, and the current direction of both Genetic Algorithms and Evolution Strategies.",
"title": ""
},
{
"docid": "1ee063329b62404e22d73a4f5996332d",
"text": "High-rate data communication over a multipath wireless channel often requires that the channel response be known at the receiver. Training-based methods, which probe the channel in time, frequency, and space with known signals and reconstruct the channel response from the output signals, are most commonly used to accomplish this task. Traditional training-based channel estimation methods, typically comprising linear reconstruction techniques, are known to be optimal for rich multipath channels. However, physical arguments and growing experimental evidence suggest that many wireless channels encountered in practice tend to exhibit a sparse multipath structure that gets pronounced as the signal space dimension gets large (e.g., due to large bandwidth or large number of antennas). In this paper, we formalize the notion of multipath sparsity and present a new approach to estimating sparse (or effectively sparse) multipath channels that is based on some of the recent advances in the theory of compressed sensing. In particular, it is shown in the paper that the proposed approach, which is termed as compressed channel sensing (CCS), can potentially achieve a target reconstruction error using far less energy and, in many instances, latency and bandwidth than that dictated by the traditional least-squares-based training methods.",
"title": ""
},
{
"docid": "bd4234dc626b4c56d0170948ac5d5de3",
"text": "ISSN: 1049-4820 (Print) 1744-5191 (Online) Journal homepage: http://www.tandfonline.com/loi/nile20 Gamification and student motivation Patrick Buckley & Elaine Doyle To cite this article: Patrick Buckley & Elaine Doyle (2016) Gamification and student motivation, Interactive Learning Environments, 24:6, 1162-1175, DOI: 10.1080/10494820.2014.964263 To link to this article: https://doi.org/10.1080/10494820.2014.964263",
"title": ""
},
{
"docid": "bda1e2a1f27673dceed36adddfdc3e36",
"text": "IEEE 802.11 WLANs are a very important technology to provide high speed wireless Internet access. Especially at airports, university campuses or in city centers, WLAN coverage is becoming ubiquitous leading to a deployment of hundreds or thousands of Access Points (AP). Managing and configuring such large WLAN deployments is a challenge. Current WLAN management protocols such as CAPWAP are hard to extend with new functionality. In this paper, we present CloudMAC, a novel architecture for enterprise or carrier grade WLAN systems. By partially offloading the MAC layer processing to virtual machines provided by cloud services and by integrating our architecture with OpenFlow, a software defined networking approach, we achieve a new level of flexibility and reconfigurability. In Cloud-MAC APs just forward MAC frames between virtual APs and IEEE 802.11 stations. The processing of MAC layer frames as well as the creation of management frames is handled at the virtual APs while the binding between the virtual APs and the physical APs is managed using OpenFlow. The testbed evaluation shows that CloudMAC achieves similar performance as normal WLANs, but allows novel services to be implemented easily in high level programming languages. The paper presents a case study which shows that dynamically switching off APs to save energy can be performed seamlessly with CloudMAC, while a traditional WLAN architecture causes large interruptions for users.",
"title": ""
},
{
"docid": "b5270bbcbe8ed4abf8ae5dabe02bb933",
"text": "We address the use of three-dimensional facial shape information for human face identification. We propose a new method to represent faces as 3D registered point clouds. Fine registration of facial surfaces is done by first automatically finding important facial landmarks and then, establishing a dense correspondence between points on the facial surface with the help of a 3D face template-aided thin plate spline algorithm. After the registration of facial surfaces, similarity between two faces is defined as a discrete approximation of the volume difference between facial surfaces. Experiments done on the 3D RMA dataset show that the proposed algorithm performs as good as the point signature method, and it is statistically superior to the point distribution model-based method and the 2D depth imagery technique. In terms of computational complexity, the proposed algorithm is faster than the point signature method.",
"title": ""
},
{
"docid": "64160c1842b00377b07da7797f6002d0",
"text": "The macaque monkey ventral intraparietal area (VIP) contains neurons with aligned visual-tactile receptive fields anchored to the face and upper body. Our previous fMRI studies using standard head coils found a human parietal face area (VIP+ complex; putative macaque VIP homologue) containing superimposed topological maps of the face and near-face visual space. Here, we construct high signal-to-noise surface coils and used phase-encoded air puffs and looming stimuli to map topological organization of the parietal face area at higher resolution. This area is consistently identified as a region extending between the superior postcentral sulcus and the upper bank of the anterior intraparietal sulcus (IPS), avoiding the fundus of IPS. Using smaller voxel sizes, our surface coils picked up strong fMRI signals in response to tactile and visual stimuli. By analyzing tactile and visual maps in our current and previous studies, we constructed a set of topological models illustrating commonalities and differences in map organization across subjects. The most consistent topological feature of the VIP+ complex is a central-anterior upper face (and upper visual field) representation adjoined by lower face (and lower visual field) representations ventrally (laterally) and/or dorsally (medially), potentially forming two subdivisions VIPv (ventral) and VIPd (dorsal). The lower visual field representations typically extend laterally into the anterior IPS to adjoin human area AIP, and medially to overlap with the parietal body areas at the superior parietal ridge. Significant individual variations are then illustrated to provide an accurate and comprehensive view of the topological organization of the parietal face area.",
"title": ""
},
{
"docid": "e910310c5cc8357c570c6c4110c4e94f",
"text": "Epistemic planning can be used for decision making in multi-agent situations with distributed knowledge and capabilities. Dynamic Epistemic Logic (DEL) has been shown to provide a very natural and expressive framework for epistemic planning. In this paper, we aim to give an accessible introduction to DEL-based epistemic planning. The paper starts with the most classical framework for planning, STRIPS, and then moves towards epistemic planning in a number of smaller steps, where each step is motivated by the need to be able to model more complex planning scenarios.",
"title": ""
},
{
"docid": "8c067af7b61fae244340e784149a9c9b",
"text": "Based on EuroNCAP regulations the number of autonomous emergency braking systems for pedestrians (AEB-P) will increase over the next years. According to accident research a considerable amount of severe pedestrian accidents happen at artificial lighting, twilight or total darkness conditions. Because radar sensors are very robust in these situations, they will play an important role for future AEB-P systems. To assess and evaluate systems a pedestrian dummy with reflection characteristics as close as possible to real humans is indispensable. As an extension to existing measurements in literature this paper addresses open issues like the influence of different positions of the limbs or different clothing for both relevant automotive frequency bands. Additionally suggestions and requirements for specification of pedestrian dummies based on results of RCS measurements of humans and first experimental developed dummies are given.",
"title": ""
},
{
"docid": "ff939b33128e2b8d2cd0074a3b021842",
"text": "Breast cancer is the most common form of cancer among women worldwide. Ultrasound imaging is one of the most frequently used diagnostic tools to detect and classify abnormalities of the breast. Recently, computer-aided diagnosis (CAD) systems using ultrasound images have been developed to help radiologists to increase diagnosis accuracy. However, accurate ultrasound image segmentation remains a challenging problem due to various ultrasound artifacts. In this paper, we investigate approaches developed for breast ultrasound (BUS) image segmentation. In this paper, we reviewed the literature on the segmentation of BUS images according to the techniques adopted, especially over the past 10 years. By dividing into seven classes (i.e., thresholding-based, clustering-based, watershed-based, graph-based, active contour model, Markov random field and neural network), we have introduced corresponding techniques and representative papers accordingly. We have summarized and compared many techniques on BUS image segmentation and found that all these techniques have their own pros and cons. However, BUS image segmentation is still an open and challenging problem due to various ultrasound artifacts introduced in the process of imaging, including high speckle noise, low contrast, blurry boundaries, low signal-to-noise ratio and intensity inhomogeneity To the best of our knowledge, this is the first comprehensive review of the approaches developed for segmentation of BUS images. With most techniques involved, this paper will be useful and helpful for researchers working on segmentation of ultrasound images, and for BUS CAD system developers.",
"title": ""
},
{
"docid": "0cfa40d89a1d169d334067172167d750",
"text": "Recent advances in RST discourse parsing have focused on two modeling paradigms: (a) high order parsers which jointly predict the tree structure of the discourse and the relations it encodes; or (b) lineartime parsers which are efficient but mostly based on local features. In this work, we propose a linear-time parser with a novel way of representing discourse constituents based on neural networks which takes into account global contextual information and is able to capture long-distance dependencies. Experimental results show that our parser obtains state-of-the art performance on benchmark datasets, while being efficient (with time complexity linear in the number of sentences in the document) and requiring minimal feature engineering.",
"title": ""
},
{
"docid": "929f294583267ca8cb8616e803687f1e",
"text": "Recent systems for natural language understanding are strong at overcoming linguistic variability for lookup style reasoning. Yet, their accuracy drops dramatically as the number of reasoning steps increases. We present the first formal framework to study such empirical observations, addressing the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space. Our formal model uses two interrelated spaces: a conceptual meaning space that is unambiguous and complete but hidden, and a linguistic symbol space that captures a noisy grounding of the meaning space in the symbols or words of a language. We apply this framework to study the connectivity problem in undirected graphs---a core reasoning problem that forms the basis for more complex multi-hop reasoning. We show that it is indeed possible to construct a high-quality algorithm for detecting connectivity in the (latent) meaning graph, based on an observed noisy symbol graph, as long as the noise is below our quantified noise level and only a few hops are needed. On the other hand, we also prove an impossibility result: if a query requires a large number (specifically, logarithmic in the size of the meaning graph) of hops, no reasoning system operating over the symbol graph is likely to recover any useful property of the meaning graph. This highlights a fundamental barrier for a class of reasoning problems and systems, and suggests the need to limit the distance between the two spaces, rather than investing in multi-hop reasoning with\"many\"hops.",
"title": ""
},
{
"docid": "1e7b1c821631918c37cf3fc583e59fe2",
"text": "One of the most important issue that must be addressed in designing communication protocols for wireless sensor networks (WSN) is how to save sensor node energy while meeting the needs of applications. Recent researches have led to new protocols specifically designed for sensor networks where energy awareness is an essential consideration. Internet of Things (IoT) is an innovative ICT paradigm where a number of intelligent devices connected to Internet are involved in sharing information and making collaborative decision. Integration of sensing and actuation systems, connected to the Internet, means integration of all forms of energy consuming devices such as power outlets, bulbs, air conditioner, etc. Sometimes the system can communicate with the utility supply company and this led to achieve a balance between power generation and energy usage or in general is likely to optimize energy consumption as a whole. In this paper some emerging trends and challenges are identified to enable energy-efficient communications in Internet of Things architectures and between smart devices. The way devices communicate is analyzed in order to reduce energy consumption and prolong system lifetime. Devices equipped with WiFi and RF interfaces are analyzed under different scenarios by setting different communication parameters, such as data size, in order to evaluate the best device configuration and the longest lifetime of devices.",
"title": ""
},
{
"docid": "07ce7ea6645bd4cb644e04771a14194f",
"text": "As organizations increase their dependence on database systems for daily business, they become more vulnerable to security breaches even as they gain productivity and efficiency advantages. A truly comprehensive approach for data protection must include mechanisms for enforcing access control policies based on data contents, subject qualifications and characteristics. The database security community has developed a number of different techniques and approaches to assure data confidentiality, integrity, and availability. In this paper, we survey the most relevant concepts underlying the notion of access control policies for database security. We review the key access control models, namely, the discretionary and mandatory access control models and the role-based access control (RBAC)",
"title": ""
}
] | scidocsrr |
e12810a39baa7c96646907aceec16c72 | An effective solution for a real cutting stock problem in manufacturing plastic rolls | [
{
"docid": "74381f9602374af5ad0775a69163d1b9",
"text": "This paper discusses some of the basic formulation issues and solution procedures for solving oneand twodimensional cutting stock problems. Linear programming, sequential heuristic and hybrid solution procedures are described. For two-dimensional cutting stock problems with rectangular shapes, we also propose an approach for solving large problems with limits on the number of times an ordered size may appear in a pattern.",
"title": ""
}
] | [
{
"docid": "a4605974c90bc17edf715eb9edb10b8a",
"text": "Natural language processing has been in existence for more than fifty years. During this time, it has significantly contributed to the field of human-computer interaction in terms of theoretical results and practical applications. As computers continue to become more affordable and accessible, the importance of user interfaces that are effective, robust, unobtrusive, and user-friendly – regardless of user expertise or impediments – becomes more pronounced. Since natural language usually provides for effortless and effective communication in human-human interaction, its significance and potential in human-computer interaction should not be overlooked – either spoken or typewritten, it may effectively complement other available modalities, such as windows, icons, and menus, and pointing; in some cases, such as in users with disabilities, natural language may even be the only applicable modality. This chapter examines the field of natural language processing as it relates to humancomputer interaction by focusing on its history, interactive application areas, theoretical approaches to linguistic modeling, and relevant computational and philosophical issues. It also presents a taxonomy for interactive natural language systems based on their linguistic knowledge and processing requirements, and reviews related applications. Finally, it discusses linguistic coverage issues, and explores the development of natural language widgets and their integration into multimodal user interfaces.",
"title": ""
},
{
"docid": "e3f4add37a083f61feda8805478d0729",
"text": "The evaluation of the effects of different media ionic strengths and pH on the release of hydrochlorothiazide, a poorly soluble drug, and diltiazem hydrochloride, a cationic and soluble drug, from a gel forming hydrophilic polymeric matrix was the objective of this study. The drug to polymer ratio of formulated tablets was 4:1. Hydrochlorothiazide or diltiazem HCl extended release (ER) matrices containing hypromellose (hydroxypropyl methylcellulose (HPMC)) were evaluated in media with a pH range of 1.2-7.5, using an automated USP type III, Bio-Dis dissolution apparatus. The ionic strength of the media was varied over a range of 0-0.4M to simulate the gastrointestinal fed and fasted states and various physiological pH conditions. Sodium chloride was used for ionic regulation due to its ability to salt out polymers in the midrange of the lyotropic series. The results showed that the ionic strength had a profound effect on the drug release from the diltiazem HCl K100LV matrices. The K4M, K15M and K100M tablets however withstood the effects of media ionic strength and showed a decrease in drug release to occur with an increase in ionic strength. For example, drug release after the 1h mark for the K100M matrices in water was 36%. Drug release in pH 1.2 after 1h was 30%. An increase of the pH 1.2 ionic strength to 0.4M saw a reduction of drug release to 26%. This was the general trend for the K4M and K15M matrices as well. The similarity factor f2 was calculated using drug release in water as a reference. Despite similarity occurring for all the diltiazem HCl matrices in the pH 1.2 media (f2=64-72), increases of ionic strength at 0.2M and 0.4M brought about dissimilarity. The hydrochlorothiazide tablet matrices showed similarity at all the ionic strength tested for all polymers (f2=56-81). The values of f2 however reduced with increasing ionic strengths. DSC hydration results explained the hydrochlorothiazide release from their HPMC matrices. There was an increase in bound water as ionic strengths increased. Texture analysis was employed to determine the gel strength and also to explain the drug release for the diltiazem hydrochloride. This methodology can be used as a valuable tool for predicting potential ionic effects related to in vivo fed and fasted states on drug release from hydrophilic ER matrices.",
"title": ""
},
{
"docid": "72cf634b61876d3ad9c265e61f1148ae",
"text": "Many functionals have been proposed for validation of partitions of object data produced by the fuzzy c-means (FCM) clustering algorithm. We examine the role a subtle but important parameter-the weighting exponent m of the FCM model-plays in determining the validity of FCM partitions. The functionals considered are the partition coefficient and entropy indexes of Bezdek, the Xie-Beni, and extended Xie-Beni indexes, and the FukuyamaSugeno index. Limit analysis indicates, and numerical experiments confirm, that the FukuyamaSugeno index is sensitive to both high and low values of m and may be unreliable because of this. Of the indexes tested, the Xie-Beni index provided the best response over a wide range of choices for the number of clusters, (%lo), and for m from 1.01-7. Finally, our calculations suggest that the best choice for m is probably in the interval [U, 2.51, whose mean and midpoint, m = 2, have often been the preferred choice for many users of FCM.",
"title": ""
},
{
"docid": "18848101a74a23d6740f08f86992a4a4",
"text": "Post-traumatic stress disorder (PTSD) is accompanied by disturbed sleep and an impaired ability to learn and remember extinction of conditioned fear. Following a traumatic event, the full spectrum of PTSD symptoms typically requires several months to develop. During this time, sleep disturbances such as insomnia, nightmares, and fragmented rapid eye movement sleep predict later development of PTSD symptoms. Only a minority of individuals exposed to trauma go on to develop PTSD. We hypothesize that sleep disturbance resulting from an acute trauma, or predating the traumatic experience, may contribute to the etiology of PTSD. Because symptoms can worsen over time, we suggest that continued sleep disturbances can also maintain and exacerbate PTSD. Sleep disturbance may result in failure of extinction memory to persist and generalize, and we suggest that this constitutes one, non-exclusive mechanism by which poor sleep contributes to the development and perpetuation of PTSD. Also reviewed are neuroendocrine systems that show abnormalities in PTSD, and in which stress responses and sleep disturbance potentially produce synergistic effects that interfere with extinction learning and memory. Preliminary evidence that insomnia alone can disrupt sleep-dependent emotional processes including consolidation of extinction memory is also discussed. We suggest that optimizing sleep quality following trauma, and even strategically timing sleep to strengthen extinction memories therapeutically instantiated during exposure therapy, may allow sleep itself to be recruited in the treatment of PTSD and other trauma and stress-related disorders.",
"title": ""
},
{
"docid": "51ba2c02aa4ad9b7cfb381ddae0f3dfe",
"text": "The dynamics of spontaneous fluctuations in neural activity are shaped by underlying patterns of anatomical connectivity. While numerous studies have demonstrated edge-wise correspondence between structural and functional connections, much less is known about how large-scale coherent functional network patterns emerge from the topology of structural networks. In the present study, we deploy a multivariate statistical technique, partial least squares, to investigate the association between spatially extended structural networks and functional networks. We find multiple statistically robust patterns, reflecting reliable combinations of structural and functional subnetworks that are optimally associated with one another. Importantly, these patterns generally do not show a one-to-one correspondence between structural and functional edges, but are instead distributed and heterogeneous, with many functional relationships arising from nonoverlapping sets of anatomical connections. We also find that structural connections between high-degree hubs are disproportionately represented, suggesting that these connections are particularly important in establishing coherent functional networks. Altogether, these results demonstrate that the network organization of the cerebral cortex supports the emergence of diverse functional network configurations that often diverge from the underlying anatomical substrate.",
"title": ""
},
{
"docid": "4c2f9f9681a1d3bc6d9a27a59c2a01d6",
"text": "BACKGROUND\nStatin therapy reduces low-density lipoprotein (LDL) cholesterol levels and the risk of cardiovascular events, but whether the addition of ezetimibe, a nonstatin drug that reduces intestinal cholesterol absorption, can reduce the rate of cardiovascular events further is not known.\n\n\nMETHODS\nWe conducted a double-blind, randomized trial involving 18,144 patients who had been hospitalized for an acute coronary syndrome within the preceding 10 days and had LDL cholesterol levels of 50 to 100 mg per deciliter (1.3 to 2.6 mmol per liter) if they were receiving lipid-lowering therapy or 50 to 125 mg per deciliter (1.3 to 3.2 mmol per liter) if they were not receiving lipid-lowering therapy. The combination of simvastatin (40 mg) and ezetimibe (10 mg) (simvastatin-ezetimibe) was compared with simvastatin (40 mg) and placebo (simvastatin monotherapy). The primary end point was a composite of cardiovascular death, nonfatal myocardial infarction, unstable angina requiring rehospitalization, coronary revascularization (≥30 days after randomization), or nonfatal stroke. The median follow-up was 6 years.\n\n\nRESULTS\nThe median time-weighted average LDL cholesterol level during the study was 53.7 mg per deciliter (1.4 mmol per liter) in the simvastatin-ezetimibe group, as compared with 69.5 mg per deciliter (1.8 mmol per liter) in the simvastatin-monotherapy group (P<0.001). The Kaplan-Meier event rate for the primary end point at 7 years was 32.7% in the simvastatin-ezetimibe group, as compared with 34.7% in the simvastatin-monotherapy group (absolute risk difference, 2.0 percentage points; hazard ratio, 0.936; 95% confidence interval, 0.89 to 0.99; P=0.016). Rates of prespecified muscle, gallbladder, and hepatic adverse effects and cancer were similar in the two groups.\n\n\nCONCLUSIONS\nWhen added to statin therapy, ezetimibe resulted in incremental lowering of LDL cholesterol levels and improved cardiovascular outcomes. Moreover, lowering LDL cholesterol to levels below previous targets provided additional benefit. (Funded by Merck; IMPROVE-IT ClinicalTrials.gov number, NCT00202878.).",
"title": ""
},
{
"docid": "6b55931c9945a71de6b28789323f191b",
"text": "Resistant hypertension-uncontrolled hypertension with 3 or more antihypertensive agents-is increasingly common in clinical practice. Clinicians should exclude pseudoresistant hypertension, which results from nonadherence to medications or from elevated blood pressure related to the white coat syndrome. In patients with truly resistant hypertension, thiazide diuretics, particularly chlorthalidone, should be considered as one of the initial agents. The other 2 agents should include calcium channel blockers and angiotensin-converting enzyme inhibitors for cardiovascular protection. An increasing body of evidence has suggested benefits of mineralocorticoid receptor antagonists, such as eplerenone and spironolactone, in improving blood pressure control in patients with resistant hypertension, regardless of circulating aldosterone levels. Thus, this class of drugs should be considered for patients whose blood pressure remains elevated after treatment with a 3-drug regimen to maximal or near maximal doses. Resistant hypertension may be associated with secondary causes of hypertension including obstructive sleep apnea or primary aldosteronism. Treating these disorders can significantly improve blood pressure beyond medical therapy alone. The role of device therapy for treating the typical patient with resistant hypertension remains unclear.",
"title": ""
},
{
"docid": "a0fc4982c5d63191ab1b15deff4e65d6",
"text": "Sentiment classification is an important subject in text mining research, which concerns the application of automatic methods for predicting the orientation of sentiment present on text documents, with many applications on a number of areas including recommender and advertising systems, customer intelligence and information retrieval. In this paper, we provide a survey and comparative study of existing techniques for opinion mining including machine learning and lexicon-based approaches, together with evaluation metrics. Also cross-domain and cross-lingual approaches are explored. Experimental results show that supervised machine learning methods, such as SVM and naive Bayes, have higher precision, while lexicon-based methods are also very competitive because they require few effort in human-labeled document and isn't sensitive to the quantity and quality of the training dataset.",
"title": ""
},
{
"docid": "be45e9231cc468c8f9551868c1d13938",
"text": "We present a user-centric approach for stream surface generation. Given a set of densely traced streamlines over the flow field, we design a sketch-based interface that allows users to draw simple strokes directly on top of the streamline visualization result. Based on the 2D stroke, we identify a 3D seeding curve and generate a stream surface that captures the flow pattern of streamlines at the outermost layer. Then, we remove the streamlines whose patterns are covered by the stream surface. Repeating this process, users can peel the flow by replacing the streamlines with customized surfaces layer by layer. Our sketch-based interface leverages an intuitive painting metaphor which most users are familiar with. We present results using multiple data sets to show the effectiveness of our approach, and discuss the limitations and future directions.",
"title": ""
},
{
"docid": "59786d8ea951639b8b9a4e60c9d43a06",
"text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.",
"title": ""
},
{
"docid": "ee3c2f50a7ea955d33305a3e02310109",
"text": "This research strives for natural language moment retrieval in long, untrimmed video streams. The problem nevertheless is not trivial especially when a video contains multiple moments of interests and the language describes complex temporal dependencies, which often happens in real scenarios. We identify two crucial challenges: semantic misalignment and structural misalignment. However, existing approaches treat different moments separately and do not explicitly model complex moment-wise temporal relations. In this paper, we present Moment Alignment Network (MAN), a novel framework that unifies the candidate moment encoding and temporal structural reasoning in a single-shot feed-forward network. MAN naturally assigns candidate moment representations aligned with language semantics over different temporal locations and scales. Most importantly, we propose to explicitly model momentwise temporal relations as a structured graph and devise an iterative graph adjustment network to jointly learn the best structure in an end-to-end manner. We evaluate the proposed approach on two challenging public benchmarks Charades-STA and DiDeMo, where our MAN significantly outperforms the state-of-the-art by a large margin.",
"title": ""
},
{
"docid": "8f9e3bb85b4a2fcff3374fd700ac3261",
"text": "Vehicle theft has become a pervasive problem in metropolitan cities. The aim of our work is to reduce the vehicle and fuel theft with an alert given by commonly used smart phones. The modern vehicles are interconnected with computer systems so that the information can be obtained from vehicular sources and Internet services. This provides space for tracking the vehicle through smart phones. In our work, an Advanced Encryption Standard (AES) algorithm is implemented which integrates a smart phone with classical embedded systems to avoid vehicle theft.",
"title": ""
},
{
"docid": "caa35f58e9e217fd45daa2e49c4a4cde",
"text": "Despite its linguistic complexity, the Horn of Africa region includes several major languages with more than 5 million speakers, some crossing the borders of multiple countries. All of these languages have official status in regions or nations and are crucial for development; yet computational resources for the languages remain limited or non-existent. Since these languages are complex morphologically, software for morphological analysis and generation is a necessary first step toward nearly all other applications. This paper describes a resource for morphological analysis and generation for three of the most important languages in the Horn of Africa, Amharic, Tigrinya, and Oromo. 1 Language in the Horn of Africa The Horn of Africa consists politically of four modern nations, Ethiopia, Somalia, Eritrea, and Djibouti. As in most of sub-Saharan Africa, the linguistic picture in the region is complex. The great majority of people are speakers of AfroAsiatic languages belonging to three sub-families: Semitic, Cushitic, and Omotic. Approximately 75% of the population of almost 100 million people are native speakers of four languages: the Cushitic languages Oromo and Somali and the Semitic languages Amharic and Tigrinya. Many others speak one or the other of these languages as second languages. All of these languages have official status at the national or regional level. All of the languages of the region, especially the Semitic languages, are characterized by relatively complex morphology. For such languages, nearly all forms of language technology depend on the existence of software for analyzing and generating word forms. As with most other subSaharan languages, this software has previously not been available. This paper describes a set of Python programs called HornMorpho that address this lack for three of the most important languages, Amharic, Tigrinya, and Oromo. 2 Morphological processingn 2.1 Finite state morphology Morphological analysis is the segmentation of words into their component morphemes and the assignment of grammatical morphemes to grammatical categories and lexical morphemes to lexemes. Morphological generation is the reverse process. Both processes relate a surface level to a lexical level. The relationship between the levels has traditionally been viewed within linguistics in terms of an ordered series of phonological rules. Within computational morphology, a very significant advance came with the demonstration that phonological rules could be implemented as finite state transducers (Kaplan and Kay, 1994) (FSTs) and that the rule ordering could be dispensed with using FSTs that relate the surface and lexical levels directly (Koskenniemi, 1983), so-called “twolevel” morphology. A second important advance was the recognition by Karttunen et al. (1992) that a cascade of composed FSTs could implement the two-level model. This made possible quite complex finite state systems, including ordered alternation rules representing context-sensitive variation in the phonological or orthographic shape of morphemes, the morphotactics characterizing the possible sequences of morphemes (in canonical form) for a given word class, and a lexicon. The key feature of such systems is that, even though the FSTs making up the cascade must be composed in a particular order, the result of composition is a single FST relating surface and lexical levels directly, as in two-level morphology. 
Because of the invertibility of FSTs, it is a simple matter to convert an analysis FST (surface input to lexical output) to one that performs generation (lexical input to surface output). [Figure 1: Basic architecture of lexical FSTs for morphological analysis and generation. Each rectangle represents an FST; the outermost rectangle is the full FST that is actually used for processing. “.o.” represents composition of FSTs, “+” concatenation of FSTs.] This basic architecture, illustrated in Figure 1, consisting of a cascade of composed FSTs representing (1) alternation rules and (2) morphotactics, including a lexicon of stems or roots, is the basis for the system described in this paper. We may also want to handle words whose roots or stems are not found in the lexicon, especially when the available set of known roots or stems is limited. In such cases the lexical component is replaced by a phonotactic component characterizing the possible shapes of roots or stems. Such a “guesser” analyzer (Beesley and Karttunen, 2003) analyzes words with unfamiliar roots or stems by positing possible roots or stems. 2.2 Semitic morphology These ideas have revolutionized computational morphology, making languages with complex word structure, such as Finnish and Turkish, far more amenable to analysis by traditional computational techniques. However, finite state morphology is inherently biased to view morphemes as sequences of characters or phones and words as concatenations of morphemes. This presents problems in the case of non-concatenative morphology, for example, discontinuous morphemes and the template morphology that characterizes Semitic languages such as Amharic and Tigrinya. The stem of a Semitic verb consists of a root, essentially a sequence of consonants, and a template that inserts other segments between the root consonants and possibly copies certain of the consonants. For example, the Amharic verb root sbr ‘break’ can combine with roughly 50 different templates to form stems in words such as y1-sEbr-al ‘he breaks’, tEsEbbEr-E ‘it was broken’, l-assEbb1r-Ew ‘let me cause him to break something’, and sEbabar-i ‘broken into many pieces’. A number of different additions to the basic FST framework have been proposed to deal with non-concatenative morphology, all remaining finite state in their complexity. A discussion of the advantages and drawbacks of these different proposals is beyond the scope of this paper. The approach used in our system is one first proposed by Amtrup (2003), based in turn on the well studied formalism of weighted FSTs. In brief, in Amtrup’s approach, each of the arcs in a transducer may be “weighted” with a feature structure, that is, a set of grammatical feature-value pairs. As the arcs in an FST are traversed, a set of feature-value pairs is accumulated by unifying the current set with whatever appears on the arcs along the path through the transducer. These feature-value pairs represent a kind of memory for the path that has been traversed but without the power of a stack. Any arc whose feature structure fails to unify with the current set of feature-value pairs cannot be traversed. The result of traversing such an FST during morphological analysis is not only an output character sequence, representing the root of the word, but a set of feature-value pairs that represents the grammatical structure of the input word. 
In the generation direction, processing begins with a root and a set of feature-value pairs, representing the desired grammatical structure of the output word, and the output is the surface wordform corresponding to the input root and grammatical structure. In Gasser (2009) we showed how Amtrup’s technique can be applied to the analysis and generation of Tigrinya verbs. For an alternate approach to handling the morphotactics of a subset of Amharic verbs, within the context of the Xerox finite state tools (Beesley and Karttunen, 2003), see Amsalu and Demeke (2006). Although Oromo, a Cushitic language, does not exhibit the root+template morphology that is typical of Semitic languages, it is also convenient to handle its morphology using the same technique because there are some long-distance dependencies and because it is useful to have the grammatical output that this approach yields for analysis.",
"title": ""
},
{
"docid": "18d8fe3f77ab8878ae2eb72b04fa8a48",
"text": "A new magneto-electric dipole antenna with a unidirectional radiation pattern is proposed. A novel differential feeding structure is designed to provide an ultra-wideband impedance matching. A stable gain of 8.25±1.05 dBi is realized by introducing two slots in the magneto-electric dipole and using a rectangular box-shaped reflector, instead of a planar reflector. The antenna can achieve an impedance bandwidth of 114% for SWR ≤ 2 from 2.95 to 10.73 GHz. Radiation patterns with low cross polarization, low back radiation, fixing broadside direction mainbeam and symmetrical E- and H -plane patterns are obtained over the operating frequency range. Moreover, the correlation factor between the transmitting antenna input signal and the receiving antenna output signal is calculated for evaluating the time-domain characteristic. The proposed antenna, which is small in size, can be constructed easily by using PCB fabrication technique.",
"title": ""
},
{
"docid": "2ed16f9344f5c5b024095a4e27283596",
"text": "An overview is presented of the impact of NLO on today's daily life. While NLO researchers have promised many applications, only a few have changed our lives so far. This paper categorizes applications of NLO into three areas: improving lasers, interaction with materials, and information technology. NLO provides: coherent light of different wavelengths; multi-photon absorption for plasma-materials interaction; advanced spectroscopy and materials analysis; and applications to communications and sensors. Applications in information processing and storage seem less mature.",
"title": ""
},
{
"docid": "2c3bfdb36a691434ece6b9f3e7e281e9",
"text": "Heterogeneous cloud radio access networks (H-CRAN) is a new trend of SC that aims to leverage the heterogeneous and cloud radio access networks advantages. Low power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service requirements (QoS), while high power macro base stations (BSs) are deployed for coverage maintenance and low QoS users support. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such scheme with a model-free learning, we consider users' priority in resource blocks (RBs) allocation and compact state representation based learning methodology to enhance the learning process. Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements.",
"title": ""
},
{
"docid": "556c9a28f9bbd81d53e093b139ce7866",
"text": "This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.",
"title": ""
},
{
"docid": "76375aa50ebe8388d653241ba481ecd2",
"text": "Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting. Generative models have been explored as a means to approximate the distribution of old tasks and bypass storage of real data. Here we propose a cumulative closed-loop generator and embedded classifier using an AC-GAN architecture provided with external regularization by a small buffer. We evaluate incremental learning using a notoriously hard paradigm, “single headed learning,” in which each task is a disjoint subset of classes in the overall dataset, and performance is evaluated on all previous classes. First, we show that the variability contained in a small percentage of a dataset (memory buffer) accounts for a significant portion of the reported accuracy, both in multi-task and continual learning settings. Second, we show that using a generator to continuously output new images while training provides an up-sampling of the buffer, which prevents catastrophic forgetting and yields superior performance when compared to a fixed buffer. We achieve an average accuracy for all classes of 92.26% in MNIST and 76.15% in FASHION-MNIST after 5 tasks using GAN sampling with a buffer of only 0.17% of the entire dataset size. We compare to a network with regularization (EWC) which shows a deteriorated average performance of 29.19% (MNIST) and 26.5% (FASHION). The baseline of no regularization (plain gradient descent) performs at 99.84% (MNIST) and 99.79% (FASHION) for the last task, but below 3% for all previous tasks. Our method has very low long-term memory cost, the buffer, as well as negligible intermediate memory storage.",
"title": ""
},
{
"docid": "0fa35886300345106390cc55c6025257",
"text": "Non-linear models recently receive a lot of attention as people are starting to discover the power of statistical and embedding features. However, tree-based models are seldom studied in the context of structured learning despite their recent success on various classification and ranking tasks. In this paper, we propose S-MART, a tree-based structured learning framework based on multiple additive regression trees. S-MART is especially suitable for handling tasks with dense features, and can be used to learn many different structures under various loss functions. We apply S-MART to the task of tweet entity linking — a core component of tweet information extraction, which aims to identify and link name mentions to entities in a knowledge base. A novel inference algorithm is proposed to handle the special structure of the task. The experimental results show that S-MART significantly outperforms state-of-the-art tweet entity linking systems.",
"title": ""
},
{
"docid": "107c839a73c12606d4106af7dc04cd96",
"text": "This study presents a novel four-fingered robotic hand to attain a soft contact and high stability under disturbances while holding an object. Each finger is constructed using a tendon-driven skeleton, granular materials corresponding to finger pulp, and a deformable rubber skin. This structure provides soft contact with an object, as well as high adaptation to its shape. Even if the object is deformable and fragile, a grasping posture can be formed without deforming the object. If the air around the granular materials in the rubber skin and jamming transition is vacuumed, the grasping posture can be fixed and the object can be grasped firmly and stably. A high grasping stability under disturbances can be attained. Additionally, the fingertips can work as a small jamming gripper to grasp an object smaller than a fingertip. An experimental investigation indicated that the proposed structure provides a high grasping force with a jamming transition with high adaptability to the object's shape.",
"title": ""
}
] | scidocsrr |
ee4f68d1700841990534552514471aa3 | Mental health awareness: The Indian scenario | [
{
"docid": "c5bc51e3e2ad5aedccfa17095ec1d7ed",
"text": "CONTEXT\nLittle is known about the extent or severity of untreated mental disorders, especially in less-developed countries.\n\n\nOBJECTIVE\nTo estimate prevalence, severity, and treatment of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) mental disorders in 14 countries (6 less developed, 8 developed) in the World Health Organization (WHO) World Mental Health (WMH) Survey Initiative.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nFace-to-face household surveys of 60 463 community adults conducted from 2001-2003 in 14 countries in the Americas, Europe, the Middle East, Africa, and Asia.\n\n\nMAIN OUTCOME MEASURES\nThe DSM-IV disorders, severity, and treatment were assessed with the WMH version of the WHO Composite International Diagnostic Interview (WMH-CIDI), a fully structured, lay-administered psychiatric diagnostic interview.\n\n\nRESULTS\nThe prevalence of having any WMH-CIDI/DSM-IV disorder in the prior year varied widely, from 4.3% in Shanghai to 26.4% in the United States, with an interquartile range (IQR) of 9.1%-16.9%. Between 33.1% (Colombia) and 80.9% (Nigeria) of 12-month cases were mild (IQR, 40.2%-53.3%). Serious disorders were associated with substantial role disability. Although disorder severity was correlated with probability of treatment in almost all countries, 35.5% to 50.3% of serious cases in developed countries and 76.3% to 85.4% in less-developed countries received no treatment in the 12 months before the interview. Due to the high prevalence of mild and subthreshold cases, the number of those who received treatment far exceeds the number of untreated serious cases in every country.\n\n\nCONCLUSIONS\nReallocation of treatment resources could substantially decrease the problem of unmet need for treatment of mental disorders among serious cases. Structural barriers exist to this reallocation. Careful consideration needs to be given to the value of treating some mild cases, especially those at risk for progressing to more serious disorders.",
"title": ""
}
] | [
{
"docid": "c5b80d54e6b50a56ab5a6d5e0111df81",
"text": "By understanding how real users have employed reliable multicast in real distributed systems, we can develop insight concerning the degree to which this technology has matched expectations. This paper reviews a number of applications with that goal in mind. Our findings point to tradeoffs between the form of reliability used by a system and its scalability and performance. We also find that to reach a broad user community (and a commercially interesting market) the technology must be better integrated with component and object-oriented systems architectures. Looking closely at these architectures, however, we identify some assumptions about failure handling which make reliable multicast difficult to exploit. Indeed, the major failures of reliable multicast are associated wit failures. The broader opportunity appears to involve relatively visible embeddings of these tools int h attempts to position it within object oriented systems in ways that focus on transparent recovery from server o object-oriented architectures enabling knowledgeable users to make tradeoffs. Fault-tolerance through transparent server replication may be better viewed as an unachievable holy grail.",
"title": ""
},
{
"docid": "6a9c7da90fe8de2ad6f3819df07f8642",
"text": "We define Quality of Service (QoS) and cost model for communications in Systems on Chip (SoC), and derive related Network on Chip (NoC) architecture and design process. SoC inter-module communication traffic is classified into four classes of service: signaling (for inter-module control signals); real-time (representing delay-constrained bit streams); RD/WR (modeling short data access) and block-transfer (handling large data bursts). Communication traffic of the target SoC is analyzed (by means of analytic calculations and simulations), and QoS requirements (delay and throughput) for each service class are derived. A customized Quality-of-Service NoC (QNoC) architecture is derived by modifying a generic network architecture. The customization process minimizes the network cost (in area and power) while maintaining the required QoS. The generic network is based on a two-dimensional planar mesh and fixed shortest path (X–Y based) multi-class wormhole routing. Once communication requirements of the target SoC are identified, the network is customized as follows: The SoC modules are placed so as to minimize spatial traffic density, unnecessary mesh links and switching nodes are removed, and bandwidth is allocated to the remaining links and switches according to their relative load so that link utilization is balanced. The result is a low cost customized QNoC for the target SoC which guarantees that QoS requirements are met. 2003 Elsevier B.V. All rights reserved. IDT: Network on chip; QoS architecture; Wormhole switching; QNoC design process; QNoC",
"title": ""
},
{
"docid": "ad2e02fd3b349b2a66ac53877b82e9bb",
"text": "This paper proposes a novel approach for the evolution of artificial creatures which moves in a 3D virtual environment based on the neuroevolution of augmenting topologies (NEAT) algorithm. The NEAT algorithm is used to evolve neural networks that observe the virtual environment and respond to it, by controlling the muscle force of the creature. The genetic algorithm is used to emerge the architecture of creature based on the distance metrics for fitness evaluation. The damaged morphologies of creature are elaborated, and a crossover algorithm is used to control it. Creatures with similar morphological traits are grouped into the same species to limit the complexity of the search space. The motion of virtual creature having 2–3 limbs is recorded at three different angles to check their performance in different types of viscous mediums. The qualitative demonstration of motion of virtual creature represents that improved swimming of virtual creatures is achieved in simulating mediums with viscous drag 1–10 arbitrary unit.",
"title": ""
},
{
"docid": "2f54746f666befe19af1391f1d90aca8",
"text": "The Internet of Things has drawn lots of research attention as the growing number of devices connected to the Internet. Long Term Evolution-Advanced (LTE-A) is a promising technology for wireless communication and it's also promising for IoT. The main challenge of incorporating IoT devices into LTE-A is a large number of IoT devices attempting to access the network in a short period which will greatly reduce the network performance. In order to improve the network utilization, we adopted a hierarchy architecture using a gateway for connecting the devices to the eNB and proposed a multiclass resource allocation algorithm for LTE based IoT communication. Simulation results show that the proposed algorithm can provide good performance both on data rate and latency for different QoS applications both in saturated and unsaturated environment.",
"title": ""
},
{
"docid": "8a21ff7f3e4d73233208d5faa70eb7ce",
"text": "Achieving robustness and energy efficiency in nanoscale CMOS process technologies is made challenging due to the presence of process, temperature, and voltage variations. Traditional fault-tolerance techniques such as N-modular redundancy (NMR) employ deterministic error detection and correction, e.g., majority voter, and tend to be power hungry. This paper proposes soft NMR that nontrivially extends NMR by consciously exploiting error statistics caused by nanoscale artifacts in order to design robust and energy-efficient systems. In contrast to conventional NMR, soft NMR employs Bayesian detection techniques in the voter. Soft voter algorithms are obtained through optimization of appropriate application aware cost functions. Analysis indicates that, on average, soft NMR outperforms conventional NMR. Furthermore, unlike NMR, in many cases, soft NMR is able to generate a correct output even when all N replicas are in error. This increase in robustness is then traded-off through voltage scaling to achieve energy efficiency. The design of a discrete cosine transform (DCT) image coder is employed to demonstrate the benefits of the proposed technique. Simulations in a commercial 45 nm, 1.2 V, CMOS process show that soft NMR provides up to 10× improvement in robustness, and 35 percent power savings over conventional NMR.",
"title": ""
},
{
"docid": "097da6ee2d13e0b4b2f84a26752574f4",
"text": "Objective A sound theoretical foundation to guide practice is enhanced by the ability of nurses to critique research. This article provides a structured route to questioning the methodology of nursing research. Primary Argument Nurses may find critiquing a research paper a particularly daunting experience when faced with their first paper. Knowing what questions the nurse should be asking is perhaps difficult to determine when there may be unfamiliar research terms to grasp. Nurses may benefit from a structured approach which helps them understand the sequence of the text and the subsequent value of a research paper. Conclusion A framework is provided within this article to assist in the analysis of a research paper in a systematic, logical order. The questions presented in the framework may lead the nurse to conclusions about the strengths and weaknesses of the research methods presented in a research article. The framework does not intend to separate quantitative or qualitative paradigms but to assist the nurse in making broad observations about the nature of the research.",
"title": ""
},
{
"docid": "72ad5d0f9e6b07d4392e7a4b53bdf17f",
"text": "This paper surveys current text and speech summarization evaluation approaches. It discusses advantages and disadv ant ges of these, with the goal of identifying summarization techni ques most suitable to speech summarization. Precision/recall s hemes, as well as summary accuracy measures which incorporate weig htings based on multiple human decisions, are suggested as par ticularly suitable in evaluating speech summaries.",
"title": ""
},
{
"docid": "91059e16806c0c2b3e7b39859ba2b6a5",
"text": "Online users tend to select claims that adhere to their system of beliefs and to ignore dissenting information. Confirmation bias, indeed, plays a pivotal role in viral phenomena. Furthermore, the wide availability of content on the web fosters the aggregation of likeminded people where debates tend to enforce group polarization. Such a configuration might alter the public debate and thus the formation of the public opinion. In this paper we provide a mathematical model to study online social debates and the related polarization dynamics. We assume the basic updating rule of the Bounded Confidence Model (BCM) and we develop two variations a) the Rewire with Bounded Confidence Model (RBCM), in which discordant links are broken until convergence is reached; and b) the Unbounded Confidence Model, under which the interaction among discordant pairs of users is allowed even with a negative feedback, either with the rewiring step (RUCM) or without it (UCM). From numerical simulations we find that the new models (UCM and RUCM), unlike the BCM, are able to explain the coexistence of two stable final opinions, often observed in reality. Lastly, we present a mean field approximation of the newly introduced models.",
"title": ""
},
{
"docid": "3e52520779e75997947d9538a6513ef4",
"text": "This article presents a reproducible research workflow for amplicon-based microbiome studies in personalized medicine created using Bioconductor packages and the knitr markdown interface.We show that sometimes a multiplicity of choices and lack of consistent documentation at each stage of the sequential processing pipeline used for the analysis of microbiome data can lead to spurious results. We propose its replacement with reproducible and documented analysis using R packages dada2, knitr, and phyloseq. This workflow implements both key stages of amplicon analysis: the initial filtering and denoising steps needed to construct taxonomic feature tables from error-containing sequencing reads (dada2), and the exploratory and inferential analysis of those feature tables and associated sample metadata (phyloseq). This workow facilitates reproducible interrogation of the full set of choices required in microbiome studies. We present several examples in which we leverage existing packages for analysis in a way that allows easy sharing and modification by others, and give pointers to articles that depend on this reproducible workflow for the study of longitudinal and spatial series analyses of the vaginal microbiome in pregnancy and the oral microbiome in humans with healthy dentition and intra-oral tissues.",
"title": ""
},
{
"docid": "e6a332a8dab110262beb1fc52b91945c",
"text": "Models are crucial in the engineering design process because they can be used for both the optimization of design parameters and the prediction of performance. Thus, models can significantly reduce design, development and optimization costs. This paper proposes a novel equivalent electrical model for Darrieus-type vertical axis wind turbines (DTVAWTs). The proposed model was built from the mechanical description given by the Paraschivoiu double-multiple streamtube model and is based on the analogy between mechanical and electrical circuits. This work addresses the physical concepts and theoretical formulations underpinning the development of the model. After highlighting the working principle of the DTVAWT, the step-by-step development of the model is presented. For assessment purposes, simulations of aerodynamic characteristics and those of corresponding electrical components are performed and compared.",
"title": ""
},
{
"docid": "5a13c741e9e907a0d4d8a794c5363b0c",
"text": "Quinoa (Chenopodium quinoa Willd.), which is considered a pseudocereal or pseudograin, has been recognized as a complete food due to its protein quality. It has remarkable nutritional properties; not only from its protein content (15%) but also from its great amino acid balance. It is an important source of minerals and vitamins, and has also been found to contain compounds like polyphenols, phytosterols, and flavonoids with possible nutraceutical benefits. It has some functional (technological) properties like solubility, water-holding capacity (WHC), gelation, emulsifying, and foaming that allow diversified uses. Besides, it has been considered an oil crop, with an interesting proportion of omega-6 and a notable vitamin E content. Quinoa starch has physicochemical properties (such as viscosity, freeze stability) which give it functional properties with novel uses. Quinoa has a high nutritional value and has recently been used as a novel functional food because of all these properties; it is a promising alternative cultivar.",
"title": ""
},
{
"docid": "3486bfa46d0e43317f32b1fb51309715",
"text": "Every arti cial-intelligence research project needs a working de nition of \\intelligence\", on which the deepest goals and assumptions of the research are based. In the project described in the following chapters, \\intelligence\" is de ned as the capacity to adapt under insu cient knowledge and resources. Concretely, an intelligent system should be nite and open, and should work in real time. If these criteria are used in the design of a reasoning system, the result is NARS, a non-axiomatic reasoning system. NARS uses a term-oriented formal language, characterized by the use of subject{ predicate sentences. The language has an experience-grounded semantics, according to which the truth value of a judgment is determined by previous experience, and the meaning of a term is determined by its relations with other terms. Several di erent types of uncertainty, such as randomness, fuzziness, and ignorance, can be represented in the language in a single way. The inference rules of NARS are based on three inheritance relations between terms. With di erent combinations of premises, revision, deduction, induction, abduction, exempli cation, comparison, and analogy can all be carried out in a uniform format, the major di erence between these types of inference being that di erent functions are used to calculate the truth value of the conclusion from the truth values of the premises. viii ix Since it has insu cient space{time resources, the system needs to distribute them among its tasks very carefully, and to dynamically adjust the distribution as the situation changes. This leads to a \\controlled concurrency\" control mechanism, and a \\bag-based\" memory organization. A recent implementation of the NARS model, with examples, is discussed. The system has many interesting properties that are shared by human cognition, but are absent from conventional computational models of reasoning. This research sheds light on several notions in arti cial intelligence and cognitive science, including symbol-grounding, induction, categorization, logic, and computation. These are discussed to show the implications of the new theory of intelligence. Finally, the major results of the research are summarized, a preliminary evaluation of the working de nition of intelligence is given, and the limitations and future extensions of the research are discussed.",
"title": ""
},
{
"docid": "b231da0ff32e823bb245328929bdebf3",
"text": "BACKGROUND\nCultivated bananas and plantains are giant herbaceous plants within the genus Musa. They are both sterile and parthenocarpic so the fruit develops without seed. The cultivated hybrids and species are mostly triploid (2n = 3x = 33; a few are diploid or tetraploid), and most have been propagated from mutants found in the wild. With a production of 100 million tons annually, banana is a staple food across the Asian, African and American tropics, with the 15 % that is exported being important to many economies.\n\n\nSCOPE\nThere are well over a thousand domesticated Musa cultivars and their genetic diversity is high, indicating multiple origins from different wild hybrids between two principle ancestral species. However, the difficulty of genetics and sterility of the crop has meant that the development of new varieties through hybridization, mutation or transformation was not very successful in the 20th century. Knowledge of structural and functional genomics and genes, reproductive physiology, cytogenetics, and comparative genomics with rice, Arabidopsis and other model species has increased our understanding of Musa and its diversity enormously.\n\n\nCONCLUSIONS\nThere are major challenges to banana production from virulent diseases, abiotic stresses and new demands for sustainability, quality, transport and yield. Within the genepool of cultivars and wild species there are genetic resistances to many stresses. Genomic approaches are now rapidly advancing in Musa and have the prospect of helping enable banana to maintain and increase its importance as a staple food and cash crop through integration of genetical, evolutionary and structural data, allowing targeted breeding, transformation and efficient use of Musa biodiversity in the future.",
"title": ""
},
{
"docid": "0bb2798c21d9f7420ea47c717578e94d",
"text": "Blockchain has drawn attention as the next-generation financial technology due to its security that suits the informatization era. In particular, it provides security through the authentication of peers that share virtual cash, encryption, and the generation of hash value. According to the global financial industry, the market for security-based blockchain technology is expected to grow to about USD 20 billion by 2020. In addition, blockchain can be applied beyond the Internet of Things (IoT) environment; its applications are expected to expand. Cloud computing has been dramatically adopted in all IT environments for its efficiency and availability. In this paper, we discuss the concept of blockchain technology and its hot research trends. In addition, we will study how to adapt blockchain security to cloud computing and its secure solutions in detail.",
"title": ""
},
{
"docid": "0da299fb53db5980a10e0ae8699d2209",
"text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, are empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecast the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.",
"title": ""
},
{
"docid": "5542f4693a4251edcf995e7608fbda56",
"text": "This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs—customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-ofmouth promotion and willingness to pay more. © 2002 by New York University. All rights reserved.",
"title": ""
},
{
"docid": "c772bc43f2b8c76aa3e096405cd1b824",
"text": "Application programmers increasingly prefer distributed storage systems with strong consistency and distributed transactions (e.g., Google's Spanner) for their strong guarantees and ease of use. Unfortunately, existing transactional storage systems are expensive to use -- in part because they require costly replication protocols, like Paxos, for fault tolerance. In this paper, we present a new approach that makes transactional storage systems more affordable: we eliminate consistency from the replication protocol while still providing distributed transactions with strong consistency to applications.\n We present TAPIR -- the Transactional Application Protocol for Inconsistent Replication -- the first transaction protocol to use a novel replication protocol, called inconsistent replication, that provides fault tolerance without consistency. By enforcing strong consistency only in the transaction protocol, TAPIR can commit transactions in a single round-trip and order distributed transactions without centralized coordination. We demonstrate the use of TAPIR in a transactional key-value store, TAPIR-KV. Compared to conventional systems, TAPIR-KV provides better latency and throughput.",
"title": ""
},
{
"docid": "d15072fd8776d17e8a3b8b89af5fed08",
"text": "PsV: psoriasis vulgaris INTRODUCTION Pityriasis amiantacea is a rare clinical condition characterized by masses of waxy and sticky scales that adhere to the scalp and tenaciously attach to hair bundles. Pityriasis amiantacea can be associated with psoriasis vulgaris (PsV).We examined a patient with pityriasis amiantacea caused by PsV who also had keratotic horns on the scalp, histopathologically fibrokeratomas. To the best of our knowledge, this is the first case of scalp fibrokeratoma stimulated by pityriasis amiantacea and PsV.",
"title": ""
},
{
"docid": "7fc35d2bb27fb35b5585aad8601a0cbd",
"text": "We introduce Anita: a flexible and intelligent Text Adaptation tool for web content that provides Text Simplification and Text Enhancement modules. Anita’s simplification module features a state-of-the-art system that adapts texts according to the needs of individual users, and its enhancement module allows the user to search for a word’s definitions, synonyms, translations, and visual cues through related images. These utilities are brought together in an easy-to-use interface of a freely available web browser extension.",
"title": ""
},
{
"docid": "fc164dc2d55cec2867a99436d37962a1",
"text": "We address the text-to-text generation problem of sentence-level paraphrasing — a phenomenon distinct from and more difficult than wordor phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.",
"title": ""
}
] | scidocsrr |
f0026a7bfaadac338395d72b2bb48017 | Design of an arm exoskeleton with scapula motion for shoulder rehabilitation | [
{
"docid": "8eca353064d3b510b32c486e5f26c264",
"text": "Theoretical control algorithms are developed and an experimental system is described for 6-dof kinesthetic force/moment feedback to a human operator from a remote system. The remote system is a common six-axis slave manipulator with a force/torque sensor, while the haptic interface is a unique, cable-driven, seven-axis, force/moment-reflecting exoskeleton. The exoskeleton is used for input when motion commands are sent to the robot and for output when force/moment wrenches of contact are reflected to the human operator. This system exists at Wright-Patterson AFB. The same techniques are applicable to a virtual environment with physics models and general haptic interfaces.",
"title": ""
}
] | [
{
"docid": "305cfc6824ec7ac30a08ade2fff66c13",
"text": "Psychological research has shown that 'peak-end' effects influence people's retrospective evaluation of hedonic and affective experience. Rather than objectively reviewing the total amount of pleasure or pain during an experience, people's evaluation is shaped by the most intense moment (the peak) and the final moment (end). We describe an experiment demonstrating that peak-end effects can influence a user's preference for interaction sequences that are objectively identical in their overall requirements. Participants were asked to choose which of two interactive sequences of five pages they preferred. Both sequences required setting a total of 25 sliders to target values, and differed only in the distribution of the sliders across the five pages -- with one sequence intended to induce positive peak-end effects, the other negative. The study found that manipulating only the peak or the end of the series did not significantly change preference, but that a combined manipulation of both peak and end did lead to significant differences in preference, even though all series had the same overall effort.",
"title": ""
},
{
"docid": "1fe8f55e2d402c5fe03176cbf83a16c3",
"text": "This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying sequences of binary logic operations, adding sequences of integers, and sorting sequences of real numbers. Overall performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. When applied to character-level language modelling on the Hutter prize Wikipedia dataset, ACT yields intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could be used to infer segment boundaries in sequence data.",
"title": ""
},
{
"docid": "bb0ef8084d0693d7ea453cd321b13e0b",
"text": "Distributed computation is increasingly important for deep learning, and many deep learning frameworks provide built-in support for distributed training. This results in a tight coupling between the neural network computation and the underlying distributed execution, which poses a challenge for the implementation of new communication and aggregation strategies. We argue that decoupling the deep learning framework from the distributed execution framework enables the flexible development of new communication and aggregation strategies. Furthermore, we argue that Ray [12] provides a flexible set of distributed computing primitives that, when used in conjunction with modern deep learning libraries, enable the implementation of a wide range of gradient aggregation strategies appropriate for different computing environments. We show how these primitives can be used to address common problems, and demonstrate the performance benefits empirically.",
"title": ""
},
{
"docid": "e73de1e6f191fef625f75808d7fbfbb1",
"text": "Colon cancer is one of the most prevalent diseases across the world. Numerous epidemiological studies indicate that diets rich in fruit, such as berries, provide significant health benefits against several types of cancer, including colon cancer. The anticancer activities of berries are attributed to their high content of phytochemicals and to their relevant antioxidant properties. In vitro and in vivo studies have demonstrated that berries and their bioactive components exert therapeutic and preventive effects against colon cancer by the suppression of inflammation, oxidative stress, proliferation and angiogenesis, through the modulation of multiple signaling pathways such as NF-κB, Wnt/β-catenin, PI3K/AKT/PKB/mTOR, and ERK/MAPK. Based on the exciting outcomes of preclinical studies, a few berries have advanced to the clinical phase. A limited number of human studies have shown that consumption of berries can prevent colorectal cancer, especially in patients at high risk (familial adenopolyposis or aberrant crypt foci, and inflammatory bowel diseases). In this review, we aim to highlight the findings of berries and their bioactive compounds in colon cancer from in vitro and in vivo studies, both on animals and humans. Thus, this review could be a useful step towards the next phase of berry research in colon cancer.",
"title": ""
},
{
"docid": "d4345ee2baaa016fc38ba160e741b8ee",
"text": "Unstructured data, such as news and blogs, can provide valuable insights into the financial world. We present the NewsStream portal, an intuitive and easy-to-use tool for news analytics, which supports interactive querying and visualizations of the documents at different levels of detail. It relies on a scalable architecture for real-time processing of a continuous stream of textual data, which incorporates data acquisition, cleaning, natural-language preprocessing and semantic annotation components. It has been running for over two years and collected over 18 million news articles and blog posts. The NewsStream portal can be used to answer the questions when, how often, in what context, and with what sentiment was a financial entity or term mentioned in a continuous stream of news and blogs, and therefore providing a complement to news aggregators. We illustrate some features of our system in four use cases: relations between the rating agencies and the PIIGS countries, reflection of financial news on credit default swap (CDS) prices, the emergence of the Bitcoin digital currency, and visualizing how the world is connected through news.",
"title": ""
},
{
"docid": "63f20dd528d54066ed0f189e4c435fe7",
"text": "In many specific laboratories the students use only a PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts, in the laboratory works. The hardware part of solution consists in an old plotter, an adapter board, a PLC and a HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be made very easy and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].",
"title": ""
},
{
"docid": "9423718cce01b45c688066f322b2c2aa",
"text": "Currently there are many techniques based on information technology and communication aimed at assessing the performance of students. Data mining applied in the educational field (educational data mining) is one of the most popular techniques that are used to provide feedback with regard to the teaching-learning process. In recent years there have been a large number of open source applications in the area of educational data mining. These tools have facilitated the implementation of complex algorithms for identifying hidden patterns of information in academic databases. The main objective of this paper is to compare the technical features of three open source tools (RapidMiner, Knime and Weka) as used in educational data mining. These features have been compared in a practical case study on the academic records of three engineering programs in an Ecuadorian university. This comparison has allowed us to determine which tool is most effective in terms of predicting student performance.",
"title": ""
},
{
"docid": "11ce5bca8989b3829683430abe2aee47",
"text": "Android is the most popular smartphone operating system with a market share of 80%, but as a consequence, also the platform most targeted by malware. To deal with the increasing number of malicious Android apps in the wild, malware analysts typically rely on analysis tools to extract characteristic information about an app in an automated fashion. While the importance of such tools has been addressed by the research community, the resulting prototypes remain limited in terms of analysis capabilities and availability. In this paper we present ANDRUBIS, a fully automated, publicly available and comprehensive analysis system for Android apps. ANDRUBIS combines static analysis with dynamic analysis on both Dalvik VM and system level, as well as several stimulation techniques to increase code coverage. With ANDRUBIS, we collected a dataset of over 1,000,000 Android apps, including 40% malicious apps. This dataset allows us to discuss trends in malware behavior observed from apps dating back as far as 2010, as well as to present insights gained from operating ANDRUBIS as a publicly available service for the past two years.",
"title": ""
},
{
"docid": "23384db962a1eb524f40ca52f4852b14",
"text": "Recent developments in Artificial Intelligence (AI) have generated a steep interest from media and general public. As AI systems (e.g. robots, chatbots, avatars and other intelligent agents) are moving from being perceived as a tool to being perceived as autonomous agents and team-mates, an important focus of research and development is understanding the ethical impact of these systems. What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed, setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated? These and many other related questions are currently the focus of much attention. The way society and our systems will be able to deal with these questions will for a large part determine our level of trust, and ultimately, the impact of AI in society, and the existence of AI. Contrary to the frightening images of a dystopic future in media and popular fiction, where AI systems dominate the world and is mostly concerned with warfare, AI is already changing our daily lives mostly in ways that improve human health, safety, and productivity (Stone et al. 2016). This is the case in domain such as transportation; service robots; health-care; education; public safety and security; and entertainment. Nevertheless, and in order to ensure that those dystopic futures do not become reality, these systems must be introduced in ways that build trust and understanding, and respect human and civil rights. The need for ethical considerations in the development of intelligent interactive systems is becoming one of the main influential areas of research in the last few years, and has led to several initiatives both from researchers as from practitioners, including the IEEE initiative on Ethics of Autonomous Systems1, the Foundation for Responsible Robotics2, and the Partnership on AI3 amongst several others. As the capabilities for autonomous decision making grow, perhaps the most important issue to consider is the need to rethink responsibility (Dignum 2017). Whatever their level of autonomy and social awareness and their ability to learn, AI systems are artefacts, constructed by people to fulfil some goals. Theories, methods, algorithms are needed to integrate societal, legal and moral values into technological developments in AI, at all stages of development (analysis, design, construction, deployment and evaluation). These frameworks must deal both with the autonomic reasoning of the machine about such issues that we consider to have ethical impact, but most importantly, we need frameworks to guide design choices, to regulate the reaches of AI systems, to ensure proper data stewardship, and to help individuals determine their own involvement. Values are dependent on the socio-cultural context (Turiel 2002), and are often only implicit in deliberation processes, which means that methodologies are needed to elicit the values held by all the stakeholders, and to make these explicit can lead to better understanding and trust on artificial autonomous systems. 
That is, AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. Responsible Artificial Intelligence is about human responsibility for the development of intelligent systems along fundamental human principles and values, to ensure human flourishing and wellbeing in a sustainable world. In fact, Responsible AI is more than the ticking of some ethical ‘boxes’ in a report, or the development of some add-on features, or switch-off buttons in AI systems. Rather, responsibility is fundamental",
"title": ""
},
{
"docid": "d66799a5d65a6f23527a33b124812ea6",
"text": "Time series is an important class of temporal data objects and it can be easily obtained from scientific and financial applications, and anomaly detection for time series is becoming a hot research topic recently. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. In this paper, we have discussed the definition of anomaly and grouped existing techniques into different categories based on the underlying approach adopted by each technique. And for each category, we identify the advantages and disadvantages of the techniques in that category. Then, we provide a briefly discussion on the representative methods recently. Furthermore, we also point out some key issues about multivariate time series anomaly. Finally, some suggestions about anomaly detection are discussed and future research trends are also summarized, which is hopefully beneficial to the researchers of time series and other relative domains.",
"title": ""
},
{
"docid": "45c1119cd76ed4f1470ac398caf6d192",
"text": "UNLABELLED\nL-3,4-Dihydroxy-6-(18)F-fluoro-phenyl-alanine ((18)F-FDOPA) is an amino acid analog used to evaluate presynaptic dopaminergic neuronal function. Evaluation of tumor recurrence in neurooncology is another application. Here, the kinetics of (18)F-FDOPA in brain tumors were investigated.\n\n\nMETHODS\nA total of 37 patients underwent 45 studies; 10 had grade IV, 10 had grade III, and 13 had grade II brain tumors; 2 had metastases; and 2 had benign lesions. After (18)F-DOPA was administered at 1.5-5 MBq/kg, dynamic PET images were acquired for 75 min. Images were reconstructed with iterative algorithms, and corrections for attenuation and scatter were applied. Images representing venous structures, the striatum, and tumors were generated with factor analysis, and from these, input and output functions were derived with simple threshold techniques. Compartmental modeling was applied to estimate rate constants.\n\n\nRESULTS\nA 2-compartment model was able to describe (18)F-FDOPA kinetics in tumors and the cerebellum but not the striatum. A 3-compartment model with corrections for tissue blood volume, metabolites, and partial volume appeared to be superior for describing (18)F-FDOPA kinetics in tumors and the striatum. A significant correlation was found between influx rate constant K and late uptake (standardized uptake value from 65 to 75 min), whereas the correlation of K with early uptake was weak. High-grade tumors had significantly higher transport rate constant k(1), equilibrium distribution volumes, and influx rate constant K than did low-grade tumors (P < 0.01). Tumor uptake showed a maximum at about 15 min, whereas the striatum typically showed a plateau-shaped curve. Patlak graphical analysis did not provide accurate parameter estimates. Logan graphical analysis yielded reliable estimates of the distribution volume and could separate newly diagnosed high-grade tumors from low-grade tumors.\n\n\nCONCLUSION\nA 2-compartment model was able to describe (18)F-FDOPA kinetics in tumors in a first approximation. A 3-compartment model with corrections for metabolites and partial volume could adequately describe (18)F-FDOPA kinetics in tumors, the striatum, and the cerebellum. This model suggests that (18)F-FDOPA was transported but not trapped in tumors, unlike in the striatum. The shape of the uptake curve appeared to be related to tumor grade. After an early maximum, high-grade tumors had a steep descending branch, whereas low-grade tumors had a slowly declining curve, like that for the cerebellum but on a higher scale.",
"title": ""
},
{
"docid": "403310053251e81cdad10addedb64c87",
"text": "Many types of data are best analyzed by fitting a curve using nonlinear regression, and computer programs that perform these calculations are readily available. Like every scientific technique, however, a nonlinear regression program can produce misleading results when used inappropriately. This article reviews the use of nonlinear regression in a practical and nonmathematical manner to answer the following questions: Why is nonlinear regression superior to linear regression of transformed data? How does nonlinear regression differ from polynomial regression and cubic spline? How do nonlinear regression programs work? What choices must an investigator make before performing nonlinear regression? What do the final results mean? How can two sets of data or two fits to one set of data be compared? What problems can cause the results to be wrong? This review is designed to demystify nonlinear regression so that both its power and its limitations will be appreciated.",
"title": ""
},
{
"docid": "32e1b7734ba1b26a6a27e0504db07643",
"text": "Due to its high popularity and rich functionalities, the Portable Document Format (PDF) has become a major vector for malware propagation. To detect malicious PDF files, the first step is to extract and de-obfuscate Java Script codes from the document, for which an effective technique is yet to be created. However, existing static methods cannot de-obfuscate Java Script codes, existing dynamic methods bring high overhead, and existing hybrid methods introduce high false negatives. Therefore, in this paper, we present MPScan, a scanner that combines dynamic Java Script de-obfuscation and static malware detection. By hooking the Adobe Reader's native Java Script engine, Java Script source code and op-code can be extracted on the fly after the source code is parsed and then executed. We also perform a multilevel analysis on the resulting Java Script strings and op-code to detect malware. Our evaluation shows that regardless of obfuscation techniques, MPScan can effectively de-obfuscate and detect 98% malicious PDF samples.",
"title": ""
},
{
"docid": "4f287c788c7e95bf350a998650ff6221",
"text": "Wireless sensor network has become an emerging technology due its wide range of applications in object tracking and monitoring, military commands, smart homes, forest fire control, surveillance, etc. Wireless sensor network consists of thousands of miniature devices which are called sensors but as it uses wireless media for communication, so security is the major issue. There are number of attacks on wireless of which selective forwarding attack is one of the harmful attacks. This paper describes selective forwarding attack and detection techniques against selective forwarding attacks which have been proposed by different researchers. In selective forwarding attacks, malicious nodes act like normal nodes and selectively drop packets. The selective forwarding attack is a serious threat in WSN. Identifying such attacks is very difficult and sometimes impossible. This paper also presents qualitative analysis of detection techniques in tabular form. Keywordswireless sensor network, attacks, selective forwarding attacks, malicious nodes.",
"title": ""
},
{
"docid": "f066cb3e2fc5ee543e0cc76919b261eb",
"text": "Eco-labels are part of a new wave of environmental policy that emphasizes information disclosure as a tool to induce environmentally friendly behavior by both firms and consumers. Little consensus exists as to whether eco-certified products are actually better than their conventional counterparts. This paper seeks to understand the link between eco-certification and product quality. We use data from three leading wine rating publications (Wine Advocate, Wine Enthusiast, and Wine Spectator) to assess quality for 74,148 wines produced in California between 1998 and 2009. Our results indicate that eco-certification is associated with a statistically significant increase in wine quality rating.",
"title": ""
},
{
"docid": "4d3b988de22e4630e1b1eff9e0d4551b",
"text": "In this chapter we present a methodology for introducing and maintaining ontology based knowledge management applications into enterprises with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set up. We illustrate our methodology by an example from a case study on skills management. The methodology serves as a scaffold for Part B “Ontology Engineering” of the handbook. It shows where more specific concerns of ontology engineering find their place and how they are related in the overall process.",
"title": ""
},
{
"docid": "1444a4acc00c1d7d69a906f6e5f52a6d",
"text": "The prevalence of obesity among children is high and is increasing. We know that obesity runs in families, with children of obese parents at greater risk of developing obesity than children of thin parents. Research on genetic factors in obesity has provided us with estimates of the proportion of the variance in a population accounted for by genetic factors. However, this research does not provide information regarding individual development. To design effective preventive interventions, research is needed to delineate how genetics and environmental factors interact in the etiology of childhood obesity. Addressing this question is especially challenging because parents provide both genes and environment for children. An enormous amount of learning about food and eating occurs during the transition from the exclusive milk diet of infancy to the omnivore's diet consumed by early childhood. This early learning is constrained by children's genetic predispositions, which include the unlearned preference for sweet tastes, salty tastes, and the rejection of sour and bitter tastes. Children also are predisposed to reject new foods and to learn associations between foods' flavors and the postingestive consequences of eating. Evidence suggests that children can respond to the energy density of the diet and that although intake at individual meals is erratic, 24-hour energy intake is relatively well regulated. There are individual differences in the regulation of energy intake as early as the preschool period. These individual differences in self-regulation are associated with differences in child-feeding practices and with children's adiposity. This suggests that child-feeding practices have the potential to affect children's energy balance via altering patterns of intake. Initial evidence indicates that imposition of stringent parental controls can potentiate preferences for high-fat, energy-dense foods, limit children's acceptance of a variety of foods, and disrupt children's regulation of energy intake by altering children's responsiveness to internal cues of hunger and satiety. This can occur when well-intended but concerned parents assume that children need help in determining what, when, and how much to eat and when parents impose child-feeding practices that provide children with few opportunities for self-control. Implications of these findings for preventive interventions are discussed.",
"title": ""
},
{
"docid": "ff50d07261681dcc210f01593ad2c109",
"text": "A mathematical model of the system composed of two sensors, the semicircular canal and the sacculus, is suggested. The model is described by three lines of blocks, each line of which has the following structure: a biomechanical block, a mechanoelectrical transduction mechanism, and a block describing the hair cell ionic currents and membrane potential dynamics. The response of this system to various stimuli (head rotation under gravity and falling) is investigated. Identification of the model parameters was done with the experimental data obtained for the axolotl (Ambystoma tigrinum) at the Institute of Physiology, Autonomous University of Puebla, Mexico. Comparative analysis of the semicircular canal and sacculus membrane potentials is presented.",
"title": ""
},
{
"docid": "23d7eb4d414e4323c44121040c3b2295",
"text": "BACKGROUND\nThe use of clinical decision support systems to facilitate the practice of evidence-based medicine promises to substantially improve health care quality.\n\n\nOBJECTIVE\nTo describe, on the basis of the proceedings of the Evidence and Decision Support track at the 2000 AMIA Spring Symposium, the research and policy challenges for capturing research and practice-based evidence in machine-interpretable repositories, and to present recommendations for accelerating the development and adoption of clinical decision support systems for evidence-based medicine.\n\n\nRESULTS\nThe recommendations fall into five broad areas--capture literature-based and practice-based evidence in machine--interpretable knowledge bases; develop maintainable technical and methodological foundations for computer-based decision support; evaluate the clinical effects and costs of clinical decision support systems and the ways clinical decision support systems affect and are affected by professional and organizational practices; identify and disseminate best practices for work flow-sensitive implementations of clinical decision support systems; and establish public policies that provide incentives for implementing clinical decision support systems to improve health care quality.\n\n\nCONCLUSIONS\nAlthough the promise of clinical decision support system-facilitated evidence-based medicine is strong, substantial work remains to be done to realize the potential benefits.",
"title": ""
}
] | scidocsrr |
1c34baee9829a1688c72bce6ddcf45a1 | gSpan: Graph-Based Substructure Pattern Mining | [
{
"docid": "3429145583d25ba1d603b5ade11f4312",
"text": "Sequential pattern mining is an important data mining problem with broad applications. It is challenging since one may need to examine a combinatorially explosive number of possible subsequence patterns. Most of the previously developed sequential pattern mining methods follow the methodology of which may substantially reduce the number of combinations to be examined. However, still encounters problems when a sequence database is large and/or when sequential patterns to be mined are numerous and/or long. In this paper, we propose a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefixprojection in sequential pattern mining. PrefixSpan mines the complete set of patterns but greatly reduces the efforts of candidate subsequence generation. Moreover, prefix-projection substantially reduces the size of projected databases and leads to efficient processing. Our performance study shows that PrefixSpan outperforms both the -based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence",
"title": ""
}
] | [
{
"docid": "849613d7d30fb1ad10244d7e209f9fa8",
"text": "The tilt coordination technique is used in driving simulation for reproducing a sustained linear horizontal acceleration by tilting the simulator cabin. If combined with the translation motion of the simulator, this technique increases the acceleration rendering capabilities of the whole system. To perform this technique correctly, the rotational motion must be slow to remain under the perception threshold and thus be unnoticed by the driver. However, the acceleration to render changes quickly. Between the slow rotational motion limited by the tilt threshold and the fast change of acceleration to render, the design of the coupling between motions of rotation and translation plays a critical role in the realism of a driving simulator. This study focuses on the acceptance by drivers of different configurations for tilt restitution in terms of maximum tilt angle, tilt rate, and tilt acceleration. Two experiments were conducted, focusing respectively on roll tilt for a 0.2 Hz slaloming task and on pitch tilt for an acceleration/deceleration task. The results show what thresholds have to be followed in terms of amplitude, rate, and acceleration. These results are far superior to the standard human perception thresholds found in the literature.",
"title": ""
},
{
"docid": "dac5090c367ef05c8863da9c7979a619",
"text": "Full vinyl polysiloxane casts of the vagina were obtained from 23 Afro-American, 39 Caucasian and 15 Hispanic women in lying, sitting and standing positions. A new shape, the pumpkin seed, was found in 40% of Afro-American women, but not in Caucasians or Hispanics. Analyses of cast and introital measurements revealed: (1) posterior cast length is significantly longer, anterior cast length is significantly shorter and cast width is significantly larger in Hispanics than in the other two groups and (2) the Caucasian introitus is significantly greater than that of the Afro-American subject.",
"title": ""
},
{
"docid": "d54615bc5460d824aee45a8ac2c8009d",
"text": "In recent years, Deep Learning has become the go-to solution for a broad range of applications, often outperforming state-of-the-art. However, it is important, for both theoreticians and practitioners, to gain a deeper understanding of the difficulties and limitations associated with common approaches and algorithms. We describe four types of simple problems, for which the gradientbased algorithms commonly used in deep learning either fail or suffer from significant difficulties. We illustrate the failures through practical experiments, and provide theoretical insights explaining their source, and how they might be remedied.",
"title": ""
},
{
"docid": "5d9d507a8bdd0d356d7ac220d9b0ef70",
"text": "This paper provides insights of possible plagiarism detection approach based on modern technologies – programming assignment versioning, auto-testing and abstract syntax tree comparison to estimate code similarities. Keywords—automation; assignment; testing; continuous integration INTRODUCTION In the emerging world of information technologies, a growing number of students is choosing this specialization for their education. Therefore, the number of homework and laboratory research assignments that should be tested is also growing. The majority of these tasks is based on the necessity to implement some algorithm as a small program. This article discusses the possible solutions to the problem of automated testing of programming laboratory research assignments. The course “Algorithmization and Programming of Solutions” is offered to all the first-year students of The Faculty of Computer Science and Information Technology (~500 students) in Riga Technical University and it provides the students the basics of the algorithmization of computing processes and the technology of program design using Java programming language (the given course and the University will be considered as an example of the implementation of the automated testing). During the course eight laboratory research assignments are planned, where the student has to develop an algorithm, create a program and submit it to the education portal of the University. The VBA test program was designed as one of the solutions, the requirements for each laboratory assignment were determined and the special tests have been created. At some point, however, the VBA offered options were no longer able to meet the requirements, therefore the activities on identifying the requirements for the automation of the whole cycle of programming work reception, testing and evaluation have begun. I. PLAGIARISM DETECTION APPROACHES To identify possible plagiarism detection techniques, it is imperative to define scoring or detecting threshold. Surely it is not an easy task, since only identical works can be considered as “true” plagiarism. In all other cases a person must make his decision whether two pieces of code are identical by their means or not. However, it is possible to outline some widespread approaches of assessment comparison. A. Manual Work Comparison In this case, all works must be compared one-by-one. Surely, this approach will lead to progressively increasing error rate due to human memory and cognitive function limitations. Large student group homework assessment verification can take long time, which is another contributing factor to errorrate increase. B. Diff-tool Application It is possible to compare two code fragments using semiautomated diff tool which provides information about Levenshtein distance between fragments. Although several visualization tools exist, it is quite easy to fool algorithm to believe that a code has multiple different elements in it, but all of them are actually another name for variables/functions/etc. without any additional contribution. C. Abstract Syntax Tree (AST) comparison Abstract syntax tree is a tree representation of the abstract syntactic structure of source code written in a programming language. Each node of the tree denotes a construct occurring in the source code. Example of AST is shown on Fig. 1.syntax tree is a tree representation of the abstract syntactic structure of source code written in a programming language. Each node of the tree denotes a construct occurring in the source code. 
Example of AST is shown on Fig. 1.",
"title": ""
},
{
"docid": "bfb189f8052f41fe1491d8d71f9586f1",
"text": "In this paper, we introduce a novel reconfigurable architecture, named 3D field-programmable gate array (3D nFPGA), which utilizes 3D integration techniques and new nanoscale materials synergistically. The proposed architecture is based on CMOS nanohybrid techniques that incorporate nanomaterials such as carbon nanotube bundles and nanowire crossbars into CMOS fabrication process. This architecture also has built-in features for fault tolerance and heat alleviation. Using unique features of FPGAs and a novel 3D stacking method enabled by the application of nanomaterials, 3D nFPGA obtains a 4x footprint reduction comparing to the traditional CMOS-based 2D FPGAs. With a customized design automation flow, we evaluate the performance and power of 3D nFPGA driven by the 20 largest MCNC benchmarks. Results demonstrate that 3D nFPGA is able to provide a performance gain of 2.6 x with a small power overhead comparing to the traditional 2D FPGA architecture.",
"title": ""
},
{
"docid": "627587e2503a2555846efb5f0bca833b",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "a4c76e58074a42133a59a31d9022450d",
"text": "This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.",
"title": ""
},
{
"docid": "4f91d43bf0185ddb5969d5bb13cb3b7e",
"text": "Man-made objects usually exhibit descriptive curved features (i.e., curve networks). The curve network of an object conveys its high-level geometric and topological structure. We present a framework for extracting feature curve networks from unstructured point cloud data. Our framework first generates a set of initial curved segments fitting highly curved regions. We then optimize these curved segments to respect both data fitting and structural regularities. Finally, the optimized curved segments are extended and connected into curve networks using a clustering method. To facilitate effectiveness in case of severe missing data and to resolve ambiguities, we develop a user interface for completing the curve networks. Experiments on various imperfect point cloud data validate the effectiveness of our curve network extraction framework. We demonstrate the usefulness of the extracted curve networks for surface reconstruction from incomplete point clouds.",
"title": ""
},
{
"docid": "062fb8603fe65ddde2be90bac0519f97",
"text": "Meta-heuristic methods represent very powerful tools for dealing with hard combinatorial optimization problems. However, real life instances usually cannot be treated efficiently in \"reasonable\" computing times. Moreover, a major issue in metaheuristic design and calibration is to make them robust, i.e., to provide high performance solutions for a variety of problem settings. Parallel meta-heuristics aim to address both issues. The objective of this chapter is to present a state-of-the-art survey of the main parallel meta-heuristic ideas and strategies, and to discuss general design principles applicable to all meta-heuristic classes. To achieve this goal, we explain various paradigms related to parallel meta-heuristic development, where communications, synchronization and control aspects are the most relevant. We also discuss implementation issues, namely the influence of the target architecture on parallel execution of meta-heuristics, pointing out the characteristics of shared and distributed memory multiprocessor systems. All these topics are illustrated by examples from recent literature. These examples are related to the parallelization of various meta-heuristic methods, but we focus here on Variable Neighborhood Search and Bee Colony Optimization.",
"title": ""
},
{
"docid": "1204d1695e39bb7897b6771c445d809e",
"text": "The known disorders of cholesterol biosynthesis have expanded rapidly since the discovery that Smith-Lemli-Opitz syndrome is caused by a deficiency of 7-dehydrocholesterol. Each of the six now recognized sterol disorders-mevalonic aciduria, Smith-Lemli-Opitz syndrome, desmosterolosis, Conradi-Hünermann syndrome, CHILD syndrome, and Greenberg dysplasia-has added to our knowledge of the relationship between cholesterol metabolism and embryogenesis. One of the most important lessons learned from the study of these disorders is that abnormal cholesterol metabolism impairs the function of the hedgehog class of embryonic signaling proteins, which help execute the vertebrate body plan during the earliest weeks of gestation. The study of the enzymes and genes in these several syndromes has also expanded and better delineated an important class of enzymes and proteins with diverse structural functions and metabolic actions that include sterol biosynthesis, nuclear transcriptional signaling, regulation of meiosis, and even behavioral modulation.",
"title": ""
},
{
"docid": "80ee585d49685a24a2011a1ddc27bb55",
"text": "A developmental model of antisocial behavior is outlined. Recent findings are reviewed that concern the etiology and course of antisocial behavior from early childhood through adolescence. Evidence is presented in support of the hypothesis that the route to chronic delinquency is marked by a reliable developmental sequence of experiences. As a first step, ineffective parenting practices are viewed as determinants for childhood conduct disorders. The general model also takes into account the contextual variables that influence the family interaction process. As a second step, the conduct-disordered behaviors lead to academic failure and peer rejection. These dual failures lead, in turn, to increased risk for depressed mood and involvement in a deviant peer group. This third step usually occurs during later childhood and early adolescence. It is assumed that children following this developmental sequence are at high risk for engaging in chronic delinquent behavior. Finally, implications for prevention and intervention are discussed.",
"title": ""
},
{
"docid": "3a4f8e1a1401bc77b9d847b69d461746",
"text": "This paper presents a family of techniques that we call congealing for modeling image classes from data. The idea is to start with a set of images and make them appear as similar as possible by removing variability along the known axes of variation. This technique can be used to eliminate \"nuisance\" variables such as affine deformations from handwritten digits or unwanted bias fields from magnetic resonance images. In addition to separating and modeling the latent images - i.e., the images without the nuisance variables - we can model the nuisance variables themselves, leading to factorized generative image models. When nuisance variable distributions are shared between classes, one can share the knowledge learned in one task with another task, leading to efficient learning. We demonstrate this process by building a handwritten digit classifier from just a single example of each class. In addition to applications in handwritten character recognition, we describe in detail the application of bias removal from magnetic resonance images. Unlike previous methods, we use a separate, nonparametric model for the intensity values at each pixel. This allows us to leverage the data from the MR images of different patients to remove bias from each other. Only very weak assumptions are made about the distributions of intensity values in the images. In addition to the digit and MR applications, we discuss a number of other uses of congealing and describe experiments about the robustness and consistency of the method.",
"title": ""
},
{
"docid": "93ae39ed7b4d6b411a2deb9967e2dc7d",
"text": "This paper presents fundamental results about how zero-curvature (paper) surfaces behave near creases and apices of cones. These entities are natural generalizations of the edges and vertices of piecewise-planar surfaces. Consequently, paper surfaces may furnish a richer and yet still tractable class of surfaces for computer-aided design and computer graphics applications than do polyhedral surfaces.",
"title": ""
},
{
"docid": "4aeefa15b326ed583c9f922d7b035ff6",
"text": "In this paper, we present a Self-Supervised Neural Aggregation Network (SS-NAN) for human parsing. SS-NAN adaptively learns to aggregate the multi-scale features at each pixel \"address\". In order to further improve the feature discriminative capacity, a self-supervised joint loss is adopted as an auxiliary learning strategy, which imposes human joint structures into parsing results without resorting to extra supervision. The proposed SS-NAN is end-to-end trainable. SS-NAN can be integrated into any advanced neural networks to help aggregate features regarding the importance at different positions and scales and incorporate rich high-level knowledge regarding human joint structures from a global perspective, which in turn improve the parsing results. Comprehensive evaluations on the recent Look into Person (LIP) and the PASCAL-Person-Part benchmark datasets demonstrate the significant superiority of our method over other state-of-the-arts.",
"title": ""
},
{
"docid": "37637ca24397aba35e1e4926f1a94c91",
"text": "We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNN that sweep the image horizontally and vertically in both directions, encoding patches or activations, and providing relevant global information. Moreover, ReNet layers are stacked on top of pre-trained convolutional layers, benefiting from generic local features. Upsampling layers follow ReNet layers to recover the original image resolution in the final predictions. The proposed ReSeg architecture is efficient, flexible and suitable for a variety of semantic segmentation tasks. We evaluate ReSeg on several widely-used semantic segmentation datasets: Weizmann Horse, Oxford Flower, and CamVid, achieving stateof-the-art performance. Results show that ReSeg can act as a suitable architecture for semantic segmentation tasks, and may have further applications in other structured prediction problems. The source code and model hyperparameters are available on https://github.com/fvisin/reseg.",
"title": ""
},
{
"docid": "9cbeb94f97635cf115e3c19986b0acce",
"text": "This paper presents a hybrid algorithm for parameter estimation of synchronous generator. For large-residual problems (i.e., f(x) is large or f(x) is severely nonlinear), the performance of the Gauss-Newton method and Levenberg-Marquardt method is usually poor, and the slow convergence even causes iteration emergence divergence. The Quasi-Newton method can superlinearly converge, but it is not robust in the global stage of the iteration. Hybrid algorithm combining the two methods above is proved globally convergent with a high convergence speed through the example of synchronous generator parameter identification.",
"title": ""
},
{
"docid": "094906bcd076ae3207ba04755851c73a",
"text": "The paper describes our approach for SemEval-2018 Task 1: Affect Detection in Tweets. We perform experiments with manually compelled sentiment lexicons and word embeddings. We test their performance on twitter affect detection task to determine which features produce the most informative representation of a sentence. We demonstrate that general-purpose word embeddings produces more informative sentence representation than lexicon features. However, combining lexicon features with embeddings yields higher performance than embeddings alone.",
"title": ""
},
{
"docid": "8a30f829e308cb75164d1a076fa99390",
"text": "This paper proposes a planning method based on forward path generation and backward tracking algorithm for Automatic Parking Systems, especially suitable for backward parking situations. The algorithm is based on the steering property that backward moving trajectory coincides with the forward moving trajectory for the identical steering angle. The basic path planning is divided into two segments: a collision-free locating segment and an entering segment that considers the continuous steering angles for connecting the two paths. MATLAB simulations were conducted, along with experiments involving parallel and perpendicular situations.",
"title": ""
},
{
"docid": "dfade03850a7e0d27c76994e606ed078",
"text": "History of mental illness is a major factor behind suicide risk and ideation. However research efforts toward characterizing and forecasting this risk is limited due to the paucity of information regarding suicide ideation, exacerbated by the stigma of mental illness. This paper fills gaps in the literature by developing a statistical methodology to infer which individuals could undergo transitions from mental health discourse to suicidal ideation. We utilize semi-anonymous support communities on Reddit as unobtrusive data sources to infer the likelihood of these shifts. We develop language and interactional measures for this purpose, as well as a propensity score matching based statistical approach. Our approach allows us to derive distinct markers of shifts to suicidal ideation. These markers can be modeled in a prediction framework to identify individuals likely to engage in suicidal ideation in the future. We discuss societal and ethical implications of this research.",
"title": ""
},
{
"docid": "5d546a8d21859a057d36cdbd3fa7f887",
"text": "In 1984, a prospective cohort study, Coronary Artery Risk Development in Young Adults (CARDIA) was initiated to investigate life-style and other factors that influence, favorably and unfavorably, the evolution of coronary heart disease risk factors during young adulthood. After a year of planning and protocol development, 5,116 black and white women and men, age 18-30 years, were recruited and examined in four urban areas: Birmingham, Alabama; Chicago, Illinois; Minneapolis, Minnesota, and Oakland, California. The initial examination included carefully standardized measurements of major risk factors as well as assessments of psychosocial, dietary, and exercise-related characteristics that might influence them, or that might be independent risk factors. This report presents the recruitment and examination methods as well as the mean levels of blood pressure, total plasma cholesterol, height, weight and body mass index, and the prevalence of cigarette smoking by age, sex, race and educational level. Compared to recent national samples, smoking is less prevalent in CARDIA participants, and weight tends to be greater. Cholesterol levels are representative and somewhat lower blood pressures in CARDIA are probably, at least in part, due to differences in measurement methods. Especially noteworthy among several differences in risk factor levels by demographic subgroup, were a higher body mass index among black than white women and much higher prevalence of cigarette smoking among persons with no more than a high school education than among those with more education.",
"title": ""
}
] | scidocsrr |
8a69f2cdc23badb693bf45b084f5a6b8 | Forecasting time series with complex seasonal patterns using exponential smoothing | [
{
"docid": "ca29fee64e9271e8fce675e970932af1",
"text": "This paper considers univariate online electricity demand forecasting for lead times from a half-hour-ahead to a day-ahead. A time series of demand recorded at half-hourly intervals contains more than one seasonal pattern. A within-day seasonal cycle is apparent from the similarity of the demand profile from one day to the next, and a within-week seasonal cycle is evident when one compares the demand on the corresponding day of adjacent weeks. There is strong appeal in using a forecasting method that is able to capture both seasonalities. The multiplicative seasonal ARIMA model has been adapted for this purpose. In this paper, we adapt the Holt-Winters exponential smoothing formulation so that it can accommodate two seasonalities. We correct for residual autocorrelation using a simple autoregressive model. The forecasts produced by the new double seasonal Holt-Winters method outperform those from traditional Holt-Winters and from a well-specified multiplicative double seasonal ARIMA model.",
"title": ""
}
] | [
{
"docid": "b1d2def5ce60ff9e787eb32a3b0431a6",
"text": "OSHA Region VIII office and the HBA of Metropolitan Denver who made this research possible and the Centers for Disease Control and Prevention, the National Institute for Occupational Safety and Health (NIOSH) for their support and funding via the awards 1 R03 OH04199-0: Occupational Low Back Pain in Residential Carpentry: Ergonomic Elements of Posture and Strain within the HomeSafe Pilot Program sponsored by OSHA and the HBA. Correspondence and requests for offprints should be sent to David P. Gilkey, Department of Environmental and Radiological Health Sciences, Colorado State University, Ft. Collins, CO 80523-1681, USA. E-mail: <[email protected]>. Low Back Pain Among Residential Carpenters: Ergonomic Evaluation Using OWAS and 2D Compression Estimation",
"title": ""
},
{
"docid": "cfd3548d7cf15b411b49eb77543d7903",
"text": "INTRODUCTION\nLiquid injectable silicone (LIS) has been used for soft tissue augmentation in excess of 50 years. Until recently, all literature on penile augmentation with LIS consisted of case reports or small cases series, most involving surgical intervention to correct the complications of LIS. New formulations of LIS and new methodologies for injection have renewed interest in this procedure.\n\n\nAIM\nWe reported a case of penile augmentation with LIS and reviewed the pertinent literature.\n\n\nMETHODS\nComprehensive literature review was performed using PubMed. We performed additional searches based on references from relevant review articles.\n\n\nRESULTS\nInjection of medical grade silicone for soft tissue augmentation has a role in carefully controlled study settings. Historically, the use of LIS for penile augmentation has had poor outcomes and required surgical intervention to correct complications resulting from LIS.\n\n\nCONCLUSIONS\nWe currently discourage the use of LIS for penile augmentation until carefully designed and evaluated trials have been completed.",
"title": ""
},
{
"docid": "e33129014269c9cf1579c5912f091916",
"text": "Cloud service brokerage has been identified as a key concern for future cloud technology development and research. We compare service brokerage solutions. A range of specific concerns like architecture, programming and quality will be looked at. We apply a 2-pronged classification and comparison framework. We will identify challenges and wider research objectives based on an identification of cloud broker architecture concerns and technical requirements for service brokerage solutions. We will discuss complex cloud architecture concerns such as commoditisation and federation of integrated, vertical cloud stacks.",
"title": ""
},
{
"docid": "4f42f1a6a9804f292b81313d9e8e04bf",
"text": "An integrated high performance, highly reliable, scalable, and secure communications network is critical for the successful deployment and operation of next-generation electricity generation, transmission, and distribution systems — known as “smart grids.” Much of the work done to date to define a smart grid communications architecture has focused on high-level service requirements with little attention to implementation challenges. This paper investigates in detail a smart grid communication network architecture that supports today's grid applications (such as supervisory control and data acquisition [SCADA], mobile workforce communication, and other voice and data communication) and new applications necessitated by the introduction of smart metering and home area networking, support of demand response applications, and incorporation of renewable energy sources in the grid. We present design principles for satisfying the diverse quality of service (QoS) and reliability requirements of smart grids.",
"title": ""
},
{
"docid": "c724224060408a1e13b135cb7c2bb9e4",
"text": "Large datasets are increasingly common and are often difficult to interpret. Principal component analysis (PCA) is a technique for reducing the dimensionality of such datasets, increasing interpretability but at the same time minimizing information loss. It does so by creating new uncorrelated variables that successively maximize variance. Finding such new variables, the principal components, reduces to solving an eigenvalue/eigenvector problem, and the new variables are defined by the dataset at hand, not a priori, hence making PCA an adaptive data analysis technique. It is adaptive in another sense too, since variants of the technique have been developed that are tailored to various different data types and structures. This article will begin by introducing the basic ideas of PCA, discussing what it can and cannot do. It will then describe some variants of PCA and their application.",
"title": ""
},
{
"docid": "f296b374b635de4f4c6fc9c6f415bf3e",
"text": "People increasingly use the Internet for obtaining information regarding diseases, diagnoses and available treatments. Currently, many online health portals already provide non-personalized health information in the form of articles. However, it can be challenging to find information relevant to one's condition, interpret this in context, and understand the medical terms and relationships. Recommender Systems (RS) already help these systems perform precise information filtering. In this short paper, we look one step ahead and show the progress made towards RS helping users find personalized, complex medical interventions or support them with preventive healthcare measures. We identify key challenges that need to be addressed for RS to offer the kind of decision support needed in high-risk domains like healthcare.",
"title": ""
},
{
"docid": "8c51c464d9137eec4600a5df5c6b451a",
"text": "An increasing number of disasters (natural and man-made) with a large number of victims and significant social and economical losses are observed in the past few years. Although particular events can always be attributed to fate, it is improving the disaster management that have to contribute to decreasing damages and ensuring proper care for citizens in affected areas. Some of the lessons learned in the last several years give clear indications that the availability, management and presentation of geo-information play a critical role in disaster management. However, all the management techniques that are being developed are understood by, and confined to the intellectual community and hence lack mass participation. Awareness of the disasters is the only effective way in which one can bring about mass participation. Hence, any disaster management is successful only when the general public has some awareness about the disaster. In the design of such awareness program, intelligent mapping through analysis and data sharing also plays a very vital role. The analytical capabilities of GIS support all aspects of disaster management: planning, response and recovery, and records management. The proposed GIS based awareness program in this paper would improve the currently practiced disaster management programs and if implemented, would result in a proper dosage of awareness and caution to the general public, which in turn would help to cope with the dangerous activities of disasters in future.",
"title": ""
},
{
"docid": "c2e0b234898df278ee57ae5827faadeb",
"text": "In this paper, we consider the problem of single image super-resolution and propose a novel algorithm that outperforms state-of-the-art methods without the need of learning patches pairs from external data sets. We achieve this by modeling images and, more precisely, lines of images as piecewise smooth functions and propose a resolution enhancement method for this type of functions. The method makes use of the theory of sampling signals with finite rate of innovation (FRI) and combines it with traditional linear reconstruction methods. We combine the two reconstructions by leveraging from the multi-resolution analysis in wavelet theory and show how an FRI reconstruction and a linear reconstruction can be fused using filter banks. We then apply this method along vertical, horizontal, and diagonal directions in an image to obtain a single-image super-resolution algorithm. We also propose a further improvement of the method based on learning from the errors of our super-resolution result at lower resolution levels. Simulation results show that our method outperforms state-of-the-art algorithms under different blurring kernels.",
"title": ""
},
{
"docid": "d612aeb7f7572345bab8609571f4030d",
"text": "In conventional supervised training, a model is trained to fit all the training examples. However, having a monolithic model may not always be the best strategy, as examples could vary widely. In this work, we explore a different learning protocol that treats each example as a unique pseudo-task, by reducing the original learning problem to a few-shot meta-learning scenario with the help of a domain-dependent relevance function.1 When evaluated on the WikiSQL dataset, our approach leads to faster convergence and achieves 1.1%–5.4% absolute accuracy gains over the non-meta-learning counterparts.",
"title": ""
},
{
"docid": "f8d256bf6fea179847bfb4cc8acd986d",
"text": "We present a logic for stating properties such as, “after a request for service there is at least a 98% probability that the service will be carried out within 2 seconds”. The logic extends the temporal logic CTL by Emerson, Clarke and Sistla with time and probabilities. Formulas are interpreted over discrete time Markov chains. We give algorithms for checking that a given Markov chain satisfies a formula in the logic. The algorithms require a polynomial number of arithmetic operations, in size of both the formula and the Markov chain. A simple example is included to illustrate the algorithms.",
"title": ""
},
{
"docid": "cccecb08c92f8bcec4a359373a20afcb",
"text": "To solve the problem of the false matching and low robustness in detecting copy-move forgeries, a new method was proposed in this study. It involves the following steps: first, establish a Gaussian scale space; second, extract the orientated FAST key points and the ORB features in each scale space; thirdly, revert the coordinates of the orientated FAST key points to the original image and match the ORB features between every two different key points using the hamming distance; finally, remove the false matched key points using the RANSAC algorithm and then detect the resulting copy-move regions. The experimental results indicate that the new algorithm is effective for geometric transformation, such as scaling and rotation, and exhibits high robustness even when an image is distorted by Gaussian blur, Gaussian white noise and JPEG recompression; the new algorithm even has great detection on the type of hiding object forgery.",
"title": ""
},
{
"docid": "65b2d6ea5e1089c52378b4fd6386224c",
"text": "In traffic environment, conventional FMCW radar with triangular transmit waveform may bring out many false targets in multi-target situations and result in a high false alarm rate. An improved FMCW waveform and multi-target detection algorithm for vehicular applications is presented. The designed waveform in each small cycle is composed of two-segment: LFM section and constant frequency section. They have the same duration, yet in two adjacent small cycles the two LFM slopes are opposite sign and different size. Then the two adjacent LFM bandwidths are unequal. Within a determinate frequency range, the constant frequencies are modulated by a unique PN code sequence for different automotive radar in a big period. Corresponding to the improved waveform, which combines the advantages of both FSK and FMCW formats, a judgment algorithm is used in the continuous small cycle to further eliminate the false targets. The combination of unambiguous ranges and relative velocities can confirm and cancel most false targets in two adjacent small cycles.",
"title": ""
},
{
"docid": "9abd7aedf336f32abed7640dd3f4d619",
"text": "BACKGROUND\nAlthough evidence-based and effective treatments are available for people with depression, a substantial number does not seek or receive help. Therefore, it is important to gain a better understanding of the reasons why people do or do not seek help. This study examined what predisposing and need factors are associated with help-seeking among people with major depression.\n\n\nMETHODS\nA cross-sectional study was conducted in 102 subjects with major depression. Respondents were recruited from the general population in collaboration with three Municipal Health Services (GGD) across different regions in the Netherlands. Inclusion criteria were: being aged 18 years or older, a high score on a screening instrument for depression (K10 > 20), and a diagnosis of major depression established through the Composite International Diagnostic Interview (CIDI 2.1).\n\n\nRESULTS\nOf the total sample, 65 % (n = 66) had received help in the past six months. Results showed that respondents with a longer duration of symptoms and those with lower personal stigma were more likely to seek help. Other determinants were not significantly related to help-seeking.\n\n\nCONCLUSIONS\nLonger duration of symptoms was found to be an important determinant of help-seeking among people with depression. It is concerning that stigma was related to less help-seeking. Knowledge and understanding of depression should be promoted in society, hopefully leading to reduced stigma and increased help-seeking.",
"title": ""
},
{
"docid": "dc75c32aceb78acd8267e7af442b992c",
"text": "While pulmonary embolism (PE) causes approximately 100 000-180 000 deaths per year in the United States, mortality is restricted to patients who have massive or submassive PEs. This state of the art review familiarizes the reader with these categories of PE. The review discusses the following topics: pathophysiology, clinical presentation, rationale for stratification, imaging, massive PE management and outcomes, submassive PE management and outcomes, and future directions. It summarizes the most up-to-date literature on imaging, systemic thrombolysis, surgical embolectomy, and catheter-directed therapy for submassive and massive PE and gives representative examples that reflect modern practice. © RSNA, 2017.",
"title": ""
},
{
"docid": "25d913188ee5790d5b3a9f5fb8b68dda",
"text": "RPL, the routing protocol proposed by IETF for IPv6/6LoWPAN Low Power and Lossy Networks has significant complexity. Another protocol called LOADng, a lightweight variant of AODV, emerges as an alternative solution. In this paper, we compare the performance of the two protocols in a Home Automation scenario with heterogenous traffic patterns including a mix of multipoint-to-point and point-to-multipoint routes in realistic dense non-uniform network topologies. We use Contiki OS and Cooja simulator to evaluate the behavior of the ContikiRPL implementation and a basic non-optimized implementation of LOADng. Unlike previous studies, our results show that RPL provides shorter delays, less control overhead, and requires less memory than LOADng. Nevertheless, enhancing LOADng with more efficient flooding and a better route storage algorithm may improve its performance.",
"title": ""
},
{
"docid": "5124bfe94345f2abe6f91fe717731945",
"text": "Recently, IT trends such as big data, cloud computing, internet of things (IoT), 3D visualization, network, and so on demand terabyte/s bandwidth computer performance in a graphics card. In order to meet these performance, terabyte/s bandwidth graphics module using 2.5D-IC with high bandwidth memory (HBM) technology has been emerged. Due to the difference in scale of interconnect pitch between GPU or HBM and package substrate, the HBM interposer is certainly required for terabyte/s bandwidth graphics module. In this paper, the electrical performance of the HBM interposer channel in consideration of the manufacturing capabilities is analyzed by simulation both the frequency- and time-domain. Furthermore, although the silicon substrate is most widely employed for the HBM interposer fabrication, the organic and glass substrate are also proposed to replace the high cost and high loss silicon substrate. Therefore, comparison and analysis of the electrical performance of the HBM interposer channel using silicon, organic, and glass substrate are conducted.",
"title": ""
},
{
"docid": "342b57da0f0fcf190f926dfe0744977d",
"text": "Spike timing-dependent plasticity (STDP) as a Hebbian synaptic learning rule has been demonstrated in various neural circuits over a wide spectrum of species, from insects to humans. The dependence of synaptic modification on the order of pre- and postsynaptic spiking within a critical window of tens of milliseconds has profound functional implications. Over the past decade, significant progress has been made in understanding the cellular mechanisms of STDP at both excitatory and inhibitory synapses and of the associated changes in neuronal excitability and synaptic integration. Beyond the basic asymmetric window, recent studies have also revealed several layers of complexity in STDP, including its dependence on dendritic location, the nonlinear integration of synaptic modification induced by complex spike trains, and the modulation of STDP by inhibitory and neuromodulatory inputs. Finally, the functional consequences of STDP have been examined directly in an increasing number of neural circuits in vivo.",
"title": ""
},
{
"docid": "58fffa67053a82875177f32e126c2e43",
"text": "Cracking-resistant password vaults have been recently proposed with the goal of thwarting offline attacks. This requires the generation of synthetic password vaults that are statistically indistinguishable from real ones. In this work, we establish a conceptual link between this problem and steganography, where the stego objects must be undetectable among cover objects. We compare the two frameworks and highlight parallels and differences. Moreover, we transfer results obtained in the steganography literature into the context of decoy generation. Our results include the infeasibility of perfectly secure decoy vaults and the conjecture that secure decoy vaults are at least as hard to construct as secure steganography.",
"title": ""
},
{
"docid": "49a54c57984c3feaef32b708ae328109",
"text": "While it has a long history, the last 30 years have brought considerable advances to the discipline of forensic anthropology worldwide. Every so often it is essential that these advances are noticed and trends assessed. It is also important to identify those research areas that are needed for the forthcoming years. The purpose of this special issue is to examine some of the examples of research that might identify the trends in the 21st century. Of the 14 papers 5 dealt with facial features and identification such as facial profile determination and skull-photo superimposition. Age (fetus and cranial thickness), sex (supranasal region, arm and leg bones) and stature (from the arm bones) estimation were represented by five articles. Others discussed the estimation of time since death, skull color and diabetes, and a case study dealing with a mummy and skeletal analysis in comparison with DNA identification. These papers show that age, sex, and stature are still important issues of the discipline. Research on the human face is moving from hit and miss case studies to a more scientifically sound direction. A lack of studies on trauma and taphonomy is very clear. Anthropologists with other scientists can develop research areas to make the identification process more reliable. Research should include the assessment of animal attacks on human remains, factors affecting decomposition rates, and aging of the human face. Lastly anthropologists should be involved in the education of forensic pathologists about osteological techniques and investigators regarding archaeology of crime scenes.",
"title": ""
}
] | scidocsrr |
670e509f17f1f032a90f88c1dcfc2d9b | A Warning System for Obstacle Detection at Vehicle Lateral Blind Spot Area | [
{
"docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0",
"text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.",
"title": ""
}
] | [
{
"docid": "b3edfd5b56831080a663faeb0e159627",
"text": "Because wireless sensor networks (WSNs) are becoming increasingly integrated into daily life, solving the energy efficiency problem of such networks is an urgent problem. Many energy-efficient algorithms have been proposed to reduce energy consumption in traditional WSNs. The emergence of software-defined networks (SDNs) enables the transformation of WSNs. Some SDN-based WSNs architectures have been proposed and energy-efficient algorithms in SDN-based WSNs architectures have been studied. In this paper, we integrate an SDN into WSNs and an improved software-defined WSNs (SD-WSNs) architecture is presented. Based on the improved SD-WSNs architecture, we propose an energy-efficient algorithm. This energy-efficient algorithm is designed to match the SD-WSNs architecture, and is based on the residual energy and the transmission power, and the game theory is introduced to extend the network lifetime. Based on the SD-WSNs architecture and the energy-efficient algorithm, we provide a detailed introduction to the operating mechanism of the algorithm in the SD-WSNs. The simulation results show that our proposed algorithm performs better in terms of balancing energy consumption and extending the network lifetime compared with the typical energy-efficient algorithms in traditional WSNs.",
"title": ""
},
{
"docid": "338dcbb45ff0c1752eeb34ec1be1babe",
"text": "I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural",
"title": ""
},
{
"docid": "fae65e55a1a670738d39a3d2db279ceb",
"text": "This paper presents a method to extract tone relevant features based on pitch flux from continuous speech signal. The autocorrelations of two adjacent frames are calculated and the covariance between them is estimated to extract multi-dimensional pitch flux features. These features, together with MFCCs, are modeled in a 2-stream GMM models, and are tested in a 3-dialect identification task for Chinese. The pitch flux features have shown to be very effective in identifying tonal languages with short speech segments. For the test speech segments of 3 seconds, 2-stream model achieves more than 30% error reduction over MFCC-based model",
"title": ""
},
{
"docid": "cf3ee200705e8bb564303bd758e8e235",
"text": "The current state of the art in playing many important perfect information games, including Chess and Go, combines planning and deep reinforcement learning with self-play. We extend this approach to imperfect information games and present ExIt-OOS, a novel approach to playing imperfect information games within the Expert Iteration framework and inspired by AlphaZero. We use Online Outcome Sampling, an online search algorithm for imperfect information games in place of MCTS. While training online, our neural strategy is used to improve the accuracy of playouts in OOS, allowing a learning and planning feedback loop for imperfect information games.",
"title": ""
},
{
"docid": "dd545adf1fba52e794af4ee8de34fc60",
"text": "We propose solving continuous parametric simulation optimizations using a deterministic nonlinear optimization algorithm and sample-path simulations. The optimization problem is written in a modeling language with a simulation module accessed with an external function call. Since we allow no changes to the simulation code at all, we propose using a quadratic approximation of the simulation function to obtain derivatives. Results on three different queueing models are presented that show our method to be effective on a variety of practical problems.",
"title": ""
},
{
"docid": "2e40682bca56659428d2919191e1cbf3",
"text": "Single-cell RNA-Seq (scRNA-Seq) has attracted much attention recently because it allows unprecedented resolution into cellular activity; the technology, therefore, has been widely applied in studying cell heterogeneity such as the heterogeneity among embryonic cells at varied developmental stages or cells of different cancer types or subtypes. A pertinent question in such analyses is to identify cell subpopulations as well as their associated genetic drivers. Consequently, a multitude of approaches have been developed for clustering or biclustering analysis of scRNA-Seq data. In this article, we present a fast and simple iterative biclustering approach called \"BiSNN-Walk\" based on the existing SNN-Cliq algorithm. One of BiSNN-Walk's differentiating features is that it returns a ranked list of clusters, which may serve as an indicator of a cluster's reliability. Another important feature is that BiSNN-Walk ranks genes in a gene cluster according to their level of affiliation to the associated cell cluster, making the result more biologically interpretable. We also introduce an entropy-based measure for choosing a highly clusterable similarity matrix as our starting point among a wide selection to facilitate the efficient operation of our algorithm. We applied BiSNN-Walk to three large scRNA-Seq studies, where we demonstrated that BiSNN-Walk was able to retain and sometimes improve the cell clustering ability of SNN-Cliq. We were able to obtain biologically sensible gene clusters in terms of GO term enrichment. In addition, we saw that there was significant overlap in top characteristic genes for clusters corresponding to similar cell states, further demonstrating the fidelity of our gene clusters.",
"title": ""
},
{
"docid": "0b1e0145affcdf2ff46580d9e5615211",
"text": "Traditional topic models do not account for semantic regularities in language. Recent distributional representations of words exhibit semantic consistency over directional metrics such as cosine similarity. However, neither categorical nor Gaussian observational distributions used in existing topic models are appropriate to leverage such correlations. In this paper, we propose to use the von Mises-Fisher distribution to model the density of words over a unit sphere. Such a representation is well-suited for directional data. We use a Hierarchical Dirichlet Process for our base topic model and propose an efficient inference algorithm based on Stochastic Variational Inference. This model enables us to naturally exploit the semantic structures of word embeddings while flexibly discovering the number of topics. Experiments demonstrate that our method outperforms competitive approaches in terms of topic coherence on two different text corpora while offering efficient inference.",
"title": ""
},
{
"docid": "b26882cddec1690e3099757e835275d2",
"text": "Accumulating evidence suggests that, independent of physical activity levels, sedentary behaviours are associated with increased risk of cardio-metabolic disease, all-cause mortality, and a variety of physiological and psychological problems. Therefore, the purpose of this systematic review is to determine the relationship between sedentary behaviour and health indicators in school-aged children and youth aged 5-17 years. Online databases (MEDLINE, EMBASE and PsycINFO), personal libraries and government documents were searched for relevant studies examining time spent engaging in sedentary behaviours and six specific health indicators (body composition, fitness, metabolic syndrome and cardiovascular disease, self-esteem, pro-social behaviour and academic achievement). 232 studies including 983,840 participants met inclusion criteria and were included in the review. Television (TV) watching was the most common measure of sedentary behaviour and body composition was the most common outcome measure. Qualitative analysis of all studies revealed a dose-response relation between increased sedentary behaviour and unfavourable health outcomes. Watching TV for more than 2 hours per day was associated with unfavourable body composition, decreased fitness, lowered scores for self-esteem and pro-social behaviour and decreased academic achievement. Meta-analysis was completed for randomized controlled studies that aimed to reduce sedentary time and reported change in body mass index (BMI) as their primary outcome. In this regard, a meta-analysis revealed an overall significant effect of -0.81 (95% CI of -1.44 to -0.17, p = 0.01) indicating an overall decrease in mean BMI associated with the interventions. There is a large body of evidence from all study designs which suggests that decreasing any type of sedentary time is associated with lower health risk in youth aged 5-17 years. In particular, the evidence suggests that daily TV viewing in excess of 2 hours is associated with reduced physical and psychosocial health, and that lowering sedentary time leads to reductions in BMI.",
"title": ""
},
{
"docid": "4dd2fc66b1a2f758192b02971476b4cc",
"text": "Although efforts have been directed toward the advancement of women in science, technology, engineering, and mathematics (STEM) positions, little research has directly examined women's perspectives and bottom-up strategies for advancing in male-stereotyped disciplines. The present study utilized Photovoice, a Participatory Action Research method, to identify themes that underlie women's experiences in traditionally male-dominated fields. Photovoice enables participants to convey unique aspects of their experiences via photographs and their in-depth knowledge of a community through personal narrative. Forty-six STEM women graduate students and postdoctoral fellows completed a Photovoice activity in small groups. They presented photographs that described their experiences pursuing leadership positions in STEM fields. Three types of narratives were discovered and classified: career strategies, barriers to achievement, and buffering strategies or methods for managing barriers. Participants described three common types of career strategies and motivational factors, including professional development, collaboration, and social impact. Moreover, the lack of rewards for these workplace activities was seen as limiting professional effectiveness. In terms of barriers to achievement, women indicated they were not recognized as authority figures and often worked to build legitimacy by fostering positive relationships. Women were vigilant to other people's perspectives, which was costly in terms of time and energy. To manage role expectations, including those related to gender, participants engaged in numerous role transitions throughout their day to accommodate workplace demands. To buffer barriers to achievement, participants found resiliency in feelings of accomplishment and recognition. Social support, particularly from mentors, helped participants cope with negative experiences and to envision their future within the field. Work-life balance also helped participants find meaning in their work and have a sense of control over their lives. Overall, common workplace challenges included a lack of social capital and limited degrees of freedom. Implications for organizational policy and future research are discussed.",
"title": ""
},
{
"docid": "0ae071bc719fdaac34a59991e66ab2b8",
"text": "It has recently been shown in a brain-computer interface experiment that motor cortical neurons change their tuning properties selectively to compensate for errors induced by displaced decoding parameters. In particular, it was shown that the three-dimensional tuning curves of neurons whose decoding parameters were reassigned changed more than those of neurons whose decoding parameters had not been reassigned. In this article, we propose a simple learning rule that can reproduce this effect. Our learning rule uses Hebbian weight updates driven by a global reward signal and neuronal noise. In contrast to most previously proposed learning rules, this approach does not require extrinsic information to separate noise from signal. The learning rule is able to optimize the performance of a model system within biologically realistic periods of time under high noise levels. Furthermore, when the model parameters are matched to data recorded during the brain-computer interface learning experiments described above, the model produces learning effects strikingly similar to those found in the experiments.",
"title": ""
},
{
"docid": "4f87b93eb66b7126c53ee8126151f77f",
"text": "We propose a convolutional neural network architecture with k-max pooling layer for semantic modeling of music. The aim of a music model is to analyze and represent the semantic content of music for purposes of classification, discovery, or clustering. The k-max pooling layer is used in the network to make it possible to pool the k most active features, capturing the semantic-rich and time-varying information about music. Our network takes an input music as a sequence of audio words, where each audio word is associated with a distributed feature vector that can be fine-tuned by backpropagating errors during the training. The architecture allows us to take advantage of the better trained audio word embeddings and the deep structures to produce more robust music representations. Experiment results with two different music collections show that our neural networks achieved the best accuracy in music genre classification comparing with three state-of-art systems.",
"title": ""
},
{
"docid": "e711f9f57e1c3c22c762bf17cb6afd2b",
"text": "Qualitative research methodology has become an established part of the medical education research field. A very popular data-collection technique used in qualitative research is the \"focus group\". Focus groups in this Guide are defined as \"… group discussions organized to explore a specific set of issues … The group is focused in the sense that it involves some kind of collective activity … crucially, focus groups are distinguished from the broader category of group interview by the explicit use of the group interaction as research data\" (Kitzinger 1994, p. 103). This Guide has been designed to provide people who are interested in using focus groups with the information and tools to organize, conduct, analyze and publish sound focus group research within a broader understanding of the background and theoretical grounding of the focus group method. The Guide is organized as follows: Firstly, to describe the evolution of the focus group in the social sciences research domain. Secondly, to describe the paradigmatic fit of focus groups within qualitative research approaches in the field of medical education. After defining, the nature of focus groups and when, and when not, to use them, the Guide takes on a more practical approach, taking the reader through the various steps that need to be taken in conducting effective focus group research. Finally, the Guide finishes with practical hints towards writing up a focus group study for publication.",
"title": ""
},
{
"docid": "f7bc42beb169e42496b674c918541865",
"text": "Brain endothelial cells are unique among endothelial cells in that they express apical junctional complexes, including tight junctions, which quite resemble epithelial tight junctions both structurally and functionally. They form the blood-brain-barrier (BBB) which strictly controls the exchanges between the blood and the brain compartments by limiting passive diffusion of blood-borne solutes while actively transporting nutrients to the brain. Accumulating experimental and clinical evidence indicate that BBB dysfunctions are associated with a number of serious CNS diseases with important social impacts, such as multiple sclerosis, stroke, brain tumors, epilepsy or Alzheimer's disease. This review will focus on the implication of brain endothelial tight junctions in BBB architecture and physiology, will discuss the consequences of BBB dysfunction in these CNS diseases and will present some therapeutic strategies for drug delivery to the brain across the BBB.",
"title": ""
},
{
"docid": "8ea6c4957443916c2102f8a173f9d3dc",
"text": "INTRODUCTION\nOpioid overdose fatality has increased threefold since 1999. As a result, prescription drug overdose surpassed motor vehicle collision as the leading cause of unintentional injury-related death in the USA. Naloxone , an opioid antagonist that has been available for decades, can safely reverse opioid overdose if used promptly and correctly. However, clinicians often overestimate the dose of naloxone needed to achieve the desired clinical outcome, precipitating acute opioid withdrawal syndrome (OWS).\n\n\nAREAS COVERED\nThis article provides a comprehensive review of naloxone's pharmacologic properties and its clinical application to promote the safe use of naloxone in acute management of opioid intoxication and to mitigate the risk of precipitated OWS. Available clinical data on opioid-receptor kinetics that influence the reversal of opioid agonism by naloxone are discussed. Additionally, the legal and social barriers to take home naloxone programs are addressed.\n\n\nEXPERT OPINION\nNaloxone is an intrinsically safe drug, and may be administered in large doses with minimal clinical effect in non-opioid-dependent patients. However, when administered to opioid-dependent patients, naloxone can result in acute opioid withdrawal. Therefore, it is prudent to use low-dose naloxone (0.04 mg) with appropriate titration to reverse ventilatory depression in this population.",
"title": ""
},
{
"docid": "1fb0344be6a5da582e0563dceca70d44",
"text": "Self-mutilating behaviors could be minor and benign, but more severe cases are usually associated with psychiatric disorders or with acquired nervous system lesions and could be life-threatening. The patient was a 66-year-old man who had been mutilating his fingers for 6 years. This behavior started as serious nail biting and continued as severe finger mutilation (by biting), resulting in loss of the terminal phalanges of all fingers in both hands. On admission, he complained only about insomnia. The electromyography showed severe peripheral nerve damage in both hands and feet caused by severe diabetic neuropathy. Cognitive decline was not established (Mini Mental State Examination score, 28), although the computed tomographic scan revealed serious brain atrophy. He was given a diagnosis of impulse control disorder not otherwise specified. His impulsive biting improved markedly when low doses of haloperidol (1.5 mg/day) were added to fluoxetine (80 mg/day). In our patient's case, self-mutilating behavior was associated with severe diabetic neuropathy, impulsivity, and social isolation. The administration of a combination of an antipsychotic and an antidepressant proved to be beneficial.",
"title": ""
},
{
"docid": "03eabf03f8ac967c728ff35b77f3dd84",
"text": "In this paper, we tackle the problem of associating combinations of colors to abstract categories (e.g. capricious, classic, cool, delicate, etc.). It is evident that such concepts would be difficult to distinguish using single colors, therefore we consider combinations of colors or color palettes. We leverage two novel databases for color palettes and we learn categorization models using low and high level descriptors. Preliminary results show that Fisher representation based on GMMs is the most rewarding strategy in terms of classification performance over a baseline model. We also suggest a process for cleaning weakly annotated data, whilst preserving the visual coherence of categories. Finally, we demonstrate how learning abstract categories on color palettes can be used in the application of color transfer, personalization and image re-ranking.",
"title": ""
},
{
"docid": "e5c625ceaf78c66c2bfb9562970c09ec",
"text": "A continuing question in neural net research is the size of network needed to solve a particular problem. If training is started with too small a network for the problem no learning can occur. The researcher must then go through a slow process of deciding that no learning is taking place, increasing the size of the network and training again. If a network that is larger than required is used, then processing is slowed, particularly on a conventional von Neumann computer. An approach to this problem is discussed that is based on learning with a net which is larger than the minimum size network required to solve the problem and then pruning the solution network. The result is a small, efficient network that performs as well or better than the original which does not give a complete answer to the question, since the size of the initial network is still largely based on guesswork but it gives a very useful partial answer and sheds some light on the workings of a neural network in the process.<<ETX>>",
"title": ""
},
{
"docid": "d272cf01340c8dcc3c24651eaf876926",
"text": "We propose a new method for learning from a single demonstration to solve hard exploration tasks like the Atari game Montezuma’s Revenge. Instead of imitating human demonstrations, as proposed in other recent works, our approach is to maximize rewards directly. Our agent is trained using off-the-shelf reinforcement learning, but starts every episode by resetting to a state from a demonstration. By starting from such demonstration states, the agent requires much less exploration to learn a game compared to when it starts from the beginning of the game at every episode. We analyze reinforcement learning for tasks with sparse rewards in a simple toy environment, where we show that the run-time of standard RL methods scales exponentially in the number of states between rewards. Our method reduces this to quadratic scaling, opening up many tasks that were previously infeasible. We then apply our method to Montezuma’s Revenge, for which we present a trained agent achieving a high-score of 74,500, better than any previously published result.",
"title": ""
},
{
"docid": "2b569d086698cffc0cba2dc3fe0ab8a6",
"text": "Home security should be a top concern for everyone who owns or rents a home. Moreover, safe and secure residential space is the necessity of every individual as most of the family members are working. The home is left unattended for most of the day-time and home invasion crimes are at its peak as constantly monitoring of the home is difficult. Another reason for the need of home safety is specifically when the elderly person is alone or the kids are with baby-sitter and servant. Home security system i.e. HomeOS is thus applicable and desirable for resident’s safety and convenience. This will be achieved by turning your home into a smart home by intelligent remote monitoring. Smart home comes into picture for the purpose of controlling and monitoring the home. It will give you peace of mind, as you can have a close watch and stay connected anytime, anywhere. But, is common man really concerned about home security? An investigative study was done by conducting a survey to get the inputs from different people from diverse backgrounds. The main motivation behind this survey was to make people aware of advanced HomeOS and analyze their need for security. This paper also studied the necessity of HomeOS investigative study in current situation where the home burglaries are rising at an exponential rate. In order to arrive at findings and conclusions, data were analyzed. The graphical method was employed to identify the relative significance of home security. From this analysis, we can infer that the cases of having kids and aged person at home or location of home contribute significantly to the need of advanced home security system. At the end, the proposed system model with its flow and the challenges faced while implementing home security systems are also discussed.",
"title": ""
},
{
"docid": "da088acea8b1d2dc68b238e671649f4f",
"text": "Water is a naturally circulating resource that is constantly recharged. Therefore, even though the stocks of water in natural and artificial reservoirs are helpful to increase the available water resources for human society, the flow of water should be the main focus in water resources assessments. The climate system puts an upper limit on the circulation rate of available renewable freshwater resources (RFWR). Although current global withdrawals are well below the upper limit, more than two billion people live in highly water-stressed areas because of the uneven distribution of RFWR in time and space. Climate change is expected to accelerate water cycles and thereby increase the available RFWR. This would slow down the increase of people living under water stress; however, changes in seasonal patterns and increasing probability of extreme events may offset this effect. Reducing current vulnerability will be the first step to prepare for such anticipated changes.",
"title": ""
}
] | scidocsrr |
f6489b25ff7c3f5aa56afd450b184e34 | To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem? | [
{
"docid": "80e4748abbb22d2bfefa5e5cbd78fb86",
"text": "A reimplementation of the UNIX file system is described. The reimplementation provides substantially higher throughput rates by using more flexible allocation policies that allow better locality of reference and can be adapted to a wide range of peripheral and processor characteristics. The new file system clusters data that is sequentially accessed and provides tw o block sizes to allo w fast access to lar ge files while not wasting large amounts of space for small files. File access rates of up to ten times f aster than the traditional UNIX file system are e xperienced. Longneeded enhancements to the programmers’ interface are discussed. These include a mechanism to place advisory locks on files, extensions of the name space across file systems, the ability to use long file names, and provisions for administrati ve control of resource usage. Revised February 18, 1984 CR",
"title": ""
}
] | [
{
"docid": "e603d2a71580691cf6a61f0e892127cc",
"text": "Advances in tourism economics have enabled us to collect massive amounts of travel tour data. If properly analyzed, this data can be a source of rich intelligence for providing real-time decision making and for the provision of travel tour recommendations. However, tour recommendation is quite different from traditional recommendations, because the tourist's choice is directly affected by the travel cost, which includes the financial cost and the time. To that end, in this paper, we provide a focused study of cost-aware tour recommendation. Along this line, we develop two cost-aware latent factor models to recommend travel packages by considering both the travel cost and the tourist's interests. Specifically, we first design a cPMF model, which models the tourist's cost with a 2-dimensional vector. Also, in this cPMF model, the tourist's interests and the travel cost are learnt by exploring travel tour data. Furthermore, in order to model the uncertainty in the travel cost, we further introduce a Gaussian prior into the cPMF model and develop the GcPMF model, where the Gaussian prior is used to express the uncertainty of the travel cost. Finally, experiments on real-world travel tour data show that the cost-aware recommendation models outperform state-of-the-art latent factor models with a significant margin. Also, the GcPMF model with the Gaussian prior can better capture the impact of the uncertainty of the travel cost, and thus performs better than the cPMF model.",
"title": ""
},
{
"docid": "0d733d7f0782bfaf245bf344a46b58b8",
"text": "Smart Cities rely on the use of ICTs for a more efficient and intelligent use of resources, whilst improving citizens' quality of life and reducing the environmental footprint. As far as the livability of cities is concerned, traffic is one of the most frequent and complex factors directly affecting citizens. Particularly, drivers in search of a vacant parking spot are a non-negligible source of atmospheric and acoustic pollution. Although some cities have installed sensor-based vacant parking spot detectors in some neighbourhoods, the cost of this approach makes it unfeasible at large scale. As an approach to implement a sustainable solution to the vacant parking spot detection problem in urban environments, this work advocates fusing the information from small-scale sensor-based detectors with that obtained from exploiting the widely-deployed video surveillance camera networks. In particular, this paper focuses on how video analytics can be exploited as a prior step towards Smart City solutions based on data fusion. Through a set of experiments carefully planned to replicate a real-world scenario, the vacant parking spot detection success rate of the proposed system is evaluated through a critical comparison of local and global visual features (either alone or fused at feature level) and different classifier systems applied to the task. Furthermore, the system is tested under setup scenarios of different complexities, and experimental results show that while local features are best when training with small amounts of highly accurate on-site data, they are outperformed by their global counterparts when training with more samples from an external vehicle database.",
"title": ""
},
{
"docid": "74c386f9d3bc9bbe747a2186542c1fcf",
"text": "Assessment of right ventricular afterload in systolic heart failure seems mandatory as it plays an important role in predicting outcome. The purpose of this study is to estimate pulmonary vascular elastance as a reliable surrogate for right ventricular afterload in systolic heart failure. Forty-two patients with systolic heart failure (ejection fraction <35%) were studied by right heart catheterization. Pulmonary arterial elastance was calculated with three methods: Ea(PV) = (end-systolic pulmonary arterial pressure)/stroke volume; Ea*(PV) = (mean pulmonary arterial pressure - pulmonary capillary wedge pressure)/stroke volume; and PPSV = pulmonary arterial pulse pressure (systolic - diastolic)/stroke volume. These measures were compared with pulmonary vascular resistance ([mean pulmonary arterial pressure - pulmonary capillary wedge pressure]/CO). All estimates of pulmonary vascular elastance were significantly correlated with pulmonary vascular resistance (r=0.772, 0.569, and 0.935 for Ea(PV), Ea*(PV), and PPSV, respectively; P <.001). Pulmonary vascular elastance can easily be estimated by routine right heart catheterization in systolic heart failure and seems promising in assessment of right ventricular afterload.",
"title": ""
},
{
"docid": "8f73870d5e999c0269059c73bb85e05c",
"text": "Placing the DRAM in the same package as a processor enables several times higher memory bandwidth than conventional off-package DRAM. Yet, the latency of in-package DRAM is not appreciably lower than that of off-package DRAM. A promising use of in-package DRAM is as a large cache. Unfortunately, most previous DRAM cache designs optimize mainly for cache hit latency and do not consider bandwidth efficiency as a first-class design constraint. Hence, as we show in this paper, these designs are suboptimal for use with in-package DRAM.\n We propose a new DRAM cache design, Banshee, that optimizes for both in-package and off-package DRAM bandwidth efficiency without degrading access latency. Banshee is based on two key ideas. First, it eliminates the tag lookup overhead by tracking the contents of the DRAM cache using TLBs and page table entries, which is efficiently enabled by a new lightweight TLB coherence protocol we introduce. Second, it reduces unnecessary DRAM cache replacement traffic with a new bandwidth-aware frequency-based replacement policy. Our evaluations show that Banshee significantly improves performance (15% on average) and reduces DRAM traffic (35.8% on average) over the best-previous latency-optimized DRAM cache design.",
"title": ""
},
{
"docid": "c37da50c2d31d262cb903405a7990ea0",
"text": "The automotive industry could be facing a situation of profound change and opportunity in the coming decades. There are a number of influencing factors such as increasing urban and aging populations, self-driving cars, 3D parts printing, energy innovation, and new models of transportation service delivery (Zipcar, Uber). The connected car means that vehicles are now part of the connected world, continuously Internet-connected, generating and transmitting data, which on the one hand can be helpfully integrated into applications, like real-time traffic alerts broadcast to smartwatches, but also raises security and privacy concerns. This paper explores the automotive connected world, and describes five killer QS (Quantified Self)-auto sensor applications that link quantified-self sensors (sensors that measure the personal biometrics of individuals like heart rate) and automotive sensors (sensors that measure driver and passenger biometrics or quantitative automotive performance metrics like speed and braking activity). The applications are fatigue detection, real-time assistance for parking and accidents, anger management and stress reduction, keyless authentication and digital identity verification, and DIY diagnostics. These kinds of applications help to demonstrate the benefit of connected world data streams in the automotive industry and beyond where, more fundamentally for human progress, the automation of both physical and now cognitive tasks is underway.",
"title": ""
},
{
"docid": "e0a2031394922edec46eaac60c473358",
"text": "In-wheel-motor drive electric vehicle (EV) is an innovative configuration, in which each wheel is driven individually by an electric motor. It is possible to use an electronic differential (ED) instead of the heavy mechanical differential because of the fast response time of the motor. A new ED control approach for a two-in-wheel-motor drive EV is devised based on the fuzzy logic control method. The fuzzy logic method employs to estimate the slip rate of each wheel considering the complex and nonlinear of the system. Then, the ED system distributes torque and power to each motor according to requirements. The effectiveness and validation of the proposed control method are evaluated in the Matlab/Simulink environment. Simulation results show that the new ED control system can keep the slip rate within the optimized range, ensuring the stability of the vehicle either in a straight or a curve lane.",
"title": ""
},
{
"docid": "b682d1da4fd31e470aa96244a47f081a",
"text": "With Android being the most widespread mobile platform, protecting it against malicious applications is essential. Android users typically install applications from large remote repositories, which provides ample opportunities for malicious newcomers. In this paper, we propose a simple, and yet highly effective technique for detecting malicious Android applications on a repository level. Our technique performs automatic classification based on tracking system calls while applications are executed in a sandbox environment. We implemented the technique in a tool called MALINE, and performed extensive empirical evaluation on a suite of around 12,000 applications. The evaluation yields an overall detection accuracy of 93% with a 5% benign application classification error, while results are improved to a 96% detection accuracy with up-sampling. This indicates that our technique is viable to be used in practice. Finally, we show that even simplistic feature choices are highly effective, suggesting that more heavyweight approaches should be thoroughly (re)evaluated. Android Malware Detection Based on System Calls Marko Dimjašević, Simone Atzeni, Zvonimir Rakamarić University of Utah, USA {marko,simone,zvonimir}@cs.utah.edu Ivo Ugrina University of Zagreb, Croatia",
"title": ""
},
{
"docid": "48427804f2e704ab6ea15251c624cdf2",
"text": "In this work, we propose Residual Attention Network, a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks which can be easily scaled up to hundreds of layers. Extensive analyses are conducted on CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets including CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single model and single crop, top-5 error). Note that, our method achieves 0.6% top-1 accuracy improvement with 46% trunk depth and 69% forward FLOPs comparing to ResNet-200. The experiment also demonstrates that our network is robust against noisy labels.",
"title": ""
},
{
"docid": "0c61bfbb7106c5592ecb9677e617f83f",
"text": "BACKGROUND\nAcute exacerbations of chronic obstructive pulmonary disease (COPD) are associated with accelerated decline in lung function, diminished quality of life, and higher mortality. Proactively monitoring patients for early signs of an exacerbation and treating them early could prevent these outcomes. The emergence of affordable wearable technology allows for nearly continuous monitoring of heart rate and physical activity as well as recording of audio which can detect features such as coughing. These signals may be able to be used with predictive analytics to detect early exacerbations. Prior to full development, however, it is important to determine the feasibility of using wearable devices such as smartwatches to intensively monitor patients with COPD.\n\n\nOBJECTIVE\nWe conducted a feasibility study to determine if patients with COPD would wear and maintain a smartwatch consistently and whether they would reliably collect and transmit sensor data.\n\n\nMETHODS\nPatients with COPD were recruited from 3 hospitals and were provided with a smartwatch that recorded audio, heart rate, and accelerations. They were asked to wear and charge it daily for 90 days. They were also asked to complete a daily symptom diary. At the end of the study period, participants were asked what would motivate them to regularly use a wearable for monitoring of their COPD.\n\n\nRESULTS\nOf 28 patients enrolled, 16 participants completed the full 90 days. The average age of participants was 68.5 years, and 36% (10/28) were women. Survey, heart rate, and activity data were available for an average of 64.5, 65.1, and 60.2 days respectively. Technical issues caused heart rate and activity data to be unavailable for approximately 13 and 17 days, respectively. Feedback provided by participants indicated that they wanted to actively engage with the smartwatch and receive feedback about their activity, heart rate, and how to better manage their COPD.\n\n\nCONCLUSIONS\nSome patients with COPD will wear and maintain smartwatches that passively monitor audio, heart rate, and physical activity, and wearables were able to reliably capture near-continuous patient data. Further work is necessary to increase acceptability and improve the patient experience.",
"title": ""
},
{
"docid": "89d91df8511c0b0f424dd5fa20fcd212",
"text": "We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks which represent a compressed form of background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.",
"title": ""
},
{
"docid": "8a293b95b931f4f72fe644fdfe30564a",
"text": "Today, the concept of brain connectivity plays a central role in the neuroscience. While functional connectivity is defined as the temporal coherence between the activities of different brain areas, the effective connectivity is defined as the simplest brain circuit that would produce the same temporal relationship as observed experimentally between cortical sites. The most used method to estimate effective connectivity in neuroscience is the structural equation modeling (SEM), typically used on data related to the brain hemodynamic behavior. However, the use of hemodynamic measures limits the temporal resolution on which the brain process can be followed. The present research proposes the use of the SEM approach on the cortical waveforms estimated from the high-resolution EEG data, which exhibits a good spatial resolution and a higher temporal resolution than hemodynamic measures. We performed a simulation study, in which different main factors were systematically manipulated in the generation of test signals, and the errors in the estimated connectivity were evaluated by the analysis of variance (ANOVA). Such factors were the signal-to-noise ratio and the duration of the simulated cortical activity. Since SEM technique is based on the use of a model formulated on the basis of anatomical and physiological constraints, different experimental conditions were analyzed, in order to evaluate the effect of errors made in the a priori model formulation on its performances. The feasibility of the proposed approach has been shown in a human study using high-resolution EEG recordings related to finger tapping movements.",
"title": ""
},
{
"docid": "acba717edc26ae7ba64debc5f0d73ded",
"text": "Previous phase I-II clinical trials have shown that recombinant human erythropoietin (rHuEpo) can ameliorate anemia in a portion of patients with multiple myeloma (MM) and non-Hodgkin's lymphoma (NHL). Therefore, we performed a randomized controlled multicenter study to define the optimal initial dosage and to identify predictors of response to rHuEpo. A total of 146 patients who had hemoglobin (Hb) levels < or = 11 g/dL and who had no need for transfusion at the time of enrollment entered this trial. Patients were randomized to receive 1,000 U (n = 31), 2,000 U (n = 29), 5,000 U (n = 31), or 10,000 U (n = 26) of rHuEpo daily subcutaneously for 8 weeks or to receive no therapy (n = 29). Of the patients, 84 suffered from MM and 62 from low- to intermediate-grade NHL, including chronic lymphocytic leukemia; 116 of 146 (79%) received chemotherapy during the study. The mean baseline Hb level was 9.4 +/- 1.0 g/dL. The median serum Epo level was 32 mU/mL, and endogenous Epo production was found to be defective in 77% of the patients, as judged by a value for the ratio of observed-to-predicted serum Epo levels (O/P ratio) of < or = 0.9. An intention-to-treat analysis was performed to evaluate treatment efficacy. The median average increase in Hb levels per week was 0.04 g/dL in the control group and -0.04 (P = .57), 0.22 (P = .05), 0.43 (P = .01), and 0.58 (P = .0001) g/dL in the 1,000 U, 2,000 U, 5,000 U, and 10,000 U groups, respectively (P values versus control). The probability of response (delta Hb > or = 2 g/dL) increased steadily and, after 8 weeks, reached 31% (2,000 U), 61% (5,000 U), and 62% (10,000 U), respectively. Regression analysis using Cox's proportional hazard model and classification and regression tree analysis showed that serum Epo levels and the O/P ratio were the most important factors predicting response in patients receiving 5,000 or 10,000 U. Approximately three quarters of patients presenting with Epo levels inappropriately low for the degree of anemia responded to rHuEpo, whereas only one quarter of those with adequate Epo levels did so. Classification and regression tree analysis also showed that doses of 2,000 U daily were effective in patients with an average platelet count greater than 150 x 10(9)/L. About 50% of these patients are expected to respond to rHuEpo. Thus, rHuEpo was safe and effective in ameliorating the anemia of MM and NHL patients who showed defective endogenous Epo production. From a practical point of view, we conclude that the decision to use rHuEpo in an individual anemic patient with MM or NHL should be based on serum Epo levels, whereas the choice of the initial dosage should be based on residual marrow function.",
"title": ""
},
{
"docid": "d9e0fd8abb80d6256bd86306b7112f20",
"text": "Visible light LEDs, due to their numerous advantages, are expected to become the dominant indoor lighting technology. These lights can also be switched ON/OFF at high frequency, enabling their additional use for wireless communication and indoor positioning. In this article, visible LED light--based indoor positioning systems are surveyed and classified into two broad categories based on the receiver structure. The basic principle and architecture of each design category, along with various position computation algorithms, are discussed and compared. Finally, several new research, implementation, commercialization, and standardization challenges are identified and highlighted for this relatively novel and interesting indoor localization technology.",
"title": ""
},
{
"docid": "cbed0b87ebae159115277322b21299ca",
"text": "The present work describes a classification schema for irony detection in Greek political tweets. Our hypothesis states that humorous political tweets could predict actual election results. The irony detection concept is based on subjective perceptions, so only relying on human-annotator driven labor might not be the best route. The proposed approach relies on limited labeled training data, thus a semi-supervised approach is followed, where collective-learning algorithms take both labeled and unlabeled data into consideration. We compare the semi-supervised results with the supervised ones from a previous research of ours. The hypothesis is evaluated via a correlation study between the irony that a party receives on Twitter, its respective actual election results during the Greek parliamentary elections of May 2012, and the difference between these results and the ones of the preceding elections of 2009. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5c9013c9514dc7deaa0b87fe9cd6db16",
"text": "To predict the uses of new technology, we present an approach grounded in science and technology studies (STS) that examines the social uses of current technology. As part of ongoing research on next-generation mobile imaging applications, we conducted an empirical study of the social uses of personal photography. We identify three: memory, creating and maintaining relationships, and self-expression. The roles of orality and materiality in these uses help us explain the observed resistances to intangible digital images and to assigning metadata and annotations. We conclude that this approach is useful for understanding the potential uses of technology and for design.",
"title": ""
},
{
"docid": "8f1d7499280f94b92044822c1dd4e59d",
"text": "WORK-LIFE BALANCE means bringing work, whether done on the job or at home, and leisure time into balance to live life to its fullest. It doesn’t mean that you spend half of your life working and half of it playing; instead, it means balancing the two to achieve harmony in physical, emotional, and spiritual health. In today’s economy, can nurses achieve work-life balance? Although doing so may be difficult, the consequences to our health can be enormous if we don’t try. This article describes some of the stresses faced by nurses and tips for attaining a healthy balance of work and leisure.",
"title": ""
},
{
"docid": "0f3cb3d8a841e0de31438da1dd99c176",
"text": "In this paper we give the details of the numerical solution of a three-dimensional multispecies diffuse interface model of tumor growth, which was derived in (Wise et al., J. Theor. Biol. 253 (2008)) and used to study the development of glioma in (Frieboes et al., NeuroImage 37 (2007) and tumor invasion in (Bearer et al., Cancer Research, 69 (2009)) and (Frieboes et al., J. Theor. Biol. 264 (2010)). The model has a thermodynamic basis, is related to recently developed mixture models, and is capable of providing a detailed description of tumor progression. It utilizes a diffuse interface approach, whereby sharp tumor boundaries are replaced by narrow transition layers that arise due to differential adhesive forces among the cell-species. The model consists of fourth-order nonlinear advection-reaction-diffusion equations (of Cahn-Hilliard-type) for the cell-species coupled with reaction-diffusion equations for the substrate components. Numerical solution of the model is challenging because the equations are coupled, highly nonlinear, and numerically stiff. In this paper we describe a fully adaptive, nonlinear multigrid/finite difference method for efficiently solving the equations. We demonstrate the convergence of the algorithm and we present simulations of tumor growth in 2D and 3D that demonstrate the capabilities of the algorithm in accurately and efficiently simulating the progression of tumors with complex morphologies.",
"title": ""
},
{
"docid": "459f368625415f80c88da01b69e94258",
"text": "Data visualization and feature selection methods are proposed based on the )oint mutual information and ICA. The visualization methods can find many good 2-D projections for high dimensional data interpretation, which cannot be easily found by the other existing methods. The new variable selection method is found to be better in eliminating redundancy in the inputs than other methods based on simple mutual information. The efficacy of the methods is illustrated on a radar signal analysis problem to find 2-D viewing coordinates for data visualization and to select inputs for a neural network classifier.",
"title": ""
},
{
"docid": "51da4d5923b30db560227155edd0621d",
"text": "The fifth generation wireless 5G development initiative is based upon 4G, which at present is struggling to meet its performance goals. The comparison between 3G and 4G wireless communication systems in relation to its architecture, speed, frequency band, switching design basis and forward error correction is studied, and were discovered that their performances are still unable to solve the unending problems of poor coverage, bad interconnectivity, poor quality of service and flexibility. An ideal 5G model to accommodate the challenges and shortfalls of 3G and 4G deployments is discussed as well as the significant system improvements on the earlier wireless technologies. The radio channel propagation characteristics for 4G and 5G systems is discussed. Major advantages of 5G network in providing myriads of services to end users personalization, terminal and network heterogeneity, intelligence networking and network convergence among other benefits are highlighted.The significance of the study is evaluated for a fast and effective connection and communication of devices like mobile phones and computers, including the capability of supporting and allowing a highly flexible network connectivity.",
"title": ""
},
{
"docid": "a903f9eb225a79ebe963d1905af6d3c8",
"text": "We have developed a multithreaded implementation of breadth-first search (BFS) of a sparse graph using the Cilk++ extensions to C++. Our PBFS program on a single processor runs as quickly as a standar. C++ breadth-first search implementation. PBFS achieves high work-efficiency by using a novel implementation of a multiset data structure, called a \"bag,\" in place of the FIFO queue usually employed in serial breadth-first search algorithms. For a variety of benchmark input graphs whose diameters are significantly smaller than the number of vertices -- a condition met by many real-world graphs -- PBFS demonstrates good speedup with the number of processing cores.\n Since PBFS employs a nonconstant-time \"reducer\" -- \"hyperobject\" feature of Cilk++ -- the work inherent in a PBFS execution depends nondeterministically on how the underlying work-stealing scheduler load-balances the computation. We provide a general method for analyzing nondeterministic programs that use reducers. PBFS also is nondeterministic in that it contains benign races which affect its performance but not its correctness. Fixing these races with mutual-exclusion locks slows down PBFS empirically, but it makes the algorithm amenable to analysis. In particular, we show that for a graph G=(V,E) with diameter D and bounded out-degree, this data-race-free version of PBFS algorithm runs it time O((V+E)/P + Dlg3(V/D)) on P processors, which means that it attains near-perfect linear speedup if P << (V+E)/Dlg3(V/D).",
"title": ""
}
] | scidocsrr |
43f590e6352178a6586387c6d88b28c4 | Knowledge sharing and social media: Altruism, perceived online attachment motivation, and perceived online relationship commitment | [
{
"docid": "cd0b28b896dd84ca70d42541b466d5ff",
"text": "a r t i c l e i n f o a b s t r a c t The success of knowledge management initiatives depends on knowledge sharing. This paper reviews qualitative and quantitative studies of individual-level knowledge sharing. Based on the literature review we developed a framework for understanding knowledge sharing research. The framework identifies five areas of emphasis of knowledge sharing research: organizational context, interpersonal and team characteristics, cultural characteristics, individual characteristics, and motivational factors. For each emphasis area the paper discusses the theoretical frameworks used and summarizes the empirical research results. The paper concludes with a discussion of emerging issues, new research directions, and practical implications of knowledge sharing research. Knowledge is a critical organizational resource that provides a sustainable competitive advantage in a competitive and dynamic economy (e. To gain a competitive advantage it is necessary but insufficient for organizations to rely on staffing and training systems that focus on selecting employees who have specific knowledge, skills, abilities, or competencies or helping employees acquire them (e.g., Brown & Duguid, 1991). Organizations must also consider how to transfer expertise and knowledge from experts who have it to novices who need to know (Hinds, Patterson, & Pfeffer, 2001). That is, organizations need to emphasize and more effectively exploit knowledge-based resources that already exist within the organization As one knowledge-centered activity, knowledge sharing is the fundamental means through which employees can contribute to knowledge application, innovation, and ultimately the competitive advantage of the organization (Jackson, Chuang, Harden, Jiang, & Joseph, 2006). Knowledge sharing between employees and within and across teams allows organizations to exploit and capitalize on knowledge-based resources Research has shown that knowledge sharing and combination is positively related to reductions in production costs, faster completion of new product development projects, team performance, firm innovation capabilities, and firm performance including sales growth and revenue from new products and services (e. Because of the potential benefits that can be realized from knowledge sharing, many organizations have invested considerable time and money into knowledge management (KM) initiatives including the development of knowledge management systems (KMS) which use state-of-the-art technology to facilitate the collection, storage, and distribution of knowledge. However, despite these investments it has been estimated that at least $31.5 billion are lost per year by Fortune 500",
"title": ""
},
{
"docid": "c3750965243aef6b2389a2dfc3afa1b0",
"text": "This study reports on an exploratory survey conducted to investigate the use of social media technologies for sharing information. This paper explores the issue of credibility of the information shared in the context of computer-mediated communication. Four categories of information were explored: sensitive, sensational, political and casual information, across five popular social media technologies: social networking sites, micro-blogging sites, wikis, online forums, and online blogs. One hundred and fourteen active users of social media technologies participated in the study. The exploratory analysis conducted in this study revealed that information producers use different cues to indicate credibility of the information they share on different social media sites. Organizations can leverage findings from this study to improve targeted engagement with their customers. The operationalization of how information credibility is codified by information producers contributes to knowledge in social media research. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "f24011de3d527f54be4dff329e3862e9",
"text": "Basic concepts of ANNs together with three most widely used ANN learning strategies (error back-propagation, Kohonen, and counterpropagation) are explained and discussed. In order to show how the explained methods can be applied to chemical problems, one simple example, the classification and the prediction of the origin of different olive oil samples, each represented by eigtht fatty acid concentrations, is worked out in detail.",
"title": ""
},
{
"docid": "c2bb03165910da0597b0dbdd8831666a",
"text": "In this paper, we propose a method for training neural networks when we have a large set of data with weak labels and a small amount of data with true labels. In our proposed model, we train two neural networks: a target network, the learner and a confidence network, the meta-learner. The target network is optimized to perform a given task and is trained using a large set of unlabeled data that are weakly annotated. We propose to control the magnitude of the gradient updates to the target network using the scores provided by the second confidence network, which is trained on a small amount of supervised data. Thus we avoid that the weight updates computed from noisy labels harm the quality of the target networkmodel.",
"title": ""
},
{
"docid": "d63591706309cf602404c34de547184f",
"text": "This paper presents an overview of the inaugural Amazon Picking Challenge along with a summary of a survey conducted among the 26 participating teams. The challenge goal was to design an autonomous robot to pick items from a warehouse shelf. This task is currently performed by human workers, and there is hope that robots can someday help increase efficiency and throughput while lowering cost. We report on a 28-question survey posed to the teams to learn about each team’s background, mechanism design, perception apparatus, planning, and control approach. We identify trends in this data, correlate it with each team’s success in the competition, and discuss observations and lessons learned based on survey results and the authors’ personal experiences during the challenge.Note to Practitioners—Perception, motion planning, grasping, and robotic system engineering have reached a level of maturity that makes it possible to explore automating simple warehouse tasks in semistructured environments that involve high-mix, low-volume picking applications. This survey summarizes lessons learned from the first Amazon Picking Challenge, highlighting mechanism design, perception, and motion planning algorithms, as well as software engineering practices that were most successful in solving a simplified order fulfillment task. While the choice of mechanism mostly affects execution speed, the competition demonstrated the systems challenges of robotics and illustrated the importance of combining reactive control with deliberative planning.",
"title": ""
},
{
"docid": "4d2be7aac363b77c6abd083947bc28c7",
"text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.",
"title": ""
},
{
"docid": "36460eda2098bdcf3810828f54ee7d2b",
"text": "[This corrects the article on p. 662 in vol. 60, PMID: 27729694.].",
"title": ""
},
{
"docid": "fc67a42a0c1d278994f0255e6cf3331a",
"text": "ibrant public securities markets rely on complex systems of supporting institutions that promote the governance of publicly traded companies. Corporate governance structures serve: 1) to ensure that minority shareholders receive reliable information about the value of firms and that a company’s managers and large shareholders do not cheat them out of the value of their investments, and 2) to motivate managers to maximize firm value instead of pursuing personal objectives.1 Institutions promoting the governance of firms include reputational intermediaries such as investment banks and audit firms, securities laws and regulators such as the Securities and Exchange Commission (SEC) in the United States, and disclosure regimes that produce credible firm-specific information about publicly traded firms. In this paper, we discuss economics-based research focused primarily on the governance role of publicly reported financial accounting information. Financial accounting information is the product of corporate accounting and external reporting systems that measure and routinely disclose audited, quantitative data concerning the financial position and performance of publicly held firms. Audited balance sheets, income statements, and cash-flow statements, along with supporting disclosures, form the foundation of the firm-specific information set available to investors and regulators. Developing and maintaining a sophisticated financial disclosure regime is not cheap. Countries with highly developed securities markets devote substantial resources to producing and regulating the use of extensive accounting and disclosure rules that publicly traded firms must follow. Resources expended are not only financial, but also include opportunity costs associated with deployment of highly educated human capital, including accountants, lawyers, academicians, and politicians. In the United States, the SEC, under the oversight of the U.S. Congress, is responsible for maintaining and regulating the required accounting and disclosure rules that firms must follow. These rules are produced both by the SEC itself and through SEC oversight of private standards-setting bodies such as the Financial Accounting Standards Board and the Emerging Issues Task Force, which in turn solicit input from business leaders, academic researchers, and regulators around the world. In addition to the accounting standards-setting investments undertaken by many individual countries and securities exchanges, there is currently a major, well-funded effort in progress, under the auspices of the International Accounting Standards Board (IASB), to produce a single set of accounting standards that will ultimately be acceptable to all countries as the basis for cross-border financing transactions.2 The premise behind governance research in accounting is that a significant portion of the return on investment in accounting regimes derives from enhanced governance of firms, which in turn facilitates the operation of securities Robert M. Bushman and Abbie J. Smith",
"title": ""
},
{
"docid": "12819e1ad6ca9b546e39ed286fe54d23",
"text": "This paper describes an efficient method to make individual faces for animation from several possible inputs. We present a method to reconstruct 3D facial model for animation from two orthogonal pictures taken from front and side views or from range data obtained from any available resources. It is based on extracting features on a face in a semiautomatic way and modifying a generic model with detected feature points. Then the fine modifications follow if range data is available. Automatic texture mapping is employed using a composed image from the two images. The reconstructed 3Dface can be animated immediately with given expression parameters. Several faces by one methodology applied to different input data to get a final animatable face are illustrated.",
"title": ""
},
{
"docid": "7f6edf82ddbe5b63ba5d36a7d8691dda",
"text": "This paper identifies the possibility of using electronic compasses and accelerometers in mobile phones, as a simple and scalable method of localization without war-driving. The idea is not fundamentally different from ship or air navigation systems, known for centuries. Nonetheless, directly applying the idea to human-scale environments is non-trivial. Noisy phone sensors and complicated human movements present practical research challenges. We cope with these challenges by recording a person's walking patterns, and matching it against possible path signatures generated from a local electronic map. Electronic maps enable greater coverage, while eliminating the reliance on WiFi infrastructure and expensive war-driving. Measurements on Nokia phones and evaluation with real users confirm the anticipated benefits. Results show a location accuracy of less than 11m in regions where today's localization services are unsatisfactory or unavailable.",
"title": ""
},
{
"docid": "50c961c8b229c7a4b31ca6a67e06112c",
"text": "The emerging three-dimensional (3D) chip architectures, with their intrinsic capability of reducing the wire length, is one of the promising solutions to mitigate the interconnect problem in modern microprocessor designs. 3D memory stacking also enables much higher memory bandwidth for future chip-multiprocessor design, mitigating the ``memory wall\" problem. In addition, heterogenous integration enabled by 3D technology can also result in innovation designs for future microprocessors. This paper serves as a survey of various approaches to design future 3D microprocessors, leveraging the benefits of fast latency, higher bandwidth, and heterogeneous integration capability that are offered by 3D technology.",
"title": ""
},
{
"docid": "d84f0baebe248608ae3c910adb39baea",
"text": "BACKGROUND\nSkin atrophy is a common manifestation of aging and is frequently accompanied by ulceration and delayed wound healing. With an increasingly aging patient population, management of skin atrophy is becoming a major challenge in the clinic, particularly in light of the fact that there are no effective therapeutic options at present.\n\n\nMETHODS AND FINDINGS\nAtrophic skin displays a decreased hyaluronate (HA) content and expression of the major cell-surface hyaluronate receptor, CD44. In an effort to develop a therapeutic strategy for skin atrophy, we addressed the effect of topical administration of defined-size HA fragments (HAF) on skin trophicity. Treatment of primary keratinocyte cultures with intermediate-size HAF (HAFi; 50,000-400,000 Da) but not with small-size HAF (HAFs; <50,000 Da) or large-size HAF (HAFl; >400,000 Da) induced wild-type (wt) but not CD44-deficient (CD44-/-) keratinocyte proliferation. Topical application of HAFi caused marked epidermal hyperplasia in wt but not in CD44-/- mice, and significant skin thickening in patients with age- or corticosteroid-related skin atrophy. The effect of HAFi on keratinocyte proliferation was abrogated by antibodies against heparin-binding epidermal growth factor (HB-EGF) and its receptor, erbB1, which form a complex with a particular isoform of CD44 (CD44v3), and by tissue inhibitor of metalloproteinase-3 (TIMP-3).\n\n\nCONCLUSIONS\nOur observations provide a novel CD44-dependent mechanism for HA oligosaccharide-induced keratinocyte proliferation and suggest that topical HAFi application may provide an attractive therapeutic option in human skin atrophy.",
"title": ""
},
{
"docid": "49e2963e84967100deee8fc810e053ba",
"text": "We have developed a method for rigidly aligning images of tubes. This paper presents an evaluation of the consistency of that method for three-dimensional images of human vasculature. Vascular images may contain alignment ambiguities, poorly corresponding vascular networks, and non-rigid deformations, yet the Monte Carlo experiments presented in this paper show that our method registers vascular images with sub-voxel consistency in a matter of seconds. Furthermore, we show that the method's insensitivity to non-rigid deformations enables the localization, quantification, and visualization of those deformations. Our method aligns a source image with a target image by registering a model of the tubes in the source image directly with the target image. Time can be spent to extract an accurate model of the tubes in the source image. Multiple target images can then be registered with that model without additional extractions. Our registration method builds upon the principles of our tubular object segmentation work that combines dynamic-scale central ridge traversal with radius estimation. In particular, our registration method's consistency stems from incorporating multi-scale ridge and radius measures into the model-image match metric. Additionally, the method's speed is due in part to the use of coarse-to-fine optimization strategies that are enabled by measures made during model extraction and by the parameters inherent to the model-image match metric.",
"title": ""
},
{
"docid": "7716409441fb8e34013d3e9f58d32476",
"text": "Decentralized partially observable Markov decision processes (Dec-POMDPs) are a powerful tool for modeling multi-agent planning and decision-making under uncertainty. Prevalent Dec-POMDP solution techniques require centralized computation given full knowledge of the underlying model. Multi-agent reinforcement learning (MARL) based approaches have been recently proposed for distributed solution of during learning and policy execution are identical. In some practical scenarios this may not be the case. We propose a novel MARL approach in which agents are allowed to rehearse with information that will not be available during policy execution. The key is for the agents to learn policies that do not explicitly rely on these rehearsal features. We also establish a weak convergence result for our algorithm, RLaR, demonstrating that RLaR converges in probability when certain conditions are met. We show experimentally that incorporating rehearsal features can enhance the learning rate compared to non-rehearsalbased learners, and demonstrate fast, (near) optimal performance on many existing benchmark DecPOMDP problems. We also compare RLaR against an existing approximate Dec-POMDP solver which, like RLaR, does not assume a priori knowledge of the model. While RLaR's policy representation is not as scalable, we show that RLaR produces higher quality policies for most problems and horizons studied. & 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e2b166491ccc69674d2a597282facf02",
"text": "With the advancement of radio access networks, more and more mobile data content needs to be transported by optical networks. Mobile fronthaul is an important network segment that connects centralized baseband units (BBUs) with remote radio units in cloud radio access networks (C-RANs). It enables advanced wireless technologies such as coordinated multipoint and massive multiple-input multiple-output. Mobile backhaul, on the other hand, connects BBUs with core networks to transport the baseband data streams to their respective destinations. Optical access networks are well positioned to meet the first optical communication demands of C-RANs. To better address the stringent requirements of future generations of wireless networks, such as the fifth-generation (5G) wireless, optical access networks need to be improved and enhanced. In this paper, we review emerging optical access network technologies that aim to support 5G wireless with high capacity, low latency, and low cost and power per bit. Advances in high-capacity passive optical networks (PONs), such as 100 Gbit/s PON, will be reviewed. Among the topics discussed are advanced modulation and detection techniques, digital signal processing tailored for optical access networks, and efficient mobile fronthaul techniques. We also discuss the need for coordination between RAN and PON to simplify the overall network, reduce the network latency, and improve the network cost efficiency and power efficiency.",
"title": ""
},
{
"docid": "12c2e384d9bbb33e6b2d1e15acde7984",
"text": "Traditional companies are increasingly turning towards platform strategies to gain speed in the development of digital value propositions and prepare for the challenges arising from digitalization. This paper reports on the digitalization journey of the LEGO Group to elaborate how brick-and-mortar companies can break away from a drifting information infrastructure and trigger its transformation into a digital platform. Conceptualizing information infrastructure evolution as path-dependent process, the case study explores how mindful deviations by Enterprise Architects guide installed base cultivation through collective action and trigger the creation of a new ‘platformization’ path. Additionally, the findings portrait Enterprise Architecture management as a process of socio-technical path constitution that is equally shaped by deliberate human interventions and emergent forces through path dependencies.",
"title": ""
},
{
"docid": "5dd790f34fec2f4adc52971c39e55d6b",
"text": "Although within SDN community, the notion of logically centralized network control is well understood and agreed upon, many different approaches exist on how one should deliver such a logically centralized view to multiple distributed controller instances. In this paper, we survey and investigate those approaches. We discover that we can classify the methods into several design choices that are trending among SDN adopters. Each design choice may influence several SDN issues such as scalability, robustness, consistency, and privacy. Thus, we further analyze the pros and cons of each model regarding these matters. We conclude that each design begets some characteristics. One may excel in resolving one issue but perform poor in another. We also present which design combinations one should pick to build distributed controller that is scalable, robust, consistent",
"title": ""
},
{
"docid": "599a142f342506c08476bdd353a6ef89",
"text": "This study was undertaken to report clinical outcomes after high tibial osteotomy (HTO) in patients with a discoid lateral meniscus and to determine (1) whether discoid lateral meniscus degeneration by magnetic resonance imaging (MRI) progresses after HTO and (2) whether this progression adversely affects clinical results. The records of 292 patients (292 knees) who underwent medial opening HTO were retrospectively reviewed, and discoid types and grades of lateral meniscus degeneration as determined by MRI were recorded preoperatively. Of the 292 patients, 17 (5.8 %) had a discoid lateral meniscus, and postoperative MR images were obtained at least 2 years after HTO for 15 of these 17 patients. American Knee Society (AKS) pain, knee and function scores significantly improved in the 15 patients after surgery (p < 0.001). Eight (53 %) had an incomplete and 7 (47 %) had a complete discoid lateral meniscus. By preoperative MRI, the distribution of meniscal degeneration was as follows: grade 1, 4 patients; grade 2, 7 patients; and grade 3, 4 patients. At the final follow-up, the distribution of degeneration was as follows: grade 1, 2 patients; grade 2, 5 patients; and grade 3, 8 patients. Two patients with grade 3 degeneration who did not undergo partial meniscectomy showed tear progression. Thus, 8 of the 15 patients (53 %) experienced progressive discoid meniscal degeneration after HTO. Median AKS pain score was significantly lower in the progression group than in the non-progression group (40 vs 45, respectively). The results of this study suggest that increased load on the lateral compartment after HTO can accelerate discoid lateral meniscus degeneration by MRI and caution that when a discoid lateral meniscus is found by preoperative MRI, progressive degeneration may occur after HTO and clinical outcome may be adversely affected. Therapeutic study, Level IV.",
"title": ""
},
{
"docid": "30d119e1c2777988aab652e34fb76846",
"text": "The relationship between games and story remains a divisive question among game fans, designers, and scholars alike. At a recent academic Games Studies conference, for example, a blood feud threatened to erupt between the self-proclaimed Ludologists, who wanted to see the focus shift onto the mechanics of game play, and the Narratologists, who were interested in studying games alongside other storytelling media.(1) Consider some recent statements made on this issue:",
"title": ""
},
{
"docid": "1acb7ca89eab0a0b4306aa2ebb844018",
"text": "This paper describes work in progress. Our research is focused on efficient construction of effective models for spam detection. Clustering messages allows for efficient labeling of a representative sample of messages for learning a spam detection model using a Random Forest for classification and active learning for refining the classification model. Results are illustrated for the 2007 TREC Public Spam Corpus. The area under the Receiver Operating Characteristic (ROC) curve is competitive with other solutions while requiring much fewer labeled training examples.",
"title": ""
},
{
"docid": "048d54f4997bfea726f69cf7f030543d",
"text": "In this article, we have reviewed the state of the art of IPT systems and have explored the suitability of the technology to wirelessly charge battery powered vehicles. the review shows that the IPT technology has merits for stationary charging (when the vehicle is parked), opportunity charging (when the vehicle is stopped for a short period of time, for example, at a bus stop), and dynamic charging (when the vehicle is moving along a dedicated lane equipped with an IPT system). Dynamic wireless charging holds promise to partially or completely eliminate the overnight charging through a compact network of dynamic chargers installed on the roads that would keep the vehicle batteries charged at all times, consequently reducing the range anxiety and increasing the reliability of EVs. Dynamic charging can help lower the price of EVs by reducing the size of the battery pack. Indeed, if the recharging energy is readily available, the batteries do not have to support the whole driving range but only supply power when the IPT system is not available. Depending on the power capability, the use of dynamic charging may increase driving range and reduce the size of the battery pack.",
"title": ""
},
{
"docid": "217c8e82b0131a1634a0e09967c388dc",
"text": "F anthropology plays a vital role in medicolegal investigations of death. Today, forensic anthropologists are intimately involved in many aspects of these investigations; they may participate in search and recovery efforts, develop a biological profile, identify and document trauma, determine postmortem interval, and offer expert witness courtroom testimony. However, few forensic anthropology textbooks include substantial discussions of our medicolegal and judicial systems. Forensic Anthropology: Contemporary Theory and Practice, by Debra A. Komar and Jane E. Buikstra, not only examines current forensic anthropology from a theoretical perspective, but it also includes an introduction to elements of our legal system. Further, the text integrates these important concepts with bioanthropological theories and methods. Komar and Buikstra begin with an introductory chapter that traces the history of forensic anthropology in the United States. The careers of several founding members of the American Board of Forensic Anthropology are recognized for their contribution to advancing the profession. We are reminded that the field has evolved through the years from biological anthropologists doing forensic anthropology to modern students, who need training in both the medical and physical sciences, as well as traditional foundations in biological anthropology. In Chapters Two and Three, the authors introduce the reader to the medicolegal and judicial systems respectively. They present the medicolegal system with interesting discussions of important topics such as jurisdiction, death investigations, cause and manner of death, elements of a crime (actus reus and mens rea), and postmortem examinations. The chapter on the judicial system begins with the different classifications and interpretations of evidence, followed by an overview. Key components of this chapter include the rules governing expert witness testimony and scientific evidence in the courtroom. The authors also review the United States Supreme Court landmark decision, Daubert v. Merrell Dow Pharmaceuticals 1993, which established more stringent criteria that federal judges must follow regarding the admissibility of scientific evidence in federal courtrooms. The authors note that in the Daubert decision, the Supreme Court modified the “Frye test”, removing the general acceptability criterion formerly required. In light of the Daubert ruling, the authors demonstrate the need for anthropologists to refine techniques and continue to develop biological profiling methods that will meet the rigorous Daubert standards. Anthropology is not alone among the forensic sciences that seek to refine methods and techniques. For example, forensic odontology has recently come under scrutiny in cases where defendants have been wrongfully convicted based on bite mark evidence (Saks and Koehler 2005). Additionally, Saks and Koehler also remark upon 86 DNA exoneration cases and note that 63% of these wrongful convictions are attributed to forensic science testing errors. Chapter Four takes a comprehensive look at the role of forensic anthropologists during death investigations. The authors note that “the participation of forensic anthropologists can be invaluable to the proper handling of the death scene” (p. 65). To this end, the chapter includes discussions of identifying remains of medicolegal and nonmedicolegal significance, jurisdiction issues, search strategies, and proper handling of evidence. 
Readers may find the detailed treatment of differentiating human from nonhuman material particularly useful. The following two chapters deal with developing a biological profile, and pathology and trauma. A detailed review of sex and age estimation for both juvenile and adult skeletal remains is provided, as well as an assessment of the estimation of ancestry and stature. A welcome discussion on scientific testing and the error rates of different methods is highlighted throughout their ‘reference’ packed discussion. In their critical review of biological profile development, Komar and Buikstra discuss the various estimation methods; they note that more recent techniques may need testing on additional skeletal samples to survive potential challenges under the Daubert ruling. We also are reminded that in forensic science, flawed methods may result in the false imprisonment of innocent persons, therefore an emphasis is placed on developing and refining techniques that improve both the accuracy and reliability of biological profile estimates. Students will find that the descriptions and discussions of the different categories of both pathology and trauma assessments are beneficial for understanding postmortem examinations. One also may find that the reviews of blunt and sharp force trauma, gunshot wounds, and fracture terminology are particularly useful. Komar and Buikstra continue their remarkable book with a chapter focusing on forensic taphonomy. They begin with an introduction and an outline of the goals of forensic taphonomy which includes time since death estimation, mechanisms of bone modification, and reconstructing perimortem events. The reader is drawn to the case studies that",
"title": ""
}
] | scidocsrr |
86ebeb55ab6917b38de09b1cfc566ec3 | Game User Experience Evaluation | [
{
"docid": "d362b36e0c971c43856a07b7af9055f3",
"text": "s (New York: ACM), pp. 1617 – 20. MASLOW, A.H., 1954,Motivation and personality (New York: Harper). MCDONAGH, D., HEKKERT, P., VAN ERP, J. and GYI, D. (Eds), 2003, Design and Emotion: The Experience of Everyday Things (London: Taylor & Francis). MILLARD, N., HOLE, L. and CROWLE, S., 1999, Smiling through: motivation at the user interface. In Proceedings of the HCI International’99, Volume 2 (pp. 824 – 8) (Mahwah, NJ, London: Lawrence Erlbaum Associates). NORMAN, D., 2004a, Emotional design: Why we love (or hate) everyday things (New York: Basic Books). NORMAN, D., 2004b, Introduction to this special section on beauty, goodness, and usability. Human Computer Interaction, 19, pp. 311 – 18. OVERBEEKE, C.J., DJAJADININGRAT, J.P., HUMMELS, C.C.M. and WENSVEEN, S.A.G., 2002, Beauty in Usability: Forget about ease of use! In Pleasure with products: Beyond usability, W. Green and P. Jordan (Eds), pp. 9 – 18 (London: Taylor & Francis). 96 M. Hassenzahl and N. Tractinsky D ow nl oa de d by [ M as se y U ni ve rs ity L ib ra ry ] at 2 1: 34 2 3 Ju ly 2 01 1 PICARD, R., 1997, Affective computing (Cambridge, MA: MIT Press). PICARD, R. and KLEIN, J., 2002, Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 14, pp. 141 – 69. POSTREL, V., 2002, The substance of style (New York: Harper Collins). SELIGMAN, M.E.P. and CSIKSZENTMIHALYI, M., 2000, Positive Psychology: An Introduction. American Psychologist, 55, pp. 5 – 14. SHELDON, K.M., ELLIOT, A.J., KIM, Y. and KASSER, T., 2001, What is satisfying about satisfying events? Testing 10 candidate psychological needs. Journal of Personality and Social Psychology, 80, pp. 325 – 39. SINGH, S.N. and DALAL, N.P., 1999, Web home pages as advertisements. Communications of the ACM, 42, pp. 91 – 8. SUH, E., DIENER, E. and FUJITA, F., 1996, Events and subjective well-being: Only recent events matter. Journal of Personality and Social Psychology,",
"title": ""
}
] | [
{
"docid": "a95328b8210e8c6fcd628cb48618ebee",
"text": "Separation of video clips into foreground and background components is a useful and important technique, making recognition, classification, and scene analysis more efficient. In this paper, we propose a motion-assisted matrix restoration (MAMR) model for foreground-background separation in video clips. In the proposed MAMR model, the backgrounds across frames are modeled by a low-rank matrix, while the foreground objects are modeled by a sparse matrix. To facilitate efficient foreground-background separation, a dense motion field is estimated for each frame, and mapped into a weighting matrix which indicates the likelihood that each pixel belongs to the background. Anchor frames are selected in the dense motion estimation to overcome the difficulty of detecting slowly moving objects and camouflages. In addition, we extend our model to a robust MAMR model against noise for practical applications. Evaluations on challenging datasets demonstrate that our method outperforms many other state-of-the-art methods, and is versatile for a wide range of surveillance videos.",
"title": ""
},
{
"docid": "42c890832d861ad2854fd1f56b13eb45",
"text": "We apply deep learning to the problem of discovery and detection of characteristic patterns of physiology in clinical time series data. We propose two novel modifications to standard neural net training that address challenges and exploit properties that are peculiar, if not exclusive, to medical data. First, we examine a general framework for using prior knowledge to regularize parameters in the topmost layers. This framework can leverage priors of any form, ranging from formal ontologies (e.g., ICD9 codes) to data-derived similarity. Second, we describe a scalable procedure for training a collection of neural networks of different sizes but with partially shared architectures. Both of these innovations are well-suited to medical applications, where available data are not yet Internet scale and have many sparse outputs (e.g., rare diagnoses) but which have exploitable structure (e.g., temporal order and relationships between labels). However, both techniques are sufficiently general to be applied to other problems and domains. We demonstrate the empirical efficacy of both techniques on two real-world hospital data sets and show that the resulting neural nets learn interpretable and clinically relevant features.",
"title": ""
},
{
"docid": "323c9caac8b04b1531071acf74eb189b",
"text": "Many electronic feedback systems have been proposed for writing support. However, most of these systems only aim at supporting writing to communicate instead of writing to learn, as in the case of literature review writing. Trigger questions are potentially forms of support for writing to learn, but current automatic question generation approaches focus on factual question generation for reading comprehension or vocabulary assessment. This article presents a novel Automatic Question Generation (AQG) system, called G-Asks, which generates specific trigger questions as a form of support for students' learning through writing. We conducted a large-scale case study, including 24 human supervisors and 33 research students, in an Engineering Research Method course and compared questions generated by G-Asks with human generated questions. The results indicate that G-Asks can generate questions as useful as human supervisors (‘useful’ is one of five question quality measures) while significantly outperforming Human Peer and Generic Questions in most quality measures after filtering out questions with grammatical and semantic errors. Furthermore, we identified the most frequent question types, derived from the human supervisors’ questions and discussed how the human supervisors generate such questions from the source text. General Terms: Automatic Question Generation, Natural Language Processing, Academic Writing Support",
"title": ""
},
{
"docid": "a9d948498c0ad0d99759636ea3ba4d1a",
"text": "Recently, Real Time Location Systems (RTLS) have been designed to provide location information of positioning target. The kernel of RTLS is localization algorithm, range-base localization algorithm is concerned as high precision. This paper introduces real-time range-based indoor localization algorithms, including Time of Arrival, Time Difference of Arrival, Received Signal Strength Indication, Time of Flight, and Symmetrical Double Sided Two Way Ranging. Evaluation criteria are proposed for assessing these algorithms, namely positioning accuracy, scale, cost, energy efficiency, and security. We also introduce the latest some solution, compare their Strengths and weaknesses. Finally, we give a recommendation about selecting algorithm from the viewpoint of the practical application need.",
"title": ""
},
{
"docid": "b8df66893d35839f1f4acec9c74467ad",
"text": "This paper presents the development of control circuit for single phase inverter using Atmel microcontroller. The attractiveness of this configuration is the elimination of a microcontroller to generate sinusoidal pulse width modulation (SPWM) pulses. The Atmel microcontroller is able to store all the commands to generate the necessary waveforms to control the frequency of the inverter through proper design of switching pulse. In this paper concept of the single phase inverter and it relation with the microcontroller is reviewed first. Subsequently approach and methods and dead time control are discussed. Finally simulation results and experimental results are discussed.",
"title": ""
},
{
"docid": "b2120881f15885cdb610d231f514bc9f",
"text": "In this work we do an analysis of Bitcoin’s price and volatility. Particularly, we look at Granger-causation relationships among the pairs of time series: Bitcoin price and the S&P 500, Bitcoin price and the VIX, Bitcoin realized volatility and the S&P 500, and Bitcoin realized volatility and the VIX. Additionally, we explored the relationship between Bitcoin weekly price and public enthusiasm for Blockchain, the technology behind Bitcoin, as measured by Google Trends data. we explore the Granger-causality relationships between Bitcoin weekly price and Blockchain Google Trend time series. We conclude that there exists a bidirectional Granger-causality relationship between Bitcoin realized volatility and the VIX at the 5% significance level, that we cannot reject the hypothesis that Bitcoin weekly price do not Granger-causes Blockchain trends and that we cannot reject the hypothesis that Bitcoin realized volatility do not Granger-causes S&P 500.",
"title": ""
},
{
"docid": "a2a77d422bbc8073390d6008978303a0",
"text": "As computing becomes more pervasive, the nature of applications must change accordingly. In particular, applications must become more flexible in order to respond to highly dynamic computing environments, and more autonomous, to reflect the growing ratio of applications to users and the corresponding decline in the attention a user can devote to each. That is, applications must become more context-aware. To facilitate the programming of such applications, infrastructure is required to gather, manage, and disseminate context information to applications. This paper is concerned with the development of appropriate context modeling concepts for pervasive computing, which can form the basis for such a context management infrastructure. This model overcomes problems associated with previous context models, including their lack of formality and generality, and also tackles issues such as wide variations in information quality, the existence of complex relationships amongst context information and temporal aspects of context.",
"title": ""
},
{
"docid": "309080fa2ef4f959951c08527ec1980d",
"text": "Complete scene understanding has been an aspiration of computer vision since its very early days. It has applications in autonomous navigation, aerial imaging, surveillance, human-computer interaction among several other active areas of research. While many methods since the advent of deep learning have taken performance in several scene understanding tasks to respectable levels, the tasks are far from being solved. One problem that plagues scene understanding is low-resolution. Convolutional Neural Networks that achieve impressive results on high resolution struggle when confronted with low resolution because of the inability to learn hierarchical features and weakening of signal with depth. In this thesis, we study the low resolution and suggest approaches that can overcome its consequences on three popular tasks object detection, in-the-wild face recognition, and semantic segmentation. The popular object detectors were designed for, trained, and benchmarked on datasets that have a strong bias towards medium and large sized objects. When these methods are finetuned and tested on a dataset of small objects, they perform miserably. The most successful detection algorithms follow a two-stage pipeline: the first which quickly generates regions of interest that are likely to contain the object and the second, which classifies these proposal regions. We aim to adapt both these stages for the case of small objects; the first by modifying anchor box generation based on theoretical considerations, and the second using a simple-yet-effective super-resolution step. Motivated by the success of being able to detect small objects, we study the problem of detecting and recognising objects with huge variations in resolution, in the problem of face recognition in semistructured scenes. Semi-structured scenes like social settings are more challenging than regular ones: there are several more faces of vastly different scales, there are large variations in illumination, pose and expression, and the existing datasets do not capture these variations. We address the unique challenges in this setting by (i) benchmarking popular methods for the problem of face detection, and (ii) proposing a method based on resolution-specific networks to handle different scales. Semantic segmentation is a more challenging localisation task where the goal is to assign a semantic class label to every pixel in the image. Solving such a problem is crucial for self-driving cars where we need sharper boundaries for roads, obstacles and paraphernalia. For want of a higher receptive field and a more global view of the image, CNN networks forgo resolution. This results in poor segmentation of complex boundaries, small and thin objects. We propose prefixing a super-resolution step before semantic segmentation. Through experiments, we show that a performance boost can be obtained on the popular streetview segmentation dataset, CityScapes.",
"title": ""
},
{
"docid": "87199b3e7def1db3159dc6b5989638aa",
"text": "We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose two classes of data driven models in the Deterministic Fashion Recommenders (DFR) and Stochastic Fashion Recommenders (SFR) for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. The industrial applicability of proposed models is in the context of mobile fashion shopping. Finally, we also outline a large-scale annotated data set of fashion images Fashion-136K) that can be exploited for future research in data driven visual fashion.",
"title": ""
},
{
"docid": "0879399fcb38c103a0e574d6d9010215",
"text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.",
"title": ""
},
{
"docid": "bf5874dc1fc1c968d7c41eb573d8d04a",
"text": "As creativity is increasingly recognised as a vital component of entrepreneurship, researchers and educators struggle to reform enterprise pedagogy. To help in this effort, we use a personality test and open-ended interviews to explore creativity between two groups of entrepreneurship masters’ students: one at a business school and one at an engineering school. The findings indicate that both groups had high creative potential, but that engineering students channelled this into practical and incremental efforts whereas the business students were more speculative and had a clearer market focus. The findings are drawn on to make some suggestions for entrepreneurship education.",
"title": ""
},
{
"docid": "f810dbe1e656fe984b4b6498c1c27bcb",
"text": "Information-maximization clustering learns a probabilistic classifier in an unsupervised manner so that mutual information between feature vectors and cluster assignments is maximized. A notable advantage of this approach is that it involves only continuous optimization of model parameters, which is substantially simpler than discrete optimization of cluster assignments. However, existing methods still involve nonconvex optimization problems, and therefore finding a good local optimal solution is not straightforward in practice. In this letter, we propose an alternative information-maximization clustering method based on a squared-loss variant of mutual information. This novel approach gives a clustering solution analytically in a computationally efficient way via kernel eigenvalue decomposition. Furthermore, we provide a practical model selection procedure that allows us to objectively optimize tuning parameters included in the kernel function. Through experiments, we demonstrate the usefulness of the proposed approach.",
"title": ""
},
{
"docid": "5eb9c6540de63be3e7c645286f263b4d",
"text": "Inductive Power Transfer (IPT) is a practical method for recharging Electric Vehicles (EVs) because is it safe, efficient and convenient. Couplers or Power Pads are the power transmitters and receivers used with such contactless charging systems. Due to improvements in power electronic components, the performance and efficiency of an IPT system is largely determined by the coupling or flux linkage between these pads. Conventional couplers are based on circular pad designs and due to their geometry have fundamentally limited magnetic flux above the pad. This results in poor coupling at any realistic spacing between the ground pad and the vehicle pickup mounted on the chassis. Performance, when added to the high tolerance to misalignment required for a practical EV charging system, necessarily results in circular pads that are large, heavy and expensive. A new pad topology termed a flux pipe is proposed in this paper that overcomes difficulties associated with conventional circular pads. Due to the magnetic structure, the topology has a significantly improved flux path making more efficient and compact IPT charging systems possible.",
"title": ""
},
{
"docid": "2f185de66075fcba898afc052c820d98",
"text": "Owing to the complexity of the photovoltaic system structure and their environment, especially under the partial shadows environment, the output characteristics of photovoltaic arrays are greatly affected. Under the partial shadows environment, power-voltage (P-V) characteristics curve is of multi-peak. This makes that it is a difficult task to track the actual maximum power point. In addition, most programs are not able to get the maximum power point under these conditions. In this paper, we study the P-V curves under both uniform illumination and partial shadows environments, and then design an algorithm to track the maximum power point and select the strategy to deal with the MPPT algorithm by DSP chips and DC-DC converters. It is simple and easy to allow solar panels to maintain the best solar energy utilization resulting in increasing output at all times. Meanwhile, in order to track local peak point and improve the tracking speed, the algorithm proposed DC-DC converters operating feed-forward control scheme. Compared with the conventional controller, this controller costs much less time. This paper focuses mainly on specific processes of the algorithm, and being the follow-up basis for implementation of control strategies.",
"title": ""
},
{
"docid": "26f2b200bf22006ab54051c9288420e8",
"text": "Emotion keyword spotting approach can detect emotion well for explicit emotional contents while it obviously cannot compare to supervised learning approaches for detecting emotional contents of particular events. In this paper, we target earthquake situations in Japan as the particular events for emotion analysis because the affected people often show their states and emotions towards the situations via social networking sites. Additionally, tracking crowd emotions in the Internet during the earthquakes can help authorities to quickly decide appropriate assistance policies without paying the cost as the traditional public surveys. Our three main contributions in this paper are: a) the appropriate choice of emotions; b) the novel proposal of two classification methods for determining the earthquake related tweets and automatically identifying the emotions in Twitter; c) tracking crowd emotions during different earthquake situations, a completely new application of emotion analysis research. Our main analysis results show that Twitter users show their Fear and Anxiety right after the earthquakes occurred while Calm and Unpleasantness are not showed clearly during the small earthquakes but in the large tremor.",
"title": ""
},
{
"docid": "ec5aac01866a1e4ca3f4e906990d5d8e",
"text": "But, as we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or in management technique, that by itself promises even one orderof-magnitude improvement in productivity, in reliability, in simplicity. In this article, I shall try to show why, by examining both the nature of the software problem and the properties of the bullets proposed.",
"title": ""
},
{
"docid": "0745755e5347c370cdfbeca44dc6d288",
"text": "For many decades correlation and power spectrum have been primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence; which is sufficient for complete statistical descriptions of Gaussian signals of known means. However, there are practical situations where one needs to look beyond autocorrelation of a signal to extract information regarding deviation from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in the signal. Most of the biomedical signals are non-linear, non-stationary and non-Gaussian in nature and therefore it can be more advantageous to analyze them with HOS compared to the use of second-order correlations and power spectra. In this paper we have discussed the application of HOS for different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal and applications to other signals are reviewed.",
"title": ""
},
{
"docid": "d70a4fb982aeb2bd502519fb0a7d5c7b",
"text": "We introduce a notion of algorithmic stability of learning algorithms—that we term hypothesis stability—that captures stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. e main result of the paper bounds the generalization error of any learning algorithm in terms of its hypothesis stability. e bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to bound the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent. Parts of the work were done when Tongliang Liu was a visiting PhD student at Pompeu Fabra University. School of Information Technologies, Faculty Engineering and Information Technologies, University of Sydney, Sydney, Australia, [email protected], [email protected] Department of Economics and Business, Pompeu Fabra University, Barcelona, Spain, [email protected] ICREA, Pg. Llus Companys 23, 08010 Barcelona, Spain Barcelona Graduate School of Economics AI group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain, [email protected] 1",
"title": ""
},
{
"docid": "e7f668483c8c0d1fbf6ef2c208e1a225",
"text": "A new capacitive pressure sensor with very large dynamic range is introduced. The sensor is based on a new technique for substantially changing the surface area of the electrodes, rather than the inter-electrode spacing as commonly done at the present. The prototype device has demonstrated a change in capacitance of approximately 2500 pF over a pressure range of 10 kPa.",
"title": ""
}
] | scidocsrr |
99854865f3b0c56939d67e168eb9d2ec | Name usage pattern in the synonym ambiguity problem in bibliographic data | [
{
"docid": "ce6f27561060d7119a82f9e69a089785",
"text": "Name disambiguation can occur when one is seeking a list of publications of an author who has used different name variations and when there are multiple other authors with the same name. We present an efficient integrative machine learning framework for solving the name disambiguation problem: a blocking method retrieves candidate classes of authors with similar names and a clustering method, DBSCAN, clusters papers by author. The distance metric between papers used in DBSCAN is calculated by an online active selection support vector machine algorithm (LASVM), yielding a simpler model, lower test errors and faster prediction time than a standard SVM. We prove that by recasting transitivity as density reachability in DBSCAN, transitivity is guaranteed for core points. For evaluation, we manually annotated 3,355 papers yielding 490 authors and achieved 90.6% pairwise-F1 metric. For scalability, authors in the entire CiteSeer dataset, over 700,000 papers, were readily disambiguated.",
"title": ""
},
{
"docid": "02d8ad18b07d08084764d124dc74a94c",
"text": "The large number of potential applications from bridging web data with knowledge bases have led to an increase in the entity linking research. Entity linking is the task to link entity mentions in text with their corresponding entities in a knowledge base. Potential applications include information extraction, information retrieval, and knowledge base population. However, this task is challenging due to name variations and entity ambiguity. In this survey, we present a thorough overview and analysis of the main approaches to entity linking, and discuss various applications, the evaluation of entity linking systems, and future directions.",
"title": ""
},
{
"docid": "7f57322b6e998d629d1a67cd5fb28da9",
"text": "Background: We recently described “Author-ity,” a model for estimating the probability that two articles in MEDLINE, sharing the same author name, were written by the same individual. Features include shared title words, journal name, coauthors, medical subject headings, language, affiliations, and author name features (middle initial, suffix, and prevalence in MEDLINE). Here we test the hypothesis that the Author-ity model will suffice to disambiguate author names for the vast majority of articles in MEDLINE. Methods: Enhancements include: (a) incorporating first names and their variants, email addresses, and correlations between specific last names and affiliation words; (b) new methods of generating large unbiased training sets; (c) new methods for estimating the prior probability; (d) a weighted least squares algorithm for correcting transitivity violations; and (e) a maximum likelihood based agglomerative algorithm for computing clusters of articles that represent inferred author-individuals. Results: Pairwise comparisons were computed for all author names on all 15.3 million articles in MEDLINE (2006 baseline), that share last name and first initial, to create Author-ity 2006, a database that has each name on each article assigned to one of 6.7 million inferred author-individual clusters. Recall is estimated at ∼98.8%. Lumping (putting two different individuals into the same cluster) affects ∼0.5% of clusters, whereas splitting (assigning articles written by the same individual to >1 cluster) affects ∼2% of articles. Impact: The Author-ity model can be applied generally to other bibliographic databases. Author name disambiguation allows information retrieval and data integration to become person-centered, not just document-centered, setting the stage for new data mining and social network tools that will facilitate the analysis of scholarly publishing and collaboration behavior. Availability: The Author-ity 2006 database is available for nonprofit academic research, and can be freely queried via http://arrowsmith.psych.uic.edu.",
"title": ""
},
{
"docid": "9c3218ce94172fd534e2a70224ee564f",
"text": "Author ambiguity mainly arises when several different authors express their names in the same way, generally known as the namesake problem, and also when the name of an author is expressed in many different ways, referred to as the heteronymous name problem. These author ambiguity problems have long been an obstacle to efficient information retrieval in digital libraries, causing incorrect identification of authors and impeding correct classification of their publications. It is a nontrivial task to distinguish those authors, especially when there is very limited information about them. In this paper, we propose a graph based approach to author name disambiguation, where a graph model is constructed using the co-author relations, and author ambiguity is resolved by graph operations such as vertex (or node) splitting and merging based on the co-authorship. In our framework, called a Graph Framework for Author Disambiguation (GFAD), the namesake problem is solved by splitting an author vertex involved in multiple cycles of co-authorship, and the heteronymous name problem is handled by merging multiple author vertices having similar names if those vertices are connected to a common vertex. Experiments were carried out with the real DBLP and Arnetminer collections and the performance of GFAD is compared with three representative unsupervised author name disambiguation systems. We confirm that GFAD shows better overall performance from the perspective of representative evaluation metrics. An additional contribution is that we released the refined DBLP collection to the public to facilitate organizing a performance benchmark for future systems on author disambiguation.",
"title": ""
},
{
"docid": "2d05142e12f63a354ec0c48436cd3697",
"text": "Author Name Disambiguation Neil R. Smalheiser and Vetle I. Torvik",
"title": ""
}
] | [
{
"docid": "90907753fd2c69c97088d333079fbb56",
"text": "This paper concerns the problem of pose estimation for an inertial-visual sensor. It is well known that IMU bias, and calibration errors between camera and IMU frames can impair the achievement of high-quality estimates through the fusion of visual and inertial data. The main contribution of this work is the design of new observers to estimate pose, IMU bias and camera-to-IMU rotation. The observers design relies on an extension of the so-called passive complementary filter on SO(3). Stability of the observers is established using Lyapunov functions under adequate observability conditions. Experimental results are presented to assess this approach.",
"title": ""
},
{
"docid": "f2af256af6a405a3b223abc5d9a276ac",
"text": "Traditional execution environments deploy Address Space Layout Randomization (ASLR) to defend against memory corruption attacks. However, Intel Software Guard Extension (SGX), a new trusted execution environment designed to serve security-critical applications on the cloud, lacks such an effective, well-studied feature. In fact, we find that applying ASLR to SGX programs raises non-trivial issues beyond simple engineering for a number of reasons: 1) SGX is designed to defeat a stronger adversary than the traditional model, which requires the address space layout to be hidden from the kernel; 2) the limited memory uses in SGX programs present a new challenge in providing a sufficient degree of entropy; 3) remote attestation conflicts with the dynamic relocation required for ASLR; and 4) the SGX specification relies on known and fixed addresses for key data structures that cannot be randomized. This paper presents SGX-Shield, a new ASLR scheme designed for SGX environments. SGX-Shield is built on a secure in-enclave loader to secretly bootstrap the memory space layout with a finer-grained randomization. To be compatible with SGX hardware (e.g., remote attestation, fixed addresses), SGX-Shield is designed with a software-based data execution protection mechanism through an LLVM-based compiler. We implement SGX-Shield and thoroughly evaluate it on real SGX hardware. It shows a high degree of randomness in memory layouts and stops memory corruption attacks with a high probability. SGX-Shield shows 7.61% performance overhead in running common microbenchmarks and 2.25% overhead in running a more realistic workload of an HTTPS server.",
"title": ""
},
{
"docid": "d967d6525cf88d498ecc872a9eef1c7c",
"text": "Historical Chinese character recognition has been suffering from the problem of lacking sufficient labeled training samples. A transfer learning method based on Convolutional Neural Network (CNN) for historical Chinese character recognition is proposed in this paper. A CNN model L is trained by printed Chinese character samples in the source domain. The network structure and weights of model L are used to initialize another CNN model T, which is regarded as the feature extractor and classifier in the target domain. The model T is then fine-tuned by a few labeled historical or handwritten Chinese character samples, and used for final evaluation in the target domain. Several experiments regarding essential factors of the CNNbased transfer learning method are conducted, showing that the proposed method is effective.",
"title": ""
},
{
"docid": "a1367b21acfebfe35edf541cdc6e3f48",
"text": "Mobile phone sensing is an emerging area of interest for researchers as smart phones are becoming the core communication device in people's everyday lives. Sensor enabled mobile phones or smart phones are hovering to be at the center of a next revolution in social networks, green applications, global environmental monitoring, personal and community healthcare, sensor augmented gaming, virtual reality and smart transportation systems. More and more organizations and people are discovering how mobile phones can be used for social impact, including how to use mobile technology for environmental protection, sensing, and to leverage just-in-time information to make our movements and actions more environmentally friendly. In this paper we have described comprehensively all those systems which are using smart phones and mobile phone sensors for humans good will and better human phone interaction.",
"title": ""
},
{
"docid": "96d44888850cf1940fb3a9e35c01f782",
"text": "This article investigates whether, and how, an artificial intelligence (AI) system can be said to use visual, imagery-based representations in a way that is analogous to the use of visual mental imagery by people. In particular, this article aims to answer two fundamental questions about imagery-based AI systems. First, what might visual imagery look like in an AI system, in terms of the internal representations used by the system to store and reason about knowledge? Second, what kinds of intelligent tasks would an imagery-based AI system be able to accomplish? The first question is answered by providing a working definition of what constitutes an imagery-based knowledge representation, and the second question is answered through a literature survey of imagery-based AI systems that have been developed over the past several decades of AI research, spanning task domains of: 1) template-based visual search; 2) spatial and diagrammatic reasoning; 3) geometric analogies and matrix reasoning; 4) naive physics; and 5) commonsense reasoning for question answering. This article concludes by discussing three important open research questions in the study of visual-imagery-based AI systems-on evaluating system performance, learning imagery operators, and representing abstract concepts-and their implications for understanding human visual mental imagery.",
"title": ""
},
{
"docid": "1e5f80dd831b5a1e373a9779f77ca373",
"text": "Direct volume rendered images (DVRIs) have been widely used to reveal structures in volumetric data. However, DVRIs generated by many volume visualization techniques can only partially satisfy users' demands. In this paper, we propose a framework for editing DVRIs, which can also be used for interactive transfer function (TF) design. Our approach allows users to fuse multiple features in distinct DVRIs into a comprehensive one, to blend two DVRIs, and/or to delete features in a DVRI. We further present how these editing operations can generate smooth animations for focus + context visualization. Experimental results on some real volumetric data demonstrate the effectiveness of our method.",
"title": ""
},
{
"docid": "a2fd33f276a336e2a33d84c2a0abc283",
"text": "The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. We continue our work in TREC 3, performing runs in the routing, ad-hoc, and foreign language environments. Our major focus is massive query expansion: adding from 300 to 530 terms to each query. These terms come from known relevant documents in the case of routing, and from just the top retrieved documents in the case of ad-hoc and Spanish. This approach improves e ectiveness from 7% to 25% in the various experiments. Other ad-hoc work extends our investigations into combining global similarities, giving an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document which matches the query. Using an overlapping text window de nition of \\local\", we achieve a 16% improvement.",
"title": ""
},
{
"docid": "614539c43d5fa2986b9aab3a2562fd85",
"text": "Mobile devices such as smart phones are becoming popular, and realtime access to multimedia data in different environments is getting easier. With properly equipped communication services, users can easily obtain the widely distributed videos, music, and documents they want. Because of its usability and capacity requirements, music is more popular than other types of multimedia data. Documents and videos are difficult to view on mobile phones' small screens, and videos' large data size results in high overhead for retrieval. But advanced compression techniques for music reduce the required storage space significantly and make the circulation of music data easier. This means that users can capture their favorite music directly from the Web without going to music stores. Accordingly, helping users find music they like in a large archive has become an attractive but challenging issue over the past few years.",
"title": ""
},
{
"docid": "7e10aa210d6985d757a21b8b6c49ae53",
"text": "Haptic devices for computers and video-game consoles aim to reproduce touch and to engage the user with `force feedback'. Although physical touch is often associated with proximity and intimacy, technologies of touch can reproduce such sensations over a distance, allowing intricate and detailed operations to be conducted through a network such as the Internet. The `virtual handshake' between Boston and London in 2002 is given as an example. This paper is therefore a critical investigation into some technologies of touch, leading to observations about the sociospatial framework in which this technological touching takes place. Haptic devices have now become routinely included with videogame consoles, and have started to be used in computer-aided design and manufacture, medical simulation, and even the cybersex industry. The implications of these new technologies are enormous, as they remould the human ^ computer interface from being primarily audiovisual to being more truly multisensory, and thereby enhance the sense of `presence' or immersion. But the main thrust of this paper is the development of ideas of presence over a large distance, and how this is enhanced by the sense of touch. By using the results of empirical research, including interviews with key figures in haptics research and engineering and personal experience of some of the haptic technologies available, I build up a picture of how `presence', `copresence', and `immersion', themselves paradoxically intangible properties, are guiding the design, marketing, and application of haptic devices, and the engendering and engineering of a set of feelings of interacting with virtual objects, across a range of distances. DOI:10.1068/d394t",
"title": ""
},
{
"docid": "023d547ffb283a377635ad12be9cac99",
"text": "Pretend play has recently been of great interest to researchers studying children's understanding of the mind. One reason for this interest is that pretense seems to require many of the same skills as mental state understanding, and these skills seem to emerge precociously in pretense. Pretend play might be a zone of proximal development, an activity in which children operate at a cognitive level higher than they operate at in nonpretense situations. Alternatively, pretend play might be fool's gold, in that it might appear to be more sophisticated than it really is. This paper first discusses what pretend play is. It then investigates whether pretend play is an area of advanced understanding with reference to 3 skills that are implicated in both pretend play and a theory of mind: the ability to represent one object as two things at once, the ability to see one object as representing another, and the ability to represent mental representations.",
"title": ""
},
{
"docid": "ea04dad2ac1de160f78fa79b33a93b6a",
"text": "OBJECTIVE\nTo construct new size charts for all fetal limb bones.\n\n\nDESIGN\nA prospective, cross sectional study.\n\n\nSETTING\nUltrasound department of a large hospital.\n\n\nSAMPLE\n663 fetuses scanned once only for the purpose of the study at gestations between 12 and 42 weeks.\n\n\nMETHODS\nCentiles were estimated by combining separate regression models fitted to the mean and standard deviation, assuming that the measurements have a normal distribution at each gestational age.\n\n\nMAIN OUTCOME MEASURES\nDetermination of fetal limb lengths from 12 to 42 weeks of gestation.\n\n\nRESULTS\nSize charts for fetal bones (radius, ulna, humerus, tibia, fibula, femur and foot) are presented and compared with previously published data.\n\n\nCONCLUSIONS\nWe present new size charts for fetal limb bones which take into consideration the increasing variability with gestational age. We have compared these charts with other published data; the differences seen may be largely due to methodological differences. As standards for fetal head and abdominal measurements have been published from the same population, we suggest that the use of the new charts may facilitate prenatal diagnosis of skeletal dysplasias.",
"title": ""
},
{
"docid": "3fe5ea7769bfd7e7ea0adcb9ae497dcf",
"text": "Working memory emerges in infancy and plays a privileged role in subsequent adaptive cognitive development. The neural networks important for the development of working memory during infancy remain unknown. We used diffusion tensor imaging (DTI) and deterministic fiber tracking to characterize the microstructure of white matter fiber bundles hypothesized to support working memory in 12-month-old infants (n=73). Here we show robust associations between infants' visuospatial working memory performance and microstructural characteristics of widespread white matter. Significant associations were found for white matter tracts that connect brain regions known to support working memory in older children and adults (genu, anterior and superior thalamic radiations, anterior cingulum, arcuate fasciculus, and the temporal-parietal segment). Better working memory scores were associated with higher FA and lower RD values in these selected white matter tracts. These tract-specific brain-behavior relationships accounted for a significant amount of individual variation above and beyond infants' gestational age and developmental level, as measured with the Mullen Scales of Early Learning. Working memory was not associated with global measures of brain volume, as expected, and few associations were found between working memory and control white matter tracts. To our knowledge, this study is among the first demonstrations of brain-behavior associations in infants using quantitative tractography. The ability to characterize subtle individual differences in infant brain development associated with complex cognitive functions holds promise for improving our understanding of normative development, biomarkers of risk, experience-dependent learning and neuro-cognitive periods of developmental plasticity.",
"title": ""
},
{
"docid": "a5cee6dc248da019159ba7d769406928",
"text": "Coffee is one of the most consumed beverages in the world and is the second largest traded commodity after petroleum. Due to the great demand of this product, large amounts of residues are generated in the coffee industry, which are toxic and represent serious environmental problems. Coffee silverskin and spent coffee grounds are the main coffee industry residues, obtained during the beans roasting, and the process to prepare “instant coffee”, respectively. Recently, some attempts have been made to use these residues for energy or value-added compounds production, as strategies to reduce their toxicity levels, while adding value to them. The present article provides an overview regarding coffee and its main industrial residues. In a first part, the composition of beans and their processing, as well as data about the coffee world production and exportation, are presented. In the sequence, the characteristics, chemical composition, and application of the main coffee industry residues are reviewed. Based on these data, it was concluded that coffee may be considered as one of the most valuable primary products in world trade, crucial to the economies and politics of many developing countries since its cultivation, processing, trading, transportation, and marketing provide employment for millions of people. As a consequence of this big market, the reuse of the main coffee industry residues is of large importance from environmental and economical viewpoints.",
"title": ""
},
{
"docid": "add6957a74f1df33e21bf1923732ddc4",
"text": "Conversational search and recommendation based on user-system dialogs exhibit major differences from conventional search and recommendation tasks in that 1) the user and system can interact for multiple semantically coherent rounds on a task through natural language dialog, and 2) it becomes possible for the system to understand the user needs or to help users clarify their needs by asking appropriate questions from the users directly. We believe the ability to ask questions so as to actively clarify the user needs is one of the most important advantages of conversational search and recommendation. In this paper, we propose and evaluate a unified conversational search/recommendation framework, in an attempt to make the research problem doable under a standard formalization. Specifically, we propose a System Ask -- User Respond (SAUR) paradigm for conversational search, define the major components of the paradigm, and design a unified implementation of the framework for product search and recommendation in e-commerce. To accomplish this, we propose the Multi-Memory Network (MMN) architecture, which can be trained based on large-scale collections of user reviews in e-commerce. The system is capable of asking aspect-based questions in the right order so as to understand the user needs, while (personalized) search is conducted during the conversation, and results are provided when the system feels confident. Experiments on real-world user purchasing data verified the advantages of conversational search and recommendation against conventional search and recommendation algorithms in terms of standard evaluation measures such as NDCG.",
"title": ""
},
{
"docid": "66127055aff890d3f3f9d40bd1875980",
"text": "A simple, but comprehensive model of heat transfer and solidification of the continuous casting of steel slabs is described, including phenomena in the mold and spray regions. The model includes a one-dimensional (1-D) transient finite-difference calculation of heat conduction within the solidifying steel shell coupled with two-dimensional (2-D) steady-state heat conduction within the mold wall. The model features a detailed treatment of the interfacial gap between the shell and mold, including mass and momentum balances on the solid and liquid interfacial slag layers, and the effect of oscillation marks. The model predicts the shell thickness, temperature distributions in the mold and shell, thickness of the resolidified and liquid powder layers, heat-flux profiles down the wide and narrow faces, mold water temperature rise, ideal taper of the mold walls, and other related phenomena. The important effect of the nonuniform distribution of superheat is incorporated using the results from previous threedimensional (3-D) turbulent fluid-flow calculations within the liquid pool. The FORTRAN program CONID has a user-friendly interface and executes in less than 1 minute on a personal computer. Calibration of the model with several different experimental measurements on operating slab casters is presented along with several example applications. In particular, the model demonstrates that the increase in heat flux throughout the mold at higher casting speeds is caused by two combined effects: a thinner interfacial gap near the top of the mold and a thinner shell toward the bottom. This modeling tool can be applied to a wide range of practical problems in continuous casters.",
"title": ""
},
{
"docid": "c3f4f7d75c1b5cfd713ad7a10c887a3a",
"text": "This paper presents an open-source diarization toolkit which is mostly dedicated to speaker and developed by the LIUM. This toolkit includes hierarchical agglomerative clustering methods using well-known measures such as BIC and CLR. Two applications for which the toolkit has been used are presented: one is for broadcast news using the ESTER 2 data and the other is for telephone conversations using the MEDIA corpus.",
"title": ""
},
{
"docid": "e7f54e013e8e0de9cd5fde903dbac813",
"text": "Three concurrent public health problems coexist in the United States: endemic nonmedical use/misuse of opioid analgesics, epidemic overdose fatalities involving opioid analgesics, and endemic chronic pain in adults. These intertwined issues comprise an opioid crisis that has spurred the development of formulations of opioids with abuse-deterrent properties and label claims (OADP). To reduce abuse and misuse of prescription opioids, the federal Food and Drug Administration (FDA) has issued a formal Guidance to drug developers that delineates four categories of testing to generate data sufficient for a description of a product's abuse-deterrent properties, along with associated claims, in its Full Prescribing Information (FPI). This article reviews the epidemiology of the crisis as background for the development of OADP, summarizes the FDA Guidance for Industry regarding abuse-deterrent technologies, and provides an overview of some technologies that are currently employed or are under study for incorporation into OADP. Such technologies include physical and chemical barriers to abuse, combined formulations of opioid agonists and antagonists, inclusion of aversive agents, use of delivery systems that deter abuse, development of new molecular entities and prodrugs, and formulation of products that include some combination of these approaches. Opioids employing these novel technologies are one part of a comprehensive intervention strategy that can deter abuse of prescription opioid analgesics without creating barriers to the safe use of prescription opioids. The maximal public health contribution of OADP will probably occur only when all opioids have FDA-recognized abuse-deterrent properties and label claims.",
"title": ""
},
{
"docid": "fd1b32615aa7eb8f153e495d831bdd93",
"text": "The culture movement challenged the universality of the self-enhancement motive by proposing that the motive is pervasive in individualistic cultures (the West) but absent in collectivistic cultures (the East). The present research posited that Westerners and Easterners use different tactics to achieve the same goal: positive self-regard. Study 1 tested participants from differing cultural backgrounds (the United States vs. Japan), and Study 2 tested participants of differing self-construals (independent vs. interdependent). Americans and independents self-enhanced on individualistic attributes, whereas Japanese and interdependents self-enhanced on collectivistic attributes. Independents regarded individualistic attributes, whereas interdependents regarded collectivistic attributes, as personally important. Attribute importance mediated self-enhancement. Regardless of cultural background or self-construal, people self-enhance on personally important dimensions. Self-enhancement is a universal human motive.",
"title": ""
},
{
"docid": "cca4bd7bf4d9d00a4cf19bf2be785366",
"text": "Sometimes information systems fail or have operational and communication problems because designers may not have knowledge of the domain which is intended to be modeled. The same happens with systems for monitoring. Thus, an ontological model is needed to represent the organizational domain, which is intended to be monitored in order to develop an effective monitoring system. In this way, the purpose of the paper is to present a database based on Enterprise Ontology, which represents and specifies organizational transactions, aiming to be a repository of references or models of organizational transaction executions. Therefore, this database intends to be a generic risk profiles repository of organizational transactions for monitoring applications. Moreover, the Risk Profiles Repository presented in this paper is an innovative vision about continuous monitoring and has demonstrated to be a powerful tool for technological representations of organizational transactions and processes in compliance with the formalisms of a business ontological model.",
"title": ""
},
{
"docid": "b40bbfc19072efc645e5f1d6fb1d89e7",
"text": "With the development of information technologies, a great amount of semantic data is being generated on the web. Consequently, finding efficient ways of accessing this data becomes more and more important. Question answering is a good compromise between intuitiveness and expressivity, which has attracted the attention of researchers from different communities. In this paper, we propose an intelligent questing answering system for answering questions about concepts. It is based on ConceptRDF, which is an RDF presentation of the ConceptNet knowledge base. We use it as a knowledge base for answering questions. Our experimental results show that our approach is promising: it can answer questions about concepts at a satisfactory level of accuracy (reaches 94.5%).",
"title": ""
}
] | scidocsrr |
6b86364641ab8e2bb17cb12913780e8c | Time for a paradigm change in meniscal repair: save the meniscus! | [
{
"docid": "c8c82af8fc9ca5e0adac5b8b6a14031d",
"text": "PURPOSE\nTo systematically review the results of arthroscopic transtibial pullout repair (ATPR) for posterior medial meniscus root tears.\n\n\nMETHODS\nA systematic electronic search of the PubMed database and the Cochrane Library was performed in September 2014 to identify studies that reported clinical, radiographic, or second-look arthroscopic outcomes of ATPR for posterior medial meniscus root tears. Included studies were abstracted regarding study characteristics, patient demographic characteristics, surgical technique, rehabilitation, and outcome measures. The methodologic quality of the included studies was assessed with the modified Coleman Methodology Score.\n\n\nRESULTS\nSeven studies with a total of 172 patients met the inclusion criteria. The mean patient age was 55.3 years, and 83% of patients were female patients. Preoperative and postoperative Lysholm scores were reported for all patients. After a mean follow-up period of 30.2 months, the Lysholm score increased from 52.4 preoperatively to 85.9 postoperatively. On conventional radiographs, 64 of 76 patients (84%) showed no progression of Kellgren-Lawrence grading. Magnetic resonance imaging showed no progression of cartilage degeneration in 84 of 103 patients (82%) and showed reduced medial meniscal extrusion in 34 of 61 patients (56%). On the basis of second-look arthroscopy and magnetic resonance imaging in 137 patients, the healing status was rated as complete in 62%, partial in 34%, and failed in 3%. Overall, the methodologic quality of the included studies was fair, with a mean modified Coleman Methodology Score of 63.\n\n\nCONCLUSIONS\nATPR significantly improves functional outcome scores and seems to prevent the progression of osteoarthritis in most patients, at least during a short-term follow-up. Complete healing of the repaired root and reduction of meniscal extrusion seem to be less predictable, being observed in only about 60% of patients. Conclusions about the progression of osteoarthritis and reduction of meniscal extrusion are limited by the small portion of patients undergoing specific evaluation (44% and 35% of the study group, respectively).\n\n\nLEVEL OF EVIDENCE\nLevel IV, systematic review of Level III and IV studies.",
"title": ""
}
] | [
{
"docid": "483881d2c4ab6b25b019bdf1ebd75913",
"text": "Copyright: © 2018 The Author(s) Abstract. In the last few years, leading-edge research from information systems, strategic management, and economics have separately informed our understanding of platforms and infrastructures in the digital age. Our motivation for undertaking this special issue rests in the conviction that it is significant to discuss platforms and infrastructures concomitantly, while enabling knowledge from diverse disciplines to cross-pollinate to address critical, pressing policy challenges and inform strategic thinking across both social and business spheres. In this editorial, we review key insights from the literature on digital infrastructures and platforms, present emerging research themes, highlight the contributions developed from each of the six articles in this special issue, and conclude with suggestions for further research.",
"title": ""
},
{
"docid": "d64b30b463245e7e3b1690a04f1748e2",
"text": "Grasping-force optimization of multifingered robotic hands can be formulated as a problem for minimizing an objective function subject to form-closure constraints and balance constraints of external force. This paper presents a novel recurrent neural network for real-time dextrous hand-grasping force optimization. The proposed neural network is shown to be globally convergent to the optimal grasping force. Compared with existing approaches to grasping-force optimization, the proposed neural-network approach has the advantages that the complexity for implementation is reduced, and the solution accuracy is increased, by avoiding the linearization of quadratic friction constraints. Simulation results show that the proposed neural network can achieve optimal grasping force in real time.",
"title": ""
},
{
"docid": "9847936462257d8f0d03473c9a78f27d",
"text": "In this paper, a vision-guided autonomous quadrotor in an air-ground multi-robot system has been proposed. This quadrotor is equipped with a monocular camera, IMUs and a flight computer, which enables autonomous flights. Two complementary pose/motion estimation methods, respectively marker-based and optical-flow-based, are developed by considering different altitudes in a flight. To achieve smooth take-off, stable tracking and safe landing with respect to a moving ground robot and desired trajectories, appropriate controllers are designed. Additionally, data synchronization and time delay compensation are applied to improve the system performance. Real-time experiments are conducted in both indoor and outdoor environments.",
"title": ""
},
{
"docid": "0349cf3ec02acb10afd94db3b2910ac5",
"text": "Reaction of WF6 with air-exposed 27and 250-nm-thick Ti films has been studied using Rutherford backscattering spectroscopy, scanning and high-resolution transmission electron microscopy, electron and x-ray diffraction, and x-ray photoelectron spectroscopy. We show that W nucleates and grows rapidly at localized sites on Ti during short WF 6 exposures~'6 s! at 445 °C at low partial pressurespWF6,0.2 Torr. Large amounts of F, up to '2.0310 atoms/cm corresponding to an average F/Ti ratio of 1.5 in a 27-nm-thick Ti layer, penetrate the Ti film, forming a solid solution and nonvolatile TiF3. The large stresses developed due to volume expansion during fluorination of the Ti layer result in local delamination at the W/Ti and the Ti/SiO 2 interfaces at low and high WF 6 exposures, respectively. WF 6 exposure atpWF6.0.35 results in the formation of a network of elongated microcracks in the W film which allow WF 6 to diffuse through and attack the underlying Ti, consuming the 27-nm-thick Ti film through the evolution of gaseous TiF 4. © 1999 American Institute of Physics. @S0021-8979 ~99!10303-7#",
"title": ""
},
{
"docid": "b906dc1c2fc89824fd25a455dcf1475b",
"text": "Compelling evidence indicates that the CRISPR-Cas system protects prokaryotes from viruses and other potential genome invaders. This adaptive prokaryotic immune system arises from the clustered regularly interspaced short palindromic repeats (CRISPRs) found in prokaryotic genomes, which harbor short invader-derived sequences, and the CRISPR-associated (Cas) protein-coding genes. Here, we have identified a CRISPR-Cas effector complex that is comprised of small invader-targeting RNAs from the CRISPR loci (termed prokaryotic silencing (psi)RNAs) and the RAMP module (or Cmr) Cas proteins. The psiRNA-Cmr protein complexes cleave complementary target RNAs at a fixed distance from the 3' end of the integral psiRNAs. In Pyrococcus furiosus, psiRNAs occur in two size forms that share a common 5' sequence tag but have distinct 3' ends that direct cleavage of a given target RNA at two distinct sites. Our results indicate that prokaryotes possess a unique RNA silencing system that functions by homology-dependent cleavage of invader RNAs.",
"title": ""
},
{
"docid": "d46415de07d618b5127602b614415c83",
"text": "In many cases, the topology of communcation systems can be abstracted and represented as graph. Graph theories and algorithms are useful in these situations. In this paper, we introduced an algorithm to enumerate all cycles in a graph. It can be applied on digraph or undirected graph. Multigraph can also be used on for this purpose. It can be used to enumerate given length cycles without enumerating all cycles. This algorithm is simple and easy to be implemented.",
"title": ""
},
{
"docid": "f3f441c2cf1224746c0bfbb6ce02706d",
"text": "This paper addresses the task of finegrained opinion extraction – the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction.",
"title": ""
},
{
"docid": "138cd401515c3367428f88d4ef5d5cc7",
"text": "BACKGROUND\nThe present study was designed to implement an interprofessional simulation-based education program for nursing students and evaluate the influence of this program on nursing students' attitudes toward interprofessional education and knowledge about operating room nursing.\n\n\nMETHODS\nNursing students were randomly assigned to either the interprofessional simulation-based education or traditional course group. A before-and-after study of nursing students' attitudes toward the program was conducted using the Readiness for Interprofessional Learning Scale. Responses to an open-ended question were categorized using thematic content analysis. Nursing students' knowledge about operating room nursing was measured.\n\n\nRESULTS\nNursing students from the interprofessional simulation-based education group showed statistically different responses to four of the nineteen questions in the Readiness for Interprofessional Learning Scale, reflecting a more positive attitude toward interprofessional learning. This was also supported by thematic content analysis of the open-ended responses. Furthermore, nursing students in the simulation-based education group had a significant improvement in knowledge about operating room nursing.\n\n\nCONCLUSIONS\nThe integrated course with interprofessional education and simulation provided a positive impact on undergraduate nursing students' perceptions toward interprofessional learning and knowledge about operating room nursing. Our study demonstrated that this course may be a valuable elective option for undergraduate nursing students in operating room nursing education.",
"title": ""
},
{
"docid": "69a9dddda7590fb4f7b44216c6fc5a83",
"text": "We have developed a fast, perceptual method for selecting color scales for data visualization that takes advantage of our sensitivity to luminance variations in human faces. To do so, we conducted experiments in which we mapped various color scales onto the intensitiy values of a digitized photograph of a face and asked observers to rate each image. We found a very strong correlation between the perceived naturalness of the images and the degree to which the underlying color scales increased monotonically in luminance. Color scales that did not include a monotonically-increasing luminance component produced no positive rating scores. Since color scales with monotonic luminance profiles are widely recommended for visualizing continuous scalar data, a purely visual technique for identifying such color scales could be very useful, especially in situations where color calibration is not integrated into the visualization environment, such as over the Internet.",
"title": ""
},
{
"docid": "4e7003b497dc59c373347d8814c8f83e",
"text": "The present experiment was designed to test whether specific recordable changes in the neuromuscular system could be associated with specific alterations in soft- and hard-tissue morphology in the craniofacial region. The effect of experimentally induced neuromuscular changes on the craniofacial skeleton and dentition of eight rhesus monkeys was studied. The neuromuscular changes were triggered by complete nasal airway obstruction and the need for an oral airway. Alterations were also triggered 2 years later by removal of the obstruction and the return to nasal breathing. Changes in neuromuscular recruitment patterns resulted in changed function and posture of the mandible, tongue, and upper lip. There was considerable variation among the animals. Statistically significant morphologic effects of the induced changes were documented in several of the measured variables after the 2-year experimental period. The anterior face height increased more in the experimental animals than in the control animals; the occlusal and mandibular plane angles measured to the sella-nasion line increased; and anterior crossbites and malposition of teeth occurred. During the postexperimental period some of these changes were reversed. Alterations in soft-tissue morphology were also observed during both experimental periods. There was considerable variation in morphologic response among the animals. It was concluded that the marked individual variations in skeletal morphology and dentition resulting from the procedures were due to the variation in nature and degree of neuromuscular and soft-tissue adaptations in response to the altered function. The recorded neuromuscular recruitment patterns could not be directly related to specific changes in morphology.",
"title": ""
},
{
"docid": "3755f56410365a498c3a1ff4b61e77de",
"text": "Both high switching frequency and high efficiency are critical in reducing power adapter size. The active clamp flyback (ACF) topology allows zero voltage soft switching (ZVS) under all line and load conditions, eliminates leakage inductance and snubber losses, and enables high frequency and high power density power conversion. Traditional ACF ZVS operation relies on the resonance between leakage inductance and a small primary-side clamping capacitor, which leads to increased rms current and high conduction loss. This also causes oscillatory output rectifier current and impedes the implementation of synchronous rectification. This paper proposes a secondary-side resonance scheme to shape the primary current waveform in a way that significantly improves synchronous rectifier operation and reduces primary rms current. The concept is verified with a ${\\mathbf{25}}\\hbox{--}{\\text{W/in}}^{3}$ high-density 45-W adapter prototype using a monolithic gallium nitride power IC. Over 93% full-load efficiency was demonstrated at the worst case 90-V ac input and maximum full-load efficiency was 94.5%.",
"title": ""
},
{
"docid": "c8722cd243c552811c767fc160020b75",
"text": "Touché proposes a novel Swept Frequency Capacitive Sensing technique that can not only detect a touch event, but also recognize complex configurations of the human hands and body. Such contextual information significantly enhances touch interaction in a broad range of applications, from conventional touchscreens to unique contexts and materials. For example, in our explorations we add touch and gesture sensitivity to the human body and liquids. We demonstrate the rich capabilities of Touché with five example setups from different application domains and conduct experimental studies that show gesture classification accuracies of 99% are achievable with our technology.",
"title": ""
},
{
"docid": "79020f32ea93c9e9789bb3546cde1016",
"text": "Within software engineering, requirements engineering starts from imprecise and vague user requirements descriptions and infers precise, formalized specifications. Techniques, such as interviewing by requirements engineers, are typically applied to identify the user's needs. We want to partially automate even this first step of requirements elicitation by methods of evolutionary computation. The idea is to enable users to specify their desired software by listing examples of behavioral descriptions. Users initially specify two lists of operation sequences, one with desired behaviors and one with forbidden behaviors. Then, we search for the appropriate formal software specification in the form of a deterministic finite automaton. We solve this problem known as grammatical inference with an active coevolutionary approach following Bongard and Lipson [2]. The coevolutionary process alternates between two phases: (A) additional training data is actively proposed by an evolutionary process and the user is interactively asked to label it; (B) appropriate automata are then evolved to solve this extended grammatical inference problem. Our approach leverages multi-objective evolution in both phases and outperforms the state-of-the-art technique [2] for input alphabet sizes of three and more, which are relevant to our problem domain of requirements specification.",
"title": ""
},
{
"docid": "6c9f3107fbf14f5bef1b8edae1b9d059",
"text": "Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.",
"title": ""
},
{
"docid": "ae1705c0b7be3c218c1fcb42cc53ea9a",
"text": "We examine the relation between executive compensation and corporate fraud. Executives at fraud firms have significantly larger equity-based compensation and greater financial incentives to commit fraud than do executives at industryand sizematched control firms. Executives at fraud firms also earn significantly more total compensation by exercising significantly larger fractions of their vested options than the control executives during the fraud years. Operating and stock performance measures suggest executives who commit corporate fraud attempt to offset declines in performance that would otherwise occur. Our results imply that optimal governance measures depend on the strength of executives’ financial incentives.",
"title": ""
},
{
"docid": "40f8240220dad82a7a2da33932fb0e73",
"text": "The incidence of clinically evident Curling's ulcer among 109 potentially salvageable severely burned patients was reviewed. These patients, who had greater than a 40 per cent body surface area burn, received one of these three treatment regimens: antacids hourly until autografting was complete, antacids hourly during the early postburn period followed by nutritional supplementation with Vivonex until autografting was complete or no antacids during the early postburn period but subsequent nutritional supplementation with Vivonex until autografting was complete. Clinically evident Curling's ulcer occurred in three patients. This incidence approximates the lowest reported among severely burned patients treated prophylactically with acid-reducing regimens to minimize clinically evident Curling's ulcer. In addition to its protective effect on Curling's ulcer, Vivonex, when used in combination with a high protein, high caloric diet, meets the caloric needs of the severely burned patient. Probably, Vivonex, which has a pH range of 4.5 to 5.4 protects against clinically evident Curling's ulcer by a dilutional alkalinization of gastric secretion.",
"title": ""
},
{
"docid": "8628e1073017a7dc0fec1d22e46280db",
"text": "Narita for their comments. Some of the results and ideas in this paper are similar to those in a working paper that I wrote in 2009, \"Bursting Bubbles: Consequences and Cures.\"",
"title": ""
},
{
"docid": "9c452434ad1c25d0fbe71138b6c39c4b",
"text": "Dual control frameworks for systems subject to uncertainties aim at simultaneously learning the unknown parameters while controlling the system dynamics. We propose a robust dual model predictive control algorithm for systems with bounded uncertainty with application to soft landing control. The algorithm exploits a robust control invariant set to guarantee constraint enforcement in spite of the uncertainty, and a constrained estimation algorithm to guarantee admissible parameter estimates. The impact of the control input on parameter learning is accounted for by including in the cost function a reference input, which is designed online to provide persistent excitation. The reference input design problem is non-convex, and here is solved by a sequence of relaxed convex problems. The results of the proposed method in a soft-landing control application in transportation systems are shown.",
"title": ""
},
{
"docid": "11b11bf5be63452e28a30b4494c9a704",
"text": "Advertisement and Brand awareness plays an important role in brand building, brand recognition, brand loyalty and boost up the sales performance which is regarded as the foundation for brand development. To some degree advertisement and brand awareness can directly influence consumers’ buying behavior. The female consumers from IT industry have been taken as main consumers for the research purpose. The researcher seeks to inspect and investigate brand’s intention factors and consumer’s individual factors in influencing advertisement and its impact of brand awareness on fast moving consumer goods especially personal care products .The aim of the paper is to examine the advertising and its impact of brand awareness towards FMCG Products, on the other hand, to analyze the influence of advertising on personal care products among female consumers in IT industry and finally to study the impact of media on advertising & brand awareness. The prescribed survey were conducted in the form of questionnaire and found valid and reliable for this research. After evaluating some questions, better questionnaires were developed. Then the questionnaires were distributed among 200 female consumers with a response rate of 100%. We found that advertising has constantly a significant positive effect on brand awareness and consumers perceive the brand awareness with positive attitude. Findings depicts that advertising and brand awareness have strong positive influence and considerable relationship with purchase intention of the consumer. This research highlights that female consumers of personal care products in IT industry are more brand conscious and aware about their personal care products. Advertisement and brand awareness affects their purchase intention positively; also advertising media positively influences the brand awareness and purchase intention of the female consumers. The obtained data were then processed by Pearson correlation, multiple regression analysis and ANOVA. A Study On Advertising And Its Impact Of Brand Awareness On Fast Moving Consumer Goods With Reference To Personal Care Products In Chennai Paper ID IJIFR/ V2/ E9/ 068 Page No. 3325-3333 Subject Area Business Administration",
"title": ""
},
{
"docid": "a1d2e6238e0ee4abf10facba6e9c0ef0",
"text": "The recent successes of deep learning have led to a wave of interest from non-experts. Gaining an understanding of this technology, however, is difficult. While the theory is important, it is also helpful for novices to develop an intuitive feel for the effect of different hyperparameters and structural variations. We describe TensorFlow Playground, an interactive, open sourced visualization that allows users to experiment via direct manipulation rather than coding, enabling them to quickly build an intuition about neural nets.",
"title": ""
}
] | scidocsrr |
8bf0224075997c84429972b9b7e70960 | Multi-Task Learning with Low Rank Attribute Embedding for Multi-Camera Person Re-Identification | [
{
"docid": "9e04e2d09e0b57a6af76ed522ede1154",
"text": "The field of surveillance and forensics research is currently shifting focus and is now showing an ever increasing interest in the task of people reidentification. This is the task of assigning the same identifier to all instances of a particular individual captured in a series of images or videos, even after the occurrence of significant gaps over time or space. People reidentification can be a useful tool for people analysis in security as a data association method for long-term tracking in surveillance. However, current identification techniques being utilized present many difficulties and shortcomings. For instance, they rely solely on the exploitation of visual cues such as color, texture, and the object’s shape. Despite the many advances in this field, reidentification is still an open problem. This survey aims to tackle all the issues and challenging aspects of people reidentification while simultaneously describing the previously proposed solutions for the encountered problems. This begins with the first attempts of holistic descriptors and progresses to the more recently adopted 2D and 3D model-based approaches. The survey also includes an exhaustive treatise of all the aspects of people reidentification, including available datasets, evaluation metrics, and benchmarking.",
"title": ""
},
{
"docid": "225204d66c371372debb3bb2a37c795b",
"text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.",
"title": ""
}
] | [
{
"docid": "bf08d673b40109d6d6101947258684fd",
"text": "More and more medicinal mushrooms have been widely used as a miraculous herb for health promotion, especially by cancer patients. Here we report screening thirteen mushrooms for anti-cancer cell activities in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout the tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushrooms with anti-cancer activity and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represented a powerful medicinal mushroom with anti-cancer activities.",
"title": ""
},
{
"docid": "1224987c5fdd228cc38bf1ee3aeb6f2d",
"text": "Many existing studies of social media focus on only one platform, but the reality of users' lived experiences is that most users incorporate multiple platforms into their communication practices in order to access the people and networks they desire to influence. In order to better understand how people make sharing decisions across multiple sites, we asked our participants (N=29) to categorize all modes of communication they used, with the goal of surfacing their mental models about managing sharing across platforms. Our interview data suggest that people simultaneously consider \"audience\" and \"content\" when sharing and these needs sometimes compete with one another; that they have the strong desire to both maintain boundaries between platforms as well as allowing content and audience to permeate across these boundaries; and that they strive to stabilize their own communication ecosystem yet need to respond to changes necessitated by the emergence of new tools, practices, and contacts. We unpack the implications of these tensions and suggest future design possibilities.",
"title": ""
},
{
"docid": "512fee2ebf2765335f07a45d8f648c03",
"text": "Dialogue Act recognition associate dialogue acts (i.e., semantic labels) to utterances in a conversation. The problem of associating semantic labels to utterances can be treated as a sequence labeling problem. In this work, we build a hierarchical recurrent neural network using bidirectional LSTM as a base unit and the conditional random field (CRF) as the top layer to classify each utterance into its corresponding dialogue act. The hierarchical network learns representations at multiple levels, i.e., word level, utterance level, and conversation level. The conversation level representations are input to the CRF layer, which takes into account not only all previous utterances but also their dialogue acts, thus modeling the dependency among both, labels and utterances, an important consideration of natural dialogue. We validate our approach on two different benchmark data sets, Switchboard and Meeting Recorder Dialogue Act, and show performance improvement over the state-of-the-art methods by 2.2% and 4.1% absolute points, respectively. It is worth noting that the inter-annotator agreement on Switchboard data set is 84%, and our method is able to achieve the accuracy of about 79% despite being trained on the noisy data.",
"title": ""
},
{
"docid": "1d273a18183c450c11ec6f3e4fa9a4e7",
"text": "Autonomous vehicles are an emerging application of automotive technology. They can recognize the scene, plan the path, and control the motion by themselves while interacting with drivers. Although they receive considerable attention, components of autonomous vehicles are not accessible to the public but instead are developed as proprietary assets. To facilitate the development of autonomous vehicles, this article introduces an open platform using commodity vehicles and sensors. Specifically, the authors present algorithms, software libraries, and datasets required for scene recognition, path planning, and vehicle control. This open platform allows researchers and developers to study the basis of autonomous vehicles, design new algorithms, and test their performance using the common interface.",
"title": ""
},
{
"docid": "d580f60d48331b37c55f1e9634b48826",
"text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.",
"title": ""
},
{
"docid": "f071a3d699ba4b3452043b6efb14b508",
"text": "BACKGROUND\nThe medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note.\n\n\nMETHODS\nWe constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets.\n\n\nRESULTS\nThe convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied.\n\n\nCONCLUSION\nOur study shows that a supervised learning-based NLP approach is useful to develop medical subdomain classifiers. The deep learning algorithm with distributed word representation yields better performance yet shallow learning algorithms with the word and concept representation achieves comparable performance with better clinical interpretability. Portable classifiers may also be used across datasets from different institutions.",
"title": ""
},
{
"docid": "91c3734125249659df4098ba02f2d5e5",
"text": "Good performance and efficiency, in terms of high quality of service and resource utilization for example, are important goals in a cloud environment. Through extensive measurements of an n-tier application benchmark (RUBBoS), we show that overall system performance is surprisingly sensitive to appropriate allocation of soft resources (e.g., server thread pool size). Inappropriate soft resource allocation can quickly degrade overall application performance significantly. Concretely, both under-allocation and over-allocation of thread pool can lead to bottlenecks in other resources because of non-trivial dependencies. We have observed some non-obvious phenomena due to these correlated bottlenecks. For instance, the number of threads in the Apache web server can limit the total useful throughput, causing the CPU utilization of the C-JDBC clustering middleware to decrease as the workload increases. We provide a practical iterative solution approach to this challenge through an algorithmic combination of operational queuing laws and measurement data. Our results show that soft resource allocation plays a central role in the performance scalability of complex systems such as n-tier applications in cloud environments.",
"title": ""
},
{
"docid": "edaa6ccb75658c9818e48538c6135097",
"text": "Software Defined Network (SDN) is the latest network architecture in which the data and control planes do not reside on the same networking element. The control of packet forwarding in this architecture is taken out and is carried out by a programmable software component, the controller, whereas the forwarding elements are only used as packet moving devices that are driven by the controller. SDN architecture also provides Open APIs from both control and data planes. In order to provide communication between the controller and the forwarding hardware among many available protocols, OpenFlow (OF), is generally regarded as a standardized protocol for SDN. Open APIs for communication between the controller and applications enable development of network management applications easy. Therefore, SDN makes it possible to program the network thus provide numerous benefits. As a result, various vendors have developed SDN architectures. This paper summarizes as well as compares most of the common SDN architectures available till date.",
"title": ""
},
{
"docid": "d2a9cd6bfbaff70302f2d6f455e87fcc",
"text": "A Deep-learning architecture is a representation learning method with multiple levels of abstraction. It finds out complex structure of nonlinear processing layer in large datasets for pattern recognition. From the earliest uses of deep learning, Convolution Neural Network (CNN) can be trained by simple mathematical method based gradient descent. One of the most promising improvement of CNN is the integration of intelligent heuristic algorithms for learning optimization. In this paper, we use the seven layer CNN, named ConvNet, for handwriting digit classification. The Particle Swarm Optimization algorithm (PSO) is adapted to evolve the internal parameters of processing layers.",
"title": ""
},
{
"docid": "e3299737a0fb3cd3c9433f462565b278",
"text": "BACKGROUND\nMore than two-thirds of pregnant women experience low-back pain and almost one-fifth experience pelvic pain. The two conditions may occur separately or together (low-back and pelvic pain) and typically increase with advancing pregnancy, interfering with work, daily activities and sleep.\n\n\nOBJECTIVES\nTo update the evidence assessing the effects of any intervention used to prevent and treat low-back pain, pelvic pain or both during pregnancy.\n\n\nSEARCH METHODS\nWe searched the Cochrane Pregnancy and Childbirth (to 19 January 2015), and the Cochrane Back Review Groups' (to 19 January 2015) Trials Registers, identified relevant studies and reviews and checked their reference lists.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of any treatment, or combination of treatments, to prevent or reduce the incidence or severity of low-back pain, pelvic pain or both, related functional disability, sick leave and adverse effects during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy.\n\n\nMAIN RESULTS\nWe included 34 RCTs examining 5121 pregnant women, aged 16 to 45 years and, when reported, from 12 to 38 weeks' gestation. Fifteen RCTs examined women with low-back pain (participants = 1847); six examined pelvic pain (participants = 889); and 13 examined women with both low-back and pelvic pain (participants = 2385). Two studies also investigated low-back pain prevention and four, low-back and pelvic pain prevention. Diagnoses ranged from self-reported symptoms to clinicians' interpretation of specific tests. All interventions were added to usual prenatal care and, unless noted, were compared with usual prenatal care. The quality of the evidence ranged from moderate to low, raising concerns about the confidence we could put in the estimates of effect. For low-back painResults from meta-analyses provided low-quality evidence (study design limitations, inconsistency) that any land-based exercise significantly reduced pain (standardised mean difference (SMD) -0.64; 95% confidence interval (CI) -1.03 to -0.25; participants = 645; studies = seven) and functional disability (SMD -0.56; 95% CI -0.89 to -0.23; participants = 146; studies = two). Low-quality evidence (study design limitations, imprecision) also suggested no significant differences in the number of women reporting low-back pain between group exercise, added to information about managing pain, versus usual prenatal care (risk ratio (RR) 0.97; 95% CI 0.80 to 1.17; participants = 374; studies = two). For pelvic painResults from a meta-analysis provided low-quality evidence (study design limitations, imprecision) of no significant difference in the number of women reporting pelvic pain between group exercise, added to information about managing pain, and usual prenatal care (RR 0.97; 95% CI 0.77 to 1.23; participants = 374; studies = two). 
For low-back and pelvic painResults from meta-analyses provided moderate-quality evidence (study design limitations) that: an eight- to 12-week exercise program reduced the number of women who reported low-back and pelvic pain (RR 0.66; 95% CI 0.45 to 0.97; participants = 1176; studies = four); land-based exercise, in a variety of formats, significantly reduced low-back and pelvic pain-related sick leave (RR 0.76; 95% CI 0.62 to 0.94; participants = 1062; studies = two).The results from a number of individual studies, incorporating various other interventions, could not be pooled due to clinical heterogeneity. There was moderate-quality evidence (study design limitations or imprecision) from individual studies suggesting that osteomanipulative therapy significantly reduced low-back pain and functional disability, and acupuncture or craniosacral therapy improved pelvic pain more than usual prenatal care. Evidence from individual studies was largely of low quality (study design limitations, imprecision), and suggested that pain and functional disability, but not sick leave, were significantly reduced following a multi-modal intervention (manual therapy, exercise and education) for low-back and pelvic pain.When reported, adverse effects were minor and transient.\n\n\nAUTHORS' CONCLUSIONS\nThere is low-quality evidence that exercise (any exercise on land or in water), may reduce pregnancy-related low-back pain and moderate- to low-quality evidence suggesting that any exercise improves functional disability and reduces sick leave more than usual prenatal care. Evidence from single studies suggests that acupuncture or craniosacral therapy improves pregnancy-related pelvic pain, and osteomanipulative therapy or a multi-modal intervention (manual therapy, exercise and education) may also be of benefit.Clinical heterogeneity precluded pooling of results in many cases. Statistical heterogeneity was substantial in all but three meta-analyses, which did not improve following sensitivity analyses. Publication bias and selective reporting cannot be ruled out.Further evidence is very likely to have an important impact on our confidence in the estimates of effect and change the estimates. Studies would benefit from the introduction of an agreed classification system that can be used to categorise women according to their presenting symptoms, so that treatment can be tailored accordingly.",
"title": ""
},
{
"docid": "a490c396ff6d47e11f35d2f08776b7fc",
"text": "The present study examined the nature of social support exchanged within an online HIV/AIDS support group. Content analysis was conducted with reference to five types of social support (information support, tangible assistance, esteem support, network support, and emotional support) on 85 threads (1,138 messages). Our analysis revealed that many of the messages offered informational and emotional support, followed by esteem support and network support, with tangible assistance the least frequently offered. Results suggest that this online support group is a popular forum through which individuals living with HIV/AIDS can offer social support. Our findings have implications for health care professionals who support individuals living with HIV/AIDS.",
"title": ""
},
{
"docid": "dc1093c859a1f3ed32245d4a6809fd34",
"text": "Recommender systems have been researched extensively over the past decades. Whereas several algorithms have been developed and deployed in various application domains, recent research effort s are increasingly oriented towards the user experience of recommender systems. This research goes beyond accuracy of recommendation algorithms and focuses on various human factors that affect acceptance of recommendations, such as user satisfaction, trust, transparency and sense of control. In this paper, we present an interactive visualization framework that combines recommendation with visualization techniques to support human-recommender interaction. Then, we analyze existing interactive recommender systems along the dimensions of our framework, including our work. Based on our survey results, we present future research challenges and opportunities. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7a5edda3bc5b271b6c1305c6a13d50eb",
"text": "Feature-sensitive verification pursues effective analysis of the exponentially many variants of a program family. However, researchers lack examples of concrete bugs induced by variability, occurring in real large-scale systems. Such a collection of bugs is a requirement for goal-oriented research, serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. We present a qualitative study of 42 variability bugs collected from bug-fixing commits to the Linux kernel repository. We analyze each of the bugs, and record the results in a database. In addition, we provide self-contained simplified C99 versions of the bugs, facilitating understanding and tool evaluation. Our study provides insights into the nature and occurrence of variability bugs in a large C software system, and shows in what ways variability affects and increases the complexity of software bugs.",
"title": ""
},
{
"docid": "6ee1666761a78989d5b17bf0de21aa9a",
"text": "Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, large dimensionality of point set, noise, and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.",
"title": ""
},
{
"docid": "1c1628a582befbefa8e32be4b8053b06",
"text": "Gradient coils for magnetic resonance imaging (MRI) require large currents (> 500 A) for the gradient field strength, as well as high voltage (> 1600 V) for fast slew rates. Additionally, extremely high fidelity, reproducing the command signal, is critical for image quality. A new driver topology recently proposed can provide the high power and operate at high switching frequency allowing high bandwidth control. The paper presents additional improvements to the new driver architecture, and more importantly, describes the digital control design and implementation, crucial to achieve the required performance level. The power stage and control have been build and tested with the experimental results showing that the performance achieved with the new digital control capability, more than fulfills the system requirements",
"title": ""
},
{
"docid": "786df8b6b1231119e79c21cbb98e7b91",
"text": "Electric Vehicle (EV) drivers have an urgent demand for fast battery refueling methods for long distance trip and emergency drive. A well-planned battery swapping station (BSS) network can be a promising solution to offer timely refueling services. However, an inappropriate battery recharging process in the BSS may not only violate the stabilization of the power grid by their large power consumption, but also increase the charging cost from the BSS operators' point of view. In this paper, we aim to obtain the optimal charging policy to minimize the charging cost while ensuring the quality of service (QoS) of the BSS. A novel queueing network model is proposed to capture the operation nature for an individual BSS. Based on practical assumptions, we formulate the charging schedule problem as a stochastic control problem and achieve the optimal charging policy by dynamic programming. Monte Carlo simulation is used to evaluate the performance of different policies for both stationary and non-stationary EV arrival cases. Numerical results show the importance of determining the number of total batteries and charging outlets held in the BSS. Our work gives insight for the future infrastructure planning and operational management of BSS network.",
"title": ""
},
{
"docid": "621ae81c61bbeb4804045b3a038980d2",
"text": "A multi-functional in-memory inference processor integrated circuit (IC) in a 65-nm CMOS process is presented. The prototype employs a deep in-memory architecture (DIMA), which enhances both energy efficiency and throughput over conventional digital architectures via simultaneous access of multiple rows of a standard 6T bitcell array (BCA) per precharge, and embedding column pitch-matched low-swing analog processing at the BCA periphery. In doing so, DIMA exploits the synergy between the dataflow of machine learning (ML) algorithms and the SRAM architecture to reduce the dominant energy cost due to data movement. The prototype IC incorporates a 16-kB SRAM array and supports four commonly used ML algorithms—the support vector machine, template matching, <inline-formula> <tex-math notation=\"LaTeX\">$k$ </tex-math></inline-formula>-nearest neighbor, and the matched filter. Silicon measured results demonstrate simultaneous gains (dot product mode) in energy efficiency of 10<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> and in throughput of 5.3<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> leading to a 53<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> reduction in the energy-delay product with negligible (<inline-formula> <tex-math notation=\"LaTeX\">$\\le $ </tex-math></inline-formula>1%) degradation in the decision-making accuracy, compared with the conventional 8-b fixed-point single-function digital implementations.",
"title": ""
},
{
"docid": "09f6fff3ec44139a305a2e3e5bed2c91",
"text": "This paper presents a novel application to detect counterfeit identity documents forged by a scan-printing operation. Texture analysis approaches are proposed to extract validation features from security background that is usually printed in documents as IDs or banknotes. The main contribution of this work is the end-to-end mobile-server architecture, which provides a service for non-expert users and therefore can be used in several scenarios. The system also provides a crowdsourcing mode so labeled images can be gathered, generating databases for incremental training of the algorithms.",
"title": ""
},
{
"docid": "db5f5f0b7599f1e9b3ebe81139eab1e6",
"text": "In the manufacturing industry, supply chain management is playing an important role in providing profit to the enterprise. Information that is useful in improving existing products and development of new products can be obtained from databases and ontology. The theory of inventive problem solving (TRIZ) supports designers of innovative product design by searching a knowledge base. The existing TRIZ ontology supports innovative design of specific products (Flashlight) for a TRIZ ontology. The research reported in this paper aims at developing a metaontology for innovative product design that can be applied to multiple products in different domain areas. The authors applied the semantic TRIZ to a product (Smart Fan) as an interim stage toward a metaontology that can manage general products and other concepts. Modeling real-world (Smart Pen and Smart Machine) ontologies is undertaken as an evaluation of the metaontology. This may open up new possibilities to innovative product designs. Innovative Product Design using Metaontology with Semantic TRIZ",
"title": ""
},
{
"docid": "222b060b4235b0d31199a74fbc630a0d",
"text": "Online bookings of hotels have increased drastically throughout recent years. Studies in tourism and hospitality have investigated the relevance of hotel attributes influencing choice but did not yet explore them in an online booking setting. This paper presents findings about consumers’ stated preferences for decision criteria from an adaptive conjoint study among 346 respondents. The results show that recommendations of friends and online reviews are the most important factors that influence online hotel booking. Partitioning the importance values of the decision criteria reveals group-specific differences indicating the presence of market segments.",
"title": ""
}
] | scidocsrr |
9ed1a87bc398c68eab5380f3b343704e | To catch a chorus: using chroma-based representations for audio thumbnailing | [
{
"docid": "0297b1f3565e4d1a3554137ac4719cfd",
"text": "Systems to automatically provide a representative summary or `Key Phrase' of a piece of music are described. For a `rock' song with `verse' and `chorus' sections, we aim to return the chorus or in any case the most repeated and hence most memorable section. The techniques are less applicable to music with more complicated structure although possibly our general framework could still be used with di erent heuristics. Our process consists of three steps. First we parameterize the song into features. Next we use these features to discover the song structure, either by clustering xed-length segments or by training a hidden Markov model (HMM) for the song. Finally, given this structure, we use heuristics to choose the Key Phrase. Results for summaries of 18 Beatles songs evaluated by ten users show that the technique based on clustering is superior to the HMM approach and to choosing the Key Phrase at random.",
"title": ""
}
] | [
{
"docid": "18f9fff4bd06f28cd39c97ff40467d0f",
"text": "Smart agriculture is an emerging concept, because IOT sensors are capable of providing information about agriculture fields and then act upon based on the user input. In this Paper, it is proposed to develop a Smart agriculture System that uses advantages of cutting edge technologies such as Arduino, IOT and Wireless Sensor Network. The paper aims at making use of evolving technology i.e. IOT and smart agriculture using automation. Monitoring environmental conditions is the major factor to improve yield of the efficient crops. The feature of this paper includes development of a system which can monitor temperature, humidity, moisture and even the movement of animals which may destroy the crops in agricultural field through sensors using Arduino board and in case of any discrepancy send a SMS notification as well as a notification on the application developed for the same to the farmer’s smartphone using Wi-Fi/3G/4G. The system has a duplex communication link based on a cellularInternet interface that allows for data inspection and irrigation scheduling to be programmed through an android application. Because of its energy autonomy and low cost, the system has the potential to be useful in water limited geographically isolated areas.",
"title": ""
},
{
"docid": "2e89bc59f85b14cf40a868399a3ce351",
"text": "CONTEXT\nYouth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression.\n\n\nOBJECTIVE\nWe tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness.\n\n\nDESIGN\nIn 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months.\n\n\nPARTICIPANTS\nOne sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. RESULTS. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structure equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth.\n\n\nCONCLUSIONS\nThese longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.",
"title": ""
},
{
"docid": "1e4daa242bfee88914b084a1feb43212",
"text": "In this paper, we present a novel approach of human activity prediction. Human activity prediction is a probabilistic process of inferring ongoing activities from videos only containing onsets (i.e. the beginning part) of the activities. The goal is to enable early recognition of unfinished activities as opposed to the after-the-fact classification of completed activities. Activity prediction methodologies are particularly necessary for surveillance systems which are required to prevent crimes and dangerous activities from occurring. We probabilistically formulate the activity prediction problem, and introduce new methodologies designed for the prediction. We represent an activity as an integral histogram of spatio-temporal features, efficiently modeling how feature distributions change over time. The new recognition methodology named dynamic bag-of-words is developed, which considers sequential nature of human activities while maintaining advantages of the bag-of-words to handle noisy observations. Our experiments confirm that our approach reliably recognizes ongoing activities from streaming videos with a high accuracy.",
"title": ""
},
{
"docid": "0e068a4e7388ed456de4239326eb9b08",
"text": "The Web so far has been incredibly successful at delivering information to human users. So successful actually, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.",
"title": ""
},
{
"docid": "946330bdcc96711090f15dbaf772edf6",
"text": "This paper deals with the estimation of the channel impulse response (CIR) in orthogonal frequency division multiplexed (OFDM) systems. In particular, we focus on two pilot-aided schemes: the maximum likelihood estimator (MLE) and the Bayesian minimum mean square error estimator (MMSEE). The advantage of the former is that it is simpler to implement as it needs no information on the channel statistics. On the other hand, the MMSEE is expected to have better performance as it exploits prior information about the channel. Theoretical analysis and computer simulations are used in the comparisons. At SNR values of practical interest, the two schemes are found to exhibit nearly equal performance, provided that the number of pilot tones is sufficiently greater than the CIRs length. Otherwise, the MMSEE is superior. In any case, the MMSEE is more complex to implement.",
"title": ""
},
{
"docid": "50dc3186ad603ef09be8cca350ff4d77",
"text": "Design iteration time in SoC design flow is reduced through performance exploration at a higher level of abstraction. This paper proposes an accurate and fast performance analysis method in early stage of design process using a behavioral model written in C/C++ language. We made a cycle-accurate but fast and flexible compiled instruction set simulator (ISS) and IP models that represent hardware functionality and performance. System performance analyzer configured by the target communication architecture analyzes the performance utilizing event-traces obtained by running the ISS and IP models. This solution is automated and implemented in the tool, HIPA. We obtain diverse performance profiling results and achieve 95% accuracy using an abstracted C model. We also achieve about 20 times speed-up over corresponding co-simulation tools.",
"title": ""
},
{
"docid": "2c9e17d4c5bfb803ea1ff20ea85fbd10",
"text": "In this paper, we present a new and significant theoretical discovery. If the absolute height difference between base station (BS) antenna and user equipment (UE) antenna is larger than zero, then the network capacity performance in terms of the area spectral efficiency (ASE) will continuously decrease as the BS density increases for ultra-dense (UD) small cell networks (SCNs). This performance behavior has a tremendous impact on the deployment of UD SCNs in the 5th- generation (5G) era. Network operators may invest large amounts of money in deploying more network infrastructure to only obtain an even worse network performance. Our study results reveal that it is a must to lower the SCN BS antenna height to the UE antenna height to fully achieve the capacity gains of UD SCNs in 5G. However, this requires a revolutionized approach of BS architecture and deployment, which is explored in this paper too.",
"title": ""
},
{
"docid": "74afc31d233f76e28b58f019dfc28df4",
"text": "We present a motion planner for autonomous highway driving that adapts the state lattice framework pioneered for planetary rover navigation to the structured environment of public roadways. The main contribution of this paper is a search space representation that allows the search algorithm to systematically and efficiently explore both spatial and temporal dimensions in real time. This allows the low-level trajectory planner to assume greater responsibility in planning to follow a leading vehicle, perform lane changes, and merge between other vehicles. We show that our algorithm can readily be accelerated on a GPU, and demonstrate it on an autonomous passenger vehicle.",
"title": ""
},
{
"docid": "12fb3e47b285dcabe11806aeb7949520",
"text": "This paper presents a differential low-noise highresolution switched-capacitor readout circuit that is intended for capacitive sensors. Amplitude modulation/demodulation and correlated double sampling are used to minimize the adverse effects of the amplifier offset and flicker (1/f) noise and improve the sensitivity of the readout circuit. In order to simulate the response of the readout circuit, a Verilog-A model is used to model the variable sense capacitor. The interface circuit is designed and laid out in a 0.8 µm CMOS process. Postlayout simulation results show that the readout interface is able to linearly resolve sense capacitance variation from 2.8 aF to 0.3 fF with a sensitivity of 7.88 mV/aF from a single 5V supply (the capacitance-to-voltage conversion is approximately linear for capacitance changes from 0.3 fF to~1.2 fF). The power consumption of the circuit is 9.38 mW.",
"title": ""
},
{
"docid": "c7c1bafc295af6ebc899e391daae04c1",
"text": "Non-orthogonal multiple access (NOMA) is expected to be a promising multiple access technique for 5G networks due to its superior spectral efficiency. In this letter, the ergodic capacity maximization problem is first studied for the Rayleigh fading multiple-input multiple-output (MIMO) NOMA systems with statistical channel state information at the transmitter (CSIT). We propose both optimal and low complexity suboptimal power allocation schemes to maximize the ergodic capacity of MIMO NOMA system with total transmit power constraint and minimum rate constraint of the weak user. Numerical results show that the proposed NOMA schemes significantly outperform the traditional orthogonal multiple access scheme.",
"title": ""
},
{
"docid": "a845a36fb352f347224e9902087d9625",
"text": "Electroencephalography (EEG) is the most popular brain activity recording technique used in wide range of applications. One of the commonly faced problems in EEG recordings is the presence of artifacts that come from sources other than brain and contaminate the acquired signals significantly. Therefore, much research over the past 15 years has focused on identifying ways for handling such artifacts in the preprocessing stage. However, this is still an active area of research as no single existing artifact detection/removal method is complete or universal. This article presents an extensive review of the existing state-of-the-art artifact detection and removal methods from scalp EEG for all potential EEG-based applications and analyses the pros and cons of each method. First, a general overview of the different artifact types that are found in scalp EEG and their effect on particular applications are presented. In addition, the methods are compared based on their ability to remove certain types of artifacts and their suitability in relevant applications (only functional comparison is provided not performance evaluation of methods). Finally, the future direction and expected challenges of current research is discussed. Therefore, this review is expected to be helpful for interested researchers who will develop and/or apply artifact handling algorithm/technique in future for their applications as well as for those willing to improve the existing algorithms or propose a new solution in this particular area of research.",
"title": ""
},
{
"docid": "3eebd7a2d8f7ae93a6a70c7e680b4b68",
"text": "BACKGROUND\nThis longitudinal community study assessed the prevalence and development of psychiatric disorders from age 9 through 16 years and examined homotypic and heterotypic continuity.\n\n\nMETHODS\nA representative population sample of 1420 children aged 9 to 13 years at intake were assessed annually for DSM-IV disorders until age 16 years.\n\n\nRESULTS\nAlthough 3-month prevalence of any disorder averaged 13.3% (95% confidence interval [CI], 11.7%-15.0%), during the study period 36.7% of participants (31% of girls and 42% of boys) had at least 1 psychiatric disorder. Some disorders (social anxiety, panic, depression, and substance abuse) increased in prevalence, whereas others, including separation anxiety disorder and attention-deficit/hyperactivity disorder (ADHD), decreased. Lagged analyses showed that children with a history of psychiatric disorder were 3 times more likely than those with no previous disorder to have a diagnosis at any subsequent wave (odds ratio, 3.7; 95% CI, 2.9-4.9; P<.001). Risk from a previous diagnosis was high among both girls and boys, but it was significantly higher among girls. Continuity of the same disorder (homotypic) was significant for all disorders except specific phobias. Continuity from one diagnosis to another (heterotypic) was significant from depression to anxiety and anxiety to depression, from ADHD to oppositional defiant disorder, and from anxiety and conduct disorder to substance abuse. Almost all the heterotypic continuity was seen in girls.\n\n\nCONCLUSIONS\nThe risk of having at least 1 psychiatric disorder by age 16 years is much higher than point estimates would suggest. Concurrent comorbidity and homotypic and heterotypic continuity are more marked in girls than in boys.",
"title": ""
},
{
"docid": "adabd3971fa0abe5c60fcf7a8bb3f80c",
"text": "The present paper describes the development of a query focused multi-document automatic summarization. A graph is constructed, where the nodes are sentences of the documents and edge scores reflect the correlation measure between the nodes. The system clusters similar texts having related topical features from the graph using edge scores. Next, query dependent weights for each sentence are added to the edge score of the sentence and accumulated with the corresponding cluster score. Top ranked sentence of each cluster is identified and compressed using a dependency parser. The compressed sentences are included in the output summary. The inter-document cluster is revisited in order until the length of the summary is less than the maximum limit. The summarizer has been tested on the standard TAC 2008 test data sets of the Update Summarization Track. Evaluation of the summarizer yielded accuracy scores of 0.10317 (ROUGE-2) and 0.13998 (ROUGE–SU-4).",
"title": ""
},
{
"docid": "60182038191a764fd7070e8958185718",
"text": "Shales of very low metamorphic grade from the 2.78 to 2.45 billion-year-old (Ga) Mount Bruce Supergroup, Pilbara Craton, Western Australia, were analyzed for solvent extractable hydrocarbons. Samples were collected from ten drill cores and two mines in a sampling area centered in the Hamersley Basin near Wittenoom and ranging 200 km to the southeast, 100 km to the southwest and 70 km to the northwest. Almost all analyzed kerogenous sedimentary rocks yielded solvent extractable organic matter. Concentrations of total saturated hydrocarbons were commonly in the range of 1 to 20 ppm ( g/g rock) but reached maximum values of 1000 ppm. The abundance of aromatic hydrocarbons was 1 to 30 ppm. Analysis of the extracts by gas chromatography-mass spectrometry (GC-MS) and GC-MS metastable reaction monitoring (MRM) revealed the presence of n-alkanes, midand end-branched monomethylalkanes, -cyclohexylalkanes, acyclic isoprenoids, diamondoids, trito pentacyclic terpanes, steranes, aromatic steroids and polyaromatic hydrocarbons. Neither plant biomarkers nor hydrocarbon distributions indicative of Phanerozoic contamination were detected. The host kerogens of the hydrocarbons were depleted in C by 2 to 21‰ relative ton-alkanes, a pattern typical of, although more extreme than, other Precambrian samples. Acyclic isoprenoids showed carbon isotopic depletion relative to n-alkanes and concentrations of 2 -methylhopanes were relatively high, features rarely observed in the Phanerozoic but characteristic of many other Precambrian bitumens. Molecular parameters, including sterane and hopane ratios at their apparent thermal maxima, condensate-like alkane profiles, high monoand triaromatic steroid maturity parameters, high methyladamantane and methyldiamantane indices and high methylphenanthrene maturity ratios, indicate thermal maturities in the wet-gas generation zone. Additionally, extracts from shales associated with iron ore deposits at Tom Price and Newman have unusual polyaromatic hydrocarbon patterns indicative of pyrolytic dealkylation. The saturated hydrocarbons and biomarkers in bitumens from the Fortescue and Hamersley Groups are characterized as ‘probably syngenetic with their Archean host rock’ based on their typical Precambrian molecular and isotopic composition, extreme maturities that appear consistent with the thermal history of the host sediments, the absence of biomarkers diagnostic of Phanerozoic age, the absence of younger petroleum source rocks in the basin and the wide geographic distribution of the samples. Aromatic hydrocarbons detected in shales associated with iron ore deposits at Mt Tom Price and Mt Whaleback are characterized as ‘clearly Archean’ based on their hypermature composition and covalent bonding to kerogen. Copyright © 2003 Elsevier Ltd",
"title": ""
},
{
"docid": "1d562cc5517fa367a0f807ce7bb1c897",
"text": "Wireless sensor networks for environmental monitoring and agricultural applications often face long-range requirements at low bit-rates together with large numbers of nodes. This paper presents the design and test of a novel wireless sensor network that combines a large radio range with very low power consumption and cost. Our asymmetric sensor network uses ultralow-cost 40 MHz transmitters and a sensitive software defined radio receiver with multichannel capability. Experimental radio range measurements in two different outdoor environments demonstrate a single-hop range of up to 1.8 km. A theoretical model for radio propagation at 40 MHz in outdoor environments is proposed and validated with the experimental measurements. The reliability and fidelity of network communication over longer time periods is evaluated with a deployment for distributed temperature measurements. Our results demonstrate the feasibility of the transmit-only low-frequency system design approach for future environmental sensor networks. Although there have been several papers proposing the theoretical benefits of this approach, to the best of our knowledge this is the first paper to provide experimental validation of such claims.",
"title": ""
},
{
"docid": "440b90f61bc7826c1165a1f3d306bd5e",
"text": "Image descriptors based on activations of Convolutional Neural Networks (CNNs) have become dominant in image retrieval due to their discriminative power, compactness of representation, and search efficiency. Training of CNNs, either from scratch or fine-tuning, requires a large amount of annotated data, where a high quality of annotation is often crucial. In this work, we propose to fine-tune CNNs for image retrieval on a large collection of unordered images in a fully automated manner. Reconstructed 3D models obtained by the state-of-the-art retrieval and structure-from-motion methods guide the selection of the training data. We show that both hard-positive and hard-negative examples, selected by exploiting the geometry and the camera positions available from the 3D models, enhance the performance of particular-object retrieval. CNN descriptor whitening discriminatively learned from the same training data outperforms commonly used PCA whitening. We propose a novel trainable Generalized-Mean (GeM) pooling layer that generalizes max and average pooling and show that it boosts retrieval performance. Applying the proposed method to the VGG network achieves state-of-the-art performance on the standard benchmarks: Oxford Buildings, Paris, and Holidays datasets.",
"title": ""
},
{
"docid": "6fd9793e9f44b726028f8c879157f1f7",
"text": "Modeling, simulation and implementation of Voltage Source Inverter (VSI) fed closed loop control of 3-phase induction motor drive is presented in this paper. A mathematical model of the drive system is developed and is used for the simulation study. Simulation is carried out using Scilab/Scicos, which is free and open source software. The above said drive system is implemented in laboratory using a PC and an add-on card. In this study the air gap flux of the machine is kept constant by maintaining Volt/Hertz (v/f) ratio constant. The experimental transient responses of the drive system obtained for change in speed under no load as well as under load conditions are presented.",
"title": ""
},
{
"docid": "61998885a181e074eadd41a2f067f697",
"text": "Introduction. Opinion mining has been receiving increasing attention from a broad range of scientific communities since early 2000s. The present study aims to systematically investigate the intellectual structure of opinion mining research. Method. Using topic search, citation expansion, and patent search, we collected 5,596 bibliographic records of opinion mining research. Then, intellectual landscapes, emerging trends, and recent developments were identified. We also captured domain-level citation trends, subject category assignment, keyword co-occurrence, document co-citation network, and landmark articles. Analysis. Our study was guided by scientometric approaches implemented in CiteSpace, a visual analytic system based on networks of co-cited documents. We also employed a dual-map overlay technique to investigate epistemological characteristics of the domain. Results. We found that the investigation of algorithmic and linguistic aspects of opinion mining has been of the community’s greatest interest to understand, quantify, and apply the sentiment orientation of texts. Recent thematic trends reveal that practical applications of opinion mining such as the prediction of market value and investigation of social aspects of product feedback have received increasing attention from the community. Conclusion. Opinion mining is fast-growing and still developing, exploring the refinements of related techniques and applications in a variety of domains. We plan to apply the proposed analytics to more diverse domains and comprehensive publication materials to gain more generalized understanding of the true structure of a science.",
"title": ""
},
{
"docid": "dbafea1fbab901ff5a53f752f3bfb4b8",
"text": "Three studies were conducted to test the hypothesis that high trait aggressive individuals are more affected by violent media than are low trait aggressive individuals. In Study 1, participants read film descriptions and then chose a film to watch. High trait aggressive individuals were more likely to choose a violent film to watch than were low trait aggressive individuals. In Study 2, participants reported their mood before and after the showing of a violet or nonviolent videotape. High trait aggressive individuals felt more angry after viewing the violent videotape than did low trait aggressive individuals. In Study 3, participants first viewed either a violent or a nonviolent videotape and then competed with an \"opponent\" on a reaction time task in which the loser received a blast of unpleasant noise. Videotape violence was more likely to increase aggression in high trait aggressive individuals than in low trait aggressive individuals.",
"title": ""
},
{
"docid": "7d8dcb65acd5e0dc70937097ded83013",
"text": "This paper addresses the problem of mapping natural language sentences to lambda–calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.",
"title": ""
}
] | scidocsrr |
db7ee58f58d7f901dac2d5e03c4c4e75 | Arbitrary-Oriented Vehicle Detection in Aerial Imagery with Single Convolutional Neural Networks | [
{
"docid": "a0dbf8e57a7e11f88bc3ed14a1eabad7",
"text": "Detecting vehicles in aerial imagery plays an important role in a wide range of applications. The current vehicle detection methods are mostly based on sliding-window search and handcrafted or shallow-learning-based features, having limited description capability and heavy computational costs. Recently, due to the powerful feature representations, region convolutional neural networks (CNN) based detection methods have achieved state-of-the-art performance in computer vision, especially Faster R-CNN. However, directly using it for vehicle detection in aerial images has many limitations: (1) region proposal network (RPN) in Faster R-CNN has poor performance for accurately locating small-sized vehicles, due to the relatively coarse feature maps; and (2) the classifier after RPN cannot distinguish vehicles and complex backgrounds well. In this study, an improved detection method based on Faster R-CNN is proposed in order to accomplish the two challenges mentioned above. Firstly, to improve the recall, we employ a hyper region proposal network (HRPN) to extract vehicle-like targets with a combination of hierarchical feature maps. Then, we replace the classifier after RPN by a cascade of boosted classifiers to verify the candidate regions, aiming at reducing false detection by negative example mining. We evaluate our method on the Munich vehicle dataset and the collected vehicle dataset, with improvements in accuracy and robustness compared to existing methods.",
"title": ""
}
] | [
{
"docid": "9113e4ba998ec12dd2536073baf40610",
"text": "Fast adaptation of deep neural networks (DNN) is an important research topic in deep learning. In this paper, we have proposed a general adaptation scheme for DNN based on discriminant condition codes, which are directly fed to various layers of a pre-trained DNN through a new set of connection weights. Moreover, we present several training methods to learn connection weights from training data as well as the corresponding adaptation methods to learn new condition code from adaptation data for each new test condition. In this work, the fast adaptation scheme is applied to supervised speaker adaptation in speech recognition based on either frame-level cross-entropy or sequence-level maximum mutual information training criterion. We have proposed three different ways to apply this adaptation scheme based on the so-called speaker codes: i) Nonlinear feature normalization in feature space; ii) Direct model adaptation of DNN based on speaker codes; iii) Joint speaker adaptive training with speaker codes. We have evaluated the proposed adaptation methods in two standard speech recognition tasks, namely TIMIT phone recognition and large vocabulary speech recognition in the Switchboard task. Experimental results have shown that all three methods are quite effective to adapt large DNN models using only a small amount of adaptation data. For example, the Switchboard results have shown that the proposed speaker-code-based adaptation methods may achieve up to 8-10% relative error reduction using only a few dozens of adaptation utterances per speaker. Finally, we have achieved very good performance in Switchboard (12.1% in WER) after speaker adaptation using sequence training criterion, which is very close to the best performance reported in this task (\"Deep convolutional neural networks for LVCSR,\" T. N. Sainath et al., Proc. IEEE Acoust., Speech, Signal Process., 2013).",
"title": ""
},
{
"docid": "920b3c1264ad303bbb1a263ecf7c1162",
"text": "Nowadays, operational quality and robustness of cellular networks are among the hottest topics wireless communications research. As a response to a growing need in reduction of expenses for mobile operators, 3rd Generation Partnership Project (3GPP) initiated work on Minimization of Drive Tests (MDT). There are several major areas of standardization related to MDT, such as coverage, capacity, mobility optimization and verification of end user quality [1]. This paper presents results of the research devoted to Quality of Service (QoS) verification for MDT. The main idea is to jointly observe the user experienced QoS in terms of throughput, and corresponding radio conditions. Also the necessity to supplement the existing MDT metrics with the new reporting types is elaborated.",
"title": ""
},
{
"docid": "d7b60ce82b6deb61efdf2d6aef5f5341",
"text": "The Evolution of Cognitive Bias Despite widespread claims to the contrary, the human mind is not worse than rational… but may often be better than rational. On the surface, cognitive biases appear to be somewhat puzzling when viewed through an evolutionary lens. Because they depart from standards of logic and accuracy, they appear to be design flaws instead of examples of good engineering. Cognitive traits can be evaluated according to any number of performance criteria-logical sufficiency, accuracy, speed of processing, and so on. The value of a criterion depends on the question the scientist is asking. To the evolutionary psychologist, however, the evaluative task is not whether the cognitive feature is accurate or logical, but rather how well it solves a particular problem, and how solving this problem contributed to fitness ancestrally. Viewed in this way, if a cognitive bias positively impacted fitness it is not a design flaw – it is a design feature. This chapter discusses the many biases that are probably not the result of mere constraints on the design of the mind or other mysterious irrationalities, but rather are adaptations that can be studied and better understood from an evolutionary perspective. By cognitive bias, we mean cases in which human cognition reliably produces representations that are systematically distorted compared to some aspect of objective reality. We note that the term bias is used in the literature in a number of different ways (see, We do not seek to make commitments about these definitions here; rather, we use bias throughout this chapter in the relatively noncommittal sense defined above. An evolutionary psychological perspective predicts that the mind is equipped with function-specific mechanisms adapted for special purposes—mechanisms with special design for Cognitive Bias-3 solving problems such as mating, which are separate, at least in part, from those involved in solving problems of food choice, predator avoidance, and social exchange (e. demonstrating domain specificity in solving a particular problem is a part of building a case that the trait has been shaped by selection to perform that function. The evolved function of the eye, for instance, is to facilitate sight because it does this well (it exhibits proficiency), the features of the eye have the common and unique effect of facilitating sight (it exhibits specificity), and there are no plausible alternative hypotheses that account for the eye's features. Some design features that appear to be flaws when viewed in …",
"title": ""
},
{
"docid": "c77b2b45f189b6246c9f2e2ed527772f",
"text": "PaaS vendors face challenges in efficiently providing services with the growth of their offerings. In this paper, we explore how PaaS vendors are using containers as a means of hosting Apps. The paper starts with a discussion of PaaS Use case and the current adoption of Container based PaaS architectures with the existing vendors. We explore various container implementations - Linux Containers, Docker, Warden Container, lmctfy and OpenVZ. We look at how each of this implementation handle Process, FileSystem and Namespace isolation. We look at some of the unique features of each container and how some of them reuse base Linux Container implementation or differ from it. We also explore how IaaSlayer itself has started providing support for container lifecycle management along with Virtual Machines. In the end, we look at factors affecting container implementation choices and some of the features missing from the existing implementations for the next generation PaaS.",
"title": ""
},
{
"docid": "3b8e716e658176cebfbdb313c8cb22ac",
"text": "To realize the vision of Internet-of-Things (IoT), numerous IoT devices have been developed for improving daily lives, in which smart home devices are among the most popular ones. Smart locks rely on smartphones to ease the burden of physical key management and keep tracking the door opening/close status, the security of which have aroused great interests from the security community. As security is of utmost importance for the IoT environment, we try to investigate the security of IoT by examining smart lock security. Specifically, we focus on analyzing the security of August smart lock. The threat models are illustrated for attacking August smart lock. We then demonstrate several practical attacks based on the threat models toward August smart lock including handshake key leakage, owner account leakage, personal information leakage, and denial-of-service (DoS) attacks. We also propose the corresponding defense methods to counteract these attacks.",
"title": ""
},
{
"docid": "7b44c4ec18d01f46fdd513780ba97963",
"text": "This paper presents a robust approach for road marking detection and recognition from images captured by an embedded camera mounted on a car. Our method is designed to cope with illumination changes, shadows, and harsh meteorological conditions. Furthermore, the algorithm can effectively group complex multi-symbol shapes into an individual road marking. For this purpose, the proposed technique relies on MSER features to obtain candidate regions which are further merged using density-based clustering. Finally, these regions of interest are recognized using machine learning approaches. Worth noting, the algorithm is versatile since it does not utilize any prior information about lane position or road space. The proposed method compares favorably to other existing works through a large number of experiments on an extensive road marking dataset.",
"title": ""
},
{
"docid": "fc7d777932e990ddba30b13c77cfc88c",
"text": "With increasing volumes in data and more sophisticated Machine Learning algorithms, the demand for fast and energy efficient computation systems is also growing. The combination of classical CPU systems with more specialized hardware such as FPGAs offer one way to meet this demand. FPGAs are fast and energy efficient reconfigurable hardware devices allowing new design explorations for algorithms and their implementations. This report briefly discusses FPGAs as computational hardware and their application in the domain of Machine Learning, specifically in combination with Gaussian Processes.",
"title": ""
},
{
"docid": "853220dc960afe1b4b2137b934b1e235",
"text": "Multi-level marketing is a marketing approach that motivates its participants to promote a certain product among their friends. The popularity of this approach increases due to the accessibility of modern social networks, however, it existed in one form or the other long before the Internet age began (the infamous Pyramid scheme that dates back at least a century is in fact a special case of multi-level marketing). This paper lays foundations for the study of reward mechanisms in multi-level marketing within social networks. We provide a set of desired properties for such mechanisms and show that they are uniquely satisfied by geometric reward mechanisms. The resilience of mechanisms to false-name manipulations is also considered; while geometric reward mechanisms fail against such manipulations, we exhibit other mechanisms which are false-name-proof.",
"title": ""
},
{
"docid": "bf2065f6c04f566110667a22a9d1b663",
"text": "Casticin, a polymethoxyflavone occurring in natural plants, has been shown to have anticancer activities. In the present study, we aims to investigate the anti-skin cancer activity of casticin on melanoma cells in vitro and the antitumor effect of casticin on human melanoma xenografts in nu/nu mice in vivo. A flow cytometric assay was performed to detect expression of viable cells, cell cycles, reactive oxygen species production, levels of [Formula: see text] and caspase activity. A Western blotting assay and confocal laser microscope examination were performed to detect expression of protein levels. In the in vitro studies, we found that casticin induced morphological cell changes and DNA condensation and damage, decreased the total viable cells, and induced G2/M phase arrest. Casticin promoted reactive oxygen species (ROS) production, decreased the level of [Formula: see text], and promoted caspase-3 activities in A375.S2 cells. The induced G2/M phase arrest indicated by the Western blotting assay showed that casticin promoted the expression of p53, p21 and CHK-1 proteins and inhibited the protein levels of Cdc25c, CDK-1, Cyclin A and B. The casticin-induced apoptosis indicated that casticin promoted pro-apoptotic proteins but inhibited anti-apoptotic proteins. These findings also were confirmed by the fact that casticin promoted the release of AIF and Endo G from mitochondria to cytosol. An electrophoretic mobility shift assay (EMSA) assay showed that casticin inhibited the NF-[Formula: see text]B binding DNA and that these effects were time-dependent. In the in vivo studies, results from immuno-deficient nu/nu mice bearing the A375.S2 tumor xenograft indicated that casticin significantly suppressed tumor growth based on tumor size and weight decreases. Early G2/M arrest and mitochondria-dependent signaling contributed to the apoptotic A375.S2 cell demise induced by casticin. In in vivo experiments, A375.S2 also efficaciously suppressed tumor volume in a xenotransplantation model. Therefore, casticin might be a potential therapeutic agent for the treatment of skin cancer in the future.",
"title": ""
},
{
"docid": "a0aa33c4afa58bd4dff7eb209bfb7924",
"text": "OBJECTIVE\nTo assess whether frequent marijuana use is associated with residual neuropsychological effects.\n\n\nDESIGN\nSingle-blind comparison of regular users vs infrequent users of marijuana.\n\n\nPARTICIPANTS\nTwo samples of college undergraduates: 65 heavy users, who had smoked marijuana a median of 29 days in the last 30 days (range, 22 to 30 days) and who also displayed cannabinoids in their urine, and 64 light users, who had smoked a median of 1 day in the last 30 days (range, 0 to 9 days) and who displayed no urinary cannabinoids.\n\n\nINTERVENTION\nSubjects arrived at 2 PM on day 1 of their study visit, then remained at our center overnight under supervision. Neuropsychological tests were administered to all subjects starting at 9 AM on day 2. Thus, all subjects were abstinent from marijuana and other drugs for a minimum of 19 hours before testing.\n\n\nMAIN OUTCOME MEASURES\nSubjects received a battery of standard neuropsychological tests to assess general intellectual functioning, abstraction ability, sustained attention, verbal fluency, and ability to learn and recall new verbal and visuospatial information.\n\n\nRESULTS\nHeavy users displayed significantly greater impairment than light users on attention/executive functions, as evidenced particularly by greater perseverations on card sorting and reduced learning of word lists. These differences remained after controlling for potential confounding variables, such as estimated levels of premorbid cognitive functioning, and for use of alcohol and other substances in the two groups.\n\n\nCONCLUSIONS\nHeavy marijuana use is associated with residual neuropsychological effects even after a day of supervised abstinence from the drug. However, the question remains open as to whether this impairment is due to a residue of drug in the brain, a withdrawal effect from the drug, or a frank neurotoxic effect of the drug. from marijuana",
"title": ""
},
{
"docid": "fe19e30124ab7472521f93fe9408dd54",
"text": "uted to the discovery and characterization of new materials. The discovery of semiconductors laid the foundation for modern electronics, while the formulation of new molecules allows us to treat diseases previously thought incurable. Looking into the future, some of the largest problems facing humanity now are likely to be solved by the discovery of new materials. In this article, we explore the techniques materials scientists are using and show how our novel artificial intelligence system, Phase-Mapper, allows materials scientists to quickly solve material systems to infer their underlying crystal structures and has led to the discovery of new solar light absorbers. Articles",
"title": ""
},
{
"docid": "5a889a9091282e50eeae2fa4fedc750d",
"text": "This study explores the role of speech register and prosody for the task of word segmentation. Since these two factors are thought to play an important role in early language acquisition, we aim to quantify their contribution for this task. We study a Japanese corpus containing both infantand adult-directed speech and we apply four different word segmentation models, with and without knowledge of prosodic boundaries. The results showed that the difference between registers is smaller than previously reported and that prosodic boundary information helps more adultthan infant-directed speech.",
"title": ""
},
{
"docid": "43beba8ec2a324546bce095e9c1d9f0c",
"text": "Scenario-based specifications such as Message Sequence Charts (MSCs) are useful as part of a requirements specification. A scenario is a partial story, describing how system components, the environment, and users work concurrently and interact in order to provide system level functionality. Scenarios need to be combined to provide a more complete description of system behavior. Consequently, scenario synthesis is central to the effective use of scenario descriptions. How should a set of scenarios be interpreted? How do they relate to one another? What is the underlying semantics? What assumptions are made when synthesizing behavior models from multiple scenarios? In this paper, we present an approach to scenario synthesis based on a clear sound semantics, which can support and integrate many of the existing approaches to scenario synthesis. The contributions of the paper are threefold. We first define an MSC language with sound abstract semantics in terms of labeled transition systems and parallel composition. The language integrates existing approaches based on scenario composition by using high-level MSCs (hMSCs) and those based on state identification by introducing explicit component state labeling. This combination allows stakeholders to break up scenario specifications into manageable parts and reuse scenarios using hMCSs; it also allows them to introduce additional domainspecific information and general assumptions explicitly into the scenario specification using state labels. Second, we provide a sound synthesis algorithm which translates scenarios into a behavioral specification in the form of Finite Sequential Processes. This specification can be analyzed with the Labeled Transition System Analyzer using model checking and animation. Finally, we demonstrate how many of the assumptions embedded in existing synthesis approaches can be made explicit and modeled in our approach. Thus, we provide the basis for a common approach to scenario-based specification, synthesis, and analysis.",
"title": ""
},
{
"docid": "5a5fbde8e0e264410fe23322a9070a39",
"text": "By asking users of career-oriented social networking sites I investigated their job search behavior. For further IS-theorizing I integrated the number of a user's contacts as an own construct into Venkatesh's et al. UTAUT2 model, which substantially rose its predictive quality from 19.0 percent to 80.5 percent concerning the variance of job search success. Besides other interesting results I found a substantial negative relationship between the number of contacts and job search success, which supports the experience of practitioners but contradicts scholarly findings. The results are useful for scholars and practitioners.",
"title": ""
},
{
"docid": "57384df0c477dca29d4a572af32a1871",
"text": "In this paper, a simple algorithm for detecting the range and shape of tumor in brain MR Images is described. Generally, CT scan or MRI that is directed into intracranial cavity produces a complete image of brain. This image is visually examined by the physician for detection and diagnosis of brain tumor. To avoid that, this project uses computer aided method for segmentation (detection) of brain tumor based on the combination of two algorithms. This method allows the segmentation of tumor tissue with accuracy and reproducibility comparable to manual segmentation. In addition, it also reduces the time for analysis. At the end of the process the tumor is extracted from the MR image and its exact position and the shape also determined. The stage of the tumor is displayed based on the amount of area calculated from the cluster.",
"title": ""
},
{
"docid": "b78d5e7047d340ebef8f4e80d28ab4d9",
"text": "Light scattering and color change are two major sources of distortion for underwater photography. Light scattering is caused by light incident on objects reflected and deflected multiple times by particles present in the water before reaching the camera. This in turn lowers the visibility and contrast of the image captured. Color change corresponds to the varying degrees of attenuation encountered by light traveling in the water with different wavelengths, rendering ambient underwater environments dominated by a bluish tone. No existing underwater processing techniques can handle light scattering and color change distortions suffered by underwater images, and the possible presence of artificial lighting simultaneously. This paper proposes a novel systematic approach to enhance underwater images by a dehazing algorithm, to compensate the attenuation discrepancy along the propagation path, and to take the influence of the possible presence of an artifical light source into consideration. Once the depth map, i.e., distances between the objects and the camera, is estimated, the foreground and background within a scene are segmented. The light intensities of foreground and background are compared to determine whether an artificial light source is employed during the image capturing process. After compensating the effect of artifical light, the haze phenomenon and discrepancy in wavelength attenuation along the underwater propagation path to camera are corrected. Next, the water depth in the image scene is estimated according to the residual energy ratios of different color channels existing in the background light. Based on the amount of attenuation corresponding to each light wavelength, color change compensation is conducted to restore color balance. The performance of the proposed algorithm for wavelength compensation and image dehazing (WCID) is evaluated both objectively and subjectively by utilizing ground-truth color patches and video downloaded from the Youtube website. Both results demonstrate that images with significantly enhanced visibility and superior color fidelity are obtained by the WCID proposed.",
"title": ""
},
{
"docid": "6d329c1fa679ac201387c81f59392316",
"text": "Mosquitoes represent the major arthropod vectors of human disease worldwide transmitting malaria, lymphatic filariasis, and arboviruses such as dengue virus and Zika virus. Unfortunately, no treatment (in the form of vaccines or drugs) is available for most of these diseases andvectorcontrolisstillthemainformofprevention. Thelimitationsoftraditionalinsecticide-based strategies, particularly the development of insecticide resistance, have resulted in significant efforts to develop alternative eco-friendly methods. Biocontrol strategies aim to be sustainable and target a range of different mosquito species to reduce the current reliance on insecticide-based mosquito control. In thisreview, weoutline non-insecticide basedstrategiesthat havebeenimplemented orare currently being tested. We also highlight the use of mosquito behavioural knowledge that can be exploited for control strategies.",
"title": ""
},
{
"docid": "3a7427c67b7758516af15da12b663c40",
"text": "The initial focus of recombinant protein production by filamentous fungi related to exploiting the extraordinary extracellular enzyme synthesis and secretion machinery of industrial strains, including Aspergillus, Trichoderma, Penicillium and Rhizopus species, was to produce single recombinant protein products. An early recognized disadvantage of filamentous fungi as hosts of recombinant proteins was their common ability to produce homologous proteases which could degrade the heterologous protein product and strategies to prevent proteolysis have met with some limited success. It was also recognized that the protein glycosylation patterns in filamentous fungi and in mammals were quite different, such that filamentous fungi are likely not to be the most suitable microbial hosts for production of recombinant human glycoproteins for therapeutic use. By combining the experience gained from production of single recombinant proteins with new scientific information being generated through genomics and proteomics research, biotechnologists are now poised to extend the biomanufacturing capabilities of recombinant filamentous fungi by enabling them to express genes encoding multiple proteins, including, for example, new biosynthetic pathways for production of new primary or secondary metabolites. It is recognized that filamentous fungi, most species of which have not yet been isolated, represent an enormously diverse source of novel biosynthetic pathways, and that the natural fungal host harboring a valuable biosynthesis pathway may often not be the most suitable organism for biomanufacture purposes. Hence it is expected that substantial effort will be directed to transforming other fungal hosts, non-fungal microbial hosts and indeed non microbial hosts to express some of these novel biosynthetic pathways. But future applications of recombinant expression of proteins will not be confined to biomanufacturing. Opportunities to exploit recombinant technology to unravel the causes of the deleterious impacts of fungi, for example as human, mammalian and plant pathogens, and then to bring forward solutions, is expected to represent a very important future focus of fungal recombinant protein technology.",
"title": ""
},
{
"docid": "55aea20148423bdb7296addac847d636",
"text": "This paper describes an underwater sensor network with dual communication and support for sensing and mobility. The nodes in the system are connected acoustically for broadcast communication using an acoustic modem we developed. For higher point to point communication speed the nodes are networked optically using custom built optical modems. We describe the hardware details of the underwater sensor node and the communication and networking protocols. Finally, we present and discuss the results from experiments with this system.",
"title": ""
},
{
"docid": "4ce681973defd1564e2774a38598d983",
"text": "OBJECTIVE\nThe Montreal Cognitive Assessment (MoCA; Nasreddine et al., 2005) is a cognitive screening tool that aims to differentiate healthy cognitive aging from Mild Cognitive Impairment (MCI). Several validation studies have been conducted on the MoCA, in a variety of clinical populations. Some studies have indicated that the originally suggested cutoff score of 26/30 leads to an inflated rate of false positives, particularly for those of older age and/or lower education. We conducted a systematic review and meta-analysis of the literature to determine the diagnostic accuracy of the MoCA for differentiating healthy cognitive aging from possible MCI.\n\n\nMETHODS\nOf the 304 studies identified, nine met inclusion criteria for the meta-analysis. These studies were assessed across a range of cutoff scores to determine the respective sensitivities, specificities, positive and negative predictive accuracies, likelihood ratios for positive and negative results, classification accuracies, and Youden indices.\n\n\nRESULTS\nMeta-analysis revealed a cutoff score of 23/30 yielded the best diagnostic accuracy across a range of parameters.\n\n\nCONCLUSIONS\nA MoCA cutoff score of 23, rather than the initially recommended score of 26, lowers the false positive rate and shows overall better diagnostic accuracy. We recommend the use of this cutoff score going forward. Copyright © 2017 John Wiley & Sons, Ltd.",
"title": ""
}
] | scidocsrr |
0e718877fe2f6ef795736d50498af25a | A Compact UWB Three-Way Power Divider | [
{
"docid": "6f671b7b67a543f923b3253b018ff221",
"text": "This letter presents the design and measured performance of a microstrip three-way power combiner. The combiner is designed using the conventional Wilkinson topology with the extension to three outputs, which has been rarely considered for the design and fabrication of V-way combiners. It is shown that with an appropriate design approach, the main drawback reported with this topology (nonplanarity of the circuit when N > 2) can be minimized to have a negligible effect on the circuit performance and still allow an easy MIC or MHMIC fabrication.",
"title": ""
},
{
"docid": "2baf55123171c6e2110b19b1583c3d17",
"text": "A novel three-way power divider using tapered lines is presented. It has several strip resistors which are formed like a ladder between the tapered-line conductors to achieve a good output isolation. The equivalent circuits are derived with the EE/OE/OO-mode analysis based on the fundamental propagation modes in three-conductor coupled lines. The fabricated three-way power divider shows a broadband performance in input return loss which is greater than 20 dB over a 3:1 bandwidth in the C-Ku bands.",
"title": ""
}
] | [
{
"docid": "648bfc5deeb52aaf9bc4c766e1ae4b70",
"text": "In this letter, a miniature 0.97–1.53-GHz tunable four-pole bandpass filter with constant fractional bandwidth is demonstrated. The filter consists of three quarter-wavelength resonators and one half-wavelength resonator. By introducing cross-coupling, two transmission zeroes are generated and are located at both sides of the passband. Also, source–load coupling is employed to produce two extra transmission zeroes, resulting in a miniature (<inline-formula> <tex-math notation=\"LaTeX\">$0.09\\lambda _{{\\text {g}}}\\times 0.1\\lambda _{{\\text {g}}}$ </tex-math></inline-formula>) four-pole, four-transmission zero filter with high selectivity. The measured results show a tuning range of 0.97–1.53 GHz with an insertion loss of 4.2–2 dB and 1-dB fractional bandwidth of 5.5%. The four transmission zeroes change with the passband synchronously, ensuring high selectivity over a wide tuning range. The application areas are in software-defined radios in high-interference environments.",
"title": ""
},
{
"docid": "18548de7ebb6609ff2ce9b8d9d673f57",
"text": "In this work we discuss the related challenges and describe an approach towards the fusion of state-of-the-art technologies from the Spoken Dialogue Systems (SDS) and the Semantic Web and Information Retrieval domains. We envision a dialogue system named LD-SDS that will support advanced, expressive, and engaging user requests, over multiple, complex, rich, and open-domain data sources that will leverage the wealth of the available Linked Data. Specifically, we focus on: a) improving the identification, disambiguation and linking of entities occurring in data sources and user input; b) offering advanced query services for exploiting the semantics of the data, with reasoning and exploratory capabilities; and c) expanding the typical information seeking dialogue model (slot filling) to better reflect real-world conversational search scenarios.",
"title": ""
},
{
"docid": "574c07709b65749bc49dd35d1393be80",
"text": "Optical coherence tomography (OCT) is used for non-invasive diagnosis of diabetic macular edema assessing the retinal layers. In this paper, we propose a new fully convolutional deep architecture, termed ReLayNet, for end-to-end segmentation of retinal layers and fluid masses in eye OCT scans. ReLayNet uses a contracting path of convolutional blocks (encoders) to learn a hierarchy of contextual features, followed by an expansive path of convolutional blocks (decoders) for semantic segmentation. ReLayNet is trained to optimize a joint loss function comprising of weighted logistic regression and Dice overlap loss. The framework is validated on a publicly available benchmark dataset with comparisons against five state-of-the-art segmentation methods including two deep learning based approaches to substantiate its effectiveness.",
"title": ""
},
{
"docid": "8bab67e95bdb7cf1ded4a05f7b9c503d",
"text": "A national sample of 295 transgender adults and their nontransgender siblings were surveyed about demographics, perceptions of social support, and violence, harassment, and discrimination. Transwomen were older than the other 4 groups. Transwomen, transmen, and genderqueers were more highly educated than nontransgender sisters and nontransgender brothers, but did not have a corresponding higher income. Other demographic differences between groups were found in religion, geographic mobility, relationship status, and sexual orientation. Transgender people were more likely to experience harassment and discrimination than nontransgender sisters and nontransgender brothers. All transgender people perceived less social support from family than nontransgender sisters. This is the first study to compare trans people to nontrans siblings as a comparison group.",
"title": ""
},
{
"docid": "eeafcab155da5229bf26ddc350e37951",
"text": "Interferons (IFNs) are the hallmark of the vertebrate antiviral system. Two of the three IFN families identified in higher vertebrates are now known to be important for antiviral defence in teleost fish. Based on the cysteine patterns, the fish type I IFN family can be divided into two subfamilies, which possibly interact with distinct receptors for signalling. The fish type II IFN family consists of two members, IFN-γ with similar functions to mammalian IFN-γ and a teleost specific IFN-γ related (IFN-γrel) molecule whose functions are not fully elucidated. These two type II IFNs also appear to bind to distinct receptors to exert their functions. It has become clear that fish IFN responses are mediated by the host pattern recognition receptors and an array of transcription factors including the IFN regulatory factors, the Jak/Stat proteins and the suppressor of cytokine signalling (SOCS) molecules.",
"title": ""
},
{
"docid": "800337ef10a4245db4e45a1a5931e578",
"text": "This paper describes a method for generating sense-tagged data using Wikipedia as a source of sense annotations. Through word sense disambiguation experiments, we show that the Wikipedia-based sense annotations are reliable and can be used to construct accurate sense classifiers.",
"title": ""
},
{
"docid": "31712d0398ac98598e77f05ebbf917a2",
"text": "This paper illustrates the mechanical structure's spherical motion, kinematic matrices and achievable workspace of an exoskeleton upper limb device. The purpose of this paper is to assist individuals that have lost their upper limb motor functions by creating an exoskeleton device that does not require an external support; but still provides a large workspace. This allows for movement according to the Activities of Daily Living (ADL).",
"title": ""
},
{
"docid": "305f0c417d1e6f6189c431078b359793",
"text": "Sentence relation extraction aims to extract relational facts from sentences, which is an important task in natural language processing field. Previous models rely on the manually labeled supervised dataset. However, the human annotation is costly and limits to the number of relation and data size, which is difficult to scale to large domains. In order to conduct largely scaled relation extraction, we utilize an existing knowledge base to heuristically align with texts, which not rely on human annotation and easy to scale. However, using distant supervised data for relation extraction is facing a new challenge: sentences in the distant supervised dataset are not directly labeled and not all sentences that mentioned an entity pair can represent the relation between them. To solve this problem, we propose a novel model with reinforcement learning. The relation of the entity pair is used as distant supervision and guide the training of relation extractor with the help of reinforcement learning method. We conduct two types of experiments on a publicly released dataset. Experiment results demonstrate the effectiveness of the proposed method compared with baseline models, which achieves 13.36% improvement.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "da217097f8ab7b08fcf7a91263785996",
"text": "Parallel bit stream algorithms exploit the SWAR (SIMD within a register) capabilities of commodity processors in high-performance text processing applications such as UTF-8 to UTF-16 transcoding, XML parsing, string search and regular expression matching. Direct architectural support for these algorithms in future SWAR instruction sets could further increase performance as well as simplifying the programming task. A set of simple SWAR instruction set extensions are proposed for this purpose based on the principle of systematic support for inductive doubling as an algorithmic technique. These extensions are shown to significantly reduce instruction count in core parallel bit stream algorithms, often providing a 3X or better improvement. The extensions are also shown to be useful for SWAR programming in other application areas, including providing a systematic treatment for horizontal operations. An implementation model for these extensions involves relatively simple circuitry added to the operand fetch components in a pipelined processor.",
"title": ""
},
{
"docid": "77e501546d95fa18cf2a459fae274875",
"text": "Complex organizations exhibit surprising, nonlinear behavior. Although organization scientists have studied complex organizations for many years, a developing set of conceptual and computational tools makes possible new approaches to modeling nonlinear interactions within and between organizations. Complex adaptive system models represent a genuinely new way of simplifying the complex. They are characterized by four key elements: agents with schemata, self-organizing networks sustained by importing energy, coevolution to the edge of chaos, and system evolution based on recombination. New types of models that incorporate these elements will push organization science forward by merging empirical observation with computational agent-based simulation. Applying complex adaptive systems models to strategic management leads to an emphasis on building systems that can rapidly evolve effective adaptive solutions. Strategic direction of complex organizations consists of establishing and modifying environments within which effective, improvised, self-organized solutions can evolve. Managers influence strategic behavior by altering the fitness landscape for local agents and reconfiguring the organizational architecture within which agents adapt. (Complexity Theory; Organizational Evolution; Strategic Management) Since the open-systems view of organizations began to diffuse in the 1960s, comnplexity has been a central construct in the vocabulary of organization scientists. Open systems are open because they exchange resources with the environment, and they are systems because they consist of interconnected components that work together. In his classic discussion of hierarchy in 1962, Simon defined a complex system as one made up of a large number of parts that have many interactions (Simon 1996). Thompson (1967, p. 6) described a complex organization as a set of interdependent parts, which together make up a whole that is interdependent with some larger environment. Organization theory has treated complexity as a structural variable that characterizes both organizations and their environments. With respect to organizations, Daft (1992, p. 15) equates complexity with the number of activities or subsystems within the organization, noting that it can be measured along three dimensions. Vertical complexity is the number of levels in an organizational hierarchy, horizontal complexity is the number of job titles or departments across the organization, and spatial complexity is the number of geographical locations. With respect to environments, complexity is equated with the number of different items or elements that must be dealt with simultaneously by the organization (Scott 1992, p. 230). Organization design tries to match the complexity of an organization's structure with the complexity of its environment and technology (Galbraith 1982). The very first article ever published in Organization Science suggested that it is inappropriate for organization studies to settle prematurely into a normal science mindset, because organizations are enormously complex (Daft and Lewin 1990). What Daft and Lewin meant is that the behavior of complex systems is surprising and is hard to 1047-7039/99/1003/0216/$05.OO ORGANIZATION SCIENCE/Vol. 10, No. 3, May-June 1999 Copyright ? 1999, Institute for Operations Research pp. 216-232 and the Management Sciences PHILIP ANDERSON Complexity Theory and Organization Science predict, because it is nonlinear (Casti 1994). 
In nonlinear systems, intervening to change one or two parameters a small amount can drastically change the behavior of the whole system, and the whole can be very different from the sum of the parts. Complex systems change inputs to outputs in a nonlinear way because their components interact with one another via a web of feedback loops. Gell-Mann (1994a) defines complexity as the length of the schema needed to describe and predict the properties of an incoming data stream by identifying its regularities. Nonlinear systems can difficult to compress into a parsimonious description: this is what makes them complex (Casti 1994). According to Simon (1996, p. 1), the central task of a natural science is to show that complexity, correctly viewed, is only a mask for simplicity. Both social scientists and people in organizations reduce a complex description of a system to a simpler one by abstracting out what is unnecessary or minor. To build a model is to encode a natural system into a formal system, compressing a longer description into a shorter one that is easier to grasp. Modeling the nonlinear outcomes of many interacting components has been so difficult that both social and natural scientists have tended to select more analytically tractable problems (Casti 1994). Simple boxes-andarrows causal models are inadequate for modeling systems with complex interconnections and feedback loops, even when nonlinear relations between dependent and independent variables are introduced by means of exponents, logarithms, or interaction terms. How else might we compress complex behavior so we can comprehend it? For Perrow (1967), the more complex an organization is, the less knowable it is and the more deeply ambiguous is its operation. Modem complexity theory suggests that some systems with many interactions among highly differentiated parts can produce surprisingly simple, predictable behavior, while others generate behavior that is impossible to forecast, though they feature simple laws and few actors. As Cohen and Stewart (1994) point out, normal science shows how complex effects can be understood from simple laws; chaos theory demonstrates that simple laws can have complicated, unpredictable consequences; and complexity theory describes how complex causes can produce simple effects. Since the mid-1980s, new approaches to modeling complex systems have been emerging from an interdisciplinary invisible college, anchored on the Santa Fe Institute (see Waldrop 1992 for a historical perspective). The agenda of these scholars includes identifying deep principles underlying a wide variety of complex systems, be they physical, biological, or social (Fontana and Ballati 1999). Despite somewhat frequent declarations that a new paradigm has emerged, it is still premature to declare that a science of complexity, or even a unified theory of complex systems, exists (Horgan 1995). Holland and Miller (1991) have likened the present situation to that of evolutionary theory before Fisher developed a mathematical theory of genetic selection. This essay is not a review of the emerging body of research in complex systems, because that has been ably reviewed many times, in ways accessible to both scholars and managers. Table 1 describes a number of recent, prominent books and articles that inform this literature; Heylighen (1997) provides an excellent introductory bibliography, with a more comprehensive version available on the Internet at http://pespmcl.vub.ac.be/ Evocobib. html. 
Organization science has passed the point where we can regard as novel a summary of these ideas or an assertion that an empirical phenomenon is consistent with them (see Browning et al. 1995 for a pathbreaking example). Six important insights, explained at length in the works cited in Table 1, should be regarded as well-established scientifically. First, many dynamical systems (whose state at time t determines their state at time t + 1) do not reach either a fixed-point or a cyclical equilibrium (see Dooley and Van de Ven's paper in this issue). Second, processes that appear to be random may be chaotic, revolving around identifiable types of attractors in a deterministic way that seldom if ever return to the same state. An attractor is a limited area in a system's state space that it never departs. Chaotic systems revolve around \"strange attractors,\" fractal objects that constrain the system to a small area of its state space, which it explores in a neverending series that does not repeat in a finite amount of time. Tests exist that can establish whether a given process is random or chaotic (Koput 1997, Ott 1993). Similarly, time series that appear to be random walks may actually be fractals with self-reinforcing trends (Bar-Yam 1997). Third, the behavior of complex processes can be quite sensitive to small differences in initial conditions, so that two entities with very similar initial states can follow radically divergent paths over time. Consequently, historical accidents may \"tip\" outcomes strongly in a particular direction (Arthur 1989). Fourth, complex systems resist simple reductionist analyses, because interconnections and feedback loops preclude holding some subsystems constant in order to study others in isolation. Because descriptions at multiple scales are necessary to identify how emergent properties are produced (Bar-Yam 1997), reductionism and holism are complementary strategies in analyzing such systems (Fontana and Ballati ORGANIZATION SCIENCE/Vol. 10, No. 3, May-June 1999 217 PHILIP ANDERSON Complexity Theory and Organization Science Table 1 Selected Resources that Provide an Overview of Complexity Theory Allison and Kelly, 1999 Written for managers, this book provides an overview of major themes in complexity theory and discusses practical applications rooted in-experiences at firms such as Citicorp. Bar-Yam, 1997 A very comprehensive introduction for mathematically sophisticated readers, the book discusses the major computational techniques used to analyze complex systems, including spin-glass models, cellular automata, simulation methodologies, and fractal analysis. Models are developed to describe neural networks, protein folding, developmental biology, and the evolution of human civilization. Brown and Eisenhardt, 1998 Although this book is not an introduction to complexity theory, a series of small tables throughout the text introduces and explains most of the important concepts. The purpose of the book is to view stra",
"title": ""
},
{
"docid": "b29f2d688e541463b80006fac19eaf20",
"text": "Autonomous navigation has become an increasingly popular machine learning application. Recent advances in deep learning have also brought huge improvements to autonomous navigation. However, prior outdoor autonomous navigation methods depended on various expensive sensors or expensive and sometimes erroneously labeled real data. In this paper, we propose an autonomous navigation method that does not require expensive labeled real images and uses only a relatively inexpensive monocular camera. Our proposed method is based on (1) domain adaptation with an adversarial learning framework and (2) exploiting synthetic data from a simulator. To the best of the authors’ knowledge, this is the first work to apply domain adaptation with adversarial networks to autonomous navigation. We present empirical results on navigation in outdoor courses using an unmanned aerial vehicle. The performance of our method is comparable to that of a supervised model with labeled real data, although our method does not require any label information for the real data. Our proposal includes a theoretical analysis that supports the applicability of our approach.",
"title": ""
},
{
"docid": "21b8998910c792d389ccd8a6d8620555",
"text": "Theory and research suggest that people can increase their happiness through simple intentional positive activities, such as expressing gratitude or practicing kindness. Investigators have recently begun to study the optimal conditions under which positive activities increase happiness and the mechanisms by which these effects work. According to our positive-activity model, features of positive activities (e.g., their dosage and variety), features of persons (e.g., their motivation and effort), and person-activity fit moderate the effect of positive activities on well-being. Furthermore, the model posits four mediating variables: positive emotions, positive thoughts, positive behaviors, and need satisfaction. Empirical evidence supporting the model and future directions are discussed.",
"title": ""
},
{
"docid": "833ec45dfe660377eb7367e179070322",
"text": "It was predicted that high self-esteem Ss (HSEs) would rationalize an esteem-threatening decision less than low self-esteem Ss (LSEs), because HSEs presumably had more favorable self-concepts with which to affirm, and thus repair, their overall sense of self-integrity. This prediction was supported in 2 experiments within the \"free-choice\" dissonance paradigm--one that manipulated self-esteem through personality feedback and the other that varied it through selection of HSEs and LSEs, but only when Ss were made to focus on their self-concepts. A 3rd experiment countered an alternative explanation of the results in terms of mood effects that may have accompanied the experimental manipulations. The results were discussed in terms of the following: (a) their support for a resources theory of individual differences in resilience to self-image threats--an extension of self-affirmation theory, (b) their implications for self-esteem functioning, and (c) their implications for the continuing debate over self-enhancement versus self-consistency motivation.",
"title": ""
},
{
"docid": "f4a703793623890b59a8f7471fc49d0e",
"text": "The authors investigate the interplay between answer quality and answer speed across question types in community question-answering sites (CQAs). The research questions addressed are the following: (a) How do answer quality and answer speed vary across question types? (b) How do the relationships between answer quality and answer speed vary across question types? (c) How do the best quality answers and the fastest answers differ in terms of answer quality and answer speed across question types? (d) How do trends in answer quality vary over time across question types? From the posting of 3,000 questions in six CQAs, 5,356 answers were harvested and analyzed. There was a significant difference in answer quality and answer speed across question types, and there were generally no significant relationships between answer quality and answer speed. The best quality answers had better overall answer quality than the fastest answers but generally took longer to arrive. In addition, although the trend in answer quality had been mostly random across all question types, the quality of answers appeared to improve gradually when given time. By highlighting the subtle nuances in answer quality and answer speed across question types, this study is an attempt to explore a territory of CQA research that has hitherto been relatively uncharted.",
"title": ""
},
{
"docid": "06a69f318c5967e99638a2adf5520e90",
"text": "In this article, a case is made for improving the school success of ethnically diverse students through culturally responsive teaching and for preparing teachers in preservice education programs with the knowledge, attitudes, and skills needed to do this. The ideas presented here are brief sketches of more thorough explanations included in my recent book, Culturally Responsive Teaching: Theory, Research, and Practice (2000). The specific components of this approach to teaching are based on research findings, theoretical claims, practical experiences, and personal stories of educators researching and working with underachieving African, Asian, Latino, and Native American students. These data were produced by individuals from a wide variety of disciplinary backgrounds including anthropology, sociology, psychology, sociolinguistics, communications, multicultural education, K-college classroom teaching, and teacher education. Five essential elements of culturally responsive teaching are examined: developing a knowledge base about cultural diversity, including ethnic and cultural diversity content in the curriculum, demonstrating caring and building learning communities, communicating with ethnically diverse students, and responding to ethnic diversity in the delivery of instruction. Culturally responsive teaching is defined as using the cultural characteristics, experiences, and perspectives of ethnically diverse students as conduits for teaching them more effectively. It is based on the assumption that when academic knowledge and skills are situated within the lived experiences and frames of reference of students, they are more personally meaningful, have higher interest appeal, and are learned more easily and thoroughly (Gay, 2000). As a result, the academic achievement of ethnically diverse students will improve when they are taught through their own cultural and experiential filters (Au & Kawakami, 1994; Foster, 1995; Gay, 2000; Hollins, 1996; Kleinfeld, 1975; Ladson-Billings, 1994, 1995).",
"title": ""
},
{
"docid": "19b537f7356da81830c8f7908af83669",
"text": "Investigation of the hippocampus has historically focused on computations within the trisynaptic circuit. However, discovery of important anatomical and functional variability along its long axis has inspired recent proposals of long-axis functional specialization in both the animal and human literatures. Here, we review and evaluate these proposals. We suggest that various long-axis specializations arise out of differences between the anterior (aHPC) and posterior hippocampus (pHPC) in large-scale network connectivity, the organization of entorhinal grid cells, and subfield compositions that bias the aHPC and pHPC towards pattern completion and separation, respectively. The latter two differences give rise to a property, reflected in the expression of multiple other functional specializations, of coarse, global representations in anterior hippocampus and fine-grained, local representations in posterior hippocampus.",
"title": ""
},
{
"docid": "fc453b8e101a0eae542cc69881bbe7d4",
"text": "The statistical properties of Clarke's fading model with a finite number of sinusoids are analyzed, and an improved reference model is proposed for the simulation of Rayleigh fading channels. A novel statistical simulation model for Rician fading channels is examined. The new Rician fading simulation model employs a zero-mean stochastic sinusoid as the specular (line-of-sight) component, in contrast to existing Rician fading simulators that utilize a non-zero deterministic specular component. The statistical properties of the proposed Rician fading simulation model are analyzed in detail. It is shown that the probability density function of the Rician fading phase is not only independent of time but also uniformly distributed over [-pi, pi). This property is different from that of existing Rician fading simulators. The statistical properties of the new simulators are confirmed by extensive simulation results, showing good agreement with theoretical analysis in all cases. An explicit formula for the level-crossing rate is derived for general Rician fading when the specular component has non-zero Doppler frequency",
"title": ""
},
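The preceding passage analyzes sum-of-sinusoids simulators for Rayleigh and Rician fading. As a rough, generic illustration of how a Clarke-type Rayleigh simulator with a finite number of sinusoids can be coded (this is not the improved reference model proposed in the paper, and the random angle and phase parameterization below is an assumption of the sketch):

```python
import numpy as np

def clarke_rayleigh_fading(num_samples, f_d, f_s, n_sinusoids=16, seed=0):
    """Sum-of-sinusoids Rayleigh fading generator (Clarke-type model, finite N).

    num_samples : number of complex fading samples to draw
    f_d         : maximum Doppler frequency in Hz
    f_s         : sampling frequency in Hz
    n_sinusoids : number of sinusoids approximating the Doppler spectrum
    """
    rng = np.random.default_rng(seed)
    t = np.arange(num_samples) / f_s
    alpha = 2 * np.pi * rng.random(n_sinusoids)     # random angles of arrival
    phi_i = 2 * np.pi * rng.random(n_sinusoids)     # random phases, in-phase branch
    phi_q = 2 * np.pi * rng.random(n_sinusoids)     # random phases, quadrature branch
    doppler = 2 * np.pi * f_d * np.cos(alpha)       # per-path Doppler shifts
    # In-phase and quadrature components, each normalised to unit variance.
    x_i = np.sqrt(2.0 / n_sinusoids) * np.cos(np.outer(t, doppler) + phi_i).sum(axis=1)
    x_q = np.sqrt(2.0 / n_sinusoids) * np.cos(np.outer(t, doppler) + phi_q).sum(axis=1)
    return (x_i + 1j * x_q) / np.sqrt(2.0)

# Example: 1000 samples at 1 kHz sampling with 50 Hz maximum Doppler.
h = clarke_rayleigh_fading(1000, f_d=50.0, f_s=1000.0)
print(np.mean(np.abs(h) ** 2))   # average power, close to 1 over long runs
```

A Rician variant would add a specular line-of-sight term; in the paper that term is itself a zero-mean stochastic sinusoid, which this sketch does not reproduce.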
{
"docid": "93adb6d22531c0ec6335a7bec65f4039",
"text": "The term stroke-based rendering collectively describes techniques where images are generated from elements that are usually larger than a pixel. These techniques lend themselves well for rendering artistic styles such as stippling and hatching. This paper presents a novel approach for stroke-based rendering that exploits multi agent systems. RenderBots are individual agents each of which in general represents one stroke. They form a multi agent system and undergo a simulation to distribute themselves in the environment. The environment consists of a source image and possibly additional G-buffers. The final image is created when the simulation is finished by having each RenderBot execute its painting function. RenderBot classes differ in their physical behavior as well as their way of painting so that different styles can be created in a very flexible way.",
"title": ""
},
{
"docid": "a9a5846e370fabfc8716f06397857aae",
"text": "A QR code is a special type of barcode that can encode information like numbers, letters, and any other characters. The capacity of a given QR code depends on the version and error correction level, as also the data type which are encoded. A QR code framework for mobile phone applications by exploiting the spectral diversity afforded by the cyan (C), magenta (M), and yellow (Y) print colorant channels commonly used for color printing and the complementary red (R), green (G), and blue (B) channels, which captures the color images had been proposed. Specifically, this spectral diversity to realize a threefold increase in the data rate by encoding independent data the C, Y, and M channels and decoding the data from the complementary R, G, and B channels. In most cases ReedSolomon error correction codes will be used for generating error correction codeword‟s and also to increase the interference cancellation rate. Experimental results will show that the proposed framework successfully overcomes both single and burst errors and also providing a low bit error rate and a high decoding rate for each of the colorant channels when used with a corresponding error correction scheme. Finally proposed system was successfully synthesized using QUARTUS II EDA tools.",
"title": ""
}
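To make the channel-multiplexing idea in the passage above concrete, the following Python sketch encodes three independent binary payloads into idealized C, M and Y ink planes and recovers them from the complementary R, G and B capture channels. The subtractive-color model, the threshold, and all function names are simplifying assumptions; the paper's framework additionally applies Reed-Solomon coding and handles real print-capture distortions, which are omitted here.

```python
import numpy as np

def embed_cmy(payload_c, payload_m, payload_y):
    """Idealized colour-channel multiplexing: three independent binary payloads
    (2-D arrays of 0/1 modules) are mapped to C, M and Y ink planes. In the
    subtractive CMY model, printed reflectance in RGB is roughly R = 1 - C,
    G = 1 - M, B = 1 - Y, so each payload is recoverable from its
    complementary capture channel."""
    c, m, y = (np.asarray(p, dtype=float) for p in (payload_c, payload_m, payload_y))
    r = 1.0 - c   # cyan absorbs red
    g = 1.0 - m   # magenta absorbs green
    b = 1.0 - y   # yellow absorbs blue
    return np.stack([r, g, b], axis=-1)

def recover_cmy(rgb_capture, threshold=0.5):
    """Recover the three payloads from a (noisy) RGB capture by thresholding
    each complementary channel independently."""
    r, g, b = rgb_capture[..., 0], rgb_capture[..., 1], rgb_capture[..., 2]
    return (r < threshold).astype(int), (g < threshold).astype(int), (b < threshold).astype(int)

# Round trip with additive capture noise.
rng = np.random.default_rng(0)
payloads = [rng.integers(0, 2, size=(21, 21)) for _ in range(3)]
rgb = embed_cmy(*payloads) + 0.1 * rng.standard_normal((21, 21, 3))
decoded = recover_cmy(rgb)
print([int((d == p).mean() * 100) for d, p in zip(decoded, payloads)])  # percent correct per channel
```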
] | scidocsrr |
4f452ff1503a47b7a94c925f46b3c649 | Bounded Rationality, Abstraction, and Hierarchical Decision-Making: An Information-Theoretic Optimality Principle | [
{
"docid": "4e42d29a924c6e1e11456255c1f6cba0",
"text": "We present a reformulation of the stochastic optimal control problem in terms of KL divergence minimisation, not only providing a unifying perspective of previous approaches in this area, but also demonstrating that the formalism leads to novel practical approaches to the control problem. Specifically, a natural relaxation of the dual formulation gives rise to exact iterative solutions to the finite and infinite horizon stochastic optimal control problem, while direct application of Bayesian inference methods yields instances of risk sensitive control. We furthermore study corresponding formulations in the reinforcement learning setting and present model free algorithms for problems with both discrete and continuous state and action spaces. Evaluation of the proposed methods on the standard Gridworld and Cart-Pole benchmarks verifies the theoretical insights and shows that the proposed methods improve upon current approaches.",
"title": ""
},
{
"docid": "2efd26fc1e584aa5f70bdf9d24e5c2cd",
"text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.",
"title": ""
}
] | [
{
"docid": "31b26778e230d2ea40f9fe8996e095ed",
"text": "The effects of beverage alcohol (ethanol) on the body are determined largely by the rate at which it and its main breakdown product, acetaldehyde, are metabolized after consumption. The main metabolic pathway for ethanol involves the enzymes alcohol dehydrogenase (ADH) and aldehyde dehydrogenase (ALDH). Seven different ADHs and three different ALDHs that metabolize ethanol have been identified. The genes encoding these enzymes exist in different variants (i.e., alleles), many of which differ by a single DNA building block (i.e., single nucleotide polymorphisms [SNPs]). Some of these SNPs result in enzymes with altered kinetic properties. For example, certain ADH1B and ADH1C variants that are commonly found in East Asian populations lead to more rapid ethanol breakdown and acetaldehyde accumulation in the body. Because acetaldehyde has harmful effects on the body, people carrying these alleles are less likely to drink and have a lower risk of alcohol dependence. Likewise, an ALDH2 variant with reduced activity results in acetaldehyde buildup and also has a protective effect against alcoholism. In addition to affecting drinking behaviors and risk for alcoholism, ADH and ALDH alleles impact the risk for esophageal cancer.",
"title": ""
},
{
"docid": "d48053467e72a6a550de8cb66b005475",
"text": "In Slavic languages, verbal prefixes can be applied to perfective verbs deriving new perfective verbs, and multiple prefixes can occur in a single verb. This well-known type of data has not yet been adequately analyzed within current approaches to the semantics of Slavic verbal prefixes and aspect. The notion “aspect” covers “grammatical aspect”, or “viewpoint aspect” (see Smith 1991/1997), best characterized by the formal perfective vs. imperfective distinction, which is often expressed by inflectional morphology (as in Romance languages), and corresponds to propositional operators at the semantic level of representation. It also covers “lexical aspect”, “situation aspect” (see Smith ibid.), “eventuality types” (Bach 1981, 1986), or “Aktionsart” (as in Hinrichs 1985; Van Valin 1990; Dowty 1999; Paslawska and von Stechow 2002, for example), which regards the telic vs. atelic distinction and its Vendlerian subcategories (activities, accomplishments, achievements and states). It is lexicalized by verbs, encoded by derivational morphology, or by a variety of elements at the level of syntax, among which the direct object argument has a prominent role, however, the subject (external) argument is arguably a contributing factor, as well (see Dowty 1991, for example). These two “aspect” categories are orthogonal to each other and interact in systematic ways (see also Filip 1992, 1997, 1993/99; de Swart 1998; Paslawska and von Stechow 2002; Rothstein 2003, for example). Multiple prefixation and application of verbal prefixes to perfective bases is excluded by the common view of Slavic prefixes, according to which all perfective verbs are telic and prefixes constitute a uniform class of “perfective” markers that that are applied to imperfective verbs that are atelic and derive perfective verbs that are telic. Moreover, this view of perfective verbs and prefixes predicts rampant violations of the intuitive “one delimitation per event” constraint, whenever a prefix is applied to a perfective verb. This intuitive constraint is motivated by the observation that an event expressed within a single predication can be delimited only once: cp. *run a mile for ten minutes, *wash the clothes clean white.",
"title": ""
},
{
"docid": "cf5c6b5593ef5f0fd54c4fc7951e2460",
"text": "Aiming at inferring 3D shapes from 2D images, 3D shape reconstruction has drawn huge attention from researchers in computer vision and deep learning communities. However, it is not practical to assume that 2D input images and their associated ground truth 3D shapes are always available during training. In this paper, we propose a framework for semi-supervised 3D reconstruction. This is realized by our introduced 2D-3D self-consistency, which aligns the predicted 3D models and the projected 2D foreground segmentation masks. Moreover, our model not only enables recovering 3D shapes with the corresponding 2D masks, camera pose information can be jointly disentangled and predicted, even such supervision is never available during training. In the experiments, we qualitatively and quantitatively demonstrate the effectiveness of our model, which performs favorably against state-of-the-art approaches in either supervised or semi-supervised settings.",
"title": ""
},
{
"docid": "1f139fff7af5a49ee0e21f61bdf5a9b8",
"text": "This paper presents a technique for word segmentation for the Urdu OCR system. Word segmentation or word tokenization is a preliminary task for Urdu language processing. Several techniques are available for word segmentation in other languages. A methodology is proposed for word segmentation in this paper which determines the boundaries of words given a sequence of ligatures, based on collocation of ligatures and words in the corpus. Using this technique, word identification rate of 96.10% is achieved, using trigram probabilities normalized over the number of ligatures and words in the sequence.",
"title": ""
},
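As a hedged sketch of the kind of corpus-driven segmentation the abstract above describes, the dynamic program below picks the word sequence with the highest total log probability over a ligature sequence. It uses a unigram word score and a fixed maximum word length, both simplifications of the paper's normalized ligature/word collocation model; the function and parameter names are illustrative.

```python
import math

def segment(ligatures, word_logprob, max_word_len=6, unseen_penalty=-20.0):
    """Dynamic-programming word segmentation over a sequence of ligatures.
    `word_logprob` maps a candidate word (a tuple of ligatures) to a log
    probability estimated from a corpus; unseen candidates get a penalty."""
    n = len(ligatures)
    best = [(-math.inf, None)] * (n + 1)   # (best score, backpointer) per prefix
    best[0] = (0.0, None)
    for end in range(1, n + 1):
        for start in range(max(0, end - max_word_len), end):
            cand = tuple(ligatures[start:end])
            score = best[start][0] + word_logprob.get(cand, unseen_penalty)
            if score > best[end][0]:
                best[end] = (score, start)
    # Backtrack the best segmentation.
    words, end = [], n
    while end > 0:
        start = best[end][1]
        words.append(tuple(ligatures[start:end]))
        end = start
    return list(reversed(words))
```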
{
"docid": "d3d6a1793ce81ba0f4f0ffce0477a0ec",
"text": "Portable Document Format (PDF) is one of the widely-accepted document format. However, it becomes one of the most attractive targets for exploitation by malware developers and vulnerability researchers. Malicious PDF files can be used in Advanced Persistent Threats (APTs) targeting individuals, governments, and financial sectors. The existing tools such as intrusion detection systems (IDSs) and antivirus packages are inefficient to mitigate this kind of attacks. This is because these techniques need regular updates with the new malicious PDF files which are increasing every day. In this paper, a new algorithm is presented for detecting malicious PDF files based on data mining techniques. The proposed algorithm consists of feature selection stage and classification stage. The feature selection stage is used to the select the optimum number of features extracted from the PDF file to achieve high detection rate and low false positive rate with small computational overhead. Experimental results show that the proposed algorithm can achieve 99.77% detection rate, 99.84% accuracy, and 0.05% false positive rate.",
"title": ""
},
{
"docid": "df7922bcf3a0ecac69b2ac283505c312",
"text": "With the growing use of distributed information networks, there is an increasing need for algorithmic and system solutions for data-driven knowledge acquisition using distributed, heterogeneous and autonomous data repositories. In many applications, practical constraints require such systems to provide support for data analysis where the data and the computational resources are available. This presents us with distributed learning problems. We precisely formulate a class of distributed learning problems; present a general strategy for transforming traditional machine learning algorithms into distributed learning algorithms; and demonstrate the application of this strategy to devise algorithms for decision tree induction (using a variety of splitting criteria) from distributed data. The resulting algorithms are provably exact in that the decision tree constructed from distributed data is identical to that obtained by the corresponding algorithm when in the batch setting. The distributed decision tree induction algorithms have been implemented as part of INDUS, an agent-based system for data-driven knowledge acquisition from heterogeneous, distributed, autonomous data sources.",
"title": ""
},
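A minimal sketch of the sufficient-statistics idea that makes distributed decision tree induction provably exact: each site computes (attribute value, class label) counts locally, only the counts are aggregated, and the information gain computed from the aggregate is identical to what a centralized learner would obtain. The toy data, attribute indices, and helper names below are assumptions for illustration, not the INDUS implementation.

```python
from collections import Counter
import math

def site_counts(rows, attr, class_idx):
    """Per-site sufficient statistics: counts of (attribute value, class label)."""
    counts = Counter()
    for row in rows:
        counts[(row[attr], row[class_idx])] += 1
    return counts

def entropy(label_counts):
    total = sum(label_counts.values())
    return -sum((c / total) * math.log2(c / total) for c in label_counts.values() if c)

def information_gain(aggregated):
    """Information gain of a split, computed only from aggregated counts."""
    overall, per_value = Counter(), {}
    for (value, label), c in aggregated.items():
        overall[label] += c
        per_value.setdefault(value, Counter())[label] += c
    total = sum(overall.values())
    remainder = sum(sum(v.values()) / total * entropy(v) for v in per_value.values())
    return entropy(overall) - remainder

# Horizontally partitioned data at two "sites"; the last column is the class label.
site_a = [("sunny", "hot", "no"), ("sunny", "mild", "no"), ("rain", "mild", "yes")]
site_b = [("overcast", "hot", "yes"), ("rain", "cool", "yes"), ("sunny", "cool", "yes")]
agg = site_counts(site_a, 0, 2) + site_counts(site_b, 0, 2)   # only counts cross site boundaries
print(information_gain(agg))
```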
{
"docid": "ad2e02fd3b349b2a66ac53877b82e9bb",
"text": "This paper proposes a novel approach for the evolution of artificial creatures which moves in a 3D virtual environment based on the neuroevolution of augmenting topologies (NEAT) algorithm. The NEAT algorithm is used to evolve neural networks that observe the virtual environment and respond to it, by controlling the muscle force of the creature. The genetic algorithm is used to emerge the architecture of creature based on the distance metrics for fitness evaluation. The damaged morphologies of creature are elaborated, and a crossover algorithm is used to control it. Creatures with similar morphological traits are grouped into the same species to limit the complexity of the search space. The motion of virtual creature having 2–3 limbs is recorded at three different angles to check their performance in different types of viscous mediums. The qualitative demonstration of motion of virtual creature represents that improved swimming of virtual creatures is achieved in simulating mediums with viscous drag 1–10 arbitrary unit.",
"title": ""
},
{
"docid": "b82440fdab626e7a2f02c2dc9b7c359a",
"text": "This study formulates a two-objective model to determine the optimal liner routing, ship size, and sailing frequency for container carriers by minimizing shipping costs and inventory costs. First, shipping and inventory cost functions are formulated using an analytical method. Then, based on a trade-off between shipping costs and inventory costs, Pareto optimal solutions of the twoobjective model are determined. Not only can the optimal ship size and sailing frequency be determined for any route, but also the routing decision on whether to route containers through a hub or directly to their destination can be made in objective value space. Finally, the theoretical findings are applied to a case study, with highly reasonable results. The results show that the optimal routing, ship size, and sailing frequency with respect to each level of inventory costs and shipping costs can be determined using the proposed model. The optimal routing decision tends to be shipping the cargo through a hub as the hub charge is decreased or its efficiency improved. In addition, the proposed model not only provides a tool to analyze the trade-off between shipping costs and inventory costs, but it also provides flexibility on the decision-making for container carriers. c © 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d5a7b2c027679d016c7c1ed128e48fd8",
"text": "Figure 3: Example of phase correlation between two microphones. The peak of this function indicates the inter-channel delay. index associated with peak value of f(t). This delay estimator is computationally convenient and more robust to noise and reverberation than other approaches based on cross-correlation or adaptive ltering. In ideal conditions, the output of Equation (5) is a delta function centered on the correct delay. In real applications with a wide band signal, e.g., a speech signal, the outcome is not a perfect delta function. Rather it resembles a correlation function of a random process. The time index associated with the maximum value of the output of Equation (5) provides an estimation of the delay. The system can produce wrong answers when two or more peaks of similar amplitude are present, i.e., in highly reverber-ant conditions. The resolution in delay estimation is limited in discrete systems by the sampling frequency. In order to increase the accuracy, oversampling can be applied in the neighborhood of the peak, to achieve sub-sample precision. Fig. 3 demonstrates an example of the result of a cross-power spectrum time delay estimator. Once the relative delays associated with all considered microphone pairs are known, the source position (x s ; y s) is estimated as the point that would produce the most similar delay values to the observed ones. This optimization is performed by a downhill sim-plex algorithm 6] applied to minimize the Euclidean distance between M observed delays ^ i and the corresponding M theoretical delays i : An analysis of the impulse responses associated with all the microphones, given an acoustic source emitting at a speciic position, has shown that constructive interference phenomena occur in the presence of signiicant reverberation. In some cases, the direct wavefront happens to be weaker than a coincidence of reeections, inducing a wrong estimation of the arrival direction and leading to an incorrect result. Selecting only microphone pairs that show the highest peaks of phase correlation generally alleviates this problem. Location results obtained with this strategy show comparable performance (mean posi-Reverb. Time Average Error 10 mic pairs 4 mic pairs 0.1sec 38.4 cm 29.8 cm 0.6sec 51.3 cm 32.1 cm 1.7sec 105.0 cm 46.4 cm Table 1: Average location error using either all 10 pairs or 4 pairs of microphones. Three reverberation time conditions are considered. tion error of about 0.3 m) at reverberation times of 0.1 s and 0.6 s. …",
"title": ""
},
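The estimator described above (a cross-power spectrum phase, or GCC-PHAT-style, delay estimator) can be sketched in a few lines of Python. The whitening, the zero-padding interpolation factor, and the function names are assumptions of this sketch rather than the exact "Equation (5)" of the source, and the final localization step (downhill simplex over microphone-pair delays) is not shown.

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs, max_tau=None, interp=4):
    """Estimate the inter-channel delay between two microphone signals using the
    phase transform (cross-power spectrum with whitened magnitude)."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12            # keep only phase information
    # Oversample around the peak by zero-padding the inverse transform.
    cc = np.fft.irfft(cross, n=interp * n)
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)

# Synthetic check: a 2.5 ms delay between the two channels.
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
delayed = np.concatenate((np.zeros(40), x))[:4096]    # 40 samples = 2.5 ms at 16 kHz
print(gcc_phat_delay(delayed, x, fs))                 # approximately 0.0025 s
```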
{
"docid": "0ded64c37e44433f9822650615e0ef7a",
"text": "Transseptal catheterization is a vital component of percutaneous transvenous mitral commissurotomy. Therefore, a well-executed transseptal catheterization is the key to a safe and successful percutaneous transvenous mitral commissurotomy. Two major problems inherent in atrial septal puncture for percutaneous transvenous mitral commissurotomy are cardiac perforation and puncture of an inappropriate atrial septal site. The former may lead to serious complication of cardiac tamponade and the latter to possible difficulty in maneuvering the Inoue balloon catheter across the mitral orifice. This article details atrial septal puncture technique, including landmark selection for optimal septal puncture sites, avoidance of inappropriate puncture sites, and step-by-step description of atrial septal puncture.",
"title": ""
},
{
"docid": "27a0c382d827f920c25f7730ddbacdc0",
"text": "Some new parameters in Vivaldi Notch antennas are debated over in this paper. They can be availed for the bandwidth application amelioration. The aforementioned limiting factors comprise two parameters for the radial stub dislocation, one parameter for the stub opening angle, and one parameter for the stub’s offset angle. The aforementioned parameters are rectified by means of the optimization algorithm to accomplish a better frequency application. The results obtained in this article will eventually be collated with those of the other similar antennas. The best achieved bandwidth in this article is 17.1 GHz.",
"title": ""
},
{
"docid": "39bf990d140eb98fa7597de1b6165d49",
"text": "The Internet of Things (IoT) is expected to substantially support sustainable development of future smart cities. This article identifies the main issues that may prevent IoT from playing this crucial role, such as the heterogeneity among connected objects and the unreliable nature of associated services. To solve these issues, a cognitive management framework for IoT is proposed, in which dynamically changing real-world objects are represented in a virtualized environment, and where cognition and proximity are used to select the most relevant objects for the purpose of an application in an intelligent and autonomic way. Part of the framework is instantiated in terms of building blocks and demonstrated through a smart city scenario that horizontally spans several application domains. This preliminary proof of concept reveals the high potential that self-reconfigurable IoT can achieve in the context of smart cities.",
"title": ""
},
{
"docid": "581c4d11e59dc17e0cb6ecf5fa7bea93",
"text": "This paper describes the three methodologies used by CALCE in their winning entry for the IEEE 2012 PHM Data Challenge competition. An experimental data set from seventeen ball bearings was provided by the FEMTO-ST Institute. The data set consisted of data from six bearings for algorithm training and data from eleven bearings for testing. The authors developed prognostic algorithms based on the data from the training bearings to estimate the remaining useful life of the test bearings. Three methodologies are presented in this paper. Result accuracies of the winning methodology are presented.",
"title": ""
},
{
"docid": "d2af69233bf30376afb81b204b063c81",
"text": "Exploiting the security vulnerabilities in web browsers, web applications and firewalls is a fundamental trait of cross-site scripting (XSS) attacks. Majority of web population with basic web awareness are vulnerable and even expert web users may not notice the attack to be able to respond in time to neutralize the ill effects of attack. Due to their subtle nature, a victimized server, a compromised browser, an impersonated email or a hacked web application tends to keep this form of attacks alive even in the present times. XSS attacks severely offset the benefits offered by Internet based services thereby impacting the global internet community. This paper focuses on defense, detection and prevention mechanisms to be adopted at various network doorways to neutralize XSS attacks using open source tools.",
"title": ""
},
{
"docid": "c01e634ef86002a8b6fa2e78e3e1a32a",
"text": "In an effort to overcome the data deluge in computational biology and bioinformatics and to facilitate bioinformatics research in the era of big data, we identify some of the most influential algorithms that have been widely used in the bioinformatics community. These top data mining and machine learning algorithms cover classification, clustering, regression, graphical model-based learning, and dimensionality reduction. The goal of this study is to guide the focus of scalable computing experts in the endeavor of applying new storage and scalable computation designs to bioinformatics algorithms that merit their attention most, following the engineering maxim of “optimize the common case”.",
"title": ""
},
{
"docid": "13452d0ceb4dfd059f1b48dba6bf5468",
"text": "This paper presents an extension to the technology acceptance model (TAM) and empirically examines it in an enterprise resource planning (ERP) implementation environment. The study evaluated the impact of one belief construct (shared beliefs in the benefits of a technology) and two widely recognized technology implementation success factors (training and communication) on the perceived usefulness and perceived ease of use during technology implementation. Shared beliefs refer to the beliefs that organizational participants share with their peers and superiors on the benefits of the ERP system. Using data gathered from the implementation of an ERP system, we showed that both training and project communication influence the shared beliefs that users form about the benefits of the technology and that the shared beliefs influence the perceived usefulness and ease of use of the technology. Thus, we provided empirical and theoretical support for the use of managerial interventions, such as training and communication, to influence the acceptance of technology, since perceived usefulness and ease of use contribute to behavioral intention to use the technology. # 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "eb7990a677cd3f96a439af6620331400",
"text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"title": ""
},
{
"docid": "9327a13308cd713bcfb3b4717eaafef0",
"text": "A review of both laboratory and field studies on the effects of setting goals when performing a task found that in 90% of the studies, specific and challenging goals lead to higher performance than easy goals, \"do your best\" goals, or no goals. Goals affect performance by directing attention, mobilizing effort, increasing persistence, and motivating strategy development. Goal setting is most likely to improve task performance when the goals are specific and sufficiently challenging, the subjects have sufficient ability (and ability differences are controlled), feedback is provided to show progress in relation to the goal, rewards such as money are given for goal attainment, the experimenter or manager is supportive, and assigned goals are accepted by the individual. No reliable individual differences have emerged in goal-setting studies, probably because the goals were typically assigned rather than self-set. Need for achievement and self-esteem may be the most promising individual difference variables.",
"title": ""
},
{
"docid": "460e8daf5dfc9e45c3ade5860aa9cc57",
"text": "Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the planner. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games, with deeper trees often outperforming shallower ones. We also present a qualitative analysis that sheds light on the trees learned by TreeQN.",
"title": ""
}
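To illustrate the tree backup that the passage above describes, here is a simplified recursive sketch: a learned transition function expands an abstract state over each discrete action, predicted rewards are accumulated, and leaf state-values are backed up to the root to give one Q estimate per action. This uses a hard max backup and plain callables as stand-ins for the learned networks; the published model is differentiable end-to-end and uses a softer, learned aggregation of value estimates, so treat this only as a structural sketch.

```python
import numpy as np

def tree_q_values(state, depth, actions, transition, reward, value, gamma=0.99):
    """Recursive look-ahead in a learned abstract state space.

    transition(state, a) -> next abstract state
    reward(state, a)     -> predicted scalar reward
    value(state)         -> predicted scalar state value
    Returns one Q estimate per action by backing up predicted rewards and
    leaf values through the tree (hard-max backup in this sketch)."""
    q = np.empty(len(actions))
    for i, a in enumerate(actions):
        s_next = transition(state, a)
        r = reward(state, a)
        if depth == 1:
            backup = value(s_next)
        else:
            backup = np.max(tree_q_values(s_next, depth - 1, actions,
                                          transition, reward, value, gamma))
        q[i] = r + gamma * backup
    return q

# Toy usage with linear stand-in "networks" over a 4-dimensional abstract state.
rng = np.random.default_rng(0)
W = {a: rng.standard_normal((4, 4)) * 0.1 for a in range(3)}
transition = lambda s, a: np.tanh(W[a] @ s)
reward = lambda s, a: float(s.sum() * 0.01 + a * 0.001)
value = lambda s: float(s.mean())
print(tree_q_values(rng.standard_normal(4), depth=2, actions=[0, 1, 2],
                    transition=transition, reward=reward, value=value))
```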
] | scidocsrr |
cb0369c903de5d406e56fe9ecff85597 | Effective Botnet Detection Through Neural Networks on Convolutional Features | [
{
"docid": "8f9b22630f9bc0b86b8e51776d47de6e",
"text": "HTTP is becoming the most preferred channel for command and control (C&C) communication of botnets. One of the main reasons is that it is very easy to hide the C&C traffic in the massive amount of browser generated Web traffic. However, detecting these HTTP-based C&C packets which constitute only a minuscule portion of the overall everyday HTTP traffic is a formidable task. In this paper, we present an anomaly detection based approach to detect HTTP-based C&C traffic using statistical features based on client generated HTTP request packets and DNS server generated response packets. We use three different unsupervised anomaly detection techniques to isolate suspicious communications that have a high probability of being part of a botnet's C&C communication. Results indicate that our method can achieve more than 90% detection rate while maintaining a reasonably low false positive rate.",
"title": ""
},
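As a loose illustration of unsupervised detection over per-host statistical features (the concrete feature set and the three detectors used in the paper are not reproduced here; the feature names below are assumptions), one standard anomaly detector could be applied as follows:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative per-host features aggregated over a time window; the actual
# features in the paper (built from HTTP requests and DNS responses) are richer.
# columns: request rate, request-interval variance, mean URI length,
#          distinct domains contacted, mean DNS TTL of resolved domains
X_train = np.array([
    [12.0, 0.80, 34.0, 9.0, 3600.0],
    [ 8.0, 0.95, 41.0, 7.0, 1800.0],
    [15.0, 0.70, 29.0, 12.0, 7200.0],
    [10.0, 0.85, 38.0, 8.0, 3600.0],
])
detector = IsolationForest(n_estimators=100, contamination='auto', random_state=0)
detector.fit(X_train)

# A host emitting frequent, highly regular (beacon-like) requests to a single
# short-TTL domain looks anomalous relative to the browsing baseline.
suspect = np.array([[120.0, 0.01, 18.0, 1.0, 60.0]])
print(detector.predict(suspect))         # -1 means the host is flagged as anomalous
print(detector.score_samples(suspect))   # lower scores indicate stronger anomalies
```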
{
"docid": "48f8c5ac58e9133c82242de9aff34fc1",
"text": "In recent years, the botnet phenomenon is one of the most dangerous threat to Internet security, which supports a wide range of criminal activities, including distributed denial of service (DDoS) attacks, click fraud, phishing, malware distribution, spam emails, etc. An increasing number of botnets use Domain Generation Algorithms (DGAs) to avoid detection and exclusion by the traditional methods. By dynamically and frequently generating a large number of random domain names for candidate command and control (C&C) server, botnet can be still survive even when a C&C server domain is identified and taken down. This paper presents a novel method to detect DGA botnets using Collaborative Filtering and Density-Based Clustering. We propose a combination of clustering and classification algorithm that relies on the similarity in characteristic distribution of domain names to remove noise and group similar domains. Collaborative Filtering (CF) technique is applied to find out bots in each botnet, help finding out offline malwares infected-machine. We implemented our prototype system, carried out the analysis of a huge amount of DNS traffic log of Viettel Group and obtain positive results.",
"title": ""
}
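A small sketch of the character-distribution grouping step described above: each domain is mapped to a few distributional features, and density-based clustering groups domains with similar statistics. The specific features, the scaler, and the DBSCAN thresholds are illustrative assumptions; the paper's pipeline additionally applies collaborative filtering to associate infected hosts with each botnet, which is not shown.

```python
import math
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def char_features(domain):
    """Character-distribution features of a second-level domain name;
    algorithmically generated names tend to be long, high-entropy and
    vowel-poor compared with human-chosen names."""
    name = domain.split('.')[0].lower()
    counts = {c: name.count(c) for c in set(name)}
    entropy = -sum(v / len(name) * math.log2(v / len(name)) for v in counts.values())
    digit_ratio = sum(c.isdigit() for c in name) / len(name)
    vowel_ratio = sum(c in 'aeiou' for c in name) / len(name)
    return [len(name), entropy, digit_ratio, vowel_ratio]

domains = ['google.com', 'facebook.com', 'wikipedia.org',
           'xjw9k2qpl0vz.net', 'q8zt4mnb7r1s.net', 'p0o3kdj2xqvm.info']
X = StandardScaler().fit_transform(np.array([char_features(d) for d in domains]))
labels = DBSCAN(eps=1.5, min_samples=2).fit_predict(X)
for d, l in zip(domains, labels):
    print(d, l)   # domains sharing a cluster label have similar character statistics
```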
] | [
{
"docid": "4c410bb0390cc4611da4df489c89fca0",
"text": "In this work, we propose a generalized product of experts (gPoE) framework for combining the predictions of multiple probabilistic models. We identify four desirable properties that are important for scalability, expressiveness and robustness, when learning and inferring with a combination of multiple models. Through analysis and experiments, we show that gPoE of Gaussian processes (GP) have these qualities, while no other existing combination schemes satisfy all of them at the same time. The resulting GP-gPoE is highly scalable as individual GP experts can be independently learned in parallel; very expressive as the way experts are combined depends on the input rather than fixed; the combined prediction is still a valid probabilistic model with natural interpretation; and finally robust to unreliable predictions from individual experts.",
"title": ""
},
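The core combination rule of a (generalized) product of Gaussian experts is easy to state: expert precisions are weighted and summed, and the combined mean is the precision-weighted average of the expert means. A scalar Python sketch follows; the choice of input-dependent weights (e.g. confidence-based) is left to the application and is an assumption here.

```python
import numpy as np

def gpoe_combine(means, variances, weights):
    """Generalized product of experts for scalar Gaussian predictions.

    means, variances : arrays of shape (n_experts,) with each expert's
                       predictive mean and variance at one test input
    weights          : non-negative, input-dependent expert weights
    Returns the combined predictive mean and variance: precisions are weighted
    and summed, so confident experts dominate and unreliable ones are
    down-weighted."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    weights = np.asarray(weights, dtype=float)
    combined_precision = np.sum(weights * precisions)
    combined_mean = np.sum(weights * precisions * means) / combined_precision
    return combined_mean, 1.0 / combined_precision

# Two confident, agreeing experts and one uncertain, down-weighted outlier.
mu, var = gpoe_combine(means=[1.0, 1.1, 5.0],
                       variances=[0.1, 0.2, 4.0],
                       weights=[1.0, 1.0, 0.25])
print(mu, var)
```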
{
"docid": "b5ab4c11feee31195fdbec034b4c99d9",
"text": "Abstract Traditionally, firewalls and access control have been the most important components used in order to secure servers, hosts and computer networks. Today, intrusion detection systems (IDSs) are gaining attention and the usage of these systems is increasing. This thesis covers commercial IDSs and the future direction of these systems. A model and taxonomy for IDSs and the technologies behind intrusion detection is presented. Today, many problems exist that cripple the usage of intrusion detection systems. The decreasing confidence in the alerts generated by IDSs is directly related to serious problems like false positives. By studying IDS technologies and analyzing interviews conducted with security departments at Swedish banks, this thesis identifies the major problems within IDSs today. The identified problems, together with recent IDS research reports published at the RAID 2002 symposium, are used to recommend the future direction of commercial intrusion detection systems. Intrusion Detection Systems – Technologies, Weaknesses and Trends",
"title": ""
},
{
"docid": "5505f3e227ebba96e34e022bc59fe57a",
"text": "Social media has quickly risen to prominence as a news source, yet lingering doubts remain about its ability to spread rumor and misinformation. Systematically studying this phenomenon, however, has been difficult due to the need to collect large-scale, unbiased data along with in-situ judgements of its accuracy. In this paper we present CREDBANK, a corpus designed to bridge this gap by systematically combining machine and human computation. Specifically, CREDBANK is a corpus of tweets, topics, events and associated human credibility judgements. It is based on the real-time tracking of more than 1 billion streaming tweets over a period of more than three months, computational summarizations of those tweets, and intelligent routings of the tweet streams to human annotators—within a few hours of those events unfolding on Twitter. In total CREDBANK comprises more than 60 million tweets grouped into 1049 real-world events, each annotated by 30 human annotators. As an example, with CREDBANK one can quickly calculate that roughly 24% of the events in the global tweet stream are not perceived as credible. We have made CREDBANK publicly available, and hope it will enable new research questions related to online information credibility in fields such as social science, data mining and health.",
"title": ""
},
{
"docid": "1ecb4bd0073c16fa4d07355c12496194",
"text": "This paper gives an overview of MOSFET mismatch effects that form a performance/yield limitation for many designs. After a general description of (mis)matching, a comparison over past and future process generations is presented. The application of the matching model in CAD and analog circuit design is discussed. Mismatch effects gain importance as critical dimensions and CMOS power supply voltages decrease.",
"title": ""
},
{
"docid": "77045e77d653bfa37dfbd1a80bb152da",
"text": "We propose a new technique for training deep neural networks (DNNs) as data-driven feature front-ends for large vocabulary continuous speech recognition (LVCSR) in low resource settings. To circumvent the lack of sufficient training data for acoustic modeling in these scenarios, we use transcribed multilingual data and semi-supervised training to build the proposed feature front-ends. In our experiments, the proposed features provide an absolute improvement of 16% in a low-resource LVCSR setting with only one hour of in-domain training data. While close to three-fourths of these gains come from DNN-based features, the remaining are from semi-supervised training.",
"title": ""
},
{
"docid": "5b4e2380172b90c536eb974268a930b6",
"text": "This paper addresses the problem of road scene segmentation in conventional RGB images by exploiting recent advances in semantic segmentation via convolutional neural networks (CNNs). Segmentation networks are very large and do not currently run at interactive frame rates. To make this technique applicable to robotics we propose several architecture refinements that provide the best trade-off between segmentation quality and runtime. This is achieved by a new mapping between classes and filters at the expansion side of the network. The network is trained end-to-end and yields precise road/lane predictions at the original input resolution in roughly 50ms. Compared to the state of the art, the network achieves top accuracies on the KITTI dataset for road and lane segmentation while providing a 20× speed-up. We demonstrate that the improved efficiency is not due to the road segmentation task. Also on segmentation datasets with larger scene complexity, the accuracy does not suffer from the large speed-up.",
"title": ""
},
{
"docid": "598a45d251ae032d97db0162a9de347f",
"text": "In this paper, a 2×2 broadside array of 3D printed half-wave dipole antennas is presented. The array design leverages direct digital manufacturing (DDM) technology to realize a shaped substrate structure that is used to control the array beamwidth. The non-planar substrate allows the element spacing to be changed without affecting the length of the feed network or the distance to the underlying ground plane. The 4-element array has a broadside gain that varies between 7.0–8.5 dBi depending on the out-of-plane angle of the substrate. Acrylonitrile Butadiene Styrene (ABS) is deposited using fused deposition modeling to form the array structure (relative permittivity of 2.7 and loss tangent of 0.008) and Dupont CB028 silver paste is used to form the conductive traces.",
"title": ""
},
{
"docid": "e12e2f0d2e190d269f426a2bfefd3545",
"text": "Mordeson, J.N., Fuzzy line graphs, Pattern Recognition Letters 14 (1993) 381 384. The notion of a fuzzy line graph of a fuzzy graph is introduced. We give a necessary and sufficient condition for a fuzzy graph to be isomorphic to its corresponding fuzzy line graph. We examine when an isomorphism between two fuzzy graphs follows from an isomorphism of their corresponding fuzzy line graphs. We give a necessary and sufficient condition for a fuzzy graph to be the fuzzy line graph of some fuzzy graph.",
"title": ""
},
{
"docid": "9516d06751aa51edb0b0a3e2b75e0bde",
"text": "This paper presents a pilot-based compensation algorithm for mitigation of frequency-selective I/Q imbalances in direct-conversion OFDM transmitters. By deploying a feedback loop from RF to baseband, together with a properly-designed pilot signal structure, the I/Q imbalance properties of the transmitter are efficiently estimated in a subcarrier-wise manner. Based on the obtained I/Q imbalance knowledge, the imbalance effects on the actual transmit waveform are then mitigated by baseband pre-distortion acting on the mirror-subcarrier signals. The compensation performance of the proposed structure is analyzed using extensive computer simulations, indicating that very high image rejection ratios can be achieved in practical system set-ups with reasonable pilot signal lengths.",
"title": ""
},
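A hedged sketch of the per-subcarrier estimate-then-predistort idea described above, using a simple frequency-domain imbalance model Y[k] = g1[k] X[k] + g2[k] conj(X[-k]). The two-symbol pilot design, the skipped DC/Nyquist bins, and all names are assumptions of this sketch, not the paper's pilot structure or feedback-loop implementation.

```python
import numpy as np

def estimate_imbalance(y1, y2):
    """Per-subcarrier estimate of the direct (g1) and image (g2) responses of a
    frequency-selective TX I/Q imbalance, from two observed pilot OFDM symbols.
    Assumed pilot design: pilot symbol 1 carries 1 on every subcarrier and pilot
    symbol 2 carries j, so Y1[k] = g1[k] + g2[k] and Y2[k] = j*(g1[k] - g2[k])."""
    g1 = 0.5 * (y1 - 1j * y2)
    g2 = 0.5 * (y1 + 1j * y2)
    return g1, g2

def predistort(x, g1, g2):
    """For each mirror-subcarrier pair (k, -k), solve the 2x2 coupling so that
    the signal observed after the imbalance equals the intended symbol x.
    DC (k = 0) and the Nyquist bin are left untouched in this sketch."""
    n = x.shape[0]
    p = np.zeros_like(x)
    for k in range(1, n // 2):
        m = n - k                                   # index of the mirror subcarrier
        A = np.array([[g1[k], g2[k]],
                      [np.conj(g2[m]), np.conj(g1[m])]])
        b = np.array([x[k], np.conj(x[m])])
        sol = np.linalg.solve(A, b)                 # unknowns: P[k] and conj(P[-k])
        p[k], p[m] = sol[0], np.conj(sol[1])
    return p

# Verification against the same imbalance model.
n = 8
rng = np.random.default_rng(1)
g1 = 1.0 + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
g2 = 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
x = (2 * rng.integers(0, 2, n) - 1) + 1j * (2 * rng.integers(0, 2, n) - 1)
p = predistort(x, g1, g2)
y = g1 * p + g2 * np.conj(np.roll(p[::-1], 1))      # apply the imbalance model
idx = np.arange(1, n // 2)
print(np.allclose(y[idx], x[idx]) and np.allclose(y[n - idx], x[n - idx]))  # True
```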
{
"docid": "852391aa93e00f9aebdbc65c2e030abf",
"text": "The iSTAR Micro Air Vehicle (MAV) is a unique 9-inch diameter ducted air vehicle weighing approximately 4 lb. The configuration consists of a ducted fan with control vanes at the duct exit plane. This VTOL aircraft not only hovers, but it can also fly at high forward speed by pitching over to a near horizontal attitude. The duct both increases propulsion efficiency and produces lift in horizontal flight, similar to a conventional planar wing. The vehicle is controlled using a rate based control system with piezo-electric gyroscopes. The Flight Control Computer (FCC) processes the pilot’s commands and the rate data from the gyroscopes to stabilize and control the vehicle. First flight of the iSTAR MAV was successfully accomplished in October 2000. Flight at high pitch angles and high speed took place in November 2000. This paper describes the vehicle, control system, and ground and flight-test results . Presented at the American Helicopter Society 57 Annual forum, Washington, DC, May 9-11, 2001. Copyright 2001 by the American Helicopter Society International, Inc. All rights reserved. Introduction The Micro Craft Inc. iSTAR is a Vertical Take-Off and Landing air vehicle (Figure 1) utilizing ducted fan technology to hover and fly at high forward speed. The duct both increases the propulsion efficiency and provides direct lift in forward flight similar to a conventional planar wing. However, there are many other benefits inherent in the iSTAR design. In terms of safety, the duct protects personnel from exposure to the propeller. The vehicle also has a very small footprint, essentially a circle equal to the diameter of the duct. This is beneficial for stowing, transporting, and in operations where space is critical, such as on board ships. The simplicity of the design is another major benefit. The absence of complex mechanical systems inherent in other VTOL designs (e.g., gearboxes, articulating blades, and counter-rotating propellers) benefits both reliability and cost. Figure 1: iSTAR Micro Air Vehicle The Micro Craft iSTAR VTOL aircraft is able to both hover and fly at high speed by pitching over towards a horizontal attitude (Figure 2). Although many aircraft in history have utilized ducted fans, most of these did not attempt to transition to high-speed forward flight. One of the few aircraft that did successfully transition was the Bell X-22 (Reference 1), first flown in 1965. The X-22, consisted of a fuselage and four ducted fans that rotated relative to the fuselage to transition the vehicle forward. The X-22 differed from the iSTAR in that its fuselage remained nearly level in forward flight, and the ducts rotated relative to the fuselage. Also planar tandem wings, not the ducts themselves, generated a large portion of the lift in forward flight. 1 Micro Craft Inc. is a division of Allied Aerospace Industry Incorporated (AAII) One of the first aircraft using an annular wing for direct lift was the French Coleoptère (Reference 1) built in the late 1950s. This vehicle successfully completed transition from hovering flight using an annular wing, however a ducted propeller was not used. Instead, a single jet engine was mounted inside the center-body for propulsion. Control was achieved by deflecting vanes inside the jet exhaust, with small external fins attached to the duct, and also with deployable strakes on the nose. 
Figure 2: Hover & flight at forward speed Less well-known are the General Dynamics ducted-fan Unmanned Air Vehicles, which were developed and flown starting in 1960 with the PEEK (Reference 1) aircraft. These vehicles, a precursor to the Micro Craft iSTAR, demonstrated stable hover and low speed flight in free-flight tests, and transition to forward flight in tethered ground tests. In 1999, Micro Craft acquired the patent, improved and miniaturized the design, and manufactured two 9-inch diameter flight test vehicles under DARPA funding (Reference 1). Working in conjunction with BAE systems (formerly Lockheed Sanders) and the Army/NASA Rotorcraft Division, these vehicles have recently completed a proof-ofconcept flight test program and have been demonstrated to DARPA and the US Army. Military applications of the iSTAR include intelligence, surveillance, target acquisition, and reconnaissance. Commercial applications include border patrol, bridge inspection, and police surveillance. Vehicle Description The iSTAR is composed of four major assemblies as shown in Figure 3: (1) the upper center-body, (2) the lower center body, (3) the duct, and (4) the landing ring. The majority of the vehicle’s structure is composed of Kevlar composite material resulting in a very strong and lightweight structure. Kevlar also lacks the brittleness common to other composite materials. Components that are not composite include the engine bulkhead (aluminum) and the landing ring (steel wire). The four major assemblies are described below. The upper center-body (UCB) is cylindrical in shape and contains the engine, engine controls, propeller, and payload. Three sets of hollow struts support the UCB and pass fuel and wiring to the duct. The propulsion Hover Low Speed High Speed system is a commercial-off-the-shelf (COTS) OS-32 SX single cylinder engine. This engine develops 1.2 hp and weighs approximately 250 grams (~0.5 lb.). Fuel consists of a mixture of alcohol, nitro-methane, and oil. The fixed-pitch propeller is attached directly to the engine shaft (without a gearbox). Starting the engine is accomplished by inserting a cylindrical shaft with an attached gear into the upper center-body and meshing it with a gear fit onto the propeller shaft (see Figure 4). The shaft is rotated using an off-board electric starter (Micro Craft is also investigating on-board starting systems). Figure 3: iSTAR configuration A micro video camera is mounted inside the nose cone, which is easily removable to accommodate modular payloads. The entire UCB can be removed in less than five minutes by removing eight screws securing the struts, and then disconnecting one fuel line and one electrical connector. Figure 4: Engine starting The lower center-body (LCB) is cylindrical in shape and is supported by eight stators. The sensor board is housed in the LCB, and contains three piezo-electric gyroscopes, three accelerometers, a voltage regulator, and amplifiers. The sensor signals are routed to the processor board in the duct via wires integrated into the stators. The duct is nine inches in diameter and contains a significant amount of volume for packaging. The fuel tank, flight control Computer (FCC), voltage regulator, batteries, servos, and receiver are all housed inside the duct. Fuel is contained in the leading edge of the duct. This tank is non-structural, and easily removable. It is attached to the duct with tape. Internal to the duct are eight fixed stators. 
The angle of the stators is set so that they produce an aerodynamic rolling moment countering the torque of the engine. Control vanes are attached to the trailing edge of the stators, providing roll, yaw, and pitch control. Four servos mounted inside the duct actuate the control vanes. Many different landing systems have been studied in the past. These trade studies have identified the landing ring as superior overall to other systems. The landing ring stabilizes the vehicle in close proximity to the ground by providing a restoring moment in dynamic situations. For example, if the vehicle were translating slowly and contacted the ground, the ring would pitch the vehicle upright. The ring also reduces blockage of the duct during landing and take-off by raising the vehicle above the ground. Blocking the duct can lead to reduced thrust and control power. Landing feet have also been considered because of their reduced weight. However, landing ‘feet’ lack the self-stabilizing characteristics of the ring in dynamic situations and tend to ‘catch’ on uneven surfaces. Electronics and Control System The Flight Control Computer (FCC) is housed in the duct (Figure 5). The computer processes the sensor output and pilot commands and generates pulse width modulated (PWM) signals to drive the servos. Pilot commands are generated using two conventional joysticks. The left joystick controls throttle position and heading. The right joystick controls pitch and yaw rate. The aircraft axis system is defined such that the longitudinal axis is coaxial with the engine shaft. Therefore, in hover the pitch attitude is 90 degrees and rolling the aircraft produces a heading change. Dedicated servos are used for pitch and yaw control. However, all control vanes are used for roll control (four quadrant roll control). The FCC provides the appropriate mixing for each servo. In each axis, the control system architecture consists of a conventional Proportional-Integral-Derivative (PID) controller with single-input and single-output. Initially, an attitude-based control system was desired, however Upper Center-body Fuel Tank Fixed Stator Control Vane Actuator Landing Ring Lower Center-body Duct Engine and Controls Prop/Fan Support struts due to the lack of acceleration information and the high gyroscope drift rates, accurate attitudes could not be calculated. For this reason, a rate system was ultimately implemented. Three Murata micro piezo-electric gyroscopes provide rates about all three axes. These gyroscopes are approximately 0.6”x0.3”x0.15” in size and weigh 1 gram each (Figure 6). Figure 5: Flight Control Computer Four COTS servos are located in the duct to actuate the control surfaces. Each servo weighs 28 grams and is 1.3”x1.3”x0.6” in size. Relative to typical UAV servos, they can generate high rates, but have low bandwidth. Bandwidth is defined by how high a frequency the servo can accurately follow an input signal. For all servos, the output lags behind the input and the signal degrades in magnitude as the frequency increases. At low frequency, the iSTAR MAV servo output signal lags by approximately 30°,",
"title": ""
},
{
"docid": "bc4d9587ba33464d74302045336ddc38",
"text": "Deep learning is a popular technique in modern online and offline services. Deep neural network based learning systems have made groundbreaking progress in model size, training and inference speed, and expressive power in recent years, but to tailor the model to specific problems and exploit data and problem structures is still an ongoing research topic. We look into two types of deep ‘‘multi-’’ objective learning problems: multi-view learning, referring to learning from data represented by multiple distinct feature sets, and multi-label learning, referring to learning from data instances belonging to multiple class labels that are not mutually exclusive. Research endeavors of both problems attempt to base on existing successful deep architectures and make changes of layers, regularization terms or even build hybrid systems to meet the problem constraints. In this report we first explain the original artificial neural network (ANN) with the backpropagation learning algorithm, and also its deep variants, e.g. deep belief network (DBN), convolutional neural network (CNN) and recurrent neural network (RNN). Next we present a survey of some multi-view and multi-label learning frameworks based on deep neural networks. At last we introduce some applications of deep multi-view and multi-label learning, including e-commerce item categorization, deep semantic hashing, dense image captioning, and our preliminary work on x-ray scattering image classification.",
"title": ""
},
{
"docid": "6dbe5a46a96857b58fc6c3d0ca7ded94",
"text": "High-school grades are often viewed as an unreliable criterion for college admissions, owing to differences in grading standards across high schools, while standardized tests are seen as methodologically rigorous, providing a more uniform and valid yardstick for assessing student ability and achievement. The present study challenges that conventional view. The study finds that high-school grade point average (HSGPA) is consistently the best predictor not only of freshman grades in college, the outcome indicator most often employed in predictive-validity studies, but of four-year college outcomes as well. A previous study, UC and the SAT (Geiser with Studley, 2003), demonstrated that HSGPA in college-preparatory courses was the best predictor of freshman grades for a sample of almost 80,000 students admitted to the University of California. Because freshman grades provide only a short-term indicator of college performance, the present study tracked four-year college outcomes, including cumulative college grades and graduation, for the same sample in order to examine the relative contribution of high-school record and standardized tests in predicting longerterm college performance. Key findings are: (1) HSGPA is consistently the strongest predictor of four-year college outcomes for all academic disciplines, campuses and freshman cohorts in the UC sample; (2) surprisingly, the predictive weight associated with HSGPA increases after the freshman year, accounting for a greater proportion of variance in cumulative fourth-year than first-year college grades; and (3) as an admissions criterion, HSGPA has less adverse impact than standardized tests on disadvantaged and underrepresented minority students. The paper concludes with a discussion of the implications of these findings for admissions policy and argues for greater emphasis on the high-school record, and a corresponding de-emphasis on standardized tests, in college admissions. * The study was supported by a grant from the Koret Foundation. Geiser and Santelices: VALIDITY OF HIGH-SCHOOL GRADES 2 CSHE Research & Occasional Paper Series Introduction and Policy Context This study examines the relative contribution of high-school grades and standardized admissions tests in predicting students’ long-term performance in college, including cumulative grade-point average and college graduation. The relative emphasis on grades vs. tests as admissions criteria has become increasingly visible as a policy issue at selective colleges and universities, particularly in states such as Texas and California, where affirmative action has been challenged or eliminated. Compared to high-school gradepoint average (HSGPA), scores on standardized admissions tests such as the SAT I are much more closely correlated with students’ socioeconomic background characteristics. As shown in Table 1, for example, among our study sample of almost 80,000 University of California (UC) freshmen, SAT I verbal and math scores exhibit a strong, positive relationship with measures of socioeconomic status (SES) such as family income, parents’ education and the academic ranking of a student’s high school, whereas HSGPA is only weakly associated with such measures. As a result, standardized admissions tests tend to have greater adverse impact than HSGPA on underrepresented minority students, who come disproportionately from disadvantaged backgrounds. 
The extent of the difference can be seen by rank-ordering students on both standardized tests and high-school grades and comparing the distributions. Rank-ordering students by test scores produces much sharper racial/ethnic stratification than when the same students are ranked by HSGPA, as shown in Table 2. It should be borne in mind that the UC sample shown here represents a highly select group of students, drawn from the top 12.5% of California high-school graduates under the provisions of the state’s Master Plan for Higher Education. Overall, under-represented minority students account for about 17 percent of that group, although their percentage varies considerably across different HSGPA and SAT levels within the sample. When students are ranked by HSGPA, underrepresented minorities account for 28 percent of students in the bottom ... [Table 1, Correlation of Admissions Factors with SES. Correlations with Family Income / Parents' Education / School API Decile: SAT I verbal 0.32, 0.39, 0.32; SAT I math 0.24, 0.32, 0.39; HSGPA 0.04, 0.06, 0.01. Source: UC Corporate Student System data on 79,785 first-time freshmen entering between Fall 1996 and Fall 1999.]",
"title": ""
},
{
"docid": "ddd7aaa70841b172b4dc58263cc8a94e",
"text": "Fingerprint-spoofing attack often occurs when imposters gain access illegally by using artificial fingerprints, which are made of common fingerprint materials, such as silicon, latex, etc. Thus, to protect our privacy, many fingerprint liveness detection methods are put forward to discriminate fake or true fingerprint. Current work on liveness detection for fingerprint images is focused on the construction of complex handcrafted features, but these methods normally destroy or lose spatial information between pixels. Different from existing methods, convolutional neural network (CNN) can generate high-level semantic representations by learning and concatenating low-level edge and shape features from a large amount of labeled data. Thus, CNN is explored to solve the above problem and discriminate true fingerprints from fake ones in this paper. To reduce the redundant information and extract the most distinct features, ROI and PCA operations are performed for learned features of convolutional layer or pooling layer. After that, the extracted features are fed into SVM classifier. Experimental results based on the LivDet (2013) and the LivDet (2011) datasets, which are captured by using different fingerprint materials, indicate that the classification performance of our proposed method is both efficient and convenient compared with the other previous methods.",
"title": ""
},
{
"docid": "63b2c2634ec0d9507f0974203e5cc4e9",
"text": "In this paper we describe a deep network architecture that maps visual input to control actions for a robotic planar reaching task with 100% reliability in real-world trials. Our network is trained in simulation and fine-tuned with a limited number of real-world images. The policy search is guided by a kinematics-based controller (K-GPS), which works more effectively and efficiently than ε-Greedy. A critical insight in our system is the need to introduce a bottleneck in the network between the perception and control networks, and to initially train these networks independently.",
"title": ""
},
{
"docid": "82cb3db6b4738738a78fea332b075add",
"text": "This paper presents a semi-supervised learning framework for a customized semantic segmentation task using multiview image streams. A key challenge of the customized task lies in the limited accessibility of the labeled data due to the requirement of prohibitive manual annotation effort. We hypothesize that it is possible to leverage multiview image streams that are linked through the underlying 3D geometry, which can provide an additional supervisionary signal to train a segmentation model. We formulate a new cross-supervision method using a shape belief transfer—the segmentation belief in one image is used to predict that of the other image through epipolar geometry analogous to shape-from-silhouette. The shape belief transfer provides the upper and lower bounds of the segmentation for the unlabeled data where its gap approaches asymptotically to zero as the number of the labeled views increases. We integrate this theory to design a novel network that is agnostic to camera calibration, network model, and semantic category and bypasses the intermediate process of suboptimal 3D reconstruction. We validate this network by recognizing a customized semantic category per pixel from realworld visual data including non-human species and a subject of interest in social videos where attaining large-scale annotation data is infeasible.",
"title": ""
},
{
"docid": "a759ddc24cebbbf0ac71686b179962df",
"text": "Most proteins must fold into defined three-dimensional structures to gain functional activity. But in the cellular environment, newly synthesized proteins are at great risk of aberrant folding and aggregation, potentially forming toxic species. To avoid these dangers, cells invest in a complex network of molecular chaperones, which use ingenious mechanisms to prevent aggregation and promote efficient folding. Because protein molecules are highly dynamic, constant chaperone surveillance is required to ensure protein homeostasis (proteostasis). Recent advances suggest that an age-related decline in proteostasis capacity allows the manifestation of various protein-aggregation diseases, including Alzheimer's disease and Parkinson's disease. Interventions in these and numerous other pathological states may spring from a detailed understanding of the pathways underlying proteome maintenance.",
"title": ""
},
{
"docid": "47ef46ef69a23e393d8503154f110a81",
"text": "Question answering (Q&A) communities have been gaining popularity in the past few years. The success of such sites depends mainly on the contribution of a small number of expert users who provide a significant portion of the helpful answers, and so identifying users that have the potential of becoming strong contributers is an important task for owners of such communities.\n We present a study of the popular Q&A website StackOverflow (SO), in which users ask and answer questions about software development, algorithms, math and other technical topics. The dataset includes information on 3.5 million questions and 6.9 million answers created by 1.3 million users in the years 2008--2012. Participation in activities on the site (such as asking and answering questions) earns users reputation, which is an indicator of the value of that user to the site.\n We describe an analysis of the SO reputation system, and the participation patterns of high and low reputation users. The contributions of very high reputation users to the site indicate that they are the primary source of answers, and especially of high quality answers. Interestingly, we find that while the majority of questions on the site are asked by low reputation users, on average a high reputation user asks more questions than a user with low reputation. We consider a number of graph analysis methods for detecting influential and anomalous users in the underlying user interaction network, and find they are effective in detecting extreme behaviors such as those of spam users. Lastly, we show an application of our analysis: by considering user contributions over first months of activity on the site, we predict who will become influential long-term contributors.",
"title": ""
},
{
"docid": "729cd7bb3f7346143db6005a56a46279",
"text": "The feature extraction stage of speech recognition is important historically and is the subject of much current research, particularly to promote robustness to acoustic disturbances such as additive noise and reverberation. Biologically inspired and biologically related approaches are an important subset of feature extraction methods for ASR.",
"title": ""
},
{
"docid": "5467003778aa2c120c36ac023f0df704",
"text": "We consider the task of automated estimation of facial expression intensity. This involves estimation of multiple output variables (facial action units — AUs) that are structurally dependent. Their structure arises from statistically induced co-occurrence patterns of AU intensity levels. Modeling this structure is critical for improving the estimation performance; however, this performance is bounded by the quality of the input features extracted from face images. The goal of this paper is to model these structures and estimate complex feature representations simultaneously by combining conditional random field (CRF) encoded AU dependencies with deep learning. To this end, we propose a novel Copula CNN deep learning approach for modeling multivariate ordinal variables. Our model accounts for ordinal structure in output variables and their non-linear dependencies via copula functions modeled as cliques of a CRF. These are jointly optimized with deep CNN feature encoding layers using a newly introduced balanced batch iterative training algorithm. We demonstrate the effectiveness of our approach on the task of AU intensity estimation on two benchmark datasets. We show that joint learning of the deep features and the target output structure results in significant performance gains compared to existing deep structured models for analysis of facial expressions.",
"title": ""
},
{
"docid": "ab662b1dd07a7ae868f70784408e1ce1",
"text": "We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes that we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on around 35,500 patients from the latest MIMIC III dataset from Beth Israel Deaconess Hospital.",
"title": ""
}
] | scidocsrr |
9303f490a97755ab2c14e154dedd900c | Graph Analytics Through Fine-Grained Parallelism | [
{
"docid": "cbf278a630fbc3e4b5c363d7cb976aa4",
"text": "Iterative computations are pervasive among data analysis applications in the cloud, including Web search, online social network analysis, recommendation systems, and so on. These cloud applications typically involve data sets of massive scale. Fast convergence of the iterative computation on the massive data set is essential for these applications. In this paper, we explore the opportunity for accelerating iterative computations and propose a distributed computing framework, PrIter, which enables fast iterative computation by providing the support of prioritized iteration. Instead of performing computations on all data records without discrimination, PrIter prioritizes the computations that help convergence the most, so that the convergence speed of iterative process is significantly improved. We evaluate PrIter on a local cluster of machines as well as on Amazon EC2 Cloud. The results show that PrIter achieves up to 50x speedup over Hadoop for a series of iterative algorithms.",
"title": ""
},
{
"docid": "efcfb0aac56068374d861f24775c9cce",
"text": "Hekaton is a new database engine optimized for memory resident data and OLTP workloads. Hekaton is fully integrated into SQL Server; it is not a separate system. To take advantage of Hekaton, a user simply declares a table memory optimized. Hekaton tables are fully transactional and durable and accessed using T-SQL in the same way as regular SQL Server tables. A query can reference both Hekaton tables and regular tables and a transaction can update data in both types of tables. T-SQL stored procedures that reference only Hekaton tables can be compiled into machine code for further performance improvements. The engine is designed for high con-currency. To achieve this it uses only latch-free data structures and a new optimistic, multiversion concurrency control technique. This paper gives an overview of the design of the Hekaton engine and reports some experimental results.",
"title": ""
},
{
"docid": "0105247ab487c2d06f3ffa0d00d4b4f9",
"text": "Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput.",
"title": ""
}
] | [
{
"docid": "b3c947eb12abdc0abf7f3bc0de9e74fc",
"text": "This paper describes the development of two nine-storey elevators control system for a residential building. The control system adopts PLC as controller, and uses a parallel connection dispatching rule based on \"minimum waiting time\" to run two elevators in parallel mode. The paper gives the basic structure, control principle and realization method of the PLC control system in detail. It also presents the ladder diagram of the key aspects of the system. The system has simple peripheral circuit and the operation result showed that it enhanced the reliability and pe.rformance of the elevators.",
"title": ""
},
{
"docid": "349caca78b6d21b5f8853b41a8201429",
"text": "OBJECTIVE\nTo evaluate the effectiveness of a functional thumb orthosis on the dominant hand of patients with rheumatoid arthritis and boutonniere thumb.\n\n\nMETHODS\nForty patients with rheumatoid arthritis and boutonniere deformity of the thumb were randomly distributed into two groups. The intervention group used the orthosis daily and the control group used the orthosis only during the evaluation. Participants were evaluated at baseline as well as after 45 and 90 days. Assessments were preformed using the O'Connor Dexterity Test, Jamar dynamometer, pinch gauge, goniometry and the Health Assessment Questionnaire. A visual analogue scale was used to assess thumb pain in the metacarpophalangeal joint.\n\n\nRESULTS\nPatients in the intervention group experienced a statistically significant reduction in pain. The thumb orthosis did not disrupt grip and pinch strength, function, Health Assessment Questionnaire score or dexterity in either group.\n\n\nCONCLUSION\nThe use of thumb orthosis for type I and type II boutonniere deformities was effective in relieving pain.",
"title": ""
},
{
"docid": "f4d4e87dd292377115ff815cc56c001c",
"text": "We present the design and implementation of a real-time, distributed light field camera. Our system allows multiple viewers to navigate virtual cameras in a dynamically changing light field that is captured in real-time. Our light field camera consists of 64 commodity video cameras that are connected to off-the-shelf computers. We employ a distributed rendering algorithm that allows us to overcome the data bandwidth problems inherent in dynamic light fields. Our algorithm works by selectively transmitting only those portions of the video streams that contribute to the desired virtual views. This technique not only reduces the total bandwidth, but it also allows us to scale the number of cameras in our system without increasing network bandwidth. We demonstrate our system with a number of examples.",
"title": ""
},
{
"docid": "441e22ca7323b7490cbdf7f5e6e85a80",
"text": "Familial gigantiform cementoma (FGC) is a rare autosomal dominant, benign fibro-cemento-osseous lesion of the jaws that can cause severe facial deformity. True FGC with familial history is extremely rare and there has been no literature regarding the radiological follow-up of FGC. We report a case of recurrent FGC in an Asian female child who has been under our observation for 6 years since she was 15 months old. After repeated recurrences and subsequent surgeries, the growth of the tumor had seemed to plateau on recent follow-up CT images. The transition from an enhancing soft tissue lesion to a homogeneous bony lesion on CT may indicate decreased growth potential of FGC.",
"title": ""
},
{
"docid": "1a2f2e75691e538c867b6ce58591a6a5",
"text": "Despite the profusion of NIALM researches and products using complex algorithms, addressing the market for low cost, compact, real-time and effective NIALM smart meters is still a challenge. This paper talks about the design of a NIALM smart meter for home appliances, with the ability to self-detect and disaggregate most home appliances. In order to satisfy the compact, real-time, low price requirements and to solve the challenge in slow transient and multi-state appliances, two algorithms are used: the CUSUM to improve the event detection and the Genetic Algorithm (GA) for appliance disaggregation. Evaluation of these algorithms has been done according to public NIALM REDD data set [6]. They are now in first stage of architecture design using Labview FPGA methodology. KeywordsNIALM, CUSUM, Genetic Algorithm, K-mean, classification, smart meter, FPGA.",
"title": ""
},
{
"docid": "2f566d97cf0949ae54276525b805239e",
"text": "The paper analyzes some forms of linguistic ambiguity in English in a specific register, i.e. newspaper headlines. In particular, the focus of the research is on examples of lexical and syntactic ambiguity that result in sources of voluntary or involuntary humor. The study is based on a corpus of 135 verbally ambiguous headlines found on web sites presenting humorous bits of information. The linguistic phenomena that contribute to create this kind of semantic confusion in headlines will be analyzed and divided into the three main categories of lexical, syntactic, and phonological ambiguity, and examples from the corpus will be discussed for each category. The main results of the study were that, firstly, contrary to the findings of previous research on jokes, syntactically ambiguous headlines were found in good percentage in the corpus and that this might point to di¤erences in genre. Secondly, two new configurations for the processing of the disjunctor/connector order were found. In the first of these configurations the disjunctor appears before the connector, instead of being placed after or coinciding with the ambiguous element, while in the second one two ambiguous elements are present, each of which functions both as a connector and",
"title": ""
},
{
"docid": "8069999c95b31e8c847091f72b694af7",
"text": "Software defined radio (SDR) is a rapidly evolving technology which implements some functional modules of a radio system in software executing on a programmable processor. SDR provides a flexible mechanism to reconfigure the radio, enabling networked devices to easily adapt to user preferences and the operating environment. However, the very mechanisms that provide the ability to reconfigure the radio through software also give rise to serious security concerns such as unauthorized modification of the software, leading to radio malfunction and interference with other users' communications. Both the SDR device and the network need to be protected from such malicious radio reconfiguration.\n In this paper, we propose a new architecture to protect SDR devices from malicious reconfiguration. The proposed architecture is based on robust separation of the radio operation environment and user application environment through the use of virtualization. A secure radio middleware layer is used to intercept all attempts to reconfigure the radio, and a security policy monitor checks the target configuration against security policies that represent the interests of various parties. Therefore, secure reconfiguration can be ensured in the radio operation environment even if the operating system in the user application environment is compromised. We have prototyped the proposed secure SDR architecture using VMware and the GNU Radio toolkit, and demonstrate that the overheads incurred by the architecture are small and tolerable. Therefore, we believe that the proposed solution could be applied to address SDR security concerns in a wide range of both general-purpose and embedded computing systems.",
"title": ""
},
{
"docid": "d1d1b85b0675c59f01c61c6f144ee8a7",
"text": "We propose a novel adaptive test of goodness-of-fit, with computational cost linear in the number of samples. We learn the test features that best indicate the differences between observed samples and a reference model, by minimizing the false negative rate. These features are constructed via Stein’s method, meaning that it is not necessary to compute the normalising constant of the model. We analyse the asymptotic Bahadur efficiency of the new test, and prove that under a mean-shift alternative, our test always has greater relative efficiency than a previous linear-time kernel test, regardless of the choice of parameters for that test. In experiments, the performance of our method exceeds that of the earlier linear-time test, and matches or exceeds the power of a quadratic-time kernel test. In high dimensions and where model structure may be exploited, our goodness of fit test performs far better than a quadratic-time two-sample test based on the Maximum Mean Discrepancy, with samples drawn from the model.",
"title": ""
},
{
"docid": "05366166b02ebd29abeb2dcf67710981",
"text": "Wireless access to the Internet via PDAs (personal digital assistants) provides Web type services in the mobile world. What we are lacking are design guidelines for such PDA services. For Web publishing, however, there are many resources to look for guidelines. The guidelines can be classified according to which aspect of the Web media they are related: software/hardware, content and its organization, or aesthetics and layout. In order to be applicable to PDA services, these guidelines have to be modified. In this paper we analyze the main characteristics of PDAs and their influence to the guidelines.",
"title": ""
},
{
"docid": "428fea9d583921320c0377b483b1280e",
"text": "Purpose: The purpose of this paper is to perform a systematic review of articles that have used the unified theory of acceptance and use of technology (UTAUT). Design/methodology/approach: The results produced in this research are based on the literature analysis of 174 existing articles on the UTAUT model. This has been performed by collecting data including demographic details, methodological details, limitations, and significance of relationships between the constructs from the available articles based on the UTAUT. Findings: The findings were categorised by dividing the articles that used the UTAUT model into types of information systems used, research approach and methods employed, and tools and techniques implemented to analyse results. We also perform the weight analysis of variables and found that performance expectancy and behavioural intention qualified for the best predictor category. The research also analysed and presented the limitations of existing studies. Research limitations/implications: The search activities were centered on occurrences of keywords to avoid tracing a large number of publications where these keywords might have been used as casual words in the main text. However, we acknowledge that there may be a number of studies, which lack keywords in the title, but still focus upon UTAUT in some form. Originality/value: This is the first research of its type, which has extensively examined the literature on the UTAUT and provided the researchers with the accumulative knowledge about the model.",
"title": ""
},
{
"docid": "ef6678881f503c1cec330ddde3e30929",
"text": "Complex queries over high speed data streams often need to rely on approximations to keep up with their input. The research community has developed a rich literature on approximate streaming algorithms for this application. Many of these algorithms produce samples of the input stream, providing better properties than conventional random sampling. In this paper, we abstract the stream sampling process and design a new stream sample operator. We show how it can be used to implement a wide variety of algorithms that perform sampling and sampling-based aggregations. Also, we show how to implement the operator in Gigascope - a high speed stream database specialized for IP network monitoring applications. As an example study, we apply the operator within such an enhanced Gigascope to perform subset-sum sampling which is of great interest for IP network management. We evaluate this implemention on a live, high speed internet traffic data stream and find that (a) the operator is a flexible, versatile addition to Gigascope suitable for tuning and algorithm engineering, and (b) the operator imposes only a small evaluation overhead. This is the first operational implementation we know of, for a wide variety of stream sampling algorithms at line speed within a data stream management system.",
"title": ""
},
{
"docid": "e0fd648da901ed99ddbed3457bc83cfe",
"text": "This clinical trial assessed the ability of Gluma Dentin Bond to inhibit dentinal sensitivity in teeth prepared to receive complete cast restorations. Twenty patients provided 76 teeth for the study. Following tooth preparation, dentinal surfaces were coated with either sterile water (control) or two 30-second applications of Gluma Dentin Bond (test) on either intact or removed smear layers. Patients were recalled after 14 days for a test of sensitivity of the prepared dentin to compressed air, osmotic stimulus (saturated CaCl2 solution), and tactile stimulation via a scratch test under controlled loads. A significantly lower number of teeth responded to the test stimuli for both Gluma groups when compared to the controls (P less than .01). No difference was noted between teeth with smear layers intact or removed prior to treatment with Gluma.",
"title": ""
},
{
"docid": "ea304e700faa3d3cae4bff89cf01c397",
"text": "Ternary logic is a promising alternative to the conventional binary logic in VLSI design as it provides the advantages of reduced interconnects, higher operating speeds, and smaller chip area. This paper presents a pair of circuits for implementing a ternary half adder using carbon nanotube field-effect transistors. The proposed designs combine both futuristic ternary and conventional binary logic design approach. One of the proposed circuits for ternary to binary decoder simplifies further circuit implementation and provides excellent delay and power advantages in data path circuit such as adder. These circuits have been extensively simulated using HSPICE to obtain power, delay, and power delay product. The circuit performances are compared with alternative designs reported in recent literature. One of the proposed ternary adders has been demonstrated power, power delay product improvement up to 63% and 66% respectively, with lesser transistor count. So, the use of these half adders in complex arithmetic circuits will be advantageous.",
"title": ""
},
{
"docid": "d0a41ebc758439b91f96b44c40dd711b",
"text": "Chirp signals are very common in radar, communication, sonar, and etc. Little is known about chirp images, i.e., 2-D chirp signals. In fact, such images frequently appear in optics and medical science. Newton's rings fringe pattern is a classical example of the images, which is widely used in optical metrology. It is known that the fractional Fourier transform(FRFT) is a convenient method for processing chirp signals. Furthermore, it can be extended to 2-D fractional Fourier transform for processing 2-D chirp signals. It is interesting to observe the chirp images in the 2-D fractional Fourier transform domain and extract some physical parameters hidden in the images. Besides that, in the FRFT domain, it is easy to separate the 2-D chirp signal from other signals to obtain the desired image.",
"title": ""
},
{
"docid": "76081fd0b4e06c6ee5d7f1e5cef7fe84",
"text": "Systematic procedure is described for designing bandpass filters with wide bandwidths based on parallel coupled three-line microstrip structures. It is found that the tight gap sizes between the resonators of end stages and feed lines, required for wideband filters based on traditional coupled line design, can be greatly released. The relation between the circuit parameters of a three-line coupling section and an admittance inverter circuit is derived. A design graph for substrate with /spl epsiv//sub r/=10.2 is provided. Two filters of orders 3 and 5 with fractional bandwidths 40% and 50%, respectively, are fabricated and measured. Good agreement between prediction and measurement is obtained.",
"title": ""
},
{
"docid": "6490b984de3a9769cdae92208e7bb26d",
"text": "A new perspective on the topic of antibiotic resistance is beginning to emerge based on a broader evolutionary and ecological understanding rather than from the traditional boundaries of clinical research of antibiotic-resistant bacterial pathogens. Phylogenetic insights into the evolution and diversity of several antibiotic resistance genes suggest that at least some of these genes have a long evolutionary history of diversification that began well before the 'antibiotic era'. Besides, there is no indication that lateral gene transfer from antibiotic-producing bacteria has played any significant role in shaping the pool of antibiotic resistance genes in clinically relevant and commensal bacteria. Most likely, the primary antibiotic resistance gene pool originated and diversified within the environmental bacterial communities, from which the genes were mobilized and penetrated into taxonomically and ecologically distant bacterial populations, including pathogens. Dissemination and penetration of antibiotic resistance genes from antibiotic producers were less significant and essentially limited to other high G+C bacteria. Besides direct selection by antibiotics, there is a number of other factors that may contribute to dissemination and maintenance of antibiotic resistance genes in bacterial populations.",
"title": ""
},
{
"docid": "1e7d55b2d45b44ab93c39894c2ea0838",
"text": "Simulink Stateflow is widely used for the model-driven development of software. However, the increasing demand of rigorous verification for safety critical applications brings new challenge to the Simulink Stateflow because of the lack of formal semantics. In this paper, we present STU, a self-contained toolkit to bridge the Simulink Stateflow and a well-defined rigorous verification. The tool translates the Simulink Stateflow into the Uppaal timed automata for verification. Compared to existing work, more advanced and complex modeling features in Stateflow such as the event stack, conditional action and timer are supported. Then, with the strong verification power of Uppaal, we can not only find design defects that are missed by the Simulink Design Verifier, but also check more important temporal properties. The evaluation on artificial examples and real industrial applications demonstrates the effectiveness.",
"title": ""
},
{
"docid": "257ffbc75578916dc89a703598ac0447",
"text": "Implant surgery in mandibular anterior region may turn from an easy minor surgery into a complicated one for the surgeon, due to inadequate knowledge of the anatomy of the surgical area and/or ignorance toward the required surgical protocol. Hence, the purpose of this article is to present an overview on the: (a) Incidence of massive bleeding and its consequences after implant placement in mandibular anterior region. (b) Its etiology, the precautionary measures to be taken to avoid such an incidence in clinical practice and management of such a hemorrhage if at all happens. An inclusion criterion for selection of article was defined, and an electronic Medline search through different database using different keywords and manual search in journals and books was executed. Relevant articles were selected based upon inclusion criteria to form the valid protocols for implant surgery in the anterior mandible. Further, from the selected articles, 21 articles describing case reports were summarized separately in a table to alert the dental surgeons about the morbidity they could come across while operating in this region. If all the required adequate measures for diagnosis and treatment planning are taken and appropriate surgical protocol is followed, mandibular anterior region is no doubt a preferable area for implant placement.",
"title": ""
},
{
"docid": "940b907c28adeaddc2515f304b1d885e",
"text": "In this study, we intend to identify the evolutionary footprints of the South Iberian population focusing on the Berber and Arab influence, which has received little attention in the literature. Analysis of the Y-chromosome variation represents a convenient way to assess the genetic contribution of North African populations to the present-day South Iberian genetic pool and could help to reconstruct other demographic events that could have influenced on that region. A total of 26 Y-SNPs and 17 Y-STRs were genotyped in 144 samples from 26 different districts of South Iberia in order to assess the male genetic composition and the level of substructure of male lineages in this area. To obtain a more comprehensive picture of the genetic structure of the South Iberian region as a whole, our data were compared with published data on neighboring populations. Our analyses allow us to confirm the specific impact of the Arab and Berber expansion and dominion of the Peninsula. Nevertheless, our results suggest that this influence is not bigger in Andalusia than in other Iberian populations.",
"title": ""
},
{
"docid": "1328ced6939005175d3fbe2ef95fd067",
"text": "We present SNIPER, an algorithm for performing efficient multi-scale training in instance level visual recognition tasks. Instead of processing every pixel in an image pyramid, SNIPER processes context regions around ground-truth instances (referred to as chips) at the appropriate scale. For background sampling, these context-regions are generated using proposals extracted from a region proposal network trained with a short learning schedule. Hence, the number of chips generated per image during training adaptively changes based on the scene complexity. SNIPER only processes 30% more pixels compared to the commonly used single scale training at 800x1333 pixels on the COCO dataset. But, it also observes samples from extreme resolutions of the image pyramid, like 1400x2000 pixels. As SNIPER operates on resampled low resolution chips (512x512 pixels), it can have a batch size as large as 20 on a single GPU even with a ResNet-101 backbone. Therefore it can benefit from batch-normalization during training without the need for synchronizing batch-normalization statistics across GPUs. SNIPER brings training of instance level recognition tasks like object detection closer to the protocol for image classification and suggests that the commonly accepted guideline that it is important to train on high resolution images for instance level visual recognition tasks might not be correct. Our implementation based on Faster-RCNN with a ResNet-101 backbone obtains an mAP of 47.6% on the COCO dataset for bounding box detection and can process 5 images per second during inference with a single GPU. Code is available at https://github.com/mahyarnajibi/SNIPER/.",
"title": ""
}
] | scidocsrr |
60c02cef732b387703ca70aac707a40e | Pedestrian Detection: An Evaluation of the State of the Art | [
{
"docid": "13d94a3afd97c4c5f8839652c58ab05f",
"text": "We present an approach for learning to detect objects in still gray images, that is based on a sparse, part-based representation of objects. A vocabulary of information-rich object parts is automatically constructed from a set of sample images of the object class of interest. Images are then represented using parts from this vocabulary, along with spatial relations observed among them. Based on this representation, a feature-efficient learning algorithm is used to learn to detect instances of the object class. The framework developed can be applied to any object with distinguishable parts in a relatively fixed spatial configuration. We report experiments on images of side views of cars. Our experiments show that the method achieves high detection accuracy on a difficult test set of real-world images, and is highly robust to partial occlusion and background variation. In addition, we discuss and offer solutions to several methodological issues that are significant for the research community to be able to evaluate object detection",
"title": ""
},
{
"docid": "1589e72380265787a10288c5ad906670",
"text": "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.",
"title": ""
},
{
"docid": "359b6308a6e6e3d6857cb6b4f59fd1bc",
"text": "Significant research has been devoted to detecting people in images and videos. In this paper we describe a human detection method that augments widely used edge-based features with texture and color information, providing us with a much richer descriptor set. This augmentation results in an extremely high-dimensional feature space (more than 170,000 dimensions). In such high-dimensional spaces, classical machine learning algorithms such as SVMs are nearly intractable with respect to training. Furthermore, the number of training samples is much smaller than the dimensionality of the feature space, by at least an order of magnitude. Finally, the extraction of features from a densely sampled grid structure leads to a high degree of multicollinearity. To circumvent these data characteristics, we employ Partial Least Squares (PLS) analysis, an efficient dimensionality reduction technique, one which preserves significant discriminative information, to project the data onto a much lower dimensional subspace (20 dimensions, reduced from the original 170,000). Our human detection system, employing PLS analysis over the enriched descriptor set, is shown to outperform state-of-the-art techniques on three varied datasets including the popular INRIA pedestrian dataset, the low-resolution gray-scale DaimlerChrysler pedestrian dataset, and the ETHZ pedestrian dataset consisting of full-length videos of crowded scenes.",
"title": ""
},
{
"docid": "72bbc123119afa92f652d0a5332671e9",
"text": "Both detection and tracking people are challenging problems, especially in complex real world scenes that commonly involve multiple people, complicated occlusions, and cluttered or even moving backgrounds. People detectors have been shown to be able to locate pedestrians even in complex street scenes, but false positives have remained frequent. The identification of particular individuals has remained challenging as well. Tracking methods are able to find a particular individual in image sequences, but are severely challenged by real-world scenarios such as crowded street scenes. In this paper, we combine the advantages of both detection and tracking in a single framework. The approximate articulation of each person is detected in every frame based on local features that model the appearance of individual body parts. Prior knowledge on possible articulations and temporal coherency within a walking cycle are modeled using a hierarchical Gaussian process latent variable model (hGPLVM). We show how the combination of these results improves hypotheses for position and articulation of each person in several subsequent frames. We present experimental results that demonstrate how this allows to detect and track multiple people in cluttered scenes with reoccurring occlusions.",
"title": ""
}
] | [
{
"docid": "ed15e2118e219cf699c38100a0d124c3",
"text": "Is Facebook becoming a place where people mistakenly think they can literally get away with murder? In a 2011 Facebook murder-for-hire case in Philadelphia, PA, a 19-yearold mother offered $1,000 on Facebook to kill her 22-year-old boyfriend, the father of her 2-year-old daughter. The boyfriend was killed while the only two suspects responding to the mother’s post were in custody, so there is speculation that the murder was drug related. The mother pleaded guilty to conspiracy to commit murder, and was immediately paroled on a 3to 23-month sentence. Other ‘‘Facebook murder’’ perpetrators are being brought to justice, one way or another:",
"title": ""
},
{
"docid": "b140f08d25d5c37c4fa8743333664af2",
"text": " Random walks on an association graph using candidate matches as nodes. Rank candidate matches by stationary distribution Personalized jump for enforcing the matching constraints during the random walks process Matching constraints satisfying reweighting vector is calculated iteratively by inflation and bistochastic normalization Due to object motion or viewpoint change, relationships between two nodes are not exactly same Outlier Noise Deformation Noise",
"title": ""
},
{
"docid": "ca7e7fa988bf2ed1635e957ea6cd810d",
"text": "Knowledge graph (KG) is known to be helpful for the task of question answering (QA), since it provides well-structured relational information between entities, and allows one to further infer indirect facts. However, it is challenging to build QA systems which can learn to reason over knowledge graphs based on question-answer pairs alone. First, when people ask questions, their expressions are noisy (for example, typos in texts, or variations in pronunciations), which is non-trivial for the QA system to match those mentioned entities to the knowledge graph. Second, many questions require multi-hop logic reasoning over the knowledge graph to retrieve the answers. To address these challenges, we propose a novel and unified deep learning architecture, and an end-to-end variational learning algorithm which can handle noise in questions, and learn multi-hop reasoning simultaneously. Our method achieves state-of-the-art performance on a recent benchmark dataset in the literature. We also derive a series of new benchmark datasets, including questions for multi-hop reasoning, questions paraphrased by neural translation model, and questions in human voice. Our method yields very promising results on all these challenging datasets.",
"title": ""
},
{
"docid": "c34b6fac632c05c73daee2f0abce3ae8",
"text": "OBJECTIVES\nUnilateral strength training produces an increase in strength of the contralateral homologous muscle group. This process of strength transfer, known as cross education, is generally attributed to neural adaptations. It has been suggested that unilateral strength training of the free limb may assist in maintaining the functional capacity of an immobilised limb via cross education of strength, potentially enhancing recovery outcomes following injury. Therefore, the purpose of this review is to examine the impact of immobilisation, the mechanisms that may contribute to cross education, and possible implications for the application of unilateral training to maintain strength during immobilisation.\n\n\nDESIGN\nCritical review of literature.\n\n\nMETHODS\nSearch of online databases.\n\n\nRESULTS\nImmobilisation is well known for its detrimental effects on muscular function. Early reductions in strength outweigh atrophy, suggesting a neural contribution to strength loss, however direct evidence for the role of the central nervous system in this process is limited. Similarly, the precise neural mechanisms responsible for cross education strength transfer remain somewhat unknown. Two recent studies demonstrated that unilateral training of the free limb successfully maintained strength in the contralateral immobilised limb, although the role of the nervous system in this process was not quantified.\n\n\nCONCLUSIONS\nCross education provides a unique opportunity for enhancing rehabilitation following injury. By gaining an understanding of the neural adaptations occurring during immobilisation and cross education, future research can utilise the application of unilateral training in clinical musculoskeletal injury rehabilitation.",
"title": ""
},
{
"docid": "9b547f43a345d2acc3a75c80a8b2f064",
"text": "A risk-metric framework that supports Enterprise Risk Management is described. At the heart of the framework is the notion of a risk profile that provides risk measurement for risk elements. By providing a generic template in which metrics can be codified in terms of metric space operators, risk profiles can be used to construct a variety of risk measures for different business contexts. These measures can vary from conventional economic risk calculations to the kinds of metrics that are used by decision support systems, such as those supporting inexact reasoning and which are considered to closely match how humans combine information.",
"title": ""
},
{
"docid": "6ddf8cc094a38ebe47d51303f4792dc6",
"text": "The symmetric travelling salesman problem is a real world combinatorial optimization problem and a well researched domain. When solving combinatorial optimization problems such as the travelling salesman problem a low-level construction heuristic is usually used to create an initial solution, rather than randomly creating a solution, which is further optimized using techniques such as tabu search, simulated annealing and genetic algorithms, amongst others. These heuristics are usually manually derived by humans and this is a time consuming process requiring many man hours. The research presented in this paper forms part of a larger initiative aimed at automating the process of deriving construction heuristics for combinatorial optimization problems.\n The study investigates genetic programming to induce low-level construction heuristics for the symmetric travelling salesman problem. While this has been examined for other combinatorial optimization problems, to the authors' knowledge this is the first attempt at evolving low-level construction heuristics for the travelling salesman problem. In this study a generational genetic programming algorithm randomly creates an initial population of low-level construction heuristics which is iteratively refined over a set number of generations by the processes of fitness evaluation, selection of parents and application of genetic operators.\n The approach is tested on 23 problem instances, of varying problem characteristics, from the TSPLIB and VLSI benchmark sets. The evolved heuristics were found to perform better than the human derived heuristic, namely, the nearest neighbourhood heuristic, generally used to create initial solutions for the travelling salesman problem.",
"title": ""
},
{
"docid": "d56855e068a4524fda44d93ac9763cab",
"text": "greatest cause of mortality from cardiovascular disease, after myocardial infarction and cerebrovascular stroke. From hospital epidemiological data it has been calculated that the incidence of PE in the USA is 1 per 1,000 annually. The real number is likely to be larger, since the condition goes unrecognised in many patients. Mortality due to PE has been estimated to exceed 15% in the first three months after diagnosis. PE is a dramatic and life-threatening complication of deep venous thrombosis (DVT). For this reason, the prevention, diagnosis and treatment of DVT is of special importance, since symptomatic PE occurs in 30% of those affected. If asymptomatic episodes are also included, it is estimated that 50-60% of DVT patients develop PE. DVT and PE are manifestations of the same entity, namely thromboembolic disease. If we extrapolate the epidemiological data from the USA to Greece, which has a population of about ten million, 20,000 new cases of thromboembolic disease may be expected annually. Of these patients, PE will occur in 10,000, of which 6,000 will have symptoms and 900 will die during the first trimester.",
"title": ""
},
{
"docid": "2caaff9258c6b7a429a8d1aa086b73e6",
"text": "Ahstract- For many people suffering from motor disabilities, assistive devices controlled with only brain activity are the only way to interact with their environment [1]. Natural tasks often require different kinds of interactions, involving different controllers the user should be able to select in a self-paced way. We developed a Brain-Computer Interface (BCI) allowing users to switch between four control modes in a self-paced way in real-time. Since the system is devised to be used in domestic environments in a user-friendly way, we selected non-invasive electroencephalographic (EEG) signals and convolutional neural networks (CNNs), known for their ability to find the optimal features in classification tasks. We tested our system using the Cybathlon BCI computer game, which embodies all the challenges inherent to real-time control. Our preliminary results show that an efficient architecture (SmallNet), with only one convolutional layer, can classify 4 mental activities chosen by the user. The BCI system is run and validated online. It is kept up-to-date through the use of newly collected signals along playing, reaching an online accuracy of 47.6% where most approaches only report results obtained offline. We found that models trained with data collected online better predicted the behaviour of the system in real-time. This suggests that similar (CNN based) offline classifying methods found in the literature might experience a drop in performance when applied online. Compared to our previous decoder of physiological signals relying on blinks, we increased by a factor 2 the amount of states among which the user can transit, bringing the opportunity for finer control of specific subtasks composing natural grasping in a self-paced way. Our results are comparable to those showed at the Cybathlon's BCI Race but further improvements on accuracy are required.",
"title": ""
},
{
"docid": "e5fe8cfe50499f0175cd503cdae6138e",
"text": "We aim to detect complex events in long Internet videos that may last for hours. A major challenge in this setting is that only a few shots in a long video are relevant to the event of interest while others are irrelevant or even misleading. Instead of indifferently pooling the shots, we first define a novel notion of semantic saliency that assesses the relevance of each shot with the event of interest. We then prioritize the shots according to their saliency scores since shots that are semantically more salient are expected to contribute more to the final event detector. Next, we propose a new isotonic regularizer that is able to exploit the semantic ordering information. The resulting nearly-isotonic SVM classifier exhibits higher discriminative power. Computationally, we develop an efficient implementation using the proximal gradient algorithm, and we prove new, closed-form proximal steps. We conduct extensive experiments on three real-world video datasets and confirm the effectiveness of the proposed approach.",
"title": ""
},
{
"docid": "5a6bfd63fbbe4aea72226c4aa30ac05d",
"text": "Submitted: 1 December 2015 Accepted: 6 April 2016 doi:10.1111/zsc.12190 Sotka, E.E., Bell, T., Hughes, L.E., Lowry, J.K. & Poore, A.G.B. (2016). A molecular phylogeny of marine amphipods in the herbivorous family Ampithoidae. —Zoologica Scripta, 00, 000–000. Ampithoid amphipods dominate invertebrate assemblages associated with shallow-water macroalgae and seagrasses worldwide and represent the most species-rich family of herbivorous amphipod known. To generate the first molecular phylogeny of this family, we sequenced 35 species from 10 genera at two mitochondrial genes [the cytochrome c oxidase subunit I (COI) and the large subunit of 16 s (LSU)] and two nuclear loci [sodium–potassium ATPase (NAK) and elongation factor 1-alpha (EF1)], for a total of 1453 base pairs. All 10 genera are embedded within an apparently monophyletic Ampithoidae (Amphitholina, Ampithoe, Biancolina, Cymadusa, Exampithoe, Paragrubia, Peramphithoe, Pleonexes, Plumithoe, Pseudoamphithoides and Sunamphitoe). Biancolina was previously placed within its own superfamily in another suborder. Within the family, single-locus trees were generally poor at resolving relationships among genera. Combined-locus trees were better at resolving deeper nodes, but complete resolution will require greater taxon sampling of ampithoids and closely related outgroup species, and more molecular characters. Despite these difficulties, our data generally support the monophyly of Ampithoidae, novel evolutionary relationships among genera, several currently accepted genera that will require revisions via alpha taxonomy and the presence of cryptic species. Corresponding author: Erik Sotka, Department of Biology and the College of Charleston Marine Laboratory, 205 Fort Johnson Road, Charleston, SC 29412, USA. E-mail: [email protected] Erik E. Sotka, and Tina Bell, Department of Biology and Grice Marine Laboratory, College of Charleston, 205 Fort Johnson Road, Charleston, SC 29412, USA. E-mails: [email protected], [email protected] Lauren E. Hughes, and James K. Lowry, Australian Museum Research Institute, 6 College Street, Sydney, NSW 2010, Australia. E-mails: [email protected], [email protected] Alistair G. B. Poore, Evolution & Ecology Research Centre, School of Biological, Earth and Environmental Sciences, University of New South Wales, Sydney, NSW 2052, Australia. E-mail: [email protected]",
"title": ""
},
{
"docid": "ec44e814277dd0d45a314c42ef417cbe",
"text": "INTRODUCTION Oxygen support therapy should be given to the patients with acute hypoxic respiratory insufficiency in order to provide oxygenation of the tissues until the underlying pathology improves. The inspiratory flow rate requirement of patients with respiratory insufficiency varies between 30 and 120 L/min. Low flow and high flow conventional oxygen support systems produce a maximum flow rate of 15 L/min, and FiO2 changes depending on the patient’s peak inspiratory flow rate, respiratory pattern, the mask that is used, or the characteristics of the cannula. The inability to provide adequate airflow leads to discomfort in tachypneic patients. With high-flow nasal oxygen (HFNO) cannulas, warmed and humidified air matching the body temperature can be regulated at flow rates of 5–60 L/min, and oxygen delivery varies between 21% and 100%. When HFNO, first used in infants, was reported to increase the risk of infection, its long-term use was stopped. This problem was later eliminated with the use of sterile water, and its use has become a current issue in critical adult patients as well. Studies show that HFNO treatment improves physiological parameters when compared to conventional oxygen systems. Although there are studies indicating successful applications in different patient groups, there are also studies indicating that it does not create any difference in clinical parameters, but patient comfort is better in HFNO when compared with standard oxygen therapy and noninvasive mechanical ventilation (NIMV) (1-6). In this compilation, the physiological effect mechanisms of HFNO treatment and its use in various clinical situations are discussed in the light of current studies.",
"title": ""
},
{
"docid": "fb2ab8efc11c371e7183eacaee707f71",
"text": "Direct current (DC) motors are controlled easily and have very high performance. The speed of the motors could be adjusted within a wide range. Today, classical control techniques (such as Proportional Integral Differential PID) are very commonly used for speed control purposes. However, it is observed that the classical control techniques do not have an adequate performance in the case of nonlinear systems. Thus, instead, a modern technique is preferred: fuzzy logic. In this paper the control system is modelled using MATLAB/Simulink. Using both PID controller and fuzzy logic techniques, the results are compared for different speed values.",
"title": ""
},
{
"docid": "78c3573511176ba63e2cf727e09c7eb4",
"text": "Human aesthetic preference in the visual domain is reviewed from definitional, methodological, empirical, and theoretical perspectives. Aesthetic science is distinguished from the perception of art and from philosophical treatments of aesthetics. The strengths and weaknesses of important behavioral techniques are presented and discussed, including two-alternative forced-choice, rank order, subjective rating, production/adjustment, indirect, and other tasks. Major findings are reviewed about preferences for colors (single colors, color combinations, and color harmony), spatial structure (low-level spatial properties, shape properties, and spatial composition within a frame), and individual differences in both color and spatial structure. Major theoretical accounts of aesthetic response are outlined and evaluated, including explanations in terms of mere exposure effects, arousal dynamics, categorical prototypes, ecological factors, perceptual and conceptual fluency, and the interaction of multiple components. The results of the review support the conclusion that aesthetic response can be studied rigorously and meaningfully within the framework of scientific psychology.",
"title": ""
},
{
"docid": "c1981c3b0ccd26d4c8f02c2aa5e71c7a",
"text": "Functional genomics studies have led to the discovery of a large amount of non-coding RNAs from the human genome; among them are long non-coding RNAs (lncRNAs). Emerging evidence indicates that lncRNAs could have a critical role in the regulation of cellular processes such as cell growth and apoptosis as well as cancer progression and metastasis. As master gene regulators, lncRNAs are capable of forming lncRNA–protein (ribonucleoprotein) complexes to regulate a large number of genes. For example, lincRNA-RoR suppresses p53 in response to DNA damage through interaction with heterogeneous nuclear ribonucleoprotein I (hnRNP I). The present study demonstrates that hnRNP I can also form a functional ribonucleoprotein complex with lncRNA urothelial carcinoma-associated 1 (UCA1) and increase the UCA1 stability. Of interest, the phosphorylated form of hnRNP I, predominantly in the cytoplasm, is responsible for the interaction with UCA1. Moreover, although hnRNP I enhances the translation of p27 (Kip1) through interaction with the 5′-untranslated region (5′-UTR) of p27 mRNAs, the interaction of UCA1 with hnRNP I suppresses the p27 protein level by competitive inhibition. In support of this finding, UCA1 has an oncogenic role in breast cancer both in vitro and in vivo. Finally, we show a negative correlation between p27 and UCA in the breast tumor cancer tissue microarray. Together, our results suggest an important role of UCA1 in breast cancer.",
"title": ""
},
{
"docid": "bcb6ef3082d50038b456af4b942e75eb",
"text": "Vertebral angioma is a common bone tumor. We report a case of L1 vertebral angioma revealed by type A3.2 traumatic pathological fracture of the same vertebra. Management comprised emergency percutaneous osteosynthesis and, after stabilization of the multiple trauma, arterial embolization and percutaneous kyphoplasty.",
"title": ""
},
{
"docid": "8a81d5a3a91fdd0d4e55a8ce477f279a",
"text": "Sex differences are prominent in mood and anxiety disorders and may provide a window into mechanisms of onset and maintenance of affective disturbances in both men and women. With the plethora of sex differences in brain structure, function, and stress responsivity, as well as differences in exposure to reproductive hormones, social expectations and experiences, the challenge is to understand which sex differences are relevant to affective illness. This review will focus on clinical aspects of sex differences in affective disorders including the emergence of sex differences across developmental stages and the impact of reproductive events. Biological, cultural, and experiential factors that may underlie sex differences in the phenomenology of mood and anxiety disorders are discussed.",
"title": ""
},
{
"docid": "f85b08a0e3f38c1471b3c7f05e8a17ba",
"text": "In an end-to-end dialog system, the aim of dialog state tracking is to accurately estimate a compact representation of the current dialog status from a sequence of noisy observations produced by the speech recognition and the natural language understanding modules. A state tracking module is primarily meant to act as support for a dialog policy but it can also be used as support for dialog corpus summarization and other kinds of information extraction from transcription of dialogs. From a probabilistic view, this is achieved by maintaining a posterior distribution over hidden dialog states composed, in the simplest case, of a set of context dependent variables. Once a dialog policy is defined, deterministic or learnt, it is in charge of selecting an optimal dialog act given the estimated dialog state and a defined reward function. This paper introduces a novel method of dialog state tracking based on the general paradigm of machine reading and proposes to solve it using a memory-enhanced neural network architecture. We evaluate the proposed approach on the second Dialog State Tracking Challenge (DSTC-2) dataset that has been converted for the occasion in order to fit the relaxed assumption of a machine reading formulation where the true state is only provided at the very end of each dialog instead of providing the state updates at the utterance level. We show that the proposed tracker gives encouraging results. Finally, we propose to extend the DSTC-2 dataset with specific reasoning capabilities requirement like counting, list maintenance, yes-no question answering and indefinite knowledge management.",
"title": ""
},
{
"docid": "1ba931d8b32c3e1622c46c7b645608a3",
"text": "The recently introduced method, which was called ldquostretching,rdquo is extended to timed Petri nets which may have both controllable and uncontrollable transitions. Using this method, a new Petri net, called ldquostretched Petri net,rdquo which has only unit firing durations, is obtained to represent a timed-transition Petri net. Using this net, the state of the original timed Petri net can be represented easily. This representation also makes it easy to design a supervisory controller for a timed Petri net for any purpose. In this paper, supervisory controller design to avoid deadlock is considered in particular. Using this method, a controller is first designed for the stretched Petri net. Then, using this controller, a controller for the original timed Petri net is obtained. Algorithms to construct the reachability sets of the stretched and original timed Petri nets, as well as algorithms to obtain the controller for the original timed Petri net are presented. These algorithms are implemented using Matlab. Examples are also presented to illustrate the introduced approach.",
"title": ""
},
{
"docid": "1cfab58b5b57009817a54faceafacd8e",
"text": "Current Web applications are very complex and high sophisticated software products, whose usability can heavily determine their success or failure. Defining methods for ensuring usability is one of the current goals of the Web Engineering research. Also, much attention on usability is currently paid by Industry, which is recognizing the importance of adopting methods for usability evaluation before and after the application deployment. This chapter introduces principles and evaluation methods to be adopted during the whole application lifecycle for promoting usability. For each evaluation method, the main features, as well as the emerging advantages and drawbacks are illustrated, so as to support the choice of an evaluation plan that best fits the goals to be pursued and the available resources. The design and evaluation of a real application is also described for exemplifying the introduced concepts and methods.",
"title": ""
},
{
"docid": "5a777c011d7dbd82653b1b2d0f007607",
"text": "The Factored Language Model (FLM) is a flexible framework for incorporating various information sources, such as morphology and part-of-speech, into language modeling. FLMs have so far been successfully applied to tasks such as speech recognition and machine translation; it has the potential to be used in a wide variety of problems in estimating probability tables from sparse data. This tutorial serves as a comprehensive description of FLMs and related algorithms. We document the FLM functionalities as implemented in the SRI Language Modeling toolkit and provide an introductory walk-through using FLMs on an actual dataset. Our goal is to provide an easy-to-understand tutorial and reference for researchers interested in applying FLMs to their problems. Overview of the Tutorial We first describe the factored language model (Section 1) and generalized backoff (Section 2), two complementary techniques that attempt to improve statistical estimation (i.e., reduce parameter variance) in language models, and that also attempt to better describe the way in which language (and sequences of words) might be produced. Researchers familar with the algorithms behind FLMs may skip to Section 3, which describes the FLM programs and file formats in the publicly-available SRI Language Modeling (SRILM) toolkit.1 Section 4 is a step-by-step walkthrough with several FLM examples on a real language modeling dataset. This may be useful for beginning users of the FLMs. Finally, Section 5 discusses the problem of automatically tuning FLM parameters on real datasets and refers to existing software. This may be of interest to advanced users of FLMs.",
"title": ""
}
] | scidocsrr |
7663e1da0e3460b971249ce724b584d3 | Mid-Curve Recommendation System: a Stacking Approach Through Neural Networks | [
{
"docid": "be692c1251cb1dc73b06951c54037701",
"text": "Can we train the computer to beat experienced traders for financial assert trading? In this paper, we try to address this challenge by introducing a recurrent deep neural network (NN) for real-time financial signal representation and trading. Our model is inspired by two biological-related learning concepts of deep learning (DL) and reinforcement learning (RL). In the framework, the DL part automatically senses the dynamic market condition for informative feature learning. Then, the RL module interacts with deep representations and makes trading decisions to accumulate the ultimate rewards in an unknown environment. The learning system is implemented in a complex NN that exhibits both the deep and recurrent structures. Hence, we propose a task-aware backpropagation through time method to cope with the gradient vanishing issue in deep training. The robustness of the neural system is verified on both the stock and the commodity future markets under broad testing conditions.",
"title": ""
},
{
"docid": "50cc2033252216368c3bf19ea32b8a2c",
"text": "Sometimes you just have to clench your teeth and go for the differential matrix algebra. And the central limit theorems. Together with the maximum likelihood techniques. And the static mean variance portfolio theory. Not forgetting the dynamic asset pricing models. And these are just the tools you need before you can start making empirical inferences in financial economics.” So wrote Ruben Lee, playfully, in a review of The Econometrics of Financial Markets, winner of TIAA-CREF’s Paul A. Samuelson Award. In economist Harry M. Markowitz, who in won the Nobel Prize in Economics, published his landmark thesis “Portfolio Selection” as an article in the Journal of Finance, and financial economics was born. Over the subsequent decades, this young and burgeoning field saw many advances in theory but few in econometric technique or empirical results. Then, nearly four decades later, Campbell, Lo, and MacKinlay’s The Econometrics of Financial Markets made a bold leap forward by integrating theory and empirical work. The three economists combined their own pathbreaking research with a generation of foundational work in modern financial theory and research. The book includes treatment of topics from the predictability of asset returns to the capital asset pricing model and arbitrage pricing theory, from statistical fractals to chaos theory. Read widely in academe as well as in the business world, The Econometrics of Financial Markets has become a new landmark in financial economics, extending and enhancing the Nobel Prize– winning work established by the early trailblazers in this important field.",
"title": ""
}
] | [
{
"docid": "48aff90183293227a99ecf3911c7296a",
"text": "Based on data from a survey (n = 3291) and 14 qualitative interviews among Danish older adults, this study investigated the use of, and attitudes toward, information communications technology (ICT) and the digital delivery of public services. While age, gender, and socioeconomic status were associated with use of ICT, these determinants lost their explanatory power when we controlled for attitudes and experiences. We identified three segments that differed in their use of ICT and attitudes toward digital service delivery. As nonuse of ICT often results from the lack of willingness to use it rather than from material or cognitive deficiencies, policy measures for bridging the digital divide should focus on skills and confidence rather than on access or ability.",
"title": ""
},
{
"docid": "2f9b8ee2f7578c7820eced92fb98c696",
"text": "The Tic tac toe is very popular game having a 3 × 3 grid board and 2 players. A Special Symbol (X or O) is assigned to each player to indicate the slot is covered by the respective player. The winner of the game is the player who first cover a horizontal, vertical and diagonal row of the board having only player's own symbols. This paper presents the design model of Tic tac toe Game using Multi-Tape Turing Machine in which both player choose input randomly and result of the game is declared. The computational Model of Tic tac toe is used to describe it in a formal manner.",
"title": ""
},
{
"docid": "047c486e94c217a9ce84cdd57fc647fe",
"text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we describe foundational concepts of explainability and show how they can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.",
"title": ""
},
{
"docid": "0afe679d5b022cc31a3ce69b967f8d77",
"text": "Cyber-crime has reached unprecedented proportions in this day and age. In addition, the internet has created a world with seemingly no barriers while making a countless number of tools available to the cyber-criminal. In light of this, Computer Forensic Specialists employ state-of-the-art tools and methodologies in the extraction and analysis of data from storage devices used at the digital crime scene. The focus of this paper is to conduct an investigation into some of these Forensic tools eg.Encase®. This investigation will address commonalities across the Forensic tools, their essential differences and ultimately point out what features need to be improved in these tools to allow for effective autopsies of storage devices.",
"title": ""
},
{
"docid": "a380ee9ea523d1a3a09afcf2fb01a70d",
"text": "Back-translation has become a commonly employed heuristic for semi-supervised neural machine translation. The technique is both straightforward to apply and has led to stateof-the-art results. In this work, we offer a principled interpretation of back-translation as approximate inference in a generative model of bitext and show how the standard implementation of back-translation corresponds to a single iteration of the wake-sleep algorithm in our proposed model. Moreover, this interpretation suggests a natural iterative generalization, which we demonstrate leads to further improvement of up to 1.6 BLEU.",
"title": ""
},
{
"docid": "9c8ab4fa4e6951990c771025cd4cc36c",
"text": "This paper presents a methodology for extracting road edge and lane information for smart and intelligent navigation of vehicles. The range information provided by a fast laser range-measuring device is processed by an extended Kalman filter to extract the road edge or curb information. The resultant road edge information is used to aid in the extraction of the lane boundary from a CCD camera image. Hough Transform (HT) is used to extract the candidate lane boundary edges, and the most probable lane boundary is determined using an Active Line Model based on minimizing an appropriate Energy function. Experimental results are presented to demonstrate the effectiveness of the combined Laser and Vision strategy for road-edge and lane boundary detection.",
"title": ""
},
{
"docid": "99d76fafe2a238a061e67e4c5e5bea52",
"text": "F/OSS software has been described by many as a puzzle. In the past five years, it has stimulated the curiosity of scholars in a variety of fields, including economics, law, psychology, anthropology and computer science, so that the number of contributions on the subject has increased exponentially. The purpose of this paper is to provide a sufficiently comprehensive account of these contributions in order to draw some general conclusions on the state of our understanding of the phenomenon and identify directions for future research. The exercise suggests that what is puzzling about F/OSS is not so much the fact that people freely contribute to a good they make available to all, but rather the complexity of its institutional structure and its ability to organizationally evolve over time. JEL Classification: K11, L22, L23, L86, O31, O34.",
"title": ""
},
{
"docid": "9de00d8cf6b3001f976fa49c42875620",
"text": "This paper is a preliminary report on the efficiency of two strategies of data reduction in a data preprocessing stage. In the first experiment, we apply the Count-Min sketching algorithm, while in the second experiment we discretize our data prior to applying the Count-Min algorithm. By conducting a discretization before sketching, the need for the increased number of buckets in sketching is reduced. This preliminary attempt of combining two methods with the same purpose has shown potential. In our experiments, we use sensor data collected to study the environmental fluctuation and its impact on the quality of fresh peaches and nectarines in cold chain.",
"title": ""
},
{
"docid": "37f4da100d31ad1da1ba21168c95d7e9",
"text": "An AC chopper controller with symmetrical Pulse-Width Modulation (PWM) is proposed to achieve better performance for a single-phase induction motor compared to phase-angle control line-commutated voltage controllers and integral-cycle control of thyristors. Forced commutated device IGBT controlled by a microcontroller was used in the AC chopper which has the advantages of simplicity, ability to control large amounts of power and low waveform distortion. In this paper the simulation and hardware models of a simple single phase IGBT An AC controller has been developed which showed good results.",
"title": ""
},
{
"docid": "6aa1c48fcde6674990a03a1a15b5dc0e",
"text": "A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications with band-notched function. The proposed antenna is composed of two offset microstrip-fed antenna elements with UWB performance. To achieve high isolation and polarization diversity, the antenna elements are placed perpendicular to each other. A parasitic T-shaped strip between the radiating elements is employed as a decoupling structure to further suppress the mutual coupling. In addition, the notched band at 5.5 GHz is realized by etching a pair of L-shaped slits on the ground. The antenna prototype with a compact size of 38.5 × 38.5 mm2 has been fabricated and measured. Experimental results show that the antenna has an impedance bandwidth of 3.08-11.8 GHz with reflection coefficient less than -10 dB, except the rejection band of 5.03-5.97 GHz. Besides, port isolation, envelope correlation coefficient and radiation characteristics are also investigated. The results indicate that the MIMO antenna is suitable for band-notched UWB applications.",
"title": ""
},
{
"docid": "dee5489accb832615f63623bc445212f",
"text": "In this paper a simulation-based scheduling system is discussed which was developed for a semiconductor Backend facility. Apart from the usual dispatching rules it uses heuristic search strategies for the optimization of the operating sequences. In practice hereby multiple objectives have to be considered, e. g. concurrent minimization of mean cycle time, maximization of throughput and due date compliance. Because the simulation model is very complex and simulation time itself is not negligible, we emphasize to increase the convergence of heuristic optimization methods, consequentially reducing the number of necessary iterations. Several realized strategies are presented.",
"title": ""
},
{
"docid": "311f0668e477dda8ef4716d58ff9cdc8",
"text": "A fundamental aspect of controlling humanoid robots lies in the capability to exploit the whole body to perform tasks. This work introduces a novel whole body control library called OpenSoT. OpenSoT is combined with joint impedance control to create a framework that can effectively generate complex whole body motion behaviors for humanoids according to the needs of the interaction level of the tasks. OpenSoT gives an easy way to implement tasks, constraints, bounds and solvers by providing common interfaces. We present the mathematical foundation of the library and validate it on the compliant humanoid robot COMAN to execute multiple motion tasks under a number of constraints. The framework is able to solve hierarchies of tasks of arbitrary complexity in a robust and reliable way.",
"title": ""
},
{
"docid": "246a4ed0d3a94fead44c1e48cc235a63",
"text": "With the introduction of fully convolutional neural networks, deep learning has raised the benchmark for medical image segmentation on both speed and accuracy, and different networks have been proposed for 2D and 3D segmentation with promising results. Nevertheless, most networks only handle relatively small numbers of labels (<10), and there are very limited works on handling highly unbalanced object sizes especially in 3D segmentation. In this paper, we propose a network architecture and the corresponding loss function which improve segmentation of very small structures. By combining skip connections and deep supervision with respect to the computational feasibility of 3D segmentation, we propose a fast converging and computationally efficient network architecture for accurate segmentation. Furthermore, inspired by the concept of focal loss, we propose an exponential logarithmic loss which balances the labels not only by their relative sizes but also by their segmentation difficulties. We achieve an average Dice coefficient of 82% on brain segmentation with 20 labels, with the ratio of the smallest to largest object sizes as 0.14%. Less than 100 epochs are required to reach such accuracy, and segmenting a 128×128×128 volume only takes around 0.4 s.",
"title": ""
},
{
"docid": "8d29b510fb10f8f7dc4563bca36b9e6d",
"text": "Face images that are captured by surveillance cameras usually have a very low resolution, which significantly limits the performance of face recognition systems. In the past, super-resolution techniques have been proposed to increase the resolution by combining information from multiple images. These techniques use super-resolution as a preprocessing step to obtain a high-resolution image that is later passed to a face recognition system. Considering that most state-of-the-art face recognition systems use an initial dimensionality reduction method, we propose to transfer the super-resolution reconstruction from pixel domain to a lower dimensional face space. Such an approach has the advantage of a significant decrease in the computational complexity of the super-resolution reconstruction. The reconstruction algorithm no longer tries to obtain a visually improved high-quality image, but instead constructs the information required by the recognition system directly in the low dimensional domain without any unnecessary overhead. In addition, we show that face-space super-resolution is more robust to registration errors and noise than pixel-domain super-resolution because of the addition of model-based constraints.",
"title": ""
},
{
"docid": "30a8b93f979f913f92fc8a39ae8d25ab",
"text": "Many of the recent Trajectory Optimization algorithms alternate between local approximation of the dynamics and conservative policy update. However, linearly approximating the dynamics in order to derive the new policy can bias the update and prevent convergence to the optimal policy. In this article, we propose a new model-free algorithm that backpropagates a local quadratic time-dependent Q-Function, allowing the derivation of the policy update in closed form. Our policy update ensures exact KL-constraint satisfaction without simplifying assumptions on the system dynamics demonstrating improved performance in comparison to related Trajectory Optimization algorithms linearizing the dynamics.",
"title": ""
},
{
"docid": "dd16da9d44e47fb0f7fe1a25063daeee",
"text": "The excitation and vibration triggered by the long-term operation of railway vehicles inevitably result in defective states of catenary support devices. With the massive construction of high-speed electrified railways, automatic defect detection of diverse and plentiful fasteners on the catenary support device is of great significance for operation safety and cost reduction. Nowadays, the catenary support devices are periodically captured by the cameras mounted on the inspection vehicles during the night, but the inspection still mostly relies on human visual interpretation. To reduce the human involvement, this paper proposes a novel vision-based method that applies the deep convolutional neural networks (DCNNs) in the defect detection of the fasteners. Our system cascades three DCNN-based detection stages in a coarse-to-fine manner, including two detectors to sequentially localize the cantilever joints and their fasteners and a classifier to diagnose the fasteners’ defects. Extensive experiments and comparisons of the defect detection of catenary support devices along the Wuhan–Guangzhou high-speed railway line indicate that the system can achieve a high detection rate with good adaptation and robustness in complex environments.",
"title": ""
},
{
"docid": "d911ccb1bbb761cbfee3e961b8732534",
"text": "This paper presents a study on SIFT (Scale Invariant Feature transform) which is a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. There are various applications of SIFT that includes object recognition, robotic mapping and navigation, image stitching, 3D modeling, gesture recognition, video tracking, individual identification of wildlife and match moving.",
"title": ""
},
{
"docid": "4b8af6dfcaaea4246c10ab840ea03608",
"text": "Mobile cloud computing (MCC) as an emerging and prospective computing paradigm, can significantly enhance computation capability and save energy of smart mobile devices (SMDs) by offloading computation-intensive tasks from resource-constrained SMDs onto the resource-rich cloud. However, how to achieve energy-efficient computation offloading under the hard constraint for application completion time remains a challenge issue. To address such a challenge, in this paper, we provide an energy-efficient dynamic offloading and resource scheduling (eDors) policy to reduce energy consumption and shorten application completion time. We first formulate the eDors problem into the energy-efficiency cost (EEC) minimization problem while satisfying the task-dependency requirements and the completion time deadline constraint. To solve the optimization problem, we then propose a distributed eDors algorithm consisting of three subalgorithms of computation offloading selection, clock frequency control and transmission power allocation. More importantly, we find that the computation offloading selection depends on not only the computing workload of a task, but also the maximum completion time of its immediate predecessors and the clock frequency and transmission power of the mobile device. Finally, our experimental results in a real testbed demonstrate that the eDors algorithm can effectively reduce the EEC by optimally adjusting the CPU clock frequency of SMDs based on the dynamic voltage and frequency scaling (DVFS) technique in local computing, and adapting the transmission power for the wireless channel conditions in cloud computing.",
"title": ""
},
{
"docid": "8cbdd4f368ca9fd7dcf7e4f8c9748412",
"text": "We describe an efficient neural network method to automatically learn sentiment lexicons without relying on any manual resources. The method takes inspiration from the NRC method, which gives the best results in SemEval13 by leveraging emoticons in large tweets, using the PMI between words and tweet sentiments to define the sentiment attributes of words. We show that better lexicons can be learned by using them to predict the tweet sentiment labels. By using a very simple neural network, our method is fast and can take advantage of the same data volume as the NRC method. Experiments show that our lexicons give significantly better accuracies on multiple languages compared to the current best methods.",
"title": ""
},
{
"docid": "2e0e53ff34dccd5412faab5b51a3a2f2",
"text": "This study examines print and online daily newspaper journalists’ perceptions of the credibility of Internet news information, as well as the influence of several factors— most notably, professional role conceptions—on those perceptions. Credibility was measured as a multidimensional construct. The results of a survey of U.S. journalists (N = 655) show that Internet news information was viewed as moderately credible overall and that online newspaper journalists rated Internet news information as significantly more credible than did print newspaper journalists. Hierarchical regression analyses reveal that Internet reliance was a strong positive predictor of credibility. Two professional role conceptions also emerged as significant predictors. The populist mobilizer role conception was a significant positive predictor of online news credibility, while the adversarial role conception was a significant negative predictor. Demographic characteristics of print and online daily newspaper journalists did not influence their perceptions of online news credibility.",
"title": ""
}
] | scidocsrr |
58739370c4538449529104817a3ce640 | Warning traffic sign recognition using a HOG-based K-d tree | [
{
"docid": "1c1775a64703f7276e4843b8afc26117",
"text": "This paper describes a computer vision based system for real-time robust traffic sign detection, tracking, and recognition. Such a framework is of major interest for driver assistance in an intelligent automotive cockpit environment. The proposed approach consists of two components. First, signs are detected using a set of Haar wavelet features obtained from AdaBoost training. Compared to previously published approaches, our solution offers a generic, joint modeling of color and shape information without the need of tuning free parameters. Once detected, objects are efficiently tracked within a temporal information propagation framework. Second, classification is performed using Bayesian generative modeling. Making use of the tracking information, hypotheses are fused over multiple frames. Experiments show high detection and recognition accuracy and a frame rate of approximately 10 frames per second on a standard PC.",
"title": ""
}
] | [
{
"docid": "07de7621bcba13f151b8616f8ef46bb4",
"text": "There is growing evidence that client firms expect outsourcing suppliers to transform their business. Indeed, most outsourcing suppliers have delivered IT operational and business process innovation to client firms; however, achieving strategic innovation through outsourcing has been perceived to be far more challenging. Building on the growing interest in the IS outsourcing literature, this paper seeks to advance our understanding of the role that relational and contractual governance plays in achieving strategic innovation through outsourcing. We hypothesized and tested empirically the relationship between the quality of client-supplier relationships and the likelihood of achieving strategic innovation, and the interaction effect of different contract types, such as fixed-price, time and materials, partnership and their combinations. Results from a pan-European survey of 248 large firms suggest that high-quality relationships between clients and suppliers may indeed help achieve strategic innovation through outsourcing. However, within the spectrum of various outsourcing contracts, only the partnership contract, when included in the client contract portfolio alongside either fixed-price, time and materials or their combination, presents a significant positive effect on relational governance and is likely to strengthen the positive effect of the quality of client-supplier relationships on strategic innovation.",
"title": ""
},
{
"docid": "47afccb5e7bcdade764666f3b5ab042e",
"text": "Social media comprises interactive applications and platforms for creating, sharing and exchange of user-generated contents. The past ten years have brought huge growth in social media, especially online social networking services, and it is changing our ways to organize and communicate. It aggregates opinions and feelings of diverse groups of people at low cost. Mining the attributes and contents of social media gives us an opportunity to discover social structure characteristics, analyze action patterns qualitatively and quantitatively, and sometimes the ability to predict future human related events. In this paper, we firstly discuss the realms which can be predicted with current social media, then overview available predictors and techniques of prediction, and finally discuss challenges and possible future directions.",
"title": ""
},
{
"docid": "800fd3b3b6dfd21838006e643ba92a0d",
"text": "The primary goals in use of half-bridge LLC series-resonant converter (LLC-SRC) are high efficiency, low noise, and wide-range regulation. A voltage-clamped drive circuit for simultaneously driving both primary and secondary switches is proposed to achieve synchronous rectification (SR) at switching frequency higher than the dominant resonant frequency. No high/low-side driver circuit for half-bridge switches of LLC-SRC is required and less circuit complexity is achieved. The SR mode LLC-SRC developed for reducing output rectification losses is described along with steady-state analysis, gate drive strategy, and its experiments. Design consideration is described thoroughly so as to build up a reference for design and realization. A design example of 240W SR LLC-SRC is examined and an average efficiency as high as 95% at full load is achieved. All performances verified by simulation and experiment are close to the theoretical predictions.",
"title": ""
},
{
"docid": "5768212e1fa93a7321fa6c0deff10c88",
"text": "Human research biobanks have rapidly expanded in the past 20 years, in terms of both their complexity and utility. To date there exists no agreement upon classification schema for these biobanks. This is an important issue to address for several reasons: to ensure that the diversity of biobanks is appreciated, to assist researchers in understanding what type of biobank they need access to, and to help institutions/funding bodies appreciate the varying level of support required for different types of biobanks. To capture the degree of complexity, specialization, and diversity that exists among human research biobanks, we propose here a new classification schema achieved using a conceptual classification approach. This schema is based on 4 functional biobank \"elements\" (donor/participant, design, biospecimens, and brand), which we feel are most important to the major stakeholder groups (public/participants, members of the biobank community, health care professionals/researcher users, sponsors/funders, and oversight bodies), and multiple intrinsic features or \"subelements\" (eg, the element \"biospecimens\" could be further classified based on preservation method into fixed, frozen, fresh, live, and desiccated). We further propose that the subelements relating to design (scale, accrual, data format, and data content) and brand (user, leadership, and sponsor) should be specifically recognized by individual biobanks and included in their communications to the broad stakeholder audience.",
"title": ""
},
{
"docid": "b006c534bd688fb2023f56f3952390d1",
"text": "The idea of applying IOT technologies to smart home system is introduced. An original architecture of the integrated system is analyzed with its detailed introduction. This architecture has great scalability. Based on this proposed architecture many applications can be integrated into the system through uniform interface. Agents are proposed to communicate with appliances through RFID tags. Key issues to be solved to promote the development of smart home system are also discussed.",
"title": ""
},
{
"docid": "f712384911f20ce7a475c4fe7d6be35d",
"text": "Weather forecasting provides numerous societal benefits, from extreme weather warnings to agricultural planning. In recent decades, advances in forecasting have been rapid, arising from improved observations and models, and better integration of these through data assimilation and related techniques. Further improvements are not yet constrained by limits on predictability. Better forecasting, in turn, can contribute to a wide range of environmental forecasting, from forest-fire smoke to bird migrations.",
"title": ""
},
{
"docid": "a441f01dae68134b419aa33f1f9588a6",
"text": "In this work we present a technique for using natural language to help reinforcement learning generalize to unseen environments using neural machine translation techniques. These techniques are then integrated into policy shaping to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, and show that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping.",
"title": ""
},
{
"docid": "408ef85850165cb8ffa97811cb5dc957",
"text": "Inspired by the recent development of deep network-based methods in semantic image segmentation, we introduce an end-to-end trainable model for face mask extraction in video sequence. Comparing to landmark-based sparse face shape representation, our method can produce the segmentation masks of individual facial components, which can better reflect their detailed shape variations. By integrating convolutional LSTM (ConvLSTM) algorithm with fully convolutional networks (FCN), our new ConvLSTM-FCN model works on a per-sequence basis and takes advantage of the temporal correlation in video clips. In addition, we also propose a novel loss function, called segmentation loss, to directly optimise the intersection over union (IoU) performances. In practice, to further increase segmentation accuracy, one primary model and two additional models were trained to focus on the face, eyes, and mouth regions, respectively. Our experiment shows the proposed method has achieved a 16.99% relative improvement (from 54.50 to 63.76% mean IoU) over the baseline FCN model on the 300 Videos in the Wild (300VW) dataset.",
"title": ""
},
{
"docid": "a52673140d86780db6c73787e5f53139",
"text": "Human papillomavirus (HPV) is the most important etiological factor for cervical cancer. A recent study demonstrated that more than 20 HPV types were thought to be oncogenic for uterine cervical cancer. Notably, more than one-half of women show cervical HPV infections soon after their sexual debut, and about 90 % of such infections are cleared within 3 years. Immunity against HPV might be important for elimination of the virus. The innate immune responses involving macrophages, natural killer cells, and natural killer T cells may play a role in the first line of defense against HPV infection. In the second line of defense, adaptive immunity via cytotoxic T lymphocytes (CTLs) targeting HPV16 E2 and E6 proteins appears to eliminate cells infected with HPV16. However, HPV can evade host immune responses. First, HPV does not kill host cells during viral replication and therefore neither presents viral antigen nor induces inflammation. HPV16 E6 and E7 proteins downregulate the expression of type-1 interferons (IFNs) in host cells. The lack of co-stimulatory signals by inflammatory cytokines including IFNs during antigen recognition may induce immune tolerance rather than the appropriate responses. Moreover, HPV16 E5 protein downregulates the expression of HLA-class 1, and it facilitates evasion of CTL attack. These mechanisms of immune evasion may eventually support the establishment of persistent HPV infection, leading to the induction of cervical cancer. Considering such immunological events, prophylactic HPV16 and 18 vaccine appears to be the best way to prevent cervical cancer in women who are immunized in adolescence.",
"title": ""
},
{
"docid": "b134824f6c135a331e503b77d17380c0",
"text": "Social media sites (e.g., Flickr, YouTube, and Facebook) are a popular distribution outlet for users looking to share their experiences and interests on the Web. These sites host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events of different type and scale. By automatically identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can enable event browsing and search in state-of-the-art search engines. To address this problem, we exploit the rich \"context\" associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). Using this rich context, which includes both textual and non-textual features, we can define appropriate document similarity metrics to enable online clustering of media to events. As a key contribution of this paper, we explore a variety of techniques for learning multi-feature similarity metrics for social media documents in a principled manner. We evaluate our techniques on large-scale, real-world datasets of event images from Flickr. Our evaluation results suggest that our approach identifies events, and their associated social media documents, more effectively than the state-of-the-art strategies on which we build.",
"title": ""
},
{
"docid": "556c0c1662a64f484aff9d7556b2d0b5",
"text": "In this paper, we investigate the Chinese calligraphy synthesis problem: synthesizing Chinese calligraphy images with specified style from standard font(eg. Hei font) images (Fig. 1(a)). Recent works mostly follow the stroke extraction and assemble pipeline which is complex in the process and limited by the effect of stroke extraction. In this work we treat the calligraphy synthesis problem as an image-to-image translation problem and propose a deep neural network based model which can generate calligraphy images from standard font images directly. Besides, we also construct a large scale benchmark that contains various styles for Chinese calligraphy synthesis. We evaluate our method as well as some baseline methods on the proposed dataset, and the experimental results demonstrate the effectiveness of our proposed model.",
"title": ""
},
{
"docid": "f0ca75d480ca80ab9c3f8ea35819d064",
"text": "Purpose – The purpose of this paper is to evaluate the influence of psychological hardiness, social judgment, and “Big Five” personality dimensions on leader performance in U.S. military academy cadets at West Point. Design/methodology/approach – Army Cadets were studied in two different organizational contexts: (a)summer field training, and (b)during academic semesters. Leader performance was measured with leadership grades (supervisor ratings) aggregated over four years at West Point. Findings After controlling for general intellectual abilities, hierarchical regression results showed leader performance in the summer field training environment is predicted by Big Five Extraversion, and Hardiness, and a trend for Social Judgment. During the academic period context, leader performance is predicted by mental abilities, Big Five Conscientiousness, and Hardiness, with a trend for Social Judgment. Research limitations/implications Results confirm the importance of psychological hardiness, extraversion, and conscientiousness as factors influencing leader effectiveness, and suggest that social judgment aspects of emotional intelligence can also be important. These results also show that different Big Five personality factors may influence leadership in different organizational",
"title": ""
},
{
"docid": "64c6012d2e97a1059161c295ae3b9cdb",
"text": "One of the most popular user activities on the Web is watching videos. Services like YouTube, Vimeo, and Hulu host and stream millions of videos, providing content that is on par with TV. While some of this content is popular all over the globe, some videos might be only watched in a confined, local region.\n In this work we study the relationship between popularity and locality of online YouTube videos. We investigate whether YouTube videos exhibit geographic locality of interest, with views arising from a confined spatial area rather than from a global one. Our analysis is done on a corpus of more than 20 millions YouTube videos, uploaded over one year from different regions. We find that about 50% of the videos have more than 70% of their views in a single region. By relating locality to viralness we show that social sharing generally widens the geographic reach of a video. If, however, a video cannot carry its social impulse over to other means of discovery, it gets stuck in a more confined geographic region. Finally, we analyze how the geographic properties of a video's views evolve on a daily basis during its lifetime, providing new insights on how the geographic reach of a video changes as its popularity peaks and then fades away.\n Our results demonstrate how, despite the global nature of the Web, online video consumption appears constrained by geographic locality of interest: this has a potential impact on a wide range of systems and applications, spanning from delivery networks to recommendation and discovery engines, providing new directions for future research.",
"title": ""
},
{
"docid": "b06fc6126bf086cdef1d5ac289cf5ebe",
"text": "Rhinophyma is a subtype of rosacea characterized by nodular thickening of the skin, sebaceous gland hyperplasia, dilated pores, and in its late stage, fibrosis. Phymatous changes in rosacea are most common on the nose but can also occur on the chin (gnatophyma), ears (otophyma), and eyelids (blepharophyma). In severe cases, phymatous changes result in the loss of normal facial contours, significant disfigurement, and social isolation. Additionally, patients with profound rhinophyma can experience nare obstruction and difficulty breathing due to the weight and bulk of their nose. Treatment options for severe advanced rhinophyma include cryosurgery, partial-thickness decortication with subsequent secondary repithelialization, carbon dioxide (CO2) or erbium-doped yttrium aluminum garnet (Er:YAG) laser ablation, full-thickness resection with graft or flap reconstruction, excision by electrocautery or radio frequency, and sculpting resection using a heated Shaw scalpel. We report a severe case of rhinophyma resulting in marked facial disfigurement and nasal obstruction treated successfully using the Shaw scalpel. Rhinophymectomy using the Shaw scalpel allows for efficient and efficacious treatment of rhinophyma without the need for multiple procedures or general anesthesia and thus should be considered in patients with nare obstruction who require intervention.",
"title": ""
},
{
"docid": "5ff7a82ec704c8fb5c1aa975aec0507c",
"text": "With the increase of an ageing population and chronic diseases, society becomes more health conscious and patients become “health consumers” looking for better health management. People’s perception is shifting towards patient-centered, rather than the classical, hospital–centered health services which has been propelling the evolution of telemedicine research from the classic e-Health to m-Health and now is to ubiquitous healthcare (u-Health). It is expected that mobile & ubiquitous Telemedicine, integrated with Wireless Body Area Network (WBAN), have a great potential in fostering the provision of next-generation u-Health. Despite the recent efforts and achievements, current u-Health proposed solutions still suffer from shortcomings hampering their adoption today. This paper presents a comprehensive review of up-to-date requirements in hardware, communication, and computing for next-generation u-Health systems. It compares new technological and technical trends and discusses how they address expected u-Health requirements. A thorough survey on various worldwide recent system implementations is presented in an attempt to identify shortcomings in state-of-the art solutions. In particular, challenges in WBAN and ubiquitous computing were emphasized. The purpose of this survey is not only to help beginners with a holistic approach toward understanding u-Health systems but also present to researchers new technological trends and design challenges they have to cope with, while designing such systems.",
"title": ""
},
{
"docid": "177db8a6f89528c1e822f52395a34468",
"text": "Design of a low-energy power-ON reset (POR) circuit is proposed to reduce the energy consumed by the stable supply of the dual supply static random access memory (SRAM), as the other supply is ramping up. The proposed POR circuit, when embedded inside dual supply SRAM, removes its ramp-up constraints related to voltage sequencing and pin states. The circuit consumes negligible energy during ramp-up, does not consume dynamic power during operations, and includes hysteresis to improve noise immunity against voltage fluctuations on the power supply. The POR circuit, designed in the 40-nm CMOS technology within 10.6-μm2 area, enabled 27× reduction in the energy consumed by the SRAM array supply during periphery power-up in typical conditions.",
"title": ""
},
{
"docid": "fc421a5ef2556b86c34d6f2bb4dc018e",
"text": "It's been over a decade now. We've forgotten how slow the adoption of consumer Internet commerce has been compared to other Internet growth metrics. And we're surprised when security scares like spyware and phishing result in lurches in consumer use.This paper re-visits an old theme, and finds that consumer marketing is still characterised by aggression and dominance, not sensitivity to customer needs. This conclusion is based on an examination of terms and privacy policy statements, which shows that businesses are confronting the people who buy from them with fixed, unyielding interfaces. Instead of generating trust, marketers prefer to wield power.These hard-headed approaches can work in a number of circumstances. Compelling content is one, but not everyone sells sex, gambling services, short-shelf-life news, and even shorter-shelf-life fashion goods. And, after decades of mass-media-conditioned consumer psychology research and experimentation, it's far from clear that advertising can convert everyone into salivating consumers who 'just have to have' products and services brand-linked to every new trend, especially if what you sell is groceries or handyman supplies.The thesis of this paper is that the one-dimensional, aggressive concept of B2C has long passed its use-by date. Trading is two-way -- consumers' attention, money and loyalty, in return for marketers' products and services, and vice versa.So B2C is conceptually wrong, and needs to be replaced by some buzzphrase that better conveys 'B-with-C' rather than 'to-C' and 'at-C'. Implementations of 'customised' services through 'portals' have to mature beyond data-mining-based manipulation to support two-sided relationships, and customer-managed profiles.It's all been said before, but now it's time to listen.",
"title": ""
},
{
"docid": "0808637a7768609502b63bff5ffda1cb",
"text": "Blur is a key determinant in the perception of image quality. Generally, blur causes spread of edges, which leads to shape changes in images. Discrete orthogonal moments have been widely studied as effective shape descriptors. Intuitively, blur can be represented using discrete moments since noticeable blur affects the magnitudes of moments of an image. With this consideration, this paper presents a blind image blur evaluation algorithm based on discrete Tchebichef moments. The gradient of a blurred image is first computed to account for the shape, which is more effective for blur representation. Then the gradient image is divided into equal-size blocks and the Tchebichef moments are calculated to characterize image shape. The energy of a block is computed as the sum of squared non-DC moment values. Finally, the proposed image blur score is defined as the variance-normalized moment energy, which is computed with the guidance of a visual saliency model to adapt to the characteristic of human visual system. The performance of the proposed method is evaluated on four public image quality databases. The experimental results demonstrate that our method can produce blur scores highly consistent with subjective evaluations. It also outperforms the state-of-the-art image blur metrics and several general-purpose no-reference quality metrics.",
"title": ""
},
{
"docid": "1592e0150e4805a1fab68e5daaed8ed7",
"text": "Knowledge management (KM) has emerged as a tool that allows the creation, use, distribution and transfer of knowledge in organizations. There are different frameworks that propose KM in the scientific literature. The majority of these frameworks are structured based on a strong theoretical background. This study describes a guide for the implementation of KM in a higher education institution (HEI) based on a framework with a clear description on the practical implementation. This framework is based on a technological infrastructure that includes enterprise architecture, business intelligence and educational data mining. Furthermore, a case study which describes the experience of the implementation in a HEI is presented. As a conclusion, the pros and cons on the use of the framework are analyzed.",
"title": ""
},
{
"docid": "b26724af5b086315f219ae63bcd083d1",
"text": "BACKGROUND\nHyperhomocysteinemia arising from impaired methionine metabolism, probably usually due to a deficiency of cystathionine beta-synthase, is associated with premature cerebral, peripheral, and possibly coronary vascular disease. Both the strength of this association and its independence of other risk factors for cardiovascular disease are uncertain. We studied the extent to which the association could be explained by heterozygous cystathionine beta-synthase deficiency.\n\n\nMETHODS\nWe first established a diagnostic criterion for hyperhomocysteinemia by comparing peak serum levels of homocysteine after a standard methionine-loading test in 25 obligate heterozygotes with respect to cystathionine beta-synthase deficiency (whose children were known to be homozygous for homocystinuria due to this enzyme defect) with the levels in 27 unrelated age- and sex-matched normal subjects. A level of 24.0 mumol per liter or more was 92 percent sensitive and 100 percent specific in distinguishing the two groups. The peak serum homocysteine levels in these normal subjects were then compared with those in 123 patients whose vascular disease had been diagnosed before they were 55 years of age.\n\n\nRESULTS\nHyperhomocysteinemia was detected in 16 of 38 patients with cerebrovascular disease (42 percent), 7 of 25 with peripheral vascular disease (28 percent), and 18 of 60 with coronary vascular disease (30 percent), but in none of the 27 normal subjects. After adjustment for the effects of conventional risk factors, the lower 95 percent confidence limit for the odds ratio for vascular disease among the patients with hyperhomocysteinemia, as compared with the normal subjects, was 3.2. The geometric-mean peak serum homocysteine level was 1.33 times higher in the patients with vascular disease than in the normal subjects (P = 0.002). The presence of cystathionine beta-synthase deficiency was confirmed in 18 of 23 patients with vascular disease who had hyperhomocysteinemia.\n\n\nCONCLUSIONS\nHyperhomocysteinemia is an independent risk factor for vascular disease, including coronary disease, and in most instances is probably due to cystathionine beta-synthase deficiency.",
"title": ""
}
] | scidocsrr |
57976eabf115bf9ce2fe2d70fe8d36c9 | An Empirical Study on the Usage of the Swift Programming Language | [
{
"docid": "40d7847859a974d2a91cccab55ba625b",
"text": "Programming question and answer (Q&A) websites, such as Stack Overflow, leverage the knowledge and expertise of users to provide answers to technical questions. Over time, these websites turn into repositories of software engineering knowledge. Such knowledge repositories can be invaluable for gaining insight into the use of specific technologies and the trends of developer discussions. Previous work has focused on analyzing the user activities or the social interactions in Q&A websites. However, analyzing the actual textual content of these websites can help the software engineering community to better understand the thoughts and needs of developers. In the article, we present a methodology to analyze the textual content of Stack Overflow discussions. We use latent Dirichlet allocation (LDA), a statistical topic modeling technique, to automatically discover the main topics present in developer discussions. We analyze these discovered topics, as well as their relationships and trends over time, to gain insights into the development community. Our analysis allows us to make a number of interesting observations, including: the topics of interest to developers range widely from jobs to version control systems to C# syntax; questions in some topics lead to discussions in other topics; and the topics gaining the most popularity over time are web development (especially jQuery), mobile applications (especially Android), Git, and MySQL.",
"title": ""
}
] | [
{
"docid": "9688fbd2b207937bff340ee1cf6878b3",
"text": "AIMS\n(a) To investigate how widespread is the use of long term treatment without improvement amongst clinicians treating individuals with low back pain. (b) To study the beliefs behind the reasons why chiropractors, osteopaths and physiotherapists continue to treat people whose low back pain appears not to be improving.\n\n\nMETHODS\nA mixed methods study, including a questionnaire survey and qualitative analysis of semi-structured interviews. Questionnaire survey; 354/600 (59%) clinicians equally distributed between chiropractic, osteopathy and physiotherapy professions. Interview study; a purposive sample of fourteen clinicians from each profession identified from the survey responses. Methodological techniques ranged from grounded theory analysis to sorting of categories by both the research team and the subjects themselves.\n\n\nRESULTS\nAt least 10% of each of the professions reported that they continued to treat patients with low back pain who showed almost no improvement for over three months. There is some indication that this is an underestimate. reasons for continuing unsuccessful management of low back pain were not found to be primarily monetary in nature; rather it appears to have much more to do with the scope of care that extends beyond issues addressed in the current physical therapy guidelines. The interview data showed that clinicians viewed their role as including health education and counselling rather than a 'cure or refer' approach. Additionally, participants raised concerns that discharging patients from their care meant sending them to into a therapeutic void.\n\n\nCONCLUSION\nLong-term treatment of patients with low back pain without objective signs of improvement is an established practice in a minority of clinicians studied. This approach contrasts with clinical guidelines that encourage self-management, reassurance, re-activation, and involvement of multidisciplinary teams for patients who do not recover. Some of the rationale provided makes a strong case for ongoing contact. However, the practice is also maintained through poor communication with other professions and mistrust of the healthcare system.",
"title": ""
},
{
"docid": "0a0f4f5fc904c12cacb95e87f62005d0",
"text": "This text is intended to provide a balanced introduction to machine vision. Basic concepts are introduced with only essential mathematical elements. The details to allow implementation and use of vision algorithm in practical application are provided, and engineering aspects of techniques are emphasized. This text intentionally omits theories of machine vision that do not have sufficient practical applications at the time.",
"title": ""
},
{
"docid": "ad1000d0975bb0c605047349267c5e47",
"text": "A systematic review of randomized clinical trials was conducted to evaluate the acceptability and usefulness of computerized patient education interventions. The Columbia Registry, MEDLINE, Health, BIOSIS, and CINAHL bibliographic databases were searched. Selection was based on the following criteria: (1) randomized controlled clinical trials, (2) educational patient-computer interaction, and (3) effect measured on the process or outcome of care. Twenty-two studies met the selection criteria. Of these, 13 (59%) used instructional programs for educational intervention. Five studies (22.7%) tested information support networks, and four (18%) evaluated systems for health assessment and history-taking. The most frequently targeted clinical application area was diabetes mellitus (n = 7). All studies, except one on the treatment of alcoholism, reported positive results for interactive educational intervention. All diabetes education studies, in particular, reported decreased blood glucose levels among patients exposed to this intervention. Computerized educational interventions can lead to improved health status in several major areas of care, and appear not to be a substitute for, but a valuable supplement to, face-to-face time with physicians.",
"title": ""
},
{
"docid": "33789f718bc299fa63762f72595dcd77",
"text": "Resource allocation efficiency and energy consumption are among the top concerns to today's Cloud data center. Finding the optimal point where users' multiple job requests can be accomplished timely with minimum electricity and hardware cost is one of the key factors for system designers and managers to optimize the system configurations. Understanding the characteristics of the distribution of user task is an essential step for this purpose. At large-scale Cloud Computing data centers, a precise workload prediction will significantly help designers and operators to schedule hardware/software resources and power supplies in a more efficient manner, and make appropriate decisions to upgrade the Cloud system when the workload grows. While a lot of study has been conducted for hypervisor-based Cloud, container-based virtualization is becoming popular because of the low overhead and high efficiency in utilizing computing resources. In this paper, we have studied a set of real-world container data center traces from part of Google's cluster. We investigated the distribution of job duration, waiting time and machine utilization and the number of jobs submitted in a fix time period. Based on the quantitative study, an Ensemble Workload Prediction (EnWoP) method and a novel prediction evaluation parameter called Cloud Workload Correction Rate (C-Rate) have been proposed. The experimental results have verified that the EnWoP method achieved high prediction accuracy and the C-Rate evaluates the prediction methods more objective.",
"title": ""
},
{
"docid": "79f1473d4eb0c456660543fda3a648f1",
"text": "Weexamine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep QNetworks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.",
"title": ""
},
{
"docid": "6a4a76e48ff8bfa9ad17f116c3258d49",
"text": "Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys for shallow domain adaptation, but few timely reviews the emerging deep learning based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of data that define how two domains are diverged. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and analyze and compare briefly the state-of-the-art methods under these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation and object detection. Fourth, some potential deficiencies of current methods and several future directions are highlighted.",
"title": ""
},
{
"docid": "c0e70347999c028516eb981a15b8a6c8",
"text": "Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last. fm, Library Thing, and Amazon.",
"title": ""
},
{
"docid": "9def0e866bb96a64d2629ad2ec208ebc",
"text": "Article history: Received 6 March 2008 Received in revised form 14 August 2008 Accepted 6 October 2008",
"title": ""
},
{
"docid": "0ee3f8fcb319eedbe160e57db8a4b3ed",
"text": "Dissolved gas analysis (DGA) is used to assess the condition of power transformers. It uses the concentrations of various gases dissolved in the transformer oil due to decomposition of the oil and paper insulation. DGA has gained worldwide acceptance as a method for the detection of incipient faults in transformers.",
"title": ""
},
{
"docid": "13529522be402878286138168f264478",
"text": "I. Cantador (), P. Castells Universidad Autónoma de Madrid 28049 Madrid, Spain e-mails: [email protected], [email protected] Abstract An increasingly important type of recommender systems comprises those that generate suggestions for groups rather than for individuals. In this chapter, we revise state of the art approaches on group formation, modelling and recommendation, and present challenging problems to be included in the group recommender system research agenda in the context of the Social Web.",
"title": ""
},
{
"docid": "2002d3a2ed0e9d96c3e68b5b30dc202b",
"text": "This paper summarizes the current knowledge regarding the possible modes of action and nutritional factors involved in the use of essential oils (EOs) for swine and poultry. EOs have recently attracted increased interest as feed additives to be fed to swine and poultry, possibly replacing the use of antibiotic growth promoters which have been prohibited in the European Union since 2006. In general, EOs enhance the production of digestive secretions and nutrient absorption, reduce pathogenic stress in the gut, exert antioxidant properties and reinforce the animal’s immune status, which help to explain the enhanced performance observed in swine and poultry. However, the mechanisms involved in causing this growth promotion are far from being elucidated, since data on the complex gut ecosystem, gut function, in vivo oxidative status and immune system are still lacking. In addition, limited information is available regarding the interaction between EOs and feed ingredients or other feed additives (especially pro- or prebiotics and organic acids). This knowledge may help feed formulators to better utilize EOs when they formulate diets for poultry and swine.",
"title": ""
},
{
"docid": "cefcd78be7922f4349f1bb3aa59d2e1d",
"text": "The paper presents performance analysis of modified SEPIC dc-dc converter with low input voltage and wide output voltage range. The operational analysis and the design is done for the 380W power output of the modified converter. The simulation results of modified SEPIC converter are obtained with PI controller for the output voltage. The results obtained with the modified converter are compared with the basic SEPIC converter topology for the rise time, peak time, settling time and steady state error of the output response for open loop. Voltage tracking curve is also shown for wide output voltage range. I. Introduction Dc-dc converters are widely used in regulated switched mode dc power supplies and in dc motor drive applications. The input to these converters is often an unregulated dc voltage, which is obtained by rectifying the line voltage and it will therefore fluctuate due to variations of the line voltages. Switched mode dc-dc converters are used to convert this unregulated dc input into a controlled dc output at a desired voltage level. The recent growth of battery powered applications and low voltage storage elements are increasing the demand of efficient step-up dc–dc converters. Typical applications are in adjustable speed drives, switch-mode power supplies, uninterrupted power supplies, and utility interface with nonconventional energy sources, battery energy storage systems, battery charging for electric vehicles, and power supplies for telecommunication systems etc.. These applications demand high step-up static gain, high efficiency and reduced weight, volume and cost. The step-up stage normally is the critical point for the design of high efficiency converters due to the operation with high input current and high output voltage [1]. The boost converter topology is highly effective in these applications but at low line voltage in boost converter, the switching losses are high because the input current has the maximum value and the highest step-up conversion is required. The inductor has to be oversized for the large current at low line input. As a result, a boost converter designed for universal-input applications is heavily oversized compared to a converter designed for a narrow range of input ac line voltage [2]. However, recently new non-isolated dc–dc converter topologies with basic boost are proposed, showing that it is possible to obtain high static gain, low voltage stress and low losses, improving the performance with respect to the classical topologies. Some single stage high power factor rectifiers are presented in [3-6]. A new …",
"title": ""
},
{
"docid": "346ce9d0377f94f268479d578b700e9c",
"text": "From a system architecture perspective, 3D technology can satisfy the high memory bandwidth demands that future multicore/manycore architectures require. This article presents a 3D DRAM architecture design and the potential for using 3D DRAM stacking for both L2 cache and main memory in 3D multicore architecture.",
"title": ""
},
{
"docid": "d7780a122b51adc30f08eeb13af78bd1",
"text": "Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed on an analysis environment. Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improves, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the \"wear and tear\" that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks like, to how realistic its past use looks like, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.",
"title": ""
},
{
"docid": "a826fbbf8919dfdef901b1acc2a8167c",
"text": "This paper proposes a new scheme for multi-image projective reconstruction based on a projective grid space. The projective grid space is defined by two basis views and the fundamental matrix relating these views. Given fundamental matrices relating other views to each of the two basis views, this projective grid space can be related to any view. In the projective grid space as a general space that is related to all images, a projective shape can be reconstructed from all the images of weakly calibrated cameras. The projective reconstruction is one way to reduce the effort of the calibration because it does not need Euclid metric information, but rather only correspondences of several points between the images. For demonstrating the effectiveness of the proposed projective grid definition, we modify the voxel coloring algorithm for the projective voxel scheme. The quality of the virtual view images re-synthesized from the projective shape demonstrates the effectiveness of our proposed scheme for projective reconstruction from a large number of images.",
"title": ""
},
{
"docid": "632a0aa55a7a7a024402de6aa507d36f",
"text": "Emotionally Focused Therapy for Couples (EFT) is a brief evidence-based couple therapy based in attachment theory. Since the development of EFT, efficacy and effectiveness research has accumulated to address a range of couple concerns. EFT meets or exceeds the guidelines for classification as an evidence-based couple therapy outlined for couple and family research. Furthermore, EFT researchers have examined the process of change and predictors of outcome in EFT. Future research in EFT will continue to examine the process of change in EFT and test the efficacy and effectiveness of EFT in new applications and for couples of diverse backgrounds and concerns.",
"title": ""
},
{
"docid": "8aaa4ab4879ad55f43114cf8a0bd3855",
"text": "Photo-based activity on social networking sites has recently been identified as contributing to body image concerns. The present study aimed to investigate experimentally the effect of number of likes accompanying Instagram images on women's own body dissatisfaction. Participants were 220 female undergraduate students who were randomly assigned to view a set of thin-ideal or average images paired with a low or high number of likes presented in an Instagram frame. Results showed that exposure to thin-ideal images led to greater body and facial dissatisfaction than average images. While the number of likes had no effect on body dissatisfaction or appearance comparison, it had a positive effect on facial dissatisfaction. These effects were not moderated by Instagram involvement, but greater investment in Instagram likes was associated with more appearance comparison and facial dissatisfaction. The results illustrate how the uniquely social interactional aspects of social media (e.g., likes) can affect body image.",
"title": ""
},
{
"docid": "7251ff8a3ff1adbf13ddd62ab9a9c9c3",
"text": "The performance of a brushless motor which has a surface-mounted magnet rotor and a trapezoidal back-emf waveform when it is operated in BLDC and BLAC modes is evaluated, in both constant torque and flux-weakening regions, assuming the same torque, the same peak current, and the same rms current. It is shown that although the motor has an essentially trapezoidal back-emf waveform, the output power and torque when operated in the BLAC mode in the flux-weakening region are significantly higher than that can be achieved when operated in the BLDC mode due to the influence of the winding inductance and back-emf harmonics",
"title": ""
},
{
"docid": "484f869fce642b268575d55cb47ebe36",
"text": "Discourse coherence is strongly associated with text quality, making it important to natural language generation and understanding. Yet existing models of coherence focus on measuring individual aspects of coherence (lexical overlap, rhetorical structure, entity centering) in narrow domains. In this paper, we describe domainindependent neural models of discourse coherence that are capable of measuring multiple aspects of coherence in existing sentences and can maintain coherence while generating new sentences. We study both discriminative models that learn to distinguish coherent from incoherent discourse, and generative models that produce coherent text, including a novel neural latentvariable Markovian generative model that captures the latent discourse dependencies between sentences in a text. Our work achieves state-of-the-art performance on multiple coherence evaluations, and marks an initial step in generating coherent texts given discourse contexts.",
"title": ""
},
{
"docid": "522efee981fb9eb26ba31d02230604fa",
"text": "The lack of an integrated medical information service model has been considered as a main issue in ensuring the continuity of healthcare from doctors, healthcare professionals to patients; the resultant unavailable, inaccurate, or unconformable healthcare information services have been recognized as main causes to the annual millions of medication errors. This paper proposes an Internet computing model aimed at providing an affordable, interoperable, ease of integration, and systematic approach to the development of a medical information service network to enable the delivery of continuity of healthcare. Web services, wireless, and advanced automatic identification technologies are fully integrated in the proposed service model. Some preliminary research results are presented.",
"title": ""
}
] | scidocsrr |
4d14f8c90632d703b3564aee1ae15fcc | Disassembling gamification: the effects of points and meaning on user motivation and performance | [
{
"docid": "fd6b7a0e915a32fe172a757b5a08e5ef",
"text": "More Americans now play video games than go to the movies (NPD Group, 2009). The meteoric rise in popularity of video games highlights the need for research approaches that can deepen our scientific understanding of video game engagement. This article advances a theory-based motivational model for examining and evaluating the ways by which video game engagement shapes psychological processes and influences well-being. Rooted in self-determination theory (Deci & Ryan, 2000; Ryan & Deci, 2000a), our approach suggests that both the appeal and well-being effects of video games are based in their potential to satisfy basic psychological needs for competence, autonomy, and relatedness. We review recent empirical evidence applying this perspective to a number of topics including need satisfaction in games and short-term well-being, the motivational appeal of violent game content, motivational sources of postplay aggression, the antecedents and consequences of disordered patterns of game engagement, and the determinants and effects of immersion. Implications of this model for the future study of game motivation and the use of video games in interventions are discussed.",
"title": ""
},
{
"docid": "1a2afe6610c82c512a94e16ff42f6a27",
"text": "We conduct a natural field experiment that explores the relationship between the “meaningfulness” of a task and people’s willingness to work. Our study uses workers from Amazon’s Mechanical Turk (MTurk), an online marketplace for task-based work. All participants are given an identical task of labeling medical images. However, the task is presented differently depending on treatment. Subjects assigned to the meaningful treatment are told they would be helping researchers label tumor cells, whereas subjects in the zero-context treatment are not told the purpose of their task and only told that they would be labeling “objects of interest”. Our experimental design specifically hires US and Indian workers in order to test for heterogeneous effects. We find that US, but not Indian, workers are induced to work at a higher proportion when given cues that their task was meaningful. However, conditional on working, whether a task was framed as meaningful does not induce greater or higher quality output in either the US or in India.",
"title": ""
},
{
"docid": "f1c00253a57236ead67b013e7ce94a5e",
"text": "A meta-analysis of 128 studies examined the effects of extrinsic rewards on intrinsic motivation. As predicted, engagement-contingent, completion-contingent, and performance-contingent rewards significantly undermined free-choice intrinsic motivation (d = -0.40, -0.36, and -0.28, respectively), as did all rewards, all tangible rewards, and all expected rewards. Engagement-contingent and completion-contingent rewards also significantly undermined self-reported interest (d = -0.15, and -0.17), as did all tangible rewards and all expected rewards. Positive feedback enhanced both free-choice behavior (d = 0.33) and self-reported interest (d = 0.31). Tangible rewards tended to be more detrimental for children than college students, and verbal rewards tended to be less enhancing for children than college students. The authors review 4 previous meta-analyses of this literature and detail how this study's methods, analyses, and results differed from the previous ones.",
"title": ""
}
] | [
{
"docid": "95b48a41d796aec0a1f23b3fc0879ed9",
"text": "Action anticipation aims to detect an action before it happens. Many real world applications in robotics and surveillance are related to this predictive capability. Current methods address this problem by first anticipating visual representations of future frames and then categorizing the anticipated representations to actions. However, anticipation is based on a single past frame’s representation, which ignores the history trend. Besides, it can only anticipate a fixed future time. We propose a Reinforced Encoder-Decoder (RED) network for action anticipation. RED takes multiple history representations as input and learns to anticipate a sequence of future representations. One salient aspect of RED is that a reinforcement module is adopted to provide sequence-level supervision; the reward function is designed to encourage the system to make correct predictions as early as possible. We test RED on TVSeries, THUMOS-14 and TV-Human-Interaction datasets for action anticipation and achieve state-of-the-art performance on all datasets.",
"title": ""
},
{
"docid": "59bc11cd78549304225ab630ef0f5701",
"text": "This study presents and examines SamEx, a mobile learning system used by 305 students in formal and informal learning in a primary school in Singapore. Students use SamEx in situ to capture media such as pictures, video clips and audio recordings, comment on them, and share them with their peers. In this paper we report on the experiences of students in using the application throughout a one-year period with a focus on self-directedness, quality of contributions, and answers to contextual question prompts. We examine how the usage of tools such as SamEx predicts students' science examination results, discuss the role of badges as an extrinsic motivational tool, and explore how individual and collaborative learning emerge. Our research shows that the quantity and quality of contributions provided by the students in SamEx predict the end-year assessment score. With respect to specific system features, contextual answers given by the students and the overall likes received by students are also correlated with the end-year assessment score. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "335912dc59a4043dee983ec40434273f",
"text": "People increasingly use smartwatches in tandem with other devices such as smartphones, laptops or tablets. This allows for novel cross-device applications that use the watch as both input device and output display. However, despite the increasing availability of smartwatches, prototyping cross-device watch-centric applications remains a challenging task. Developers are limited in the applications they can explore as available toolkits provide only limited access to different types of input sensors for cross-device interactions. To address this problem, we introduce WatchConnect, a toolkit for rapidly prototyping cross-device applications and interaction techniques with smartwatches. The toolkit provides developers with (i) an extendable hardware platform that emulates a smartwatch, (ii) a UI framework that integrates with an existing UI builder, and (iii) a rich set of input and output events using a range of built-in sensor mappings. We demonstrate the versatility and design space of the toolkit with five interaction techniques and applications.",
"title": ""
},
{
"docid": "41c35407c55878910f5dfc2dfe083955",
"text": "This work deals with several aspects concerning the formal verification of SN P systems and the computing power of some variants. A methodology based on the information given by the transition diagram associated with an SN P system is presented. The analysis of the diagram cycles codifies invariants formulae which enable us to establish the soundness and completeness of the system with respect to the problem it tries to resolve. We also study the universality of asynchronous and sequential SN P systems and the capability these models have to generate certain classes of languages. Further, by making a slight modification to the standard SN P systems, we introduce a new variant of SN P systems with a special I/O mode, called SN P modules, and study their computing power. It is demonstrated that, as string language acceptors and transducers, SN P modules can simulate several types of computing devices such as finite automata, a-finite transducers, and systolic trellis automata.",
"title": ""
},
{
"docid": "57a974ecdbc1911f161e17f8fad7c173",
"text": "This paper reviews the technology trends of BCD (Bipolar-CMOS-DMOS) technology in terms of voltage capability, switching speed of power transistor, and high integration of logic CMOS for SoC (System-on-Chip) solution requiring high-voltage devices. Recent trends such like modularity of the process, power metal routing, and high-density NVM (Non-Volatile Memory) are also discussed.",
"title": ""
},
{
"docid": "32ed0f6d7dd3b5cc3c1613685eb76de7",
"text": "Images captured in low-light conditions usually suffer from very low contrast, which increases the difficulty of subsequent computer vision tasks in a great extent. In this paper, a low-light image enhancement model based on convolutional neural network and Retinex theory is proposed. Firstly, we show that multi-scale Retinex is equivalent to a feedforward convolutional neural network with different Gaussian convolution kernels. Motivated by this fact, we consider a Convolutional Neural Network(MSR-net) that directly learns an end-to-end mapping between dark and bright images. Different fundamentally from existing approaches, low-light image enhancement in this paper is regarded as a machine learning problem. In this model, most of the parameters are optimized by back-propagation, while the parameters of traditional models depend on the artificial setting. Experiments on a number of challenging images reveal the advantages of our method in comparison with other state-of-the-art methods from the qualitative and quantitative perspective.",
"title": ""
},
{
"docid": "0aab03fe46d4f04b2bb8d10fa32ce049",
"text": "Nowadays, World Wide Web (WWW) surfing is becoming a risky task with the Web becoming rich in all sorts of attack. Websites are the main source of many scams, phishing attacks, identity theft, SPAM commerce and malware. Nevertheless, browsers, blacklists, and popup blockers are not enough to protect users. According to this, fast and accurate systems still to be needed with the ability to detect new malicious content. By taking into consideration, researchers have developed various Malicious Website detection techniques in recent years. Analyzing those works available in the literature can provide good knowledge on this topic and also, it will lead to finding the recent problems in Malicious Website detection. Accordingly, I have planned to do a comprehensive study with the literature of Malicious Website detection techniques. To categorize the techniques, all articles that had the word “malicious detection” in its title or as its keyword published between January 2003 to august 2016, is first selected from the scientific journals: IEEE, Elsevier, Springer and international journals. After the collection of research articles, we discuss every research paper. In addition, this study gives an elaborate idea about malicious detection.",
"title": ""
},
{
"docid": "6eb7bb6f623475f7ca92025fd00dbc27",
"text": "Support vector machines (SVMs) have been recognized as one o f th most successful classification methods for many applications including text classific ation. Even though the learning ability and computational complexity of training in support vector machines may be independent of the dimension of the feature space, reducing computational com plexity is an essential issue to efficiently handle a large number of terms in practical applicat ions of text classification. In this paper, we adopt novel dimension reduction methods to reduce the dim nsion of the document vectors dramatically. We also introduce decision functions for the centroid-based classification algorithm and support vector classifiers to handle the classification p r blem where a document may belong to multiple classes. Our substantial experimental results sh ow t at with several dimension reduction methods that are designed particularly for clustered data, higher efficiency for both training and testing can be achieved without sacrificing prediction accu ra y of text classification even when the dimension of the input space is significantly reduced.",
"title": ""
},
{
"docid": "6cb46b57b657a90fb5b4b91504cdfd8f",
"text": "One of the themes of Emotion and Decision-Making Explained (Rolls, 2014c) is that there are multiple routes to emotionrelated responses, with some illustrated in Fig. 1. Brain systems involved in decoding stimuli in terms of whether they are instrumental reinforcers so that goal directed actions may be performed to obtain or avoid the stimuli are emphasized as being important for emotional states, for an intervening state may be needed to bridge the time gap between the decoding of a goal-directed stimulus, and the actions that may need to be set into train and directed to obtain or avoid the emotionrelated stimulus. In contrast, when unconditioned or classically conditioned responses such as autonomic responses, freezing, turning away etc. are required, there is no need for intervening states such as emotional states. These points are covered in Chapters 2e4 and 10 of the book. Ono and Nishijo (2014) raise the issue of the extent to which subcortical pathways are involved in the elicitation of some of these emotion-related responses. They describe interesting research that pulvinar neurons in macaques may respond to snakes, and may provide a route that does not require cortical processing for some probably innately specified visual stimuli to produce responses. With respect to Fig. 1, the pathway is that some of the inputs labeled as primary reinforcers may reach brain regions including the amygdala by a subcortical route. LeDoux (2012) provides evidence in the same direction, in his case involving a ‘low road’ for auditory stimuli such as tones (which do not required cortical processing) to reach, via a subcortical pathway, the amygdala, where classically conditioned e.g., freezing and autonomic responses may be learned. Consistently, there is evidence (Chapter 4) that humans with damage to the primary visual cortex who describe themselves as blind do nevertheless show some responses to stimuli such as a face expression (de Gelder, Vroomen, Pourtois, & Weiskrantz, 1999; Tamietto et al., 2009; Tamietto & de Gelder, 2010). I agree that the elicitation of unconditioned and conditioned responses to these particular types of stimuli (LeDoux, 2014) is of interest (Rolls, 2014a). However, in Emotion and Decision-Making Explained, I emphasize that there aremassive cortical inputs to structures involved in emotion such as the amygdala and orbitofrontal cortex, and that neurons in both structures can have viewinvariant responses to visual stimuli including faces which specify face identity, and can have responses that are selective for particular emotional expressions (Leonard, Rolls, Wilson, & Baylis, 1985; Rolls, 1984, 2007, 2011, 2012; Rolls, Critchley, Browning, & Inoue, 2006) which reflect the neuronal responses found in the temporal cortical and related visual areas, as we discovered (Perrett, Rolls, & Caan, 1982; Rolls, 2007, 2008a, 2011, 2012; Sanghera, Rolls, & Roper-Hall, 1979). View invariant representations are important for",
"title": ""
},
{
"docid": "7e6182248b3c3d7dedce16f8bfa58b28",
"text": "In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.",
"title": ""
},
{
"docid": "e2fd61cef4ec32c79b059552e7820092",
"text": "This paper describes a general framework for learning Higher-Order Network Embeddings (HONE) from graph data based on network motifs. The HONE framework is highly expressive and flexible with many interchangeable components. The experimental results demonstrate the effectiveness of learning higher-order network representations. In all cases, HONE outperforms recent embedding methods that are unable to capture higher-order structures with a mean relative gain in AUC of 19% (and up to 75% gain) across a wide variety of networks and embedding methods.",
"title": ""
},
{
"docid": "dfa1269878b384b24c7ba6aea6a11373",
"text": "Transfer printing represents a set of techniques for deterministic assembly of micro-and nanomaterials into spatially organized, functional arrangements with two and three-dimensional layouts. Such processes provide versatile routes not only to test structures and vehicles for scientific studies but also to high-performance, heterogeneously integrated functional systems, including those in flexible electronics, three-dimensional and/or curvilinear optoelectronics, and bio-integrated sensing and therapeutic devices. This article summarizes recent advances in a variety of transfer printing techniques, ranging from the mechanics and materials aspects that govern their operation to engineering features of their use in systems with varying levels of complexity. A concluding section presents perspectives on opportunities for basic and applied research, and on emerging use of these methods in high throughput, industrial-scale manufacturing.",
"title": ""
},
{
"docid": "b838cd18098a4824e8ae16d55c297cfb",
"text": "While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly. To increase the practicality of these techniques on real robots, we propose a modular deep reinforcement learning method capable of transferring models trained in simulation to a real-world robotic task. We introduce a bottleneck between perception and control, enabling the networks to be trained independently, but then merged and fine-tuned in an end-to-end manner to further improve hand-eye coordination. On a canonical, planar visually-guided robot reaching task a fine-tuned accuracy of 1.6 pixels is achieved, a significant improvement over naive transfer (17.5 pixels), showing the potential for more complicated and broader applications. Our method provides a technique for more efficient learning and transfer of visuomotor policies for real robotic systems without relying entirely on large real-world robot datasets.",
"title": ""
},
{
"docid": "b79b3497ae4987e00129eab9745e1398",
"text": "The automata-theoretic approach to linear temporal logic uses the theory of automata as a unifying paradigm for program specification, verification, and synthesis. Both programs and specifications are in essence descriptions of computations. These computations can be viewed as words over some alphabet. Thus,programs and specificationscan be viewed as descriptions of languagesover some alphabet. The automata-theoretic perspective considers the relationships between programs and their specifications as relationships between languages.By translating programs and specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications can be reduced to questions such as nonemptiness and containment of automata. Unlike classical automata theory, which focused on automata on finite words, the applications to program specification, verification, and synthesis, use automata on infinite words, since the computations in which we are interested are typically infinite. This paper provides an introduction to the theory of automata on infinite words and demonstrates its applications to program specification, verification, and synthesis.",
"title": ""
},
{
"docid": "51c5dbc32d37777614936a77a10e42bc",
"text": "During the last decade, the applications of signal processing have drastically improved with deep learning. However areas of affecting computing such as emotional speech synthesis or emotion recognition from spoken language remains challenging. In this paper, we investigate the use of a neural Automatic Speech Recognition (ASR) as a feature extractor for emotion recognition. We show that these features outperform the eGeMAPS feature set to predict the valence and arousal emotional dimensions, which means that the audio-to-text mapping learned by the ASR system contains information related to the emotional dimensions in spontaneous speech. We also examine the relationship between first layers (closer to speech) and last layers (closer to text) of the ASR and valence/arousal.",
"title": ""
},
{
"docid": "83728a9b746c7d3c3ea1e89ef01f9020",
"text": "This paper presents the design of the robot AILA, a mobile dual-arm robot system developed as a research platform for investigating aspects of the currently booming multidisciplinary area of mobile manipulation. The robot integrates and allows in a single platform to perform research in most of the areas involved in autonomous robotics: navigation, mobile and dual-arm manipulation planning, active compliance and force control strategies, object recognition, scene representation, and semantic perception. AILA has 32 degrees of freedom, including 7-DOF arms, 4-DOF torso, 2-DOF head, and a mobile base equipped with six wheels, each of them with two degrees of freedom. The primary design goal was to achieve a lightweight arm construction with a payload-to-weight ratio greater than one. Besides, an adjustable body should sustain the dual-arm system providing an extended workspace. In addition, mobility is provided by means of a wheel-based mobile base. As a result, AILA's arms can lift 8kg and weigh 5.5kg, thus achieving a payload-to-weight ratio of 1.45. The paper will provide an overview of the design, especially in the mechatronics area, as well as of its realization, the sensors incorporated in the system, and its control software.",
"title": ""
},
{
"docid": "9d2583618e9e00333d044ac53da65ceb",
"text": "The phosphor deposits of the β-sialon:Eu2+ mixed with various amounts (0-1 g) of the SnO₂ nanoparticles were fabricated by the electrophoretic deposition (EPD) process. The mixed SnO₂ nanoparticles was observed to cover onto the particle surfaces of the β-sialon:Eu2+ as well as fill in the voids among the phosphor particles. The external and internal quantum efficiencies (QEs) of the prepared deposits were found to be dependent on the mixing amount of the SnO₂: by comparing with the deposit without any mixing (48% internal and 38% external QEs), after mixing the SnO₂ nanoparticles, the both QEs were improved to 55% internal and 43% external QEs at small mixing amount (0.05 g); whereas, with increasing the mixing amount to 0.1 and 1 g, they were reduced to 36% and 29% for the 0.1 g addition and 15% and 12% l QEs for the 1 g addition. More interestingly, tunable color appearances of the deposits prepared by the EPD process were achieved, from yellow green to blue, by varying the addition amount of the SnO₂, enabling it as an alternative technique instead of altering the voltage and depositing time for the color appearance controllability.",
"title": ""
},
{
"docid": "a4dea5e491657e1ba042219401ebcf39",
"text": "Beam scanning arrays typically suffer from scan loss; an increasing degradation in gain as the beam is scanned from broadside toward the horizon in any given scan plane. Here, a metasurface is presented that reduces the effects of scan loss for a leaky-wave antenna (LWA). The metasurface is simple, being composed of an ultrathin sheet of subwavelength split-ring resonators. The leaky-wave structure is balanced, scanning from the forward region, through broadside, and into the backward region, and designed to scan in the magnetic plane. The metasurface is effectively invisible at broadside, where balanced LWAs are most sensitive to external loading. It is shown that the introduction of the metasurface results in increased directivity, and hence, gain, as the beam is scanned off broadside, having an increasing effect as the beam is scanned to the horizon. Simulations show that the metasurface improves the effective aperture distribution at higher scan angles, resulting in a more directive main beam, while having a negligible impact on cross-polarization gain. Experimental validation results show that the scan range of the antenna is increased from $-39 {^{\\circ }} \\leq \\theta \\leq +32 {^{\\circ }}$ to $-64 {^{\\circ }} \\leq \\theta \\leq +70 {^{\\circ }}$ , when loaded with the metasurface, demonstrating a flattened gain profile over a 135° range centered about broadside. Moreover, this scan range occurs over a frequency band spanning from 9 to 15.5 GHz, demonstrating a relative bandwidth of 53% for the metasurface.",
"title": ""
},
{
"docid": "8624bdce9b571418f88f4adb52984462",
"text": "Video-based traffic flow monitoring is a fast emerging field based on the continuous development of computer vision. A survey of the state-of-the-art video processing techniques in traffic flow monitoring is presented in this paper. Firstly, vehicle detection is the first step of video processing and detection methods are classified into background modeling based methods and non-background modeling based methods. In particular, nighttime detection is more challenging due to bad illumination and sensitivity to light. Then tracking techniques, including 3D model-based, region-based, active contour-based and feature-based tracking, are presented. A variety of algorithms including MeanShift algorithm, Kalman Filter and Particle Filter are applied in tracking process. In addition, shadow detection and vehicles occlusion bring much trouble into vehicle detection, tracking and so on. Based on the aforementioned video processing techniques, discussion on behavior understanding including traffic incident detection is carried out. Finally, key challenges in traffic flow monitoring are discussed.",
"title": ""
},
{
"docid": "42a6b6ac31383046cf11bcf16da3207e",
"text": "Epigenome-wide association studies represent one means of applying genome-wide assays to identify molecular events that could be associated with human phenotypes. The epigenome is especially intriguing as a target for study, as epigenetic regulatory processes are, by definition, heritable from parent to daughter cells and are found to have transcriptional regulatory properties. As such, the epigenome is an attractive candidate for mediating long-term responses to cellular stimuli, such as environmental effects modifying disease risk. Such epigenomic studies represent a broader category of disease -omics, which suffer from multiple problems in design and execution that severely limit their interpretability. Here we define many of the problems with current epigenomic studies and propose solutions that can be applied to allow this and other disease -omics studies to achieve their potential for generating valuable insights.",
"title": ""
}
] | scidocsrr |
dc61ad88c896f5df31456923867cbb14 | Wide Pulse Combined With Narrow-Pulse Generator for Food Sterilization | [
{
"docid": "19e16c7618b0f1a623f3446e4d84fc08",
"text": "Apoptosis — the regulated destruction of a cell — is a complicated process. The decision to die cannot be taken lightly, and the activity of many genes influence a cell's likelihood of activating its self-destruction programme. Once the decision is taken, proper execution of the apoptotic programme requires the coordinated activation and execution of multiple subprogrammes. Here I review the basic components of the death machinery, describe how they interact to regulate apoptosis in a coordinated manner, and discuss the main pathways that are used to activate cell death.",
"title": ""
}
] | [
{
"docid": "07be6a2df7360ef53d7e6d9cc30f621d",
"text": "Fire accidents can cause numerous casualties and heavy property losses, especially, in petrochemical industry, such accidents are likely to cause secondary disasters. However, common fire drill training would cause loss of resources and pollution. We designed a multi-dimensional interactive somatosensory (MDIS) cloth system based on virtual reality technology to simulate fire accidents in petrochemical industry. It provides a vivid visual and somatosensory experience. A thermal radiation model is built in a virtual environment, and it could predict the destruction radius of a fire. The participant position changes are got from Kinect, and shown in virtual environment synchronously. The somatosensory cloth, which could both heat and refrigerant, provides temperature feedback based on thermal radiation results and actual distance. In this paper, we demonstrate the details of the design, and then verified its basic function. Heating deviation from model target is lower than 3.3 °C and refrigerant efficiency is approximately two times faster than heating efficiency.",
"title": ""
},
{
"docid": "20662e12b45829c00c67434277ab9a26",
"text": "Given the significance of placement in IC physical design, extensive research studies performed over the last 50 years addressed numerous aspects of global and detailed placement. The objectives and the constraints dominant in placement have been revised many times over, and continue to evolve. Additionally, the increasing scale of placement instances affects the algorithms of choice for high-performance tools. We survey the history of placement research, the progress achieved up to now, and outstanding challenges.",
"title": ""
},
{
"docid": "9fa20791d2e847dbd2c7204d00eec965",
"text": "As neurobiological evidence points to the neocortex as the brain region mainly involved in high-level cognitive functions, an innovative model of neocortical information processing has been recently proposed. Based on a simplified model of a neocortical neuron, and inspired by experimental evidence of neocortical organisation, the Hierarchical Temporal Memory (HTM) model attempts at understanding intelligence, but also at building learning machines. This paper focuses on analysing HTM's ability for online, adaptive learning of sequences. In particular, we seek to determine whether the approach is robust to noise in its inputs, and to compare and contrast its performance and attributes to an alternative Hidden Markov Model (HMM) approach. We reproduce a version of a HTM network and apply it to a visual pattern recognition task under various learning conditions. Our first set of experiments explore the HTM network's capability to learn repetitive patterns and sequences of patterns within random data streams. Further experimentation involves assessing the network's learning performance in terms of inference and prediction under different noise conditions. HTM results are compared with those of a HMM trained at the same tasks. Online learning performance results demonstrate the HTM's capacity to make use of context in order to generate stronger predictions, whereas results on robustness to noise reveal an ability to deal with noisy environments. Our comparisons also, however, emphasise a manner in which HTM differs significantly from HMM, which is that HTM generates predicted observations rather than hidden states, and each observation is a sparse distributed representation.",
"title": ""
},
{
"docid": "33b63fe07849be342beaf3b31dc0d6da",
"text": "Infrared sensors are used in Photoplethysmography measurements (PPG) to get blood flow parameters in the vascular system. It is a simple, low-cost non-invasive optical technique that is commonly placed on a finger or toe, to detect blood volume changes in the micro-vascular bed of tissue. The sensor use an infrared source and a photo detector to detect the infrared wave which is not absorbed. The recorded infrared waveform at the detector side is called the PPG signal. This paper reviews the various blood flow parameters that can be extracted from this PPG signal including the existence of an endothelial disfunction as an early detection tool of vascular diseases.",
"title": ""
},
{
"docid": "3f23f5452c53ae5fcc23d95acdcdafd8",
"text": "Metamorphism is a technique that mutates the binary code using different obfuscations and never keeps the same sequence of opcodes in the memory. This stealth technique provides the capability to a malware for evading detection by simple signature-based (such as instruction sequences, byte sequences and string signatures) anti-malware programs. In this paper, we present a new scheme named Annotated Control Flow Graph (ACFG) to efficiently detect such kinds of malware. ACFG is built by annotating CFG of a binary program and is used for graph and pattern matching to analyse and detect metamorphic malware. We also optimize the runtime of malware detection through parallelization and ACFG reduction, maintaining the same accuracy (without ACFG reduction) for malware detection. ACFG proposed in this paper: (i) captures the control flow semantics of a program; (ii) provides a faster matching of ACFGs and can handle malware with smaller CFGs, compared with other such techniques, without compromising the accuracy; (iii) contains more information and hence provides more accuracy than a CFG. Experimental evaluation of the proposed scheme using an existing dataset yields malware detection rate of 98.9% and false positive rate of 4.5%.",
"title": ""
},
{
"docid": "1969bf5a07349cc5a9b498e0437e41fe",
"text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.",
"title": ""
},
{
"docid": "bbb9412a61bb8497e1d8b6e955e0217b",
"text": "There has been great interest in developing methodologies that are capable of dealing with imprecision and uncertainty. The large amount of research currently being carried out in fuzzy and rough sets is representative of this. Many deep relationships have been established, and recent studies have concluded as to the complementary nature of the two methodologies. Therefore, it is desirable to extend and hybridize the underlying concepts to deal with additional aspects of data imperfection. Such developments offer a high degree of flexibility and provide robust solutions and advanced tools for data analysis. Fuzzy-rough set-based feature (FS) selection has been shown to be highly useful at reducing data dimensionality but possesses several problems that render it ineffective for large datasets. This paper proposes three new approaches to fuzzy-rough FS-based on fuzzy similarity relations. In particular, a fuzzy extension to crisp discernibility matrices is proposed and utilized. Initial experimentation shows that the methods greatly reduce dimensionality while preserving classification accuracy.",
"title": ""
},
{
"docid": "cedfccb3fd6433e695082594cf0beb45",
"text": "Among the different existing cryptographic file systems, EncFS has a unique feature that makes it attractive for backup setups involving untrusted (cloud) storage. It is a file-based overlay file system in normal operation (i.e., it maintains a directory hierarchy by storing encrypted representations of files and folders in a specific source folder), but its reverse mode allows to reverse this process: Users can mount deterministic, encrypted views of their local, unencrypted files on the fly, allowing synchronization to untrusted storage using standard tools like rsync without having to store encrypted representations on the local hard drive. So far, EncFS is a single-user solution: All files of a folder are encrypted using the same, static key; file access rights are passed through to the encrypted representation, but not otherwise considered. In this paper, we work out how multi-user support can be integrated into EncFS and its reverse mode in particular. We present an extension that a) stores individual files' owner/group information and permissions in a confidential and authenticated manner, and b) cryptographically enforces thereby specified read rights. For this, we introduce user-specific keys and an appropriate, automatic key management. Given a user's key and a complete encrypted source directory, the extension allows access to exactly those files the user is authorized for according to the corresponding owner/group/permissions information. Just like EncFS, our extension depends only on symmetric cryptographic primitives.",
"title": ""
},
{
"docid": "410a4df5b17ec0c4b160c378ca08bc17",
"text": "We present the results of an investigation into the nature of information needs of software developers who work in projects that are part of larger ecosystems. This work is based on a quantitative survey of 75 professional software developers. We corroborate the results identified in the survey with needs and motivations proposed in a previous survey and discover that tool support for developers working in an ecosystem context is even more meager than we thought: mailing lists and internet search are the most popular tools developers use to satisfy their ecosystem-related information needs.",
"title": ""
},
{
"docid": "9c1f7dae555efd9c05ce7d3a90616c17",
"text": "Shallow trench isolation(STI) is the mainstream CMOS isolation technology for advanced integrated circuits. While STI process gives the isolation benefits due to its scalable characteristics, exploiting the compressive stress exerted by STI wells on device active regions to improve performance of devices has been one of the major industry focuses. However, in the present research of VLSI physical design, there has no yet a global optimization methodology on the whole chip layout to control the size of the STI wells, which affects the stress magnitude along with the size of active region of transistors. In this paper, we present a novel methodology that is capable of determining globally the optimal STI well width following the chip placement stage. The methodology is based on the observation that both of the terms in charge of chip width minimization and transistor channel mobility optimization in the objective function can be modeled as posynomials of the design variables, that is, the width of STI wells. Then, this stress aware placement optimization problem could be solved efficiently as a convex geometric programming (GP) problem. Finally, by a MOSEK GP problem solver, we do our STI width aware placement optimization on the given placements of some GSRC and IBM-PLACE benchmarks. Experiment results demonstrated that our methodology can obtain decent results with an acceptable runtime when satisfy the necessary location constraints from DRC specifications.",
"title": ""
},
{
"docid": "c5113ff741d9e656689786db10484a07",
"text": "Pulmonary administration of drugs presents several advantages in the treatment of many diseases. Considering local and systemic delivery, drug inhalation enables a rapid and predictable onset of action and induces fewer side effects than other routes of administration. Three main inhalation systems have been developed for the aerosolization of drugs; namely, nebulizers, pressurized metered-dose inhalers (MDIs) and dry powder inhalers (DPIs). The latter are currently the most convenient alternative as they are breath-actuated and do not require the use of any propellants. The deposition site in the respiratory tract and the efficiency of inhaled aerosols are critically influenced by the aerodynamic diameter, size distribution, shape and density of particles. In the case of DPIs, since micronized particles are generally very cohesive and exhibit poor flow properties, drug particles are usually blended with coarse and fine carrier particles. This increases particle aerodynamic behavior and flow properties of the drugs and ensures accurate dosage of active ingredients. At present, particles with controlled properties are obtained by milling, spray drying or supercritical fluid techniques. Several excipients such as sugars, lipids, amino acids, surfactants, polymers and absorption enhancers have been tested for their efficacy in improving drug pulmonary administration. The purpose of this article is to describe various observations that have been made in the field of inhalation product development, especially for the dry powder inhalation formulation, and to review the use of various additives, their effectiveness and their potential toxicity for pulmonary administration.",
"title": ""
},
{
"docid": "7c05ef9ac0123a99dd5d47c585be391c",
"text": "Memory access bugs, including buffer overflows and uses of freed heap memory, remain a serious problem for programming languages like C and C++. Many memory error detectors exist, but most of them are either slow or detect a limited set of bugs, or both. This paper presents AddressSanitizer, a new memory error detector. Our tool finds out-of-bounds accesses to heap, stack, and global objects, as well as use-after-free bugs. It employs a specialized memory allocator and code instrumentation that is simple enough to be implemented in any compiler, binary translation system, or even in hardware. AddressSanitizer achieves efficiency without sacrificing comprehensiveness. Its average slowdown is just 73% yet it accurately detects bugs at the point of occurrence. It has found over 300 previously unknown bugs in the Chromium browser and many bugs in other software.",
"title": ""
},
{
"docid": "d7ff935c38f2adad660ba580e6f3bc6c",
"text": "In this report, we provide a comparative analysis of different techniques for user intent classification towards the task of app recommendation. We analyse the performance of different models and architectures for multi-label classification over a dataset with a relative large number of classes and only a handful examples of each class. We focus, in particular, on memory network architectures, and compare how well the different versions perform under the task constraints. Since the classifier is meant to serve as a module in a practical dialog system, it needs to be able to work with limited training data and incorporate new data on the fly. We devise a 1-shot learning task to test the models under the above constraint. We conclude that relatively simple versions of memory networks perform better than other approaches. Although, for tasks with very limited data, simple non-parametric methods perform comparably, without needing the extra training data.",
"title": ""
},
{
"docid": "40bb8660fd02dc402d80e0f5970fa9dc",
"text": "Dengue is the second most common mosquito-borne disease affecting human beings. In 2009, WHO endorsed new guidelines that, for the first time, consider neurological manifestations in the clinical case classification for severe dengue. Dengue can manifest with a wide range of neurological features, which have been noted--depending on the clinical setting--in 0·5-21% of patients with dengue admitted to hospital. Furthermore, dengue was identified in 4-47% of admissions with encephalitis-like illness in endemic areas. Neurological complications can be categorised into dengue encephalopathy (eg, caused by hepatic failure or metabolic disorders), encephalitis (caused by direct virus invasion), neuromuscular complications (eg, Guillain-Barré syndrome or transient muscle dysfunctions), and neuro-ophthalmic involvement. However, overlap of these categories is possible. In endemic countries and after travel to these regions, dengue should be considered in patients presenting with fever and acute neurological manifestations.",
"title": ""
},
{
"docid": "d99d4bdf1af85c14653c7bbde10eca7b",
"text": "Plants endure a variety of abiotic and biotic stresses, all of which cause major limitations to production. Among abiotic stressors, heavy metal contamination represents a global environmental problem endangering humans, animals, and plants. Exposure to heavy metals has been documented to induce changes in the expression of plant proteins. Proteins are macromolecules directly responsible for most biological processes in a living cell, while protein function is directly influenced by posttranslational modifications, which cannot be identified through genome studies. Therefore, it is necessary to conduct proteomic studies, which enable the elucidation of the presence and role of proteins under specific environmental conditions. This review attempts to present current knowledge on proteomic techniques developed with an aim to detect the response of plant to heavy metal stress. Significant contributions to a better understanding of the complex mechanisms of plant acclimation to metal stress are also discussed.",
"title": ""
},
{
"docid": "e95e043f3a783d95cf4f490bdf6cb6e0",
"text": "The fundamental problem of finding a suitable representation of the orientation of 3D surfaces is considered. A representation is regarded suitable if it meets three basic requirements: Uniqueness, Uniformity and Polar separability. A suitable tensor representation is given. At the heart of the problem lies the fact that orientation can only be defined mod 180◦ , i.e the fact that a 180◦ rotation of a line or a plane amounts to no change at all. For this reason representing a plane using its normal vector leads to ambiguity and such a representation is consequently not suitable. The ambiguity can be eliminated by establishing a mapping between R3 and a higherdimensional tensor space. The uniqueness requirement implies a mapping that map all pairs of 3D vectors x and -x onto the same tensor T. Uniformity implies that the mapping implicitly carries a definition of distance between 3D planes (and lines) that is rotation invariant and monotone with the angle between the planes. Polar separability means that the norm of the representing tensor T is rotation invariant. One way to describe the mapping is that it maps a 3D sphere into 6D in such a way that the surface is uniformly stretched and all pairs of antipodal points maps onto the same tensor. It is demonstrated that the above mapping can be realized by sampling the 3D space using a specified class of symmetrically distributed quadrature filters. It is shown that 6 quadrature filters are necessary to realize the desired mapping, the orientations of the filters given by lines trough the vertices of an icosahedron. The desired tensor representation can be obtained by simply performing a weighted summation of the quadrature filter outputs. This situation is indeed satisfying as it implies a simple implementation of the theory and that requirements on computational capacity can be kept within reasonable limits. Noisy neigborhoods and/or linear combinations of tensors produced by the mapping will in general result in a tensor that has no direct counterpart in R3. In an adaptive hierarchical signal processing system, where information is flowing both up (increasing the level of abstraction) and down (for adaptivity and guidance), it is necessary that a meaningful inverse exists for each levelaltering operation. It is shown that the point in R3 that corresponds to the best approximation of a given tensor is given by the largest eigenvalue times the corresponding eigenvector of the tensor.",
"title": ""
},
{
"docid": "65685bafe88b596530d4280e7e75d1c4",
"text": "The supernodal method for sparse Cholesky factorization represents the factor L as a set of supernodes, each consisting of a contiguous set of columns of L with identical nonzero pattern. A conventional supernode is stored as a dense submatrix. While this is suitable for sparse Cholesky factorization where the nonzero pattern of L does not change, it is not suitable for methods that modify a sparse Cholesky factorization after a low-rank change to A (an update/downdate, Ā = A ± WWT). Supernodes merge and split apart during an update/downdate. Dynamic supernodes are introduced which allow a sparse Cholesky update/downdate to obtain performance competitive with conventional supernodal methods. A dynamic supernodal solver is shown to exceed the performance of the conventional (BLAS-based) supernodal method for solving triangular systems. These methods are incorporated into CHOLMOD, a sparse Cholesky factorization and update/downdate package which forms the basis of x = A\\b MATLAB when A is sparse and symmetric positive definite.",
"title": ""
},
{
"docid": "e3316e7fa5a042d0a973c621cec5c3bc",
"text": "Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses the wide kernels in the first convolutional layer for extracting features and suppressing high frequency noise. Small convolutional kernels in the preceding layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that currently, the accuracy of CNN applied to fault diagnosis is not very high. WDCNN can not only achieve 100% classification accuracy on normal signals, but also outperform the state-of-the-art DNN model which is based on frequency features under different working load and noisy environment conditions.",
"title": ""
},
{
"docid": "10f5b005960094bdc1676facc4badf10",
"text": "Users with anomalous behaviors in online communication systems (e.g. email and social medial platforms) are potential threats to society. Automated anomaly detection based on advanced machine learning techniques has been developed to combat this issue; challenges remain, though, due to the difficulty of obtaining proper ground truth for model training and evaluation. Therefore, substantial human judgment on the automated analysis results is often required to better adjust the performance of anomaly detection. Unfortunately, techniques that allow users to understand the analysis results more efficiently, to make a confident judgment about anomalies, and to explore data in their context, are still lacking. In this paper, we propose a novel visual analysis system, TargetVue, which detects anomalous users via an unsupervised learning model and visualizes the behaviors of suspicious users in behavior-rich context through novel visualization designs and multiple coordinated contextual views. Particularly, TargetVue incorporates three new ego-centric glyphs to visually summarize a user's behaviors which effectively present the user's communication activities, features, and social interactions. An efficient layout method is proposed to place these glyphs on a triangle grid, which captures similarities among users and facilitates comparisons of behaviors of different users. We demonstrate the power of TargetVue through its application in a social bot detection challenge using Twitter data, a case study based on email records, and an interview with expert users. Our evaluation shows that TargetVue is beneficial to the detection of users with anomalous communication behaviors.",
"title": ""
},
{
"docid": "50708eb1617b59f605b926583d9215bf",
"text": "Due to filmmakers focusing on violence, traumatic events, and hallucinations when depicting characters with schizophrenia, critics have scrutinized the representation of mental disorders in contemporary films for years. This study compared previous research on schizophrenia with the fictional representation of the disease in contemporary films. Through content analysis, this study examined 10 films featuring a schizophrenic protagonist, tallying moments of violence and charting if they fell into four common stereotypes. Results showed a high frequency of violent behavior in films depicting schizophrenic characters, implying that those individuals are overwhelmingly dangerous and to be feared.",
"title": ""
}
] | scidocsrr |
796d0369a1cbef976cd1d5a5d2c86987 | Actuator design for high force proprioceptive control in fast legged locomotion | [
{
"docid": "fa7da02d554957f92364d4b37219feba",
"text": "This paper shows mechanisms for artificial finger based on a planetary gear system (PGS). Using the PGS as a transmitter provides an under-actuated system for driving three joints of a finger with back-drivability that is crucial characteristics for fingers as an end-effector when it interacts with external environment. This paper also shows the artificial finger employed with the originally developed mechanism called “double planetary gear system” (DPGS). The DPGS provides not only back-drivable and under-actuated flexion-extension of the three joints of a finger, which is identical to the former, but also adduction-abduction of the MP joint. Both of the above finger mechanisms are inherently safe due to being back-drivable with no electric device or sensor in the finger part. They are also rigorously solvable in kinematics and kinetics as shown in this paper.",
"title": ""
},
{
"docid": "81b03da5e09cb1ac733c966b33d0acb1",
"text": "Abstrud In the last two years a third generation of torque-controlled light weight robots has been developed in DLR‘s robotics and mechatronics lab which is based on all the experiences that have been made with the first two generations. It aims at reaching the limits of what seems achievable with present day technologies not only with respect to light-weight, but also with respect to minimal power consumption and losses. One of the main gaps we tried to close in version III was the development of a new, robot-dedicated high energy motor designed with the best available techniques of concurrent engineering, and the renewed efforts to save weight in the links by using ultralight carbon fibres.",
"title": ""
}
] | [
{
"docid": "b9f774ccd37e0bf0e399dd2d986f258d",
"text": "Predicting the final state of a running process, the remaining time to completion or the next activity of a running process are important aspects of runtime process management. Runtime management requires the ability to identify processes that are at risk of not meeting certain criteria in order to offer case managers decision information for timely intervention. This in turn requires accurate prediction models for process outcomes and for the next process event, based on runtime information available at the prediction and decision point. In this paper, we describe an initial application of deep learning with recurrent neural networks to the problem of predicting the next process event. This is both a novel method in process prediction, which has previously relied on explicit process models in the form of Hidden Markov Models (HMM) or annotated transition systems, and also a novel application for deep learning methods.",
"title": ""
},
{
"docid": "a8477be508fab67456c5f6b61d3642b5",
"text": "Although three-phase permanent magnet (PM) motors are quite common in industry, multi-phase PM motors are used in special applications where high power and redundancy are required. Multi-phase PM motors offer higher torque/power density than conventional three-phase PM motors. In this paper, a novel multi-phase consequent pole PM (CPPM) synchronous motor is proposed. The constant power–speed range of the proposed motor is quite wide as opposed to conventional PM motors. The design and the detailed finite-element analysis of the proposed nine-phase CPPM motor and performance comparison with a nine-phase surface mounted PM motor are completed to illustrate the benefits of the proposed motor.",
"title": ""
},
{
"docid": "a95761b5a67a07d02547c542ddc7e677",
"text": "This paper examines the connection between the legal environment and financial development, and then traces this link through to long-run economic growth. Countries with legal and regulatory systems that (1) give a high priority to creditors receiving the full present value of their claims on corporations, (2) enforce contracts effectively, and (3) promote comprehensive and accurate financial reporting by corporations have better-developed financial intermediaries. The data also indicate that the exogenous component of financial intermediary development – the component of financial intermediary development defined by the legal and regulatory environment – is positively associated with economic growth. * Department of Economics, 114 Rouss Hall, University of Virginia, Charlottesville, VA 22903-3288; [email protected]. I thank Thorsten Beck, Maria Carkovic, Bill Easterly, Lant Pritchett, Andrei Shleifer, and seminar participants at the Board of Governors of the Federal Reserve System, the University of Virginia, and the World Bank for helpful comments.",
"title": ""
},
{
"docid": "6fb006066fa1a25ae348037aa1ee7be3",
"text": "Reducing redundancy in data representation leads to decreased data storage requirements and lower costs for data communication.",
"title": ""
},
{
"docid": "47df1bd26f99313cfcf82430cb98d442",
"text": "To manage supply chain efficiently, e-business organizations need to understand their sales effectively. Previous research has shown that product review plays an important role in influencing sales performance, especially review volume and rating. However, limited attention has been paid to understand how other factors moderate the effect of product review on online sales. This study aims to confirm the importance of review volume and rating on improving sales performance, and further examine the moderating roles of product category, answered questions, discount and review usefulness in such relationships. By analyzing 2939 records of data extracted from Amazon.com using a big data architecture, it is found that review volume and rating have stronger influence on sales rank for search product than for experience product. Also, review usefulness significantly moderates the effects of review volume and rating on product sales rank. In addition, the relationship between review volume and sales rank is significantly moderated by both answered questions and discount. However, answered questions and discount do not have significant moderation effect on the relationship between review rating and sales rank. The findings expand previous literature by confirming important interactions between customer review features and other factors, and the findings provide practical guidelines to manage e-businesses. This study also explains a big data architecture and illustrates the use of big data technologies in testing theoretical",
"title": ""
},
{
"docid": "2342c92f91c243474a53323a476ae3d9",
"text": "Gesture recognition has emerged recently as a promising application in our daily lives. Owing to low cost, prevalent availability, and structural simplicity, RFID shall become a popular technology for gesture recognition. However, the performance of existing RFID-based gesture recognition systems is constrained by unfavorable intrusiveness to users, requiring users to attach tags on their bodies. To overcome this, we propose GRfid, a novel device-free gesture recognition system based on phase information output by COTS RFID devices. Our work stems from the key insight that the RFID phase information is capable of capturing the spatial features of various gestures with low-cost commodity hardware. In GRfid, after data are collected by hardware, we process the data by a sequence of functional blocks, namely data preprocessing, gesture detection, profiles training, and gesture recognition, all of which are well-designed to achieve high performance in gesture recognition. We have implemented GRfid with a commercial RFID reader and multiple tags, and conducted extensive experiments in different scenarios to evaluate its performance. The results demonstrate that GRfid can achieve an average recognition accuracy of <inline-formula> <tex-math notation=\"LaTeX\">$96.5$</tex-math><alternatives><inline-graphic xlink:href=\"wu-ieq1-2549518.gif\"/> </alternatives></inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$92.8$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq2-2549518.gif\"/></alternatives></inline-formula> percent in the identical-position and diverse-positions scenario, respectively. Moreover, experiment results show that GRfid is robust against environmental interference and tag orientations.",
"title": ""
},
{
"docid": "02ea5b61b22d5af1b9362ca46ead0dea",
"text": "This paper describes a student project examining mechanisms with which to attack Bluetooth-enabled devices. The paper briefly describes the protocol architecture of Bluetooth and the Java interface that programmers can use to connect to Bluetooth communication services. Several types of attacks are described, along with a detailed example of two attack tools, Bloover II and BT Info.",
"title": ""
},
{
"docid": "ab132902ce21c35d4b5befb8ff2898b5",
"text": "Skip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in “word2vec” software, is usually optimized by stochastic gradient descent. It can be shown that optimizing for SGNS objective can be viewed as an optimization problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problems is to apply Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes SGNS objective using Riemannian optimization and demonstrates its superiority over popular competitors, such as the original method to train SGNS and SVD over SPPMI matrix.",
"title": ""
},
{
"docid": "c9be0a4079800f173cf9553b9a69581c",
"text": "A 500W classical three-way Doherty power amplifier (DPA) with LDMOS devices at 1.8GHz is presented. Optimized device ratio is selected to achieve maximum efficiency as well as linearity. With a simple passive input driving network implementation, the demonstrator exhibits more than 55% efficiency with 9.9PAR WCDMA signal from 1805MHz-1880MHz. It can be linearized at -60dBc level with 20MHz LTE signal at an average output power of 49dBm.",
"title": ""
},
{
"docid": "b7b664d1749b61f2f423d7080a240a60",
"text": "The research challenge addressed in this paper is to devise effective techniques for identifying task-based sessions, i.e. sets of possibly non contiguous queries issued by the user of a Web Search Engine for carrying out a given task. In order to evaluate and compare different approaches, we built, by means of a manual labeling process, a ground-truth where the queries of a given query log have been grouped in tasks. Our analysis of this ground-truth shows that users tend to perform more than one task at the same time, since about 75% of the submitted queries involve a multi-tasking activity. We formally define the Task-based Session Discovery Problem (TSDP) as the problem of best approximating the manually annotated tasks, and we propose several variants of well known clustering algorithms, as well as a novel efficient heuristic algorithm, specifically tuned for solving the TSDP. These algorithms also exploit the collaborative knowledge collected by Wiktionary and Wikipedia for detecting query pairs that are not similar from a lexical content point of view, but actually semantically related. The proposed algorithms have been evaluated on the above ground-truth, and are shown to perform better than state-of-the-art approaches, because they effectively take into account the multi-tasking behavior of users.",
"title": ""
},
{
"docid": "9fcda5fc485df8b69aa7bab806d95f84",
"text": "DoS attacks on sensor measurements used for industrial control can cause the controller of the process to use stale data. If the DoS attack is not timed properly, the use of stale data by the controller will have limited impact on the process; however, if the attacker is able to launch the DoS attack at the correct time, the use of stale data can cause the controller to drive the system to an unsafe state.\n Understanding the timing parameters of the physical processes does not only allow an attacker to construct a successful attack but also to maximize its impact (damage to the system). In this paper we use Tennessee Eastman challenge process to study an attacker that has to identify (in realtime) the optimal timing to launch a DoS attack. The choice of time to begin an attack is forward-looking, requiring the attacker to consider each opportunity against the possibility of a better opportunity in the future, and this lends itself to the theory of optimal stopping problems. In particular we study the applicability of the Best Choice Problem (also known as the Secretary Problem), quickest change detection, and statistical process outliers. Our analysis can be used to identify specific sensor measurements that need to be protected, and the time that security or safety teams required to respond to attacks, before they cause major damage.",
"title": ""
},
{
"docid": "9c5535f218f6228ba6b2a8e5fdf93371",
"text": "Recent analyses of organizational change suggest a growing concern with the tempo of change, understood as the characteristic rate, rhythm, or pattern of work or activity. Episodic change is contrasted with continuous change on the basis of implied metaphors of organizing, analytic frameworks, ideal organizations, intervention theories, and roles for change agents. Episodic change follows the sequence unfreeze-transition-refreeze, whereas continuous change follows the sequence freeze-rebalance-unfreeze. Conceptualizations of inertia are seen to underlie the choice to view change as episodic or continuous.",
"title": ""
},
{
"docid": "064aba7f2bd824408bd94167da5d7b3a",
"text": "Online comments submitted by readers of news articles can provide valuable feedback and critique, personal views and perspectives, and opportunities for discussion. The varying quality of these comments necessitates that publishers remove the low quality ones, but there is also a growing awareness that by identifying and highlighting high quality contributions this can promote the general quality of the community. In this paper we take a user-centered design approach towards developing a system, CommentIQ, which supports comment moderators in interactively identifying high quality comments using a combination of comment analytic scores as well as visualizations and flexible UI components. We evaluated this system with professional comment moderators working at local and national news outlets and provide insights into the utility and appropriateness of features for journalistic tasks, as well as how the system may enable or transform journalistic practices around online comments.",
"title": ""
},
{
"docid": "bcbbc8913330378af7c986549ab4bb30",
"text": "Anomaly detection involves identifying the events which do not conform to an expected pattern in data. A common approach to anomaly detection is to identify outliers in a latent space learned from data. For instance, PCA has been successfully used for anomaly detection. Variational autoencoder (VAE) is a recently-developed deep generative model which has established itself as a powerful method for learning representation from data in a nonlinear way. However, the VAE does not take the temporal dependence in data into account, so it limits its applicability to time series. In this paper we combine the echo-state network, which is a simple training method for recurrent networks, with the VAE, in order to learn representation from multivariate time series data. We present an echo-state conditional variational autoencoder (ES-CVAE) and demonstrate its useful behavior in the task of anomaly detection in multivariate time series data.",
"title": ""
},
{
"docid": "5fc3da9b59e9a2a7c26fa93445c68933",
"text": "A country's growth is strongly measured by quality of its education system. Education sector, across the globe has witnessed sea change in its functioning. Today it is recognized as an industry and like any other industry it is facing challenges, the major challenges of higher education being decrease in students' success rate and their leaving a course without completion. An early prediction of students' failure may help the management provide timely counseling as well coaching to increase success rate and student retention. We use different classification techniques to build performance prediction model based on students' social integration, academic integration, and various emotional skills which have not been considered so far. Two algorithms J48 (Implementation of C4.5) and Random Tree have been applied to the records of MCA students of colleges affiliated to Guru Gobind Singh Indraprastha University to predict third semester performance. Random Tree is found to be more accurate in predicting performance than J48 algorithm.",
"title": ""
},
{
"docid": "719783be7139d384d24202688f7fc555",
"text": "Big sensing data is prevalent in both industry and scientific research applications where the data is generated with high volume and velocity. Cloud computing provides a promising platform for big sensing data processing and storage as it provides a flexible stack of massive computing, storage, and software services in a scalable manner. Current big sensing data processing on Cloud have adopted some data compression techniques. However, due to the high volume and velocity of big sensing data, traditional data compression techniques lack sufficient efficiency and scalability for data processing. Based on specific on-Cloud data compression requirements, we propose a novel scalable data compression approach based on calculating similarity among the partitioned data chunks. Instead of compressing basic data units, the compression will be conducted over partitioned data chunks. To restore original data sets, some restoration functions and predictions will be designed. MapReduce is used for algorithm implementation to achieve extra scalability on Cloud. With real world meteorological big sensing data experiments on U-Cloud platform, we demonstrate that the proposed scalable compression approach based on data chunk similarity can significantly improve data compression efficiency with affordable data accuracy loss.",
"title": ""
},
{
"docid": "2410a4b40b833d1729fac37020ec13be",
"text": "Understanding how ecological conditions influence physiological responses is fundamental to forensic entomology. When determining the minimum postmortem interval with blow fly evidence in forensic investigations, using a reliable and accurate model of development is integral. Many published studies vary in results, source populations, and experimental designs. Accordingly, disentangling genetic causes of developmental variation from environmental causes is difficult. This study determined the minimum time of development and pupal sizes of three populations of Lucilia sericata Meigen (Diptera: Calliphoridae; from California, Michigan, and West Virginia) at two temperatures (20 degrees C and 33.5 degrees C). Development times differed significantly between strain and temperature. In addition, California pupae were the largest and fastest developing at 20 degrees C, but at 33.5 degrees C, though they still maintained their rank in size among the three populations, they were the slowest to develop. These results indicate a need to account for genetic differences in development, and genetic variation in environmental responses, when estimating a postmortem interval with entomological data.",
"title": ""
},
{
"docid": "98a820c806b392e18b38d075b91a4fe9",
"text": "This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.",
"title": ""
},
{
"docid": "a797ab99ed7983bd7372de56d34caca1",
"text": "The discovery of stem cells that can generate neural tissue has raised new possibilities for repairing the nervous system. A rush of papers proclaiming adult stem cell plasticity has fostered the notion that there is essentially one stem cell type that, with the right impetus, can create whatever progeny our heart, liver or other vital organ desires. But studies aimed at understanding the role of stem cells during development have led to a different view — that stem cells are restricted regionally and temporally, and thus not all stem cells are equivalent. Can these views be reconciled?",
"title": ""
},
{
"docid": "12be3f9c1f02ad3f26462ab841a80165",
"text": "Queries in patent prior art search are full patent applications and much longer than standard ad hoc search and web search topics. Standard information retrieval (IR) techniques are not entirely effective for patent prior art search because of ambiguous terms in these massive queries. Reducing patent queries by extracting key terms has been shown to be ineffective mainly because it is not clear what the focus of the query is. An optimal query reduction algorithm must thus seek to retain the useful terms for retrieval favouring recall of relevant patents, but remove terms which impair IR effectiveness. We propose a new query reduction technique decomposing a patent application into constituent text segments and computing the Language Modeling (LM) similarities by calculating the probability of generating each segment from the top ranked documents. We reduce a patent query by removing the least similar segments from the query, hypothesising that removal of these segments can increase the precision of retrieval, while still retaining the useful context to achieve high recall. Experiments on the patent prior art search collection CLEF-IP 2010 show that the proposed method outperforms standard pseudo-relevance feedback (PRF) and a naive method of query reduction based on removal of unit frequency terms (UFTs).",
"title": ""
}
] | scidocsrr |
c35e7e52def503d263f4bb3cd50ff96a | Online Collaborative Learning for Open-Vocabulary Visual Classifiers | [
{
"docid": "df163d94fbf0414af1dde4a9e7fe7624",
"text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.",
"title": ""
}
] | [
{
"docid": "71b09fba5c4054af268da7c0037253e6",
"text": "Recurrent neural networks are now the state-of-the-art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments on attention mechanisms have equipped feedforward networks with similar capabilities, hence enabling faster computations due to the increase in the number of operations that can be parallelized. We explore this new type of architecture in the domain of question-answering and propose a novel approach that we call Fully Attention Based Information Retriever (FABIR). We show that FABIR achieves competitive results in the Stanford Question Answering Dataset (SQuAD) while having fewer parameters and being faster at both learning and inference than rival methods.",
"title": ""
},
{
"docid": "6da632d61dbda324da5f74b38f25b1b9",
"text": "Deep neural networks have shown good data modelling capabilities when dealing with challenging and large datasets from a wide range of application areas. Convolutional Neural Networks (CNNs) offer advantages in selecting good features and Long Short-Term Memory (LSTM) networks have proven good abilities of learning sequential data. Both approaches have been reported to provide improved results in areas such image processing, voice recognition, language translation and other Natural Language Processing (NLP) tasks. Sentiment classification for short text messages from Twitter is a challenging task, and the complexity increases for Arabic language sentiment classification tasks because Arabic is a rich language in morphology. In addition, the availability of accurate pre-processing tools for Arabic is another current limitation, along with limited research available in this area. In this paper, we investigate the benefits of integrating CNNs and LSTMs and report obtained improved accuracy for Arabic sentiment analysis on different datasets. Additionally, we seek to consider the morphological diversity of particular Arabic words by using different sentiment classification levels.",
"title": ""
},
{
"docid": "3cc97542631d734d8014abfbef652c79",
"text": "Internet exchange points (IXPs) are an important ingredient of the Internet AS-level ecosystem - a logical fabric of the Internet made up of about 30,000 ASes and their mutual business relationships whose primary purpose is to control and manage the flow of traffic. Despite the IXPs' critical role in this fabric, little is known about them in terms of their peering matrices (i.e., who peers with whom at which IXP) and corresponding traffic matrices (i.e., how much traffic do the different ASes that peer at an IXP exchange with one another). In this paper, we report on an Internet-wide traceroute study that was specifically designed to shed light on the unknown IXP-specific peering matrices and involves targeted traceroutes from publicly available and geographically dispersed vantage points. Based on our method, we were able to discover and validate the existence of about 44K IXP-specific peering links - nearly 18K more links than were previously known. In the process, we also classified all known IXPs depending on the type of information required to detect them. Moreover, in view of the currently used inferred AS-level maps of the Internet that are known to miss a significant portion of the actual AS relationships of the peer-to-peer type, our study provides a new method for augmenting these maps with IXP-related peering links in a systematic and informed manner.",
"title": ""
},
{
"docid": "a1367b21acfebfe35edf541cdc6e3f48",
"text": "Mobile phone sensing is an emerging area of interest for researchers as smart phones are becoming the core communication device in people's everyday lives. Sensor enabled mobile phones or smart phones are hovering to be at the center of a next revolution in social networks, green applications, global environmental monitoring, personal and community healthcare, sensor augmented gaming, virtual reality and smart transportation systems. More and more organizations and people are discovering how mobile phones can be used for social impact, including how to use mobile technology for environmental protection, sensing, and to leverage just-in-time information to make our movements and actions more environmentally friendly. In this paper we have described comprehensively all those systems which are using smart phones and mobile phone sensors for humans good will and better human phone interaction.",
"title": ""
},
{
"docid": "2ad2c5fe41133827fa0fdcbf62b3c1e6",
"text": "We describe sensing techniques motivated by unique aspects of human-computer interaction with handheld devices in mobile settings. Special features of mobile interaction include changing orientation and position, changing venues, the use of computing as auxiliary to ongoing, real-world activities like talking to a colleague, and the general intimacy of use for such devices. We introduce and integrate a set of sensors into a handheld device, and demonstrate several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation, automatically powering up the device when the user picks it up the device to start using it, and scrolling the display using tilt. We present an informal experiment, initial usability testing results, and user reactions to these techniques.",
"title": ""
},
{
"docid": "3380497ab11a7f0e34e8095d35a83f71",
"text": "The reparameterization gradient has become a widely used method to obtain Monte Carlo gradients to optimize the variational objective. However, this technique does not easily apply to commonly used distributions such as beta or gamma without further approximations, and most practical applications of the reparameterization gradient fit Gaussian distributions. In this paper, we introduce the generalized reparameterization gradient, a method that extends the reparameterization gradient to a wider class of variational distributions. Generalized reparameterizations use invertible transformations of the latent variables which lead to transformed distributions that weakly depend on the variational parameters. This results in new Monte Carlo gradients that combine reparameterization gradients and score function gradients. We demonstrate our approach on variational inference for two complex probabilistic models. The generalized reparameterization is e ective: even a single sample from the variational distribution is enough to obtain a low-variance gradient.",
"title": ""
},
{
"docid": "d2bf22468506f1f8b9119796da465f0a",
"text": "We define a language G for querying data represented as a labeled graph G. By considering G as a relation, this graphical query language can be viewed as a relational query language, and its expressive power can be compared to that of other relational query languages. We do not propose G as an alternative to general purpose relational query languages, but rather as a complementary language in which recursive queries are simple to formulate. The user is aided in this formulation by means of a graphical interface. The provision of regular expressions in G allows recursive queries more general than transitive closure to be posed, although the language is not as powerful as those based on function-free Horn clauses. However, we hope to be able to exploit well-known graph algorithms in evaluating recursive queries efficiently, a topic which has received widespread attention recently.",
"title": ""
},
{
"docid": "81349ac7f7a4011ccad32e5c2b392533",
"text": "In this literature a new design of printed antipodal UWB vivaldi antenna is proposed. The design is further modified for acquiring notch characteristics in the WLAN band and high front to backlobe ratio (F/B). The modifications are done on the ground plane of the antenna. Previous literatures have shown that the incorporation of planar meta-material structures on the CPW plane along the feed can produce notch characteristics. Here, a novel concept is introduced regarding antipodal vivaldi antenna. In the ground plane of the antenna, square ring resonator (SRR) structure slot and circular ring resonator (CRR) structure slot are cut to produce the notch characteristic on the WLAN band. The designed antenna covers a bandwidth of 6.8 GHz (2.7 GHz–9.5 GHz) and it can be useful for a large range of wireless applications like satellite communication applications and biomedical applications where directional radiation characteristic is needed. The designed antenna shows better impedance matching in the above said band. A parametric study is also performed on the antenna design to optimize the performance of the antenna. The size of the antenna is 40×44×1.57 mm3. It is designed and simulated using HFSS. The presented prototype offers well directive radiation characteristics, good gain and efficiency.",
"title": ""
},
{
"docid": "9eaf4895f0bf86f8403de61d4a82d39a",
"text": "OBJECTIVE\nTo describe a new surgical technique to treat pectus excavatum utilizing low hardness solid silicone block that can be carved during the intraoperative period promoting a better aesthetic result.\n\n\nMETHODS\nBetween May 1994 and February 2013, 34 male patients presenting pectus excavatum were submitted to surgical repair with the use of low hardness solid silicone block, 10 to 30 Shore A. A block-shaped parallelepiped was used with height and base size coinciding with those of the bone defect. The block was carved intraoperatively according to the shape of the dissected space. The patients were followed for a minimum of 120 days postoperatively. The results and the complications were recorded.\n\n\nRESULTS\nFrom the 34 patients operated on, 28 were primary surgeries and 6 were secondary treatment, using other surgical techniques, bone or implant procedures. Postoperative complications included two case of hematomas and eight of seromas. It was necessary to remove the implant in one patient due to pain, and review surgery was performed in another to check prothesis dimensions. Two patients were submitted to fat grafting to improve the chest wall contour. The result was considered satisfactory in 33 patients.\n\n\nCONCLUSION\nThe procedure proved to be fast and effective. The results of carved silicone block were more effective for allowing a more refined contour as compared to custom made implants.",
"title": ""
},
{
"docid": "ad49595bd04c3285be2939e4ced77551",
"text": "Embedded systems have found a very strong foothold in global Information Technology (IT) market since they can provide very specialized and intricate functionality to a wide range of products. On the other hand, the migration of IT functionality to a plethora of new smart devices (like mobile phones, cars, aviation, game or households machines) has enabled the collection of a considerable number of data that can be characterized sensitive. Therefore, there is a need for protecting that data through IT security means. However, eare usually dployed in hostile environments where they can be easily subject of physical attacks. In this paper, we provide an overview from ES hardware perspective of methods and mechanisms for providing strong security and trust. The various categories of physical attacks on security related embedded systems are presented along with countermeasures to thwart them and the importance of reconfigurable logic flexibility, adaptability and scalability along with trust protection mechanisms is highlighted. We adopt those mechanisms in order to propose a FPGA based embedded system hardware architecture capable of providing security and trust along with physical attack protection using trust zone separation. The benefits of such approach are discussed and a subsystem of the proposed architecture is implemented in FPGA technology as a proof of concept case study. From the performed analysis and implementation, it is concluded that flexibility, security and trust are fully realistic options for embedded system security enhancement. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6f96ac41b772d7b0134dcf613a726e87",
"text": "OBJECTIVE\nThe objective of this research was to explore the effects of risperidone on cognitive processes in children with autism and irritable behavior.\n\n\nMETHOD\nThirty-eight children, ages 5-17 years with autism and severe behavioral disturbance, were randomly assigned to risperidone (0.5 to 3.5 mg/day) or placebo for 8 weeks. This sample of 38 was a subset of 101 subjects who participated in the clinical trial; 63 were unable to perform the cognitive tasks. A double-blind placebo-controlled parallel groups design was used. Dependent measures included tests of sustained attention, verbal learning, hand-eye coordination, and spatial memory assessed before, during, and after the 8-week treatment. Changes in performance were compared by repeated measures ANOVA.\n\n\nRESULTS\nTwenty-nine boys and 9 girls with autism and severe behavioral disturbance and a mental age >or=18 months completed the cognitive part of the study. No decline in performance occurred with risperidone. Performance on a cancellation task (number of correct detections) and a verbal learning task (word recognition) was better on risperidone than on placebo (without correction for multiplicity). Equivocal improvement also occurred on a spatial memory task. There were no significant differences between treatment conditions on the Purdue Pegboard (hand-eye coordination) task or the Analog Classroom Task (timed math test).\n\n\nCONCLUSION\nRisperidone given to children with autism at doses up to 3.5 mg for up to 8 weeks appears to have no detrimental effect on cognitive performance.",
"title": ""
},
{
"docid": "4592c8f5758ccf20430dbec02644c931",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "68f797b34880bf08a8825332165a955b",
"text": "The immune system responds to pathogens by a variety of pattern recognition molecules such as the Toll-like receptors (TLRs), which promote recognition of dangerous foreign pathogens. However, recent evidence indicates that normal intestinal microbiota might also positively influence immune responses, and protect against the development of inflammatory diseases. One of these elements may be short-chain fatty acids (SCFAs), which are produced by fermentation of dietary fibre by intestinal microbiota. A feature of human ulcerative colitis and other colitic diseases is a change in ‘healthy’ microbiota such as Bifidobacterium and Bacteriodes, and a concurrent reduction in SCFAs. Moreover, increased intake of fermentable dietary fibre, or SCFAs, seems to be clinically beneficial in the treatment of colitis. SCFAs bind the G-protein-coupled receptor 43 (GPR43, also known as FFAR2), and here we show that SCFA–GPR43 interactions profoundly affect inflammatory responses. Stimulation of GPR43 by SCFAs was necessary for the normal resolution of certain inflammatory responses, because GPR43-deficient (Gpr43-/-) mice showed exacerbated or unresolving inflammation in models of colitis, arthritis and asthma. This seemed to relate to increased production of inflammatory mediators by Gpr43-/- immune cells, and increased immune cell recruitment. Germ-free mice, which are devoid of bacteria and express little or no SCFAs, showed a similar dysregulation of certain inflammatory responses. GPR43 binding of SCFAs potentially provides a molecular link between diet, gastrointestinal bacterial metabolism, and immune and inflammatory responses.",
"title": ""
},
{
"docid": "01ff7e55830977622482ab018acd2cfe",
"text": "Dictionary learning has been widely used in many image processing tasks. In most of these methods, the number of basis vectors is either set by experience or coarsely evaluated empirically. In this paper, we propose a new scale adaptive dictionary learning framework, which jointly estimates suitable scales and corresponding atoms in an adaptive fashion according to the training data, without the need of prior information. We design an atom counting function and develop a reliable numerical scheme to solve the challenging optimization problem. Extensive experiments on texture and video data sets demonstrate quantitatively and visually that our method can estimate the scale, without damaging the sparse reconstruction ability.",
"title": ""
},
{
"docid": "73545ef815fb22fa048fed3e0bc2cc8b",
"text": "Redox-based resistive switching devices (ReRAM) are an emerging class of nonvolatile storage elements suited for nanoscale memory applications. In terms of logic operations, ReRAM devices were suggested to be used as programmable interconnects, large-scale look-up tables or for sequential logic operations. However, without additional selector devices these approaches are not suited for use in large scale nanocrossbar memory arrays, which is the preferred architecture for ReRAM devices due to the minimum area consumption. To overcome this issue for the sequential logic approach, we recently introduced a novel concept, which is suited for passive crossbar arrays using complementary resistive switches (CRSs). CRS cells offer two high resistive storage states, and thus, parasitic “sneak” currents are efficiently avoided. However, until now the CRS-based logic-in-memory approach was only shown to be able to perform basic Boolean logic operations using a single CRS cell. In this paper, we introduce two multi-bit adder schemes using the CRS-based logic-in-memory approach. We proof the concepts by means of SPICE simulations using a dynamical memristive device model of a ReRAM cell. Finally, we show the advantages of our novel adder concept in terms of step count and number of devices in comparison to a recently published adder approach, which applies the conventional ReRAM-based sequential logic concept introduced by Borghetti et al.",
"title": ""
},
{
"docid": "ddd8c2c44ecb82f7892bed163610f4aa",
"text": "Our aim is to make shape memory alloys (SMAs) accessible and visible as creative crafting materials by combining them with paper. In this paper, we begin by presenting mechanisms for actuating paper with SMAs along with a set of design guidelines for achieving dramatic movement. We then describe how we tested the usability and educational potential of one of these mechanisms in a workshop where participants, age 9 to 15, made actuated electronic origami cranes. We found that participants were able to successfully build constructions integrating SMAs and paper, that they enjoyed doing so, and were able to learn skills like circuitry design and soldering over the course of the workshop.",
"title": ""
},
{
"docid": "1f247e127866e62029310218c380bc31",
"text": "Human Resource is the most important asset for any organization and it is the resource of achieving competitive advantage. Managing human resources is very challenging as compared to managing technology or capital and for its effective management, organization requires effective HRM system. HRM system should be backed up by strong HRM practices. HRM practices refer to organizational activities directed at managing the group of human resources and ensuring that the resources are employed towards the fulfillment of organizational goals. The purpose of this study is to explore contribution of Human Resource Management (HRM) practices including selection, training, career planning, compensation, performance appraisal, job definition and employee participation on perceived employee performance. This research describe why human resource management (HRM) decisions are likely to have an important and unique influence on organizational performance. This research forum will help advance research on the link between HRM and organizational performance. Unresolved questions is trying to identify in need of future study and make several suggestions intended to help researchers studying these questions build a more cumulative body of knowledge that will have key implications for body theory and practice. This study comprehensively evaluated the links between systems of High Performance Work Practices and firm performance. Results based on a national sample of firms indicate that these practices have an economically and statistically significant impact on employee performance. Support for predictions that the impact of High Performance Work Practices on firm performance is in part contingent on their interrelationships and links with competitive strategy was limited.",
"title": ""
},
{
"docid": "4df6bbfaa8842d88df0b916946c59ea3",
"text": "Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. \n We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.",
"title": ""
},
{
"docid": "05a7c2820178ea33f79ace6c5bb1a4fa",
"text": "Researchers have explored the design of ambient information systems across a wide range of physical and screen-based media. This work has yielded rich examples of design approaches to the problem of presenting information about a user's world in a way that is not distracting, but is aesthetically pleasing, and tangible to varying degrees. Despite these successes, accumulating theoretical and craft knowledge has been stymied by the lack of a unified vocabulary to describe these systems and a consequent lack of a framework for understanding their design attributes. We argue that this area would significantly benefit from consensus about the design space of ambient information systems and the design attributes that define and distinguish existing approaches. We present a definition of ambient information systems and a taxonomy across four design dimensions: Information Capacity, Notification Level, Representational Fidelity, and Aesthetic Emphasis. Our analysis has uncovered four patterns of system design and points to unexplored regions of the design space, which may motivate future work in the field.",
"title": ""
},
{
"docid": "05bb807afbfa8397c76039afe8c50274",
"text": "In autonomous drone racing, a drone is required to fly through the gates quickly without any collision. Therefore, it is important to detect the gates reliably using computer vision. However, due to the complications such as varying lighting conditions and gates seen overlapped, traditional image processing algorithms based on color and geometry of the gates tend to fail during the actual racing. In this letter, we introduce a convolutional neural network to estimate the center of a gate robustly. Using the detection results, we apply a line-of-sight guidance algorithm. The proposed algorithm is implemented using low cost, off-the-shelf hardware for validation. All vision processing is performed in real time on the onboard NVIDIA Jetson TX2 embedded computer. In a number of tests our proposed framework successfully exhibited fast and reliable detection and navigation performance in indoor environment.",
"title": ""
}
] | scidocsrr |
4ea2b3d5bd3a9f626da0053bab0ba924 | High-Spectral-Efficiency Optical Modulation Formats | [
{
"docid": "f818a1cab06c4650a0aa250c076f5f88",
"text": "Shannon’s determination of the capacity of the linear Gaussian channel has posed a magnificent challenge to succeeding generations of researchers. This paper surveys how this challenge has been met during the past half century. Orthogonal minimumbandwidth modulation techniques and channel capacity are discussed. Binary coding techniques for low-signal-to-noise ratio (SNR) channels and nonbinary coding techniques for high-SNR channels are reviewed. Recent developments, which now allow capacity to be approached on any linear Gaussian channel, are surveyed. These new capacity-approaching techniques include turbo coding and decoding, multilevel coding, and combined coding/precoding for intersymbol-interference channels.",
"title": ""
}
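The cited passage concerns approaching the capacity of the linear Gaussian channel. For reference, the short snippet below evaluates the standard Shannon limit on spectral efficiency, C/B = log2(1 + SNR), at a few example SNR values; it is a generic illustration, not taken from the paper.

```python
import math

def spectral_efficiency(snr_db):
    """Shannon limit for the linear Gaussian channel, in bit/s/Hz: C/B = log2(1 + SNR)."""
    snr = 10 ** (snr_db / 10)
    return math.log2(1 + snr)

for snr_db in (0, 10, 20, 30):
    print(f"SNR = {snr_db:2d} dB -> {spectral_efficiency(snr_db):5.2f} bit/s/Hz")
```

For example, at 20 dB SNR the limit is about 6.66 bit/s/Hz, which is the kind of target that capacity-approaching coded modulation tries to reach.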
] | [
{
"docid": "a11c3f75f6ced7f43e3beeb795948436",
"text": "A new concept of building the controller of a thyristor based three-phase dual converter is presented in this paper. The controller is implemented using mixed mode digital-analog circuitry to achieve optimized performance. The realtime six state pulse patterns needed for the converter are generated by a specially designed ROM based circuit synchronized to the power frequency by a phase-locked-loop. The phase angle and other necessary commands for the converter are managed by an AT89C51 microcontroller. The proposed architecture offers 128-steps in the phase angle control, a resolution sufficient for most converter applications. Because of the hybrid nature of the implementation, the controller can change phase angles online smoothly. The computation burden on the microcontroller is nominal and hence it can easily undertake the tasks of monitoring diagnostic data like overload, loss of excitation and phase sequence. Thus a full fledged system is realizable with only one microcontroller chip, making the control system economic, reliable and efficient.",
"title": ""
},
{
"docid": "13c3e0c082bc89aa5dc9e6e7b7a13119",
"text": "We study the problem of Key Exchange (KE), where authentication is two-factor and based on both electronically stored long keys and human-supplied credentials (passwords or biometrics). The latter credential has low entropy and may be adversarily mistyped. Our main contribution is the first formal treatment of mistyping in this setting. Ensuring security in presence of mistyping is subtle. We show mistypingrelated limitations of previous KE definitions and constructions (of Boyen et al. [7, 6, 10] and Kolesnikov and Rackoff [16]). We concentrate on the practical two-factor authenticated KE setting where servers exchange keys with clients, who use short passwords (memorized) and long cryptographic keys (stored on a card). Our work is thus a natural generalization of Halevi-Krawczyk [15] and Kolesnikov-Rackoff [16]. We discuss the challenges that arise due to mistyping. We propose the first KE definitions in this setting, and formally discuss their guarantees. We present efficient KE protocols and prove their security.",
"title": ""
},
{
"docid": "4310a55c8e96f26f060ec8ded7647d8c",
"text": "Chronotherapeutics aim at treating illnesses according to the endogenous biologic rhythms, which moderate xenobiotic metabolism and cellular drug response. The molecular clocks present in individual cells involve approximately fifteen clock genes interconnected in regulatory feedback loops. They are coordinated by the suprachiasmatic nuclei, a hypothalamic pacemaker, which also adjusts the circadian rhythms to environmental cycles. As a result, many mechanisms of diseases and drug effects are controlled by the circadian timing system. Thus, the tolerability of nearly 500 medications varies by up to fivefold according to circadian scheduling, both in experimental models and/or patients. Moreover, treatment itself disrupted, maintained, or improved the circadian timing system as a function of drug timing. Improved patient outcomes on circadian-based treatments (chronotherapy) have been demonstrated in randomized clinical trials, especially for cancer and inflammatory diseases. However, recent technological advances have highlighted large interpatient differences in circadian functions resulting in significant variability in chronotherapy response. Such findings advocate for the advancement of personalized chronotherapeutics through interdisciplinary systems approaches. Thus, the combination of mathematical, statistical, technological, experimental, and clinical expertise is now shaping the development of dedicated devices and diagnostic and delivery algorithms enabling treatment individualization. In particular, multiscale systems chronopharmacology approaches currently combine mathematical modeling based on cellular and whole-body physiology to preclinical and clinical investigations toward the design of patient-tailored chronotherapies. We review recent systems research works aiming to the individualization of disease treatment, with emphasis on both cancer management and circadian timing system-resetting strategies for improving chronic disease control and patient outcomes.",
"title": ""
},
{
"docid": "71c6c714535ae1bfd749cbb8bbb34f5e",
"text": "This paper tackles the problem of relative pose estimation between two monocular camera images in textureless scenes. Due to a lack of point matches, point-based approaches such as the 5-point algorithm often fail when used in these scenarios. Therefore we investigate relative pose estimation from line observations. We propose a new approach in which the relative pose estimation from lines is extended by a 3D line direction estimation step. The estimated line directions serve to improve the robustness and the efficiency of all processing phases: they enable us to guide the matching of line features and allow an efficient calculation of the relative pose. First, we describe in detail the novel 3D line direction estimation from a single image by clustering of parallel lines in the world. Secondly, we propose an innovative guided matching in which only clusters of lines with corresponding 3D line directions are considered. Thirdly, we introduce the new relative pose estimation based on 3D line directions. Finally, we combine all steps to a visual odometry system. We evaluate the different steps on synthetic and real sequences and demonstrate that in the targeted scenarios we outperform the state-of-the-art in both accuracy and computation time.",
"title": ""
},
{
"docid": "6a1fa32d9a716b57a321561dfce83879",
"text": "Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the predictions of these algorithms are stored in static databases that can easily become outdated. We propose a new algorithm, GeneMANIA, that is as accurate as the leading methods, while capable of predicting protein function in real-time. We use a fast heuristic algorithm, derived from ridge regression, to integrate multiple functional association networks and predict gene function from a single process-specific network using label propagation. Our algorithm is efficient enough to be deployed on a modern webserver and is as accurate as, or more so than, the leading methods on the MouseFunc I benchmark and a new yeast function prediction benchmark; it is robust to redundant and irrelevant data and requires, on average, less than ten seconds of computation time on tasks from these benchmarks. GeneMANIA is fast enough to predict gene function on-the-fly while achieving state-of-the-art accuracy. A prototype version of a GeneMANIA-based webserver is available at http://morrislab.med.utoronto.ca/prototype .",
"title": ""
},
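The abstract above describes predicting gene function by label propagation over a composite association network. The toy sketch below shows a generic Gaussian-field style propagation step of that kind on a made-up five-gene network; the weights, labels and regularization value are illustrative and are not GeneMANIA's actual formulation.

```python
import numpy as np

# Symmetric association network over 5 genes (toy weights, illustrative only).
W = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# Label bias: gene 0 is a known positive, gene 4 a known negative, rest unknown.
y = np.array([1.0, 0.0, 0.0, 0.0, -1.0])

D = np.diag(W.sum(axis=1))
L = D - W                      # graph Laplacian
lam = 1.0                      # smoothness/regularization trade-off (assumed)

# Gaussian-field style propagation: scores vary smoothly over the network,
# so unlabeled genes inherit scores from well-connected labeled neighbours.
f = np.linalg.solve(np.eye(5) + lam * L, y)
print(np.round(f, 3))
```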
{
"docid": "321b5e5f05344b25605b289bcc5fab94",
"text": "We revisit a pioneer unsupervised learning technique called archetypal analysis, [5] which is related to successful data analysis methods such as sparse coding [18] and non-negative matrix factorization [19]. Since it was proposed, archetypal analysis did not gain a lot of popularity even though it produces more interpretable models than other alternatives. Because no efficient implementation has ever been made publicly available, its application to important scientific problems may have been severely limited. Our goal is to bring back into favour archetypal analysis. We propose a fast optimization scheme using an active-set strategy, and provide an efficient open-source implementation interfaced with Matlab, R, and Python. Then, we demonstrate the usefulness of archetypal analysis for computer vision tasks, such as codebook learning, signal classification, and large image collection visualization.",
"title": ""
},
{
"docid": "ea87bfc0d6086e367e8950b445529409",
"text": " Queue stability (Chapter 2.1) Scheduling for stability, capacity regions (Chapter 2.3) Linear programs (Chapter 2.3, Chapter 3) Energy optimality (Chapter 3.2) Opportunistic scheduling (Chapter 2.3, Chapter 3, Chapter 4.6) Lyapunov drift and optimization (Chapter 4.1.0-4.1.2, 4.2, 4.3) Inequality constraints and virtual queues (Chapter 4.4) Drift-plus-penalty algorithm (Chapter 4.5) Performance and delay tradeoffs (Chapter 3.2, 4.5) Backpressure routing (Ex. 4.16, Chapter 5.2, 5.3)",
"title": ""
},
{
"docid": "25058c265e505ed15910dd30dfe03119",
"text": "Endowing machines with sensing capabilities similar to those of humans is a prevalent quest in engineering and computer science. In the pursuit of making computers sense their surroundings, a huge effort has been conducted to allow machines and computers to acquire, process, analyze and understand their environment in a human-like way. Focusing on the sense of hearing, the ability of computers to sense their acoustic environment as humans do goes by the name of machine hearing. To achieve this ambitious aim, the representation of the audio signal is of paramount importance. In this paper, we present an up-to-date review of the most relevant audio feature extraction techniques developed to analyze the most usual audio signals: speech, music and environmental sounds. Besides revisiting classic approaches for completeness, we include the latest advances in the field based on new domains of analysis together with novel bio-inspired proposals. These approaches are described following a taxonomy that organizes them according to their physical or perceptual basis, being subsequently divided depending on the domain of computation (time, frequency, wavelet, image-based, cepstral, or other domains). The description of the approaches is accompanied with recent examples of their application to machine hearing related problems.",
"title": ""
},
{
"docid": "5de29983943c3cfa30bb1e94854b606d",
"text": "Designing reliable user authentication on mobile phones is becoming an increasingly important task to protect users' private information and data. Since biometric approaches can provide many advantages over the traditional authentication methods, they have become a significant topic for both academia and industry. The major goal of biometric user authentication is to authenticate legitimate users and identify impostors based on physiological and behavioral characteristics. In this paper, we survey the development of existing biometric authentication techniques on mobile phones, particularly on touch-enabled devices, with reference to 11 biometric approaches (five physiological and six behavioral). We present a taxonomy of existing efforts regarding biometric authentication on mobile phones and analyze their feasibility of deployment on touch-enabled mobile phones. In addition, we systematically characterize a generic biometric authentication system with eight potential attack points and survey practical attacks and potential countermeasures on mobile phones. Moreover, we propose a framework for establishing a reliable authentication mechanism through implementing a multimodal biometric user authentication in an appropriate way. Experimental results are presented to validate this framework using touch dynamics, and the results show that multimodal biometrics can be deployed on touch-enabled phones to significantly reduce the false rates of a single biometric system. Finally, we identify challenges and open problems in this area and suggest that touch dynamics will become a mainstream aspect in designing future user authentication on mobile phones.",
"title": ""
},
{
"docid": "9fc7f8ef20cf9c15f9d2d2ce5661c865",
"text": "This paper presents a new iris database that contains images with noise. This is in contrast with the existing databases, that are noise free. UBIRIS is a tool for the development of robust iris recognition algorithms for biometric proposes. We present a detailed description of the many characteristics of UBIRIS and a comparison of several image segmentation approaches used in the current iris segmentation methods where it is evident their small tolerance to noisy images.",
"title": ""
},
{
"docid": "5eccbb19af4a1b19551ce4c93c177c07",
"text": "This paper presents the design and development of a microcontroller based heart rate monitor using fingertip sensor. The device uses the optical technology to detect the flow of blood through the finger and offers the advantage of portability over tape-based recording systems. The important feature of this research is the use of Discrete Fourier Transforms to analyse the ECG signal in order to measure the heart rate. Evaluation of the device on real signals shows accuracy in heart rate estimation, even under intense physical activity. The performance of HRM device was compared with ECG signal represented on an oscilloscope and manual pulse measurement of heartbeat, giving excellent results. Our proposed Heart Rate Measuring (HRM) device is economical and user friendly.",
"title": ""
},
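The abstract above measures heart rate by analysing the fingertip signal with discrete Fourier transforms. The sketch below shows the core idea on a synthetic pulse waveform: take the DFT, restrict to a plausible heart-rate band, and convert the dominant frequency to beats per minute. The sampling rate, band limits and signal are assumptions for the example, not values from the paper.

```python
import numpy as np

fs = 100.0                     # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)   # 10 s of signal

# Synthetic fingertip pulse: 1.2 Hz fundamental (72 bpm) plus noise.
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Restrict to a plausible heart-rate band (0.7-3.5 Hz, i.e. about 42-210 bpm).
band = (freqs >= 0.7) & (freqs <= 3.5)
peak_freq = freqs[band][np.argmax(spectrum[band])]
print(f"Estimated heart rate: {60 * peak_freq:.0f} bpm")
```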
{
"docid": "09806e0fdb434c181d9bceed140fed6c",
"text": "Localization of chess-board vertices is a common task in computer vision, underpinning many applications, but relatively little work focusses on designing a specific feature detector that is fast, accurate and robust. In this paper the “Chess-board Extraction by Subtraction and Summation” (ChESS) feature detector, designed to exclusively respond to chess-board vertices, is presented. The method proposed is robust against noise, poor lighting and poor contrast, requires no prior knowledge of the extent of the chessboard pattern, is computationally very efficient, and provides a strength measure of detected features. Such a detector has significant application both in the key field of camera calibration, as well as in Structured Light 3D reconstruction. Evidence is presented showing its robustness, accuracy, and efficiency in comparison to other commonly used detectors both under simulation and in experimental 3D reconstruction of flat plate and cylindrical objects.",
"title": ""
},
{
"docid": "2d0765e6b695348dea8822f695dcbfa1",
"text": "Social networks are currently gaining increasing impact especially in the light of the ongoing growth of web-based services like facebook.com. A central challenge for the social network analysis is the identification of key persons within a social network. In this context, the article aims at presenting the current state of research on centrality measures for social networks. In view of highly variable findings about the quality of various centrality measures, we also illustrate the tremendous importance of a reflected utilization of existing centrality measures. For this purpose, the paper analyzes five common centrality measures on the basis of three simple requirements for the behavior of centrality measures.",
"title": ""
},
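The article above surveys centrality measures for identifying key persons in a network. As a small, generic illustration (not tied to the five measures the article analyzes), the snippet below computes normalized degree centrality and closeness centrality on a toy friendship graph; the graph itself is invented.

```python
from collections import deque

# Toy undirected friendship network (illustrative only).
graph = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

n = len(graph)

def degree_centrality(g):
    # Fraction of the other nodes each node is directly connected to.
    return {v: len(nbrs) / (n - 1) for v, nbrs in g.items()}

def closeness_centrality(g, source):
    # BFS distances from `source`; assumes the graph is connected.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in g[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return (n - 1) / sum(dist.values())

print(degree_centrality(graph))
print({v: round(closeness_centrality(graph, v), 3) for v in graph})
```

Even on this tiny example the two measures rank nodes differently, which is exactly the kind of disagreement that motivates a careful choice of measure.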
{
"docid": "3848b727cfda3031742cec04abd74608",
"text": "This paper presents SemFrame, a system that induces frame semantic verb classes from WordNet and LDOCE. Semantic frames are thought to have significant potential in resolving the paraphrase problem challenging many languagebased applications. When compared to the handcrafted FrameNet, SemFrame achieves its best recall-precision balance with 83.2% recall (based on SemFrame's coverage of FrameNet frames) and 73.8% precision (based on SemFrame verbs’ semantic relatedness to frame-evoking verbs). The next best performing semantic verb classes achieve 56.9% recall and 55.0% precision.",
"title": ""
},
{
"docid": "fd69e05a9be607381c4b8cd69d758f41",
"text": "The increase in electronically mediated self-servic e technologies in the banking industry has impacted on the way banks service consumers. Despit e a large body of research on electronic banking channels, no study has been undertaken to e xplor the fit between electronic banking channels and banking tasks. Nor has there been rese a ch into how the ‘task-channel fit’ and other factors impact on consumers’ intention to use elect ronic banking channels. This paper proposes a theoretical model addressing these gaps. An explora tory study was first conducted, investigating industry experts’ perceptions towards the concept o f ‘task-channel fit’ and its relationship to other electronic banking channel variables. The findings demonstrated that the concept was perceived as being highly relevant by bank managers. A resear ch model was then developed drawing on the existing literature. To evaluate the research mode l quantitatively, a survey will be developed and validated, administered to a sample of consumers, a nd the resulting data used to test both measurement and structural aspects of the research model.",
"title": ""
},
{
"docid": "ecbdb56c52a59f26cf8e33fc533d608f",
"text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.",
"title": ""
},
{
"docid": "9fd5e182851ff0be67e8865c336a1f77",
"text": "Following the developments of wireless and mobile communication technologies, mobile-commerce (M-commerce) has become more and more popular. However, most of the existing M-commerce protocols do not consider the user anonymity during transactions. This means that it is possible to trace the identity of a payer from a M-commerce transaction. Luo et al. in 2014 proposed an NFC-based anonymous mobile payment protocol. It used an NFC-enabled smartphone and combined a built-in secure element (SE) as a trusted execution environment to build an anonymous mobile payment service. But their scheme has several problems and cannot be functional in practice. In this paper, we introduce a new NFC-based anonymous mobile payment protocol. Our scheme has the following features:(1) Anonymity. It prevents the disclosure of user's identity by using virtual identities instead of real identity during the transmission. (2) Efficiency. Confidentiality is achieved by symmetric key cryptography instead of public key cryptography so as to increase the performance. (3) Convenience. The protocol is based on NFC and is EMV compatible. (4) Security. All the transaction is either encrypted or signed by the sender so the confidentiality and authenticity are preserved.",
"title": ""
},
{
"docid": "6cfedfc45ea1b3db23d022b06c46743a",
"text": "This study examined the relationship between financial knowledge and credit card behavior of college students. The widespread availability of credit cards has raised concerns over how college students might use those cards given the negative consequences (both immediate and long-term) associated with credit abuse and mismanagement. Using a sample of 1,354 students from a major southeastern university, results suggest that financial knowledge is a significant factor in the credit card decisions of college students. Students with higher scores on a measure of personal financial knowledge are more likely to engage in more responsible credit card use. Specific behaviors chosen have been associated with greater costs of borrowing and adverse economic consequences in the past.",
"title": ""
},
{
"docid": "a4fa2faf888728e4861cd47377dd8fd8",
"text": "Fully-automatic facial expression recognition (FER) is a key component of human behavior analysis. Performing FER from still images is a challenging task as it involves handling large interpersonal morphological differences, and as partial occlusions can occasionally happen. Furthermore, labelling expressions is a time-consuming process that is prone to subjectivity, thus the variability may not be fully covered by the training data. In this work, we propose to train random forests upon spatially-constrained random local subspaces of the face. The output local predictions form a categorical expression-driven high-level representation that we call local expression predictions (LEPs). LEPs can be combined to describe categorical facial expressions as well as action units (AUs). Furthermore, LEPs can be weighted by confidence scores provided by an autoencoder network. Such network is trained to locally capture the manifold of the non-occluded training data in a hierarchical way. Extensive experiments show that the proposed LEP representation yields high descriptive power for categorical expressions and AU occurrence prediction, and leads to interesting perspectives towards the design of occlusion-robust and confidence-aware FER systems.",
"title": ""
},
{
"docid": "44ae81b3961a682b9b881c8077fb9506",
"text": "Osteoarthritis is a common disease, clinically manifested by joint pain, swelling and progressive loss of function. The severity of disease manifestations can vary but most of the patients only need intermittent symptom relief without major interventions. However, there is a group of patients that shows fast progression of the disease process leading to disability and ultimately joint replacement. Apart from symptom relief, no treatments have been identified that arrest or reverse the disease process. Therefore, there has been increasing attention devoted to the understanding of the mechanisms that are driving the disease process. Among these mechanisms, the biology of the cartilage-subchondral bone unit has been highlighted as key in osteoarthritis, and pathways that involve both cartilage and bone formation and turnover have become prime targets for modulation, and thus therapeutic intervention. Studies in developmental, genetic and joint disease models indicate that Wnt signaling is critically involved in these processes. Consequently, targeting Wnt signaling in a selective and tissue specific manner is an exciting opportunity for the development of disease modifying drugs for osteoarthritis.",
"title": ""
}
] | scidocsrr |
7168693549485567e291d3e70e28e135 | Context Contrasted Feature and Gated Multi-scale Aggregation for Scene Segmentation | [
{
"docid": "8a77882cfe06eaa88db529432ed31b0c",
"text": "We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.",
"title": ""
},
{
"docid": "93314112049e3bccd7853e63afc97f73",
"text": "In this paper, we address the challenging task of scene segmentation. In order to capture the rich contextual dependencies over image regions, we propose Directed Acyclic Graph-Recurrent Neural Networks (DAG-RNN) to perform context aggregation over locally connected feature maps. More specifically, DAG-RNN is placed on top of pre-trained CNN (feature extractor) to embed context into local features so that their representative capability can be enhanced. In comparison with plain CNN (as in Fully Convolutional Networks-FCN), DAG-RNN is empirically found to be significantly more effective at aggregating context. Therefore, DAG-RNN demonstrates noticeably performance superiority over FCNs on scene segmentation. Besides, DAG-RNN entails dramatically less parameters as well as demands fewer computation operations, which makes DAG-RNN more favorable to be potentially applied on resource-constrained embedded devices. Meanwhile, the class occurrence frequencies are extremely imbalanced in scene segmentation, so we propose a novel class-weighted loss to train the segmentation network. The loss distributes reasonably higher attention weights to infrequent classes during network training, which is essential to boost their parsing performance. We evaluate our segmentation network on three challenging public scene segmentation benchmarks: Sift Flow, Pascal Context and COCO Stuff. On top of them, we achieve very impressive segmentation performance.",
"title": ""
}
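The DAG-RNN passage above proposes a class-weighted loss that gives infrequent classes higher attention weights. The sketch below illustrates one common way to realize such weighting (inverse log frequency) inside a per-pixel cross-entropy; the weighting formula, pixel counts and probabilities are assumptions for the example and may differ from the paper's exact loss.

```python
import numpy as np

def class_weights(pixel_counts, c=1.02):
    """Heavier weights for rarer classes. The inverse-log-frequency form used
    here is a common choice; the paper's exact weighting may differ."""
    freq = pixel_counts / pixel_counts.sum()
    return 1.0 / np.log(c + freq)

def weighted_cross_entropy(probs, labels, weights):
    """probs: (N, C) softmax outputs; labels: (N,) integer class ids."""
    eps = 1e-12
    picked = probs[np.arange(labels.size), labels]
    return np.mean(weights[labels] * -np.log(picked + eps))

counts = np.array([9_000_000, 800_000, 50_000])      # e.g. sky, road, pole (toy counts)
w = class_weights(counts)
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4]])
labels = np.array([0, 1, 2])
print("weights:", np.round(w, 2))
print("loss:", round(weighted_cross_entropy(probs, labels, w), 4))
```

The rare "pole" class ends up with a weight more than an order of magnitude larger than the dominant "sky" class, which is the effect the abstract describes.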
] | [
{
"docid": "85713bc895a5477e9e99bd4884d01d3c",
"text": "Recently, Fan-out Wafer Level Packaging (FOWLP) has been emerged as a promising technology to meet the ever increasing demands of the consumer electronic products. However, conventional FOWLP technology is limited to small size packages with single chip and Low to Mid-range Input/ Output (I/O) count due to die shift, warpage and RDL scaling issues. In this paper, we are presenting new RDL-First FOWLP approach which enables RDL scaling, overcomes the die shift, die protrusion and warpage challenges of conventional FOWLP, and extend the FOWLP technology for multi-chip and high I/O count package applications. RDL-First FOWLP process integration flow was demonstrated and fabricated test vehicles of large multi-chip package of 20 x 20 mm2 with 3 layers fine pitch RDL of LW/LS of 2μm/2μm and ~2400 package I/Os. Two Through Mold Interconnections (TMI) fabrication approaches (tall Cu pillar and vertical Cu wire) were evaluated on this platform for Package-on-Package (PoP) application. Backside RDL process on over molded Chip-to-Wafer (C2W) with carrier wafer was demonstrated for PoP applications. Laser de-bonding and sacrificial release layer material cleaning processes were established, and successfully used in the integration flow to fabricate the test vehicles. Assembly processes were optimized and successfully demonstrated large multi-chip RDL-first FOWLP package and PoP assembly on test boards. The large multi-chip FOWLP packages samples were passed JEDEC component level test Moisture Sensitivity Test Level 1 & Level 3 (MST L1 & MST L3) and 30 drops of board level drop test, and results will be presented.",
"title": ""
},
{
"docid": "4f096ba7fc6164cdbf5d37676d943fa8",
"text": "This work presents an intelligent clothes search system based on domain knowledge, targeted at creating a virtual assistant to search clothes matched to fashion and userpsila expectation using all what have already been in real closet. All what garment essentials and fashion knowledge are from visual images. Users can simply submit the desired image keywords, such as elegant, sporty, casual, and so on, and occasion type, such as formal meeting, outdoor dating, and so on, to the system. And then the fashion style recognition module is activated to search the desired clothes within the personal garment database. Category learning with supervised neural networking is applied to cluster garments into different impression groups. The input stimuli of the neural network are three sensations, warmness, loudness, and softness, which are transformed from the physical garment essentials like major color tone, print type, and fabric material. The system aims to provide such an intelligent user-centric services system functions as a personal fashion advisor.",
"title": ""
},
{
"docid": "6fd1e9896fc1aaa79c769bd600d9eac3",
"text": "In future planetary exploration missions, rovers will be required to autonomously traverse challenging environments. Much of the previous work in robot motion planning cannot be successfully applied to the rough-terrain planning problem. A model-based planning method is presented in this paper that is computationally efficient and takes into account uncertainty in the robot model, terrain model, range sensor data, and rover pathfollowing errors. It is based on rapid path planning through the visible terrain map with a simple graph-search algorithm, followed by a physics-based evaluation of the path with a rover model. Simulation results are presented which demonstrate the method’s effectiveness.",
"title": ""
},
{
"docid": "29749091f6ccdc0c2697c9faf3682c90",
"text": "In traditional video conferencing systems, it is impossible for users to have eye contact when looking at the conversation partner’s face displayed on the screen, due to the disparity between the locations of the camera and the screen. In this work, we implemented a gaze correction system that can automatically maintain eye contact by replacing the eyes of the user with the direct looking eyes (looking directly into the camera) captured in the initialization stage. Our real-time system has good robustness against different lighting conditions and head poses, and it provides visually convincing and natural results while relying only on a single webcam that can be positioned almost anywhere around the",
"title": ""
},
{
"docid": "4859363a5f64977336d107794251a203",
"text": "The paper treats a modular program in which transfers of control between modules follow a semi-Markov process. Each module is failure-prone, and the different failure processes are assumed to be Poisson. The transfers of control between modules (interfaces) are themselves subject to failure. The overall failure process of the program is described, and an asymptotic Poisson process approximation is given for the case when the individual modules and interfaces are very reliable. A simple formula gives the failure rate of the overall program (and hence mean time between failures) under this limiting condition. The remainder of the paper treats the consequences of failures. Each failure results in a cost, represented by a random variable with a distribution typical of the type of failure. The quantity of interest is the total cost of running the program for a time t, and a simple approximating distribution is given for large t. The parameters of this limiting distribution are functions only of the means and variances of the underlying distributions, and are thus readily estimable. A calculation of program availability is given as an example of the cost process. There follows a brief discussion of methods of estimating the parameters of the model, with suggestions of areas in which it might be used.",
"title": ""
},
{
"docid": "7cecfd37e44b26a67bee8e9c7dd74246",
"text": "Forecasting hourly spot prices for real-time electricity usage is a challenging task. This paper investigates a series of forecasting methods to 90 and 180 days of load data collection acquired from the Iberian Electricity Market (MIBEL). This dataset was used to train and test multiple forecast models. The Mean Absolute Percentage Error (MAPE) for the proposed Hybrid combination of Auto Regressive Integrated Moving Average (ARIMA) and Generalized Linear Model (GLM) was compared against ARIMA, GLM, Random forest (RF) and Support Vector Machines (SVM) methods. The results indicate significant improvement in MAPE and correlation co-efficient values for the proposed hybrid ARIMA-GLM method.",
"title": ""
},
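The abstract above compares forecasting models by MAPE and proposes a hybrid ARIMA-GLM method. The sketch below shows the MAPE metric and the usual hybrid pattern of correcting a first-stage forecast with a second model fitted to its residuals; a plain linear fit stands in for both stages here, and all numbers are invented.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (assumes no zero actuals)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

# Toy hourly prices and a first-stage forecast (stand-ins for ARIMA output).
actual = np.array([52.0, 55.0, 60.0, 58.0, 63.0, 61.0])
stage1 = np.array([54.0, 56.0, 60.0, 57.0, 61.0, 58.0])

# Hybrid pattern: fit a second model on the first stage's residuals using an
# exogenous feature (here, hour index as a crude stand-in for a GLM regressor).
hour = np.arange(actual.size, dtype=float)
residual = actual - stage1
coef = np.polyfit(hour, residual, deg=1)          # linear stand-in for the GLM
hybrid = stage1 + np.polyval(coef, hour)

print(f"stage-1 MAPE: {mape(actual, stage1):.2f}%")
print(f"hybrid MAPE:  {mape(actual, hybrid):.2f}%")
```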
{
"docid": "3984af6a6b9dbae761490e8595d22d60",
"text": "In 2013, the IEEE Future Directions Committee (FDC) formed an SDN work group to explore the amount of interest in forming an IEEE Software-Defined Network (SDN) Community. To this end, a Workshop on “SDN for Future Networks and Services” (SDN4FNS’13) was organized in Trento, Italy (Nov. 11-13 2013). Following the results of the workshop, in this paper, we have further analyzed scenarios, prior-art, state of standardization, and further discussed the main technical challenges and socio-economic aspects of SDN and virtualization in future networks and services. A number of research and development directions have been identified in this white paper, along with a comprehensive analysis of the technical feasibility and business availability of those fundamental technologies. A radical industry transition towards the “economy of information through softwarization” is expected in the near future. Keywords—Software-Defined Networks, SDN, Network Functions Virtualization, NFV, Virtualization, Edge, Programmability, Cloud Computing.",
"title": ""
},
{
"docid": "548fb90bf9d665e57ced0547db1477b7",
"text": "In the application of face recognition, eyeglasses could significantly degrade the recognition accuracy. A feasible method is to collect large-scale face images with eyeglasses for training deep learning methods. However, it is difficult to collect the images with and without glasses of the same identity, so that it is difficult to optimize the intra-variations caused by eyeglasses. In this paper, we propose to address this problem in a virtual synthesis manner. The high-fidelity face images with eyeglasses are synthesized based on 3D face model and 3D eyeglasses. Models based on deep learning methods are then trained on the synthesized eyeglass face dataset, achieving better performance than previous ones. Experiments on the real face database validate the effectiveness of our synthesized data for improving eyeglass face recognition performance.",
"title": ""
},
{
"docid": "c091e5b24dc252949b3df837969e263a",
"text": "The emergence of powerful portable computers, along with advances in wireless communication technologies, has made mobile computing a reality. Among the applications that are finding their way to the market of mobile computingthose that involve data managementhold a prominent position. In the past few years, there has been a tremendous surge of research in the area of data management in mobile computing. This research has produced interesting results in areas such as data dissemination over limited bandwith channels, location-dependent querying of data, and advanced interfaces for mobile computers. This paper is an effort to survey these techniques and to classify this research in a few broad areas.",
"title": ""
},
{
"docid": "61bb811aa336e77d2549c51939f9668d",
"text": "Policy languages (such as privacy and rights) have had little impact on the wider community. Now that Social Networks have taken off, the need to revisit Policy languages and realign them towards Social Networks requirements has become more apparent. One such language is explored as to its applicability to the Social Networks masses. We also argue that policy languages alone are not sufficient and thus they should be paired with reasoning mechanisms to provide precise and unambiguous execution models of the policies. To this end we propose a computationally oriented model to represent, reason with and execute policies for Social Networks.",
"title": ""
},
{
"docid": "0d2b905bc0d7f117d192a8b360cc13f0",
"text": "We investigate a previously unknown phase of phosphorus that shares its layered structure and high stability with the black phosphorus allotrope. We find the in-plane hexagonal structure and bulk layer stacking of this structure, which we call \"blue phosphorus,\" to be related to graphite. Unlike graphite and black phosphorus, blue phosphorus displays a wide fundamental band gap. Still, it should exfoliate easily to form quasi-two-dimensional structures suitable for electronic applications. We study a likely transformation pathway from black to blue phosphorus and discuss possible ways to synthesize the new structure.",
"title": ""
},
{
"docid": "263c04402cfe80649b1d3f4a8578e99b",
"text": "This paper presents M3Express (Modular-Mobile-Multirobot), a new design for a low-cost modular robot. The robot is self-mobile, with three independently driven wheels that also serve as connectors. The new connectors can be automatically operated, and are based on stationary magnets coupled to mechanically actuated ferromagnetic yoke pieces. Extensive use is made of plastic castings, laser cut plastic sheets, and low-cost motors and electronic components. Modules interface with a host PC via Bluetooth® radio. An off-board camera, along with a set of modules and a control PC form a convenient, low-cost system for rapidly developing and testing control algorithms for modular reconfigurable robots. Experimental results demonstrate mechanical docking, connector strength, and accuracy of dead reckoning locomotion.",
"title": ""
},
{
"docid": "5e0cff7f2b8e5aa8d112eacf2f149d60",
"text": "THEORIES IN AI FALL INT O TWO broad categories: mechanismtheories and contenttheories. Ontologies are content the ories about the sor ts of objects, properties of objects,and relations between objects tha t re possible in a specif ed domain of kno wledge. They provide potential ter ms for descr ibing our knowledge about the domain. In this article, we survey the recent de velopment of the f ield of ontologies in AI. We point to the some what different roles ontologies play in information systems, naturallanguage under standing, and knowledgebased systems. Most r esear ch on ontologies focuses on what one might characterize as domain factual knowledge, because kno wlede of that type is par ticularly useful in natural-language under standing. There is another class of ontologies that are important in KBS—one that helps in shar ing knoweldge about reasoning str ategies or pr oblemsolving methods. In a f ollow-up article, we will f ocus on method ontolo gies.",
"title": ""
},
{
"docid": "0cf97758f5f7dab46e969af14bb36db9",
"text": "The design complexity of modern high performance processors calls for innovative design methodologies for achieving time-to-market goals. New design techniques are also needed to curtail power increases that inherently arise from ever increasing performance targets. This paper describes new design approaches employed by the POWER8 processor design team to address complexity and power consumption challenges. Improvements in productivity are attained by leveraging a new and more synthesis-centric design methodology. New optimization strategies for synthesized macros allow power reduction without sacrificing performance. These methodology innovations contributed to the industry leading performance of the POWER8 processor. Overall, POWER8 delivers a 2.5x increase in per-socket performance over its predecessor, POWER7+, while maintaining the same power dissipation.",
"title": ""
},
{
"docid": "275aef345bf090486831faf7b243ac99",
"text": "Honey bee colony feeding trials were conducted to determine whether differential effects of carbohydrate feeding (sucrose syrup (SS) vs. high fructose corn syrup, or HFCS) could be measured between colonies fed exclusively on these syrups. In one experiment, there was a significant difference in mean wax production between the treatment groups and a significant interaction between time and treatment for the colonies confined in a flight arena. On average, the colonies supplied with SS built 7916.7 cm(2) ± 1015.25 cm(2) honeycomb, while the colonies supplied with HFCS built 4571.63 cm(2) ± 786.45 cm(2). The mean mass of bees supplied with HFCS was 4.65 kg (± 0.97 kg), while those supplied with sucrose had a mean of 8.27 kg (± 1.26). There was no significant difference between treatment groups in terms of brood rearing. Differences in brood production were complicated due to possible nutritional deficiencies experienced by both treatment groups. In the second experiment, colonies supplemented with SS through the winter months at a remote field site exhibited increased spring brood production when compared to colonies fed with HFCS. The differences in adult bee populations were significant, having an overall average of 10.0 ± 1.3 frames of bees fed the sucrose syrup between November 2008 and April 2009, compared to 7.5 ± 1.6 frames of bees fed exclusively on HFCS. For commercial queen beekeepers, feeding the right supplementary carbohydrates could be especially important, given the findings of this study.",
"title": ""
},
{
"docid": "e7afe834b7ca7be145cb9db57febab39",
"text": "Current approaches to cross-lingual sentiment analysis try to leverage the wealth of labeled English data using bilingual lexicons, bilingual vector space embeddings, or machine translation systems. Here we show that it is possible to use a single linear transformation, with as few as 2000 word pairs, to capture fine-grained sentiment relationships between words in a cross-lingual setting. We apply these cross-lingual sentiment models to a diverse set of tasks to demonstrate their functionality in a non-English context. By effectively leveraging English sentiment knowledge without the need for accurate translation, we can analyze and extract features from other languages with scarce data at a very low cost, thus making sentiment and related analyses for many languages inexpensive.",
"title": ""
},
{
"docid": "6b52bb06c140e5f55f7094cbbf906769",
"text": "A method for tracking and predicting cloud movement using ground based sky imagery is presented. Sequences of partial sky images, with each image taken one second apart with a size of 640 by 480 pixels, were processed to determine the time taken for clouds to reach a user defined region in the image or the Sun. The clouds were first identified by segmenting the image based on the difference between the blue and red colour channels, producing a binary detection image. Good features to track were then located in the image and tracked utilising the Lucas-Kanade method for optical flow. From the trajectory of the tracked features and the binary detection image, cloud signals were generated. The trajectory of the individual features were used to determine the risky cloud signals (signals that pass over the user defined region or Sun). Time to collision estimates were produced based on merging these risky cloud signals. Estimates of times up to 40 seconds were achieved with error in the estimate increasing when the estimated time is larger. The method presented has the potential for tracking clouds travelling in different directions and at different velocities.",
"title": ""
},
{
"docid": "c73d2c65892d5f257b3d4ab1710cd63f",
"text": "Neural-network training can be slow and energy intensive, owing to the need to transfer the weight data for the network between conventional digital memory chips and processor chips. Analogue non-volatile memory can accelerate the neural-network training algorithm known as backpropagation by performing parallelized multiply–accumulate operations in the analogue domain at the location of the weight data. However, the classification accuracies of such in situ training using non-volatile-memory hardware have generally been less than those of software-based training, owing to insufficient dynamic range and excessive weight-update asymmetry. Here we demonstrate mixed hardware–software neural-network implementations that involve up to 204,900 synapses and that combine long-term storage in phase-change memory, near-linear updates of volatile capacitors and weight-data transfer with ‘polarity inversion’ to cancel out inherent device-to-device variations. We achieve generalization accuracies (on previously unseen data) equivalent to those of software-based training on various commonly used machine-learning test datasets (MNIST, MNIST-backrand, CIFAR-10 and CIFAR-100). The computational energy efficiency of 28,065 billion operations per second per watt and throughput per area of 3.6 trillion operations per second per square millimetre that we calculate for our implementation exceed those of today’s graphical processing units by two orders of magnitude. This work provides a path towards hardware accelerators that are both fast and energy efficient, particularly on fully connected neural-network layers. Analogue-memory-based neural-network training using non-volatile-memory hardware augmented by circuit simulations achieves the same accuracy as software-based training but with much improved energy efficiency and speed.",
"title": ""
},
{
"docid": "7f81e1d6a6955cec178c1c811810322b",
"text": "The MATLAB toolbox YALMIP is introduced. It is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory. In this paper, free MATLAB toolbox YALMIP, developed initially to model SDPs and solve these by interfacing eternal solvers. The toolbox makes development of optimization problems in general, and control oriented SDP problems in particular, extremely simple. In fact, learning 3 YALMIP commands is enough for most users to model and solve the optimization problems",
"title": ""
}
] | scidocsrr |
c25968e403b102d2bcef809b6d05c7ef | Multimodal Network Embedding via Attention based Multi-view Variational Autoencoder | [
{
"docid": "17dd72e274d25a02e9a8183237092f0c",
"text": "Network representation is the basis of many applications and of extensive interest in various fields, such as information retrieval, social network analysis, and recommendation systems. Most previous methods for network representation only consider the incomplete aspects of a problem, including link structure, node information, and partial integration. The present study introduces a deep network representation model that seamlessly integrates the text information and structure of a network. The model captures highly non-linear relationships between nodes and complex features of a network by exploiting the variational autoencoder (VAE), which is a deep unsupervised generation algorithm. The representation learned with a paragraph vector model is merged with that learned with the VAE to obtain the network representation, which preserves both structure and text information. Comprehensive experiments is conducted on benchmark datasets and find that the introduced model performs better than state-of-the-art techniques.",
"title": ""
},
{
"docid": "a1ef2bce061c11a2d29536d7685a56db",
"text": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"title": ""
},
{
"docid": "df163d94fbf0414af1dde4a9e7fe7624",
"text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.",
"title": ""
},
{
"docid": "4337f8c11a71533d38897095e5e6847a",
"text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. ar X iv :1 60 6. 00 06 1v 3 [ cs .C V ] 2 6 O ct 2 01 6 Ques%on:\t\r What\t\r color\t\r on\t\r the stop\t\r light\t\r is\t\r lit\t\r up\t\r \t\r ? ...\t\r ... color\t\r stop\t\r light\t\r lit co-‐a7en%on color\t\r ...\t\r stop\t\r \t\r light\t\r \t\r ... What color\t\r ... the stop light light\t\r \t\r ... 
What color What\t\r color\t\r on\t\r the\t\r stop\t\r light\t\r is\t\r lit\t\r up ...\t\r ... the\t\r stop\t\r light ...\t\r ... stop Image Answer:\t\r green Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.",
"title": ""
}
] | [
{
"docid": "7dde24346f2df846b9dbbe45cd9a99d6",
"text": "The Pemberton Happiness Index (PHI) is a recently developed integrative measure of well-being that includes components of hedonic, eudaimonic, social, and experienced well-being. The PHI has been validated in several languages, but not in Portuguese. Our aim was to cross-culturally adapt the Universal Portuguese version of the PHI and to assess its psychometric properties in a sample of the Brazilian population using online surveys.An expert committee evaluated 2 versions of the PHI previously translated into Portuguese by the original authors using a standardized form for assessment of semantic/idiomatic, cultural, and conceptual equivalence. A pretesting was conducted employing cognitive debriefing methods. In sequence, the expert committee evaluated all the documents and reached a final Universal Portuguese PHI version. For the evaluation of the psychometric properties, the data were collected using online surveys in a cross-sectional study. The study population included healthcare professionals and users of the social network site Facebook from several Brazilian geographic areas. In addition to the PHI, participants completed the Satisfaction with Life Scale (SWLS), Diener and Emmons' Positive and Negative Experience Scale (PNES), Psychological Well-being Scale (PWS), and the Subjective Happiness Scale (SHS). Internal consistency, convergent validity, known-group validity, and test-retest reliability were evaluated. Satisfaction with the previous day was correlated with the 10 items assessing experienced well-being using the Cramer V test. Additionally, a cut-off value of PHI to identify a \"happy individual\" was defined using receiver-operating characteristic (ROC) curve methodology.Data from 1035 Brazilian participants were analyzed (health professionals = 180; Facebook users = 855). Regarding reliability results, the internal consistency (Cronbach alpha = 0.890 and 0.914) and test-retest (intraclass correlation coefficient = 0.814) were both considered adequate. Most of the validity hypotheses formulated a priori (convergent and know-group) was further confirmed. The cut-off value of higher than 7 in remembered PHI was identified (AUC = 0.780, sensitivity = 69.2%, specificity = 78.2%) as the best one to identify a happy individual.We concluded that the Universal Portuguese version of the PHI is valid and reliable for use in the Brazilian population using online surveys.",
"title": ""
},
{
"docid": "eb6572344dbaf8e209388f888fba1c10",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "5897b87a82d5bc11757e33a8a46b1f21",
"text": "BACKGROUND\nProspective data from over 10 years of follow-up were used to examine neighbourhood deprivation, social fragmentation and trajectories of health.\n\n\nMETHODS\nFrom the third phase (1991-93) of the Whitehall II study of British civil servants, SF-36 health functioning was measured on up to five occasions for 7834 participants living in 2046 census wards. Multilevel linear regression models assessed the Townsend deprivation index and social fragmentation index as predictors of initial health and health trajectories.\n\n\nRESULTS\nIndependent of individual socioeconomic factors, deprivation was inversely associated with initial SF-36 physical component summary (PCS) score. Social fragmentation was not associated with PCS scores. Deprivation and social fragmentation were inversely associated with initial mental component summary (MCS) score. Neighbourhood characteristics were not associated with trajectories of PCS score or MCS score for the whole set. However, restricted analysis on longer term residents revealed that residents in deprived or socially fragmented neighbourhoods had lowest initial and smallest improvements in MCS score.\n\n\nCONCLUSIONS\nThis longitudinal study provides evidence that residence in a deprived or fragmented neighbourhood is associated with poorer mental health and that longer exposure to such neighbourhood environments has incremental effects. Associations between physical health functioning and neighbourhood characteristics were less clear. Mindful of the importance of individual socioeconomic factors, the findings warrant more detailed examination of materially and socially deprived neighbourhoods and their consequences for health.",
"title": ""
},
{
"docid": "a05d87b064ab71549d373599700cfcbf",
"text": "We provide sets of parameters for multiplicative linear congruential generators (MLCGs) of different sizes and good performance with respect to the spectral test. For ` = 8, 9, . . . , 64, 127, 128, we take as a modulus m the largest prime smaller than 2`, and provide a list of multipliers a such that the MLCG with modulus m and multiplier a has a good lattice structure in dimensions 2 to 32. We provide similar lists for power-of-two moduli m = 2`, for multiplicative and non-multiplicative LCGs.",
"title": ""
},
{
"docid": "820ae89c1ce626e52ed1ee6d61ee0aee",
"text": "Induction motor especially three phase induction motor plays vital role in the industry due to their advantages over other electrical motors. Therefore, there is a strong demand for their reliable and safe operation. If any fault and failures occur in the motor it can lead to excessive downtimes and generate great losses in terms of revenue and maintenance. Therefore, an early fault detection is needed for the protection of the motor. In the current scenario, the health monitoring of the induction motor are increasing due to its potential to reduce operating costs, enhance the reliability of operation and improve service to the customers. The health monitoring of induction motor is an emerging technology for online detection of incipient faults. The on-line health monitoring involves taking measurements on a machine while it is in operating conditions in order to detect faults with the aim of reducing both unexpected failure and maintenance costs. In the present paper, a comprehensive survey of induction machine faults, diagnostic methods and future aspects in the health monitoring of induction motor has been discussed.",
"title": ""
},
{
"docid": "9b646ef8c6054f9a4d85cf25e83d415c",
"text": "In this paper, a mobile robot with a tetrahedral shape for its basic structure is presented as a thrown robot for search and rescue robot application. The Tetrahedral Mobile Robot has its body in the center of the whole structure. The driving parts that produce the propelling force are located at each corner. As a driving wheel mechanism, we have developed the \"Omni-Ball\" with one active and two passive rotational axes, which are explained in detail. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Tetrahedral Mobile Robot was confirmed",
"title": ""
},
{
"docid": "a3391be7ac84ceb8c024c1d32eb83c6c",
"text": "This paper presents a new approach to find energy-efficient motion plans for mobile robots. Motion planning has two goals: finding the routes and determining the velocities. We model the relationship of motors' speed and their power consumption with polynomials. The velocity of the robot is related to its wheels' velocities by performing a linear transformation. We compare the energy consumption of different routes at different velocities and consider the energy consumed for acceleration and turns. We use experiment-validated simulation to demonstrate up to 51% energy savings for searching an open area.",
"title": ""
},
{
"docid": "66243ce54120d2c61525ad71d501a724",
"text": "Ameloblastic fibrosarcoma is a mixed odontogenic tumor that can originate de novo or from a transformed ameloblastic fibroma. This report describes the case of a 34-year-old woman with a recurrent, rapidly growing, debilitating lesion. This lesion appeared as a large painful mandibular swelling that filled the oral cavity and extended to the infratemporal fossa. The lesion had been previously misdiagnosed as ameloblastoma. Twenty months after final surgery and postoperative chemotherapy, lung metastases were diagnosed after she reported respiratory signs and symptoms.",
"title": ""
},
{
"docid": "061c8e8e9d6a360c36158193afee5276",
"text": "Distribution transformers are one of the most important equipment in power network. Because of, the large number of transformers distributed over a wide area in power electric systems, the data acquisition and condition monitoring is a important issue. This paper presents design and implementation of a mobile embedded system and a novel software to monitor and diagnose condition of transformers, by record key operation indictors of a distribution transformer like load currents, transformer oil, ambient temperatures and voltage of three phases. The proposed on-line monitoring system integrates a Global Service Mobile (GSM) Modem, with stand alone single chip microcontroller and sensor packages. Data of operation condition of transformer receives in form of SMS (Short Message Service) and will be save in computer server. Using the suggested online monitoring system will help utility operators to keep transformers in service for longer of time.",
"title": ""
},
{
"docid": "a4b1a04647b8d4f8a9cc837304c7cbae",
"text": "The human brain automatically attempts to interpret the physical visual inputs from our eyes in terms of plausible motion of the viewpoint and/or of the observed object or scene [Ellis 1938; Graham 1965; Giese and Poggio 2003]. In the physical world, the rules that define plausible motion are set by temporal coherence, parallax, and perspective projection. Our brain, however, refuses to feel constrained by the unrelenting laws of physics in what it deems plausible motion. Image metamorphosis experiments, in which unnatural, impossible in-between images are interpolated, demonstrate that under certain circumstances, we willingly accept chimeric images as plausible transition stages between images of actual, known objects [Beier and Neely 1992; Seitz and Dyer 1996]. Or think of cartoon animations which for the longest time were hand-drawn pieces of art that didn't need to succumb to physical correctness. The goal of our work is to exploit this freedom of perception for space-time interpolation, i.e., to generate transitions between still images that our brain accepts as plausible motion in a moving 3D world.",
"title": ""
},
{
"docid": "36a9f1c016d0e2540460e28c4c846e9a",
"text": "Nowadays PDF documents have become a dominating knowledge repository for both the academia and industry largely because they are very convenient to print and exchange. However, the methods of automated structure information extraction are yet to be fully explored and the lack of effective methods hinders the information reuse of the PDF documents. To enhance the usability for PDF-formatted electronic books, we propose a novel computational framework to analyze the underlying physical structure and logical structure. The analysis is conducted at both page level and document level, including global typographies, reading order, logical elements, chapter/section hierarchy and metadata. Moreover, two characteristics of PDF-based books, i.e., style consistency in the whole book document and natural rendering order of PDF files, are fully exploited in this paper to improve the conventional image-based structure extraction methods. This paper employs the bipartite graph as a common structure for modeling various tasks, including reading order recovery, figure and caption association, and metadata extraction. Based on the graph representation, the optimal matching (OM) method is utilized to find the global optima in those tasks. Extensive benchmarking using real-world data validates the high efficiency and discrimination ability of the proposed method.",
"title": ""
},
{
"docid": "7b55b39902d40295ea14088dddaf77e0",
"text": "Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.",
"title": ""
},
{
"docid": "e22a3cd1887d905fffad0f9d14132ed6",
"text": "Relativistic electron beam generation studies have been carried out in LIA-400 system through explosive electron emission for various cathode materials. This paper presents the emission properties of different cathode materials at peak diode voltages varying from 10 to 220 kV and at peak current levels from 0.5 to 2.2 kA in a single pulse duration of 160-180 ns. The cathode materials used are graphite, stainless steel, and red polymer velvet. The perveance data calculated from experimental waveforms are compared with 1-D Child Langmuir formula to obtain the cathode plasma expansion velocity for various cathode materials. Various diode parameters are subject to shot to shot variation analysis. Velvet cathode proves to be the best electron emitter because of its lower plasma expansion velocity and least shot to shot variability.",
"title": ""
},
{
"docid": "9ebf703bcf5004a74189638514b20313",
"text": "In many real-world tasks, there are abundant unlabeled examples but the number of labeled training examples is limited, because labeling the examples requires human efforts and expertise. So, semi-supervised learning which tries to exploit unlabeled examples to improve learning performance has become a hot topic. Disagreement-based semi-supervised learning is an interesting paradigm, where multiple learners are trained for the task and the disagreements among the learners are exploited during the semi-supervised learning process. This survey article provides an introduction to research advances in this paradigm.",
"title": ""
},
{
"docid": "a3148ce66c9cd871df7f3ec008d7666c",
"text": "This priming study investigates the role of conceptual structure during language production, probing whether English speakers are sensitive to the structure of the event encoded by a prime sentence. In two experiments, participants read prime sentences aloud before describing motion events. Primes differed in 1) syntactic frame, 2) degree of lexical and conceptual overlap with target events, and 3) distribution of event components within frames. Results demonstrate that conceptual overlap between primes and targets led to priming of (a) the information that speakers chose to include in their descriptions of target events, (b) the way that information was mapped to linguistic elements, and (c) the syntactic structures that were built to communicate that information. When there was no conceptual overlap between primes and targets, priming was not successful. We conclude that conceptual structure is a level of representation activated during priming, and that it has implications for both Message Planning and Linguistic Formulation.",
"title": ""
},
{
"docid": "5e86f40cfc3b2e9664ea1f7cc5bf730c",
"text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.",
"title": ""
},
{
"docid": "7d43cf2e0fcc795f6af4bdbcfb56d13e",
"text": "Vehicular Ad hoc Networks is a special kind of mobile ad hoc network to provide communication among nearby vehicles and between vehicles and nearby fixed equipments. VANETs are mainly used for improving efficiency and safety of (future) transportation. There are chances of a number of possible attacks in VANET due to open nature of wireless medium. In this paper, we have classified these security attacks and logically organized/represented in a more lucid manner based on the level of effect of a particular security attack on intelligent vehicular traffic. Also, an effective solution is proposed for DOS based attacks which use the redundancy elimination mechanism consists of rate decreasing algorithm and state transition mechanism as its components. This solution basically adds a level of security to its already existing solutions of using various alternative options like channel-switching, frequency-hopping, communication technology switching and multiple-radio transceivers to counter affect the DOS attacks. Proposed scheme enhances the security in VANETs without using any cryptographic scheme.",
"title": ""
},
{
"docid": "9d04b10ebe8a65777aacf20fe37b55cb",
"text": "Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.",
"title": ""
},
{
"docid": "06c1398ba85aa22bf796f3033c1b2d90",
"text": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present SEE, a step towards semi-supervised neural networks for scene text detection and recognition, that can be optimized end-to-end. Most existing works consist of multiple deep neural networks and several pre-processing steps. In contrast to this, we propose to use a single deep neural network, that learns to detect and recognize text from natural images, in a semi-supervised way. SEE is a network that integrates and jointly learns a spatial transformer network, which can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We introduce the idea behind our novel approach and show its feasibility, by performing a range of experiments on standard benchmark datasets, where we achieve competitive results.",
"title": ""
}
] | scidocsrr |
35c81e99bc7bb0be3ec777516308dfb9 | Supply chain ontology: Review, analysis and synthesis | [
{
"docid": "910c42c4737d38db592f7249c2e0d6d2",
"text": "This document presents the Enterprise Ontology a collection of terms and de nitions relevant to business enterprises It was developed as part of the Enterprise Project a collaborative e ort to provide a framework for enterprise modelling The Enterprise Ontology will serve as a basis for this framework which includes methods and a computer toolset for enterprise modelling We give an overview of the Enterprise Project elaborate on the intended use of the Ontology and discuss the process we went through to build it The scope of the Enterprise Ontology is limited to those core concepts required for the project however it is expected that it will appeal to a wider audience It should not be considered static during the course of the project the Enterprise Ontology will be further re ned and extended",
"title": ""
}
] | [
{
"docid": "928ed1aed332846176ad52ce7cc0754c",
"text": "What is the price of anarchy when unsplittable demands are ro uted selfishly in general networks with load-dependent edge dela ys? Motivated by this question we generalize the model of [14] to the case of weighted congestion games. We show that varying demands of users crucially affect the n ature of these games, which are no longer isomorphic to exact potential gam es, even for very simple instances. Indeed we construct examples where even a single-commodity (weighted) network congestion game may have no pure Nash equ ilibrium. On the other hand, we study a special family of networks (whic h we call the l-layered networks ) and we prove that any weighted congestion game on such a network with resource delays equal to the congestions, pos sesses a pure Nash Equilibrium. We also show how to construct one in pseudo-pol yn mial time. Finally, we give a surprising answer to the question above for s uch games: The price of anarchy of any weighted l-layered network congestion game with m edges and edge delays equal to the loads, is Θ (",
"title": ""
},
{
"docid": "c237facfc6639dfff82659f927a25267",
"text": "The scientific approach to understand the nature of consciousness revolves around the study of human brain. Neurobiological studies that compare the nervous system of different species have accorded highest place to the humans on account of various factors that include a highly developed cortical area comprising of approximately 100 billion neurons, that are intrinsically connected to form a highly complex network. Quantum theories of consciousness are based on mathematical abstraction and Penrose-Hameroff Orch-OR Theory is one of the most promising ones. Inspired by Penrose-Hameroff Orch-OR Theory, Behrman et. al. (Behrman, 2006) have simulated a quantum Hopfield neural network with the structure of a microtubule. They have used an extremely simplified model of the tubulin dimers with each dimer represented simply as a qubit, a single quantum two-state system. The extension of this model to n-dimensional quantum states, or n-qudits presented in this work holds considerable promise for even higher mathematical abstraction in modelling consciousness systems.",
"title": ""
},
{
"docid": "755f7e93dbe43a0ed12eb90b1d320cb2",
"text": "This paper presents a deep architecture for learning a similarity metric on variablelength character sequences. The model combines a stack of character-level bidirectional LSTM’s with a Siamese architecture. It learns to project variablelength strings into a fixed-dimensional embedding space by using only information about the similarity between pairs of strings. This model is applied to the task of job title normalization based on a manually annotated taxonomy. A small data set is incrementally expanded and augmented with new sources of variance. The model learns a representation that is selective to differences in the input that reflect semantic differences (e.g., “Java developer” vs. “HR manager”) but also invariant to nonsemantic string differences (e.g., “Java developer” vs. “Java programmer”).",
"title": ""
},
{
"docid": "e72ed2b388577122402831d4cd75aa0f",
"text": "Development and testing of a compact 200-kV, 10-kJ/s industrial-grade power supply for capacitor charging applications is described. Pulse repetition rate (PRR) can be from single shot to 250 Hz, depending on the storage capacitance. Energy dosing (ED) topology enables high efficiency at switching frequency of up to 55 kHz using standard slow IGBTs. Circuit simulation examples are given. They clearly show zero-current switching at variable frequency during the charge set by the ED governing equations. Peak power drawn from the primary source is about only 60% higher than the average power, which lowers the stress on the input rectifier. Insulation design was assisted by electrostatic field analyses. Field plots of the main transformer insulation illustrate field distribution and stresses in it. Subsystem and system tests were performed including limited insulation life test. A precision, high-impedance, fast HV divider was developed for measuring voltages up to 250 kV with risetime down to 10 μs. The charger was successfully tested with stored energy of up to 550 J at discharge via a custom designed open-air spark gap at PRR up to 20 Hz (in bursts). Future work will include testing at customer sites.",
"title": ""
},
{
"docid": "b0103474ecd369a9f0ba637c34bacc56",
"text": "BACKGROUND\nThe Internet Addiction Test (IAT) by Kimberly Young is one of the most utilized diagnostic instruments for Internet addiction. Although many studies have documented psychometric properties of the IAT, consensus on the optimal overall structure of the instrument has yet to emerge since previous analyses yielded markedly different factor analytic results.\n\n\nOBJECTIVE\nThe objective of this study was to evaluate the psychometric properties of the Italian version of the IAT, specifically testing the factor structure stability across cultures.\n\n\nMETHODS\nIn order to determine the dimensional structure underlying the questionnaire, both exploratory and confirmatory factor analyses were performed. The reliability of the questionnaire was computed by the Cronbach alpha coefficient.\n\n\nRESULTS\nData analyses were conducted on a sample of 485 college students (32.3%, 157/485 males and 67.7%, 328/485 females) with a mean age of 24.05 years (SD 7.3, range 17-47). Results showed 176/485 (36.3%) participants with IAT score from 40 to 69, revealing excessive Internet use, and 11/485 (1.9%) participants with IAT score from 70 to 100, suggesting significant problems because of Internet use. The IAT Italian version showed good psychometric properties, in terms of internal consistency and factorial validity. Alpha values were satisfactory for both the one-factor solution (Cronbach alpha=.91), and the two-factor solution (Cronbach alpha=.88 and Cronbach alpha=.79). The one-factor solution comprised 20 items, explaining 36.18% of the variance. The two-factor solution, accounting for 42.15% of the variance, showed 11 items loading on Factor 1 (Emotional and Cognitive Preoccupation with the Internet) and 7 items on Factor 2 (Loss of Control and Interference with Daily Life). Goodness-of-fit indexes (NNFI: Non-Normed Fit Index; CFI: Comparative Fit Index; RMSEA: Root Mean Square Error of Approximation; SRMR: Standardized Root Mean Square Residual) from confirmatory factor analyses conducted on a random half subsample of participants (n=243) were satisfactory in both factorial solutions: two-factor model (χ²₁₃₂= 354.17, P<.001, χ²/df=2.68, NNFI=.99, CFI=.99, RMSEA=.02 [90% CI 0.000-0.038], and SRMR=.07), and one-factor model (χ²₁₆₉=483.79, P<.001, χ²/df=2.86, NNFI=.98, CFI=.99, RMSEA=.02 [90% CI 0.000-0.039], and SRMR=.07).\n\n\nCONCLUSIONS\nOur study was aimed at determining the most parsimonious and veridical representation of the structure of Internet addiction as measured by the IAT. Based on our findings, support was provided for both single and two-factor models, with slightly strong support for the bidimensionality of the instrument. Given the inconsistency of the factor analytic literature of the IAT, researchers should exercise caution when using the instrument, dividing the scale into factors or subscales. Additional research examining the cross-cultural stability of factor solutions is still needed.",
"title": ""
},
{
"docid": "ef6160d304908ea87287f2071dea5f6d",
"text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.",
"title": ""
},
{
"docid": "e8e3f77626742ef7aa40703e3113f148",
"text": "This paper presents a multi-agent based framework for target tracking. We exploit the agent-oriented software paradigm with its characteristics that provide intelligent autonomous behavior together with a real time computer vision system to achieve high performance real time target tracking. The framework consists of four layers; interface, strategic, management, and operation layers. Interface layer receives from the user the tracking parameters such as the number and type of trackers and targets and type of the tracking environment, and then delivers these parameters to the subsequent layers. Strategic (decision making) layer is provided with a knowledge base of target tracking methodologies that are previously implemented by researchers in diverse target tracking applications and are proven successful. And by inference in the knowledge base using the user input a tracking methodology is chosen. Management layer is responsible for pursuing and controlling the tracking methodology execution. Operation layer represents the phases in the tracking methodology and is responsible for communicating with the real-time computer vision system to execute the algorithms in the phases. The framework is presented with a case study to show its ability to tackle the target tracking problem and its flexibility to solve the problem with different tracking parameters. This paper describes the ability of the agent-based framework to deploy any real-time vision system that fits in solving the target tracking problem. It is a step towards a complete open standard, real-time, agent-based framework for target tracking.",
"title": ""
},
{
"docid": "871af4524fcbbae44ba9139bef3481d0",
"text": "AIM\n'Othering' is described as a social process whereby a dominant group or person uses negative attributes to define and subordinate others. Literature suggests othering creates exclusive relationships and puts patients at risk for suboptimal care. A concept analysis delineating the properties of othering was conducted to develop knowledge to support inclusionary practices in nursing.\n\n\nDESIGN\nRodgers' Evolutionary Method for concept analysis guided this study.\n\n\nMETHODS\nThe following databases were searched spanning the years 1999-2015: CINAHL, PUBMED, PsychINFO and Google. Search terms included \"othering\", \"nurse\", \"other\", \"exclusion\" and \"patient\".\n\n\nRESULTS\nTwenty-eight papers were analyzed whereby definitions, related concepts and othering attributes were identified. Findings support that othering in nursing is a sequential process with a trajectory aimed at marginalization and exclusion, which in turn has a negative impact on patient care and professional relationships. Implications are discussed in terms of deriving practical solutions to disrupt othering. We conclude with a conceptual foundation designed to support inclusionary strategies in nursing.",
"title": ""
},
{
"docid": "b15f185258caa9d355fae140a41ae03c",
"text": "The current approaches in terms of information security awareness and education are descriptive (i.e. they are not accomplishment-oriented nor do they recognize the factual/normative dualism); and current research has not explored the possibilities offered by motivation/behavioural theories. The first situation, level of descriptiveness, is deemed to be questionable because it may prove eventually that end-users fail to internalize target goals and do not follow security guidelines, for example ± which is inadequate. Moreover, the role of motivation in the area of information security is not considered seriously enough, even though its role has been widely recognised. To tackle such weaknesses, this paper constructs a conceptual foundation for information systems/organizational security awareness. The normative and prescriptive nature of end-user guidelines will be considered. In order to understand human behaviour, the behavioural science framework, consisting in intrinsic motivation, a theory of planned behaviour and a technology acceptance model, will be depicted and applied. Current approaches (such as the campaign) in the area of information security awareness and education will be analysed from the viewpoint of the theoretical framework, resulting in information on their strengths and weaknesses. Finally, a novel persuasion strategy aimed at increasing users' commitment to security guidelines is presented. spite of its significant role, seems to lack adequate foundations. To begin with, current approaches (e.g. McLean, 1992; NIST, 1995, 1998; Perry, 1985; Morwood, 1998), are descriptive in nature. Their inadequacy with respect to point of departure is partly recognized by McLean (1992), who points out that the approaches presented hitherto do not ensure learning. Learning can also be descriptive, however, which makes it an improper objective for security awareness. Learning and other concepts or approaches are not irrelevant in the case of security awareness, education or training, but these and other approaches need a reasoned contextual foundation as a point of departure in order to be relevant. For instance, if learning does not reflect the idea of prescriptiveness, the objective of the learning approach includes the fact that users may learn guidelines, but nevertheless fails to comply with them in the end. This state of affairs (level of descriptiveness[6]), is an inadequate objective for a security activity (the idea of prescriptiveness will be thoroughly considered in section 3). Also with regard to the content facet, the important role of motivation (and behavioural theories) with respect to the uses of security systems has been recognised (e.g. by NIST, 1998; Parker, 1998; Baskerville, 1989; Spruit, 1998; SSE-CMM, 1998a; 1998b; Straub, 1990; Straub et al., 1992; Thomson and von Solms, 1998; Warman, 1992) ± but only on an abstract level (as seen in Table I, the issue islevel (as seen in Table I, the issue is not considered from the viewpoint of any particular behavioural theory as yet). Motivation, however, is an issue where a deeper understanding may be of crucial relevance with respect to the effectiveness of approaches based on it. The role, possibilities and constraints of motivation and attitude in the effort to achieve positive results with respect to information security activities will be addressed at a conceptual level from the viewpoints of different theories. 
The scope of this paper is limited to the content aspects of awareness (Table I) and further end-users, thus resulting in a research contribution that is: a conceptual foundation and a framework for IS security awareness. This is achieved by addressing the following research questions: . What are the premises, nature and point of departure of awareness? . What is the role of attitude, and particularly motivation: the possibilities and requirements for achieving motivation/user acceptance and commitment with respect to information security tasks? . What approaches can be used as a framework to reach the stage of internalization and end-user",
"title": ""
},
{
"docid": "5c8ab947856945b32d4d3e0edc89a9e0",
"text": "While MOOCs offer educational data on a new scale, many educators find great potential of the big data including detailed activity records of every learner. A learner's behavior such as if a learner will drop out from the course can be predicted. How to provide an effective, economical, and scalable method to detect cheating on tests such as surrogate exam-taker is a challenging problem. In this paper, we present a grade predicting method that uses student activity features to predict whether a learner may get a certification if he/she takes a test. The method consists of two-step classifications: motivation classification (MC) and grade classification (GC). The MC divides all learners into three groups including certification earning, video watching, and course sampling. The GC then predicts a certification earning learner may or may not obtain a certification. Our experiment shows that the proposed method can fit the classification model at a fine scale and it is possible to find a surrogate exam-taker.",
"title": ""
},
{
"docid": "29aa7084f7d6155d4626b682a5fc88ef",
"text": "There is an underlying cascading behavior over road networks. Traffic cascading patterns are of great importance to easing traffic and improving urban planning. However, what we can observe is individual traffic conditions on different road segments at discrete time intervals, rather than explicit interactions or propagation (e.g., A→B) between road segments. Additionally, the traffic from multiple sources and the geospatial correlations between road segments make it more challenging to infer the patterns. In this paper, we first model the three-fold influences existing in traffic propagation and then propose a data-driven approach, which finds the cascading patterns through maximizing the likelihood of observed traffic data. As this is equivalent to a submodular function maximization problem, we solve it by using an approximate algorithm with provable near-optimal performance guarantees based on its submodularity. Extensive experiments on real-world datasets demonstrate the advantages of our approach in both effectiveness and efficiency.",
"title": ""
},
{
"docid": "46e37ce77756f58ab35c0930d45e367f",
"text": "In this letter, we propose an enhanced stereophonic acoustic echo suppression (SAES) algorithm incorporating spectral and temporal correlations in the short-time Fourier transform (STFT) domain. Unlike traditional stereophonic acoustic echo cancellation, SAES estimates the echo spectra in the STFT domain and uses a Wiener filter to suppress echo without performing any explicit double-talk detection. The proposed approach takes account of interdependencies among components in adjacent time frames and frequency bins, which enables more accurate estimation of the echo signals. Experimental results show that the proposed method yields improved performance compared to that of conventional SAES.",
"title": ""
},
{
"docid": "e8681043d4551f6da335a649a6d7b13c",
"text": "In recent years, wireless communication particularly in the front-end transceiver architecture has increased its functionality. This trend is continuously expanding and of particular is reconfigurable radio frequency (RF) front-end. A multi-band single chip architecture which consists of an array of switches and filters could simplify the complexity of the current superheterodyne architecture. In this paper, the design of a Single Pole Double Throw (SPDT) switch using 0.35μm Complementary Metal Oxide Semiconductor (CMOS) technology is discussed. The SPDT RF CMOS switch was then simulated in the range of frequency of 0-2GHz. At 2 GHz, the switch exhibits insertion loss of 1.153dB, isolation of 21.24dB, P1dB of 21.73dBm and IIP3 of 26.02dBm. Critical RF T/R switch characteristic such as insertion loss, isolation, power 1dB compression point and third order intercept point, IIP3 is discussed and compared with other type of switch designs. Pre and post layout simulation of the SPDT RF CMOS switch are also discussed to analyze the effect of parasitic capacitance between components' interconnection.",
"title": ""
},
{
"docid": "dbf8e0125944b526f7b14c98fc46afa2",
"text": "People counting is one of the key techniques in video surveillance. This task usually encounters many challenges in crowded environment, such as heavy occlusion, low resolution, imaging viewpoint variability, etc. Motivated by the success of R-CNN [1] on object detection, in this paper we propose a head detection based people counting method combining the Adaboost algorithm and the CNN. Unlike the R-CNN which uses the general object proposals as the inputs of CNN, our method uses the cascade Adaboost algorithm to obtain the head region proposals for CNN, which can greatly reduce the following classification time. Resorting to the strong ability of feature learning of the CNN, it is used as a feature extractor in this paper, instead of as a classifier as its commonlyused strategy. The final classification is done by a linear SVM classifier trained on the features extracted using the CNN feature extractor. Finally, the prior knowledge can be applied to post-process the detection results to increase the precision of head detection and the people count is obtained by counting the head detection results. A real classroom surveillance dataset is used to evaluate the proposed method and experimental results show that this method has good performance and outperforms the baseline methods, including deformable part model and cascade Adaboost methods. ∗Corresponding author Email address: [email protected] (Chenqiang Gao∗, Pei Li, Yajun Zhang, Jiang Liu, Lan Wang) Preprint submitted to Neurocomputing May 28, 2016",
"title": ""
},
{
"docid": "d69573f767b2e72bcff5ed928ca8271c",
"text": "This article provides a novel analytical method of magnetic circuit on Axially-Laminated Anisotropic (ALA) rotor synchronous reluctance motor when the motor is magnetized on the d-axis. To simplify the calculation, the reluctance of stator magnet yoke and rotor magnetic laminations and leakage magnetic flux all are ignored. With regard to the uneven air-gap brought by the teeth and slots of the stator and rotor, the method resolves the problem with the equivalent air-gap length distribution function, and clarifies the magnetic circuit when the stator teeth are saturated or unsaturated. In order to conduct exact computation, the high order harmonics of the stator magnetic potential are also taken into account.",
"title": ""
},
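A numeric sketch of the core idea stated above, dividing a harmonic stator MMF by an equivalent air-gap length distribution function to obtain the air-gap flux density, is shown below. The waveforms and amplitudes are placeholders of my own, not the paper's model.

```python
import numpy as np

MU0 = 4e-7 * np.pi                                   # vacuum permeability (H/m)
theta = np.linspace(0, 2 * np.pi, 3600, endpoint=False)

# d-axis stator MMF with a few odd space harmonics (illustrative amplitudes, A-turns).
harmonics = [(1, 400.0), (3, 13.0), (5, 3.0)]
F = sum(A * np.cos(k * theta) for k, A in harmonics)

# Equivalent air-gap length: a mean gap modulated by the stator slot openings.
g0, dg, slots = 0.5e-3, 0.1e-3, 36                   # metres, metres, slot count
g = g0 + dg * (1 + np.cos(slots * theta)) / 2

# Simplified magnetic circuit: air-gap flux density B = mu0 * F / g.
B = MU0 * F / g
print(f"peak air-gap flux density ~ {np.max(np.abs(B)):.2f} T")
```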
{
"docid": "33e6abc5ed78316cc03dae8ba5a0bfc8",
"text": "In this paper, we present a deep learning architecture which addresses the problem of 3D semantic segmentation of unstructured point clouds. Compared to previous work, we introduce grouping techniques which define point neighborhoods in the initial world space and the learned feature space. Neighborhoods are important as they allow to compute local or global point features depending on the spatial extend of the neighborhood. Additionally, we incorporate dedicated loss functions to further structure the learned point feature space: the pairwise distance loss and the centroid loss. We show how to apply these mechanisms to the task of 3D semantic segmentation of point clouds and report state-of-the-art performance on indoor and outdoor datasets. ar X iv :1 81 0. 01 15 1v 2 [ cs .C V ] 8 D ec 2 01 8 2 F. Engelmann et al.",
"title": ""
},
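The two auxiliary losses named above can be sketched directly from the abstract: the pairwise distance loss pulls features of same-label points together and pushes different-label points apart, while the centroid loss pulls each feature toward its class mean. The exact margins and distance measures below are my assumptions, not the paper's published formulation.

```python
import torch

def pairwise_distance_loss(feats, labels, margin=1.0):
    """Pull same-label point features together; push different labels at least `margin` apart."""
    d = torch.cdist(feats, feats)                       # (N, N) pairwise feature distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # (N, N) same-label mask
    attract = d[same].mean()
    repel = torch.clamp(margin - d[~same], min=0).mean()
    return attract + repel

def centroid_loss(feats, labels):
    """Pull each feature toward the mean feature (centroid) of its class."""
    classes = labels.unique()
    loss = feats.new_zeros(())
    for c in classes:
        members = feats[labels == c]
        loss = loss + (members - members.mean(dim=0)).pow(2).sum(dim=1).mean()
    return loss / classes.numel()

# Toy usage: 6 points with 4-D embeddings and 2 semantic classes.
f = torch.randn(6, 4, requires_grad=True)
y = torch.tensor([0, 0, 0, 1, 1, 1])
total = pairwise_distance_loss(f, y) + 0.1 * centroid_loss(f, y)
total.backward()
print(float(total))
```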
{
"docid": "23d9479a38afa6e8061fe431047bed4e",
"text": "We introduce cMix, a new approach to anonymous communications. Through a precomputation, the core cMix protocol eliminates all expensive realtime public-key operations—at the senders, recipients and mixnodes—thereby decreasing real-time cryptographic latency and lowering computational costs for clients. The core real-time phase performs only a few fast modular multiplications. In these times of surveillance and extensive profiling there is a great need for an anonymous communication system that resists global attackers. One widely recognized solution to the challenge of traffic analysis is a mixnet, which anonymizes a batch of messages by sending the batch through a fixed cascade of mixnodes. Mixnets can offer excellent privacy guarantees, including unlinkability of sender and receiver, and resistance to many traffic-analysis attacks that undermine many other approaches including onion routing. Existing mixnet designs, however, suffer from high latency in part because of the need for real-time public-key operations. Precomputation greatly improves the real-time performance of cMix, while its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets. cMix is unique in not requiring any real-time public-key operations by users. Consequently, cMix is the first mixing suitable for low latency chat for lightweight devices. Our presentation includes a specification of cMix, security arguments, anonymity analysis, and a performance comparison with selected other approaches. We also give benchmarks from our prototype.",
"title": ""
},
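A toy illustration of the precomputation idea stated above: push the expensive randomness and permutation setup into an offline phase so that the real-time pass over a message batch needs only modular multiplications. This is not the actual cMix protocol (which spans a cascade of nodes and a full key setup); it only shows the single-node shape of the trick.

```python
import random

P = 2**61 - 1            # a prime modulus; the real system works in a much larger group
random.seed(0)

class MixNode:
    def __init__(self, batch_size):
        # Precomputation phase: per-slot blinding factors, their modular inverses,
        # and a secret permutation are all prepared before any message arrives.
        self.r = [random.randrange(1, P) for _ in range(batch_size)]
        self.r_inv = [pow(r, P - 2, P) for r in self.r]   # Fermat inverses mod P
        self.perm = list(range(batch_size))
        random.shuffle(self.perm)

    def realtime_mix(self, batch):
        # Real-time phase: one modular multiplication per slot, then a permutation.
        unblinded = [(m * inv) % P for m, inv in zip(batch, self.r_inv)]
        return [unblinded[i] for i in self.perm]

# Single-node demo: senders blind messages with the node's r values; the node's
# real-time pass removes the blinding and shuffles the batch.
node = MixNode(batch_size=4)
plaintexts = [11, 22, 33, 44]
blinded = [(m * r) % P for m, r in zip(plaintexts, node.r)]   # done by the senders
print(node.realtime_mix(blinded))   # the plaintexts, in a permuted order
```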
{
"docid": "0408aeb750ca9064a070248f0d32d786",
"text": "Mood, attention and motivation co-vary with activity in the neuromodulatory systems of the brain to influence behaviour. These psychological states, mediated by neuromodulators, have a profound influence on the cognitive processes of attention, perception and, particularly, our ability to retrieve memories from the past and make new ones. Moreover, many psychiatric and neurodegenerative disorders are related to dysfunction of these neuromodulatory systems. Neurons of the brainstem nucleus locus coeruleus are the sole source of noradrenaline, a neuromodulator that has a key role in all of these forebrain activities. Elucidating the factors that control the activity of these neurons and the effect of noradrenaline in target regions is key to understanding how the brain allocates attention and apprehends the environment to select, store and retrieve information for generating adaptive behaviour.",
"title": ""
},
{
"docid": "8a708ec1187ecb2fe9fa929b46208b34",
"text": "This paper proposes a new face verification method that uses multiple deep convolutional neural networks (DCNNs) and a deep ensemble, that extracts two types of low dimensional but discriminative and high-level abstracted features from each DCNN, then combines them as a descriptor for face verification. Our DCNNs are built from stacked multi-scale convolutional layer blocks to present multi-scale abstraction. To train our DCNNs, we use different resolutions of triplets that consist of reference images, positive images, and negative images, and triplet-based loss function that maximize the ratio of distances between negative pairs and positive pairs and minimize the absolute distances between positive face images. A deep ensemble is generated from features extracted by each DCNN, and used as a descriptor to train the joint Bayesian learning and its transfer learning method. On the LFW, although we use only 198,018 images and only four different types of networks, the proposed method with the joint Bayesian learning and its transfer learning method achieved 98.33% accuracy. In addition to further increase the accuracy, we combine the proposed method and high dimensional LBP based joint Bayesian method, and achieved 99.08% accuracy on the LFW. Therefore, the proposed method helps to improve the accuracy of face verification when training data is insufficient to train DCNNs.",
"title": ""
},
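A minimal sketch of a triplet objective matching the description above, one that grows smaller as the negative-to-positive distance ratio grows and as the absolute positive distance shrinks. The exact weighting and squared-distance choice are my assumptions rather than the paper's formulation.

```python
import numpy as np

def triplet_ratio_loss(anchor, positive, negative, lam=0.1, eps=1e-8):
    """Smaller when negatives are far relative to positives and positives are close in absolute terms."""
    d_pos = np.sum((anchor - positive) ** 2, axis=1)   # anchor-positive squared distances
    d_neg = np.sum((anchor - negative) ** 2, axis=1)   # anchor-negative squared distances
    # First term: minimizing d_pos / d_neg maximizes the negative-to-positive ratio.
    # Second term: additionally minimizes the absolute positive distance.
    return float(np.mean(d_pos / (d_neg + eps) + lam * d_pos))

# Toy batch of 3 triplets with 128-D embeddings.
rng = np.random.default_rng(0)
anchor, positive, negative = (rng.normal(size=(3, 128)) for _ in range(3))
print(triplet_ratio_loss(anchor, positive, negative))
```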
{
"docid": "95037e7dc3ae042d64a4b343ad4efd39",
"text": "We classify human actions occurring in depth image sequences using features based on skeletal joint positions. The action classes are represented by a multi-level Hierarchical Dirichlet Process – Hidden Markov Model (HDP-HMM). The non-parametric HDP-HMM allows the inference of hidden states automatically from training data. The model parameters of each class are formulated as transformations from a shared base distribution, thus promoting the use of unlabelled examples during training and borrowing information across action classes. Further, the parameters are learnt in a discriminative way. We use a normalized gamma process representation of HDP and margin based likelihood functions for this purpose. We sample parameters from the complex posterior distribution induced by our discriminative likelihood function using elliptical slice sampling. Experiments with two different datasets show that action class models learnt using our technique produce good classification results.",
"title": ""
}
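Elliptical slice sampling, mentioned above as the sampler for the complex posterior, is a standard algorithm for models with a Gaussian prior. A compact sketch with a toy Gaussian log-likelihood follows; the paper's actual margin-based likelihood and HDP-HMM parameters are not reproduced here.

```python
import numpy as np

def elliptical_slice(f, chol_sigma, log_lik, rng):
    """One elliptical slice sampling update for a parameter f with prior N(0, Sigma)."""
    nu = chol_sigma @ rng.standard_normal(f.shape)       # auxiliary draw from the prior
    log_y = log_lik(f) + np.log(rng.uniform())           # slice (log) threshold
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)   # a point on the ellipse through f and nu
        if log_lik(f_new) > log_y:
            return f_new
        # Shrink the bracket toward theta = 0 and propose again.
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy usage: prior N(0, I) in 5 dimensions, likelihood centred at 1.
rng = np.random.default_rng(0)
log_lik = lambda f: -0.5 * np.sum((f - 1.0) ** 2)
f, chol = np.zeros(5), np.eye(5)
for _ in range(1000):
    f = elliptical_slice(f, chol, log_lik, rng)
print(f.round(2))   # draws settle around the posterior mean of 0.5
```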
] | scidocsrr |