_id | text
---|---|
21b25b025898bd1cabe60234434b49cf14016981
|
Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze the “gradient descent” form of GAN optimization, i.e., the natural setting where we simultaneously take small gradient steps in both generator and discriminator parameters. We show that even though GAN optimization does not correspond to a convex-concave game (even for simple parameterizations), under proper conditions, equilibrium points of this optimization procedure are still locally asymptotically stable for the traditional GAN formulation. On the other hand, we show that the recently proposed Wasserstein GAN can have non-convergent limit cycles near equilibrium. Motivated by this stability analysis, we propose an additional regularization term for gradient descent GAN updates, which is able to guarantee local stability for both the WGAN and the traditional GAN, and also shows practical promise in speeding up convergence and addressing mode collapse.
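To make the proposed fix concrete, here is a minimal numerical sketch under the strong simplifying assumption of a one-parameter bilinear ("Dirac-WGAN"-style) game, not the paper's general setting; all names and constants are illustrative rather than the authors' code. The discriminator ascends V(ψ, θ) = ψθ while the generator descends it, and the regularized generator objective adds a penalty η·(∂V/∂ψ)², which damps the non-convergent cycling described above:

```python
# Toy bilinear game (an assumption for illustration): V(psi, theta) = psi * theta.
# Plain simultaneous gradient steps orbit the equilibrium (0, 0) and slowly
# spiral outward; adding eta * (dV/dpsi)^2 = eta * theta^2 to the generator
# objective damps the dynamics toward the equilibrium.
def simulate(eta, steps=2000, lr=0.01):
    psi, theta = 1.0, 1.0                  # discriminator / generator parameters
    for _ in range(steps):
        d_psi = theta                      # dV/dpsi (discriminator ascends V)
        d_theta = psi + eta * 2.0 * theta  # dV/dtheta + gradient of eta*theta^2
        psi, theta = psi + lr * d_psi, theta - lr * d_theta
    return psi, theta

print("unregularized:", simulate(eta=0.0))  # stays far from (0, 0): cycling
print("regularized:  ", simulate(eta=0.5))  # spirals in toward (0, 0)
```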
|
c8cff23dcba448f4af436d40d32e367ea0bbe9bc
|
This paper describes the realization and characterization of microwave 3-D printed loads in rectangular waveguide technology. Several commercial materials were characterized at X-band (8-12 GHz). Their dielectric properties were extracted through the use of a cavity-perturbation method and a transmission/reflection rectangular waveguide method. A lossy carbon-loaded Acrylonitrile Butadiene Styrene (ABS) polymer was selected to realize a matched load between 8 and 12 GHz. Two different types of terminations were realized by fused deposition modeling: a hybrid 3-D printed termination (metallic waveguide + pyramidal polymer absorber + metallic short circuit) and a full 3-D printed termination (self-consistent matched load). Voltage standing wave ratios of less than 1.075 and 1.025 were measured over X-band for the hybrid and full 3-D printed terminations, respectively. The power behavior of the full 3-D printed termination was investigated. A very linear evolution of reflected power as a function of incident power amplitude was observed at 10 GHz up to 11.5 W. These 3-D printed devices appear to be a very low cost solution for the realization of microwave matched loads in rectangular waveguide technology.
|
16f63ebc5b393524b48932946cb1ba3b6ac5c702
|
In this paper, we present a recursive neural network (RNN) model that works on a syntactic tree. Our model differs from previous RNN models in that it allows for an explicit weighting of important phrases for the target task. We also propose to average parameters in training. Our experimental results on semantic relation classification show that both phrase categories and task-specific weighting significantly improve the prediction accuracy of the model. We also show that averaging the model parameters is effective in stabilizing the learning and improves generalization capacity. The proposed model achieves scores competitive with those of state-of-the-art RNN-based models.
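As a small illustration of the parameter-averaging step (a sketch with an invented stand-in gradient, not the paper's training code), one keeps an incremental mean of the weight iterates during training and predicts with the averaged weights:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)            # current model parameters (illustrative)
w_avg = w.copy()                    # running average of the iterates
for t in range(1, 1001):
    grad = rng.normal(size=100)     # stand-in for a real task gradient
    w -= 0.01 * grad                # ordinary SGD step
    w_avg += (w - w_avg) / (t + 1)  # incremental mean over all iterates
# w_avg is the smoothed parameter vector used at prediction time
```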
|
49b3256add6efdcd9ed2ea90c54b18bb8f5cee3e
|
Standard techniques for improved generalization from neural networks include weight decay and pruning. Weight decay has a Bayesian interpretation with the decay function corresponding to a prior over weights. The method of transformation groups and maximum entropy suggests a Laplace rather than a Gaussian prior. After training, the weights then arrange themselves into two classes: (1) those with a common sensitivity to the data error and (2) those failing to achieve this sensitivity, which therefore vanish. Since the critical value is determined adaptively during training, pruning, in the sense of setting weights to exact zeros, becomes an automatic consequence of regularization alone. The count of free parameters is also reduced automatically as weights are pruned. A comparison is made with results of MacKay using the evidence framework and a Gaussian regularizer.
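A minimal sketch of this pruning mechanism (the quadratic stand-in loss and all constants are illustrative): MAP estimation under a Laplace prior adds a λ·|w| penalty, and a proximal (soft-thresholding) gradient step sets weights whose data-error sensitivity stays below λ to exactly zero, while the surviving weights share a common shrinkage:

```python
import numpy as np

def l1_prox_step(w, grad, lr, lam):
    w = w - lr * grad                       # gradient step on the data error
    # soft-threshold: the Laplace-prior (L1) proximal operator zeroes small weights
    return np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

w = np.array([0.8, -0.05, 0.3, 0.01])
for _ in range(200):
    grad = w - np.array([0.9, 0.0, 0.25, 0.0])  # toy quadratic loss gradient
    w = l1_prox_step(w, grad, lr=0.1, lam=0.2)
print(w)  # weights below the critical value end at exact zeros
```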
|
d142c1b2488ea054112187b347e1a5fa83a3d54e
| |
3ccf752029540235806bdd0c5293b56ddc1254c2
|
In this paper, we study resource allocation for multiuser multiple-input single-output (MISO) secondary communication systems with multiple system design objectives. We consider cognitive radio (CR) networks where the secondary receivers are able to harvest energy from the radio frequency when they are idle. The secondary system provides simultaneous wireless power and secure information transfer to the secondary receivers. We propose a multi-objective optimization framework for the design of a Pareto optimal resource allocation algorithm based on the weighted Tchebycheff approach. In particular, the algorithm design incorporates three important system design objectives: total transmit power minimization, energy harvesting efficiency maximization, and interference-power-leakage-to-transmit-power ratio minimization. The proposed framework takes into account a quality of service (QoS) requirement regarding communication secrecy in the secondary system and the imperfection of the channel state information (CSI) of potential eavesdroppers (idle secondary receivers and primary receivers) at the secondary transmitter. The proposed framework includes total harvested power maximization and interference power leakage minimization as special cases. The adopted multi-objective optimization problem is non-convex and is recast as a convex optimization problem via semidefinite programming (SDP) relaxation. It is shown that the global optimal solution of the original problem can be constructed by exploiting both the primal and the dual optimal solutions of the SDP relaxed problem. In addition, two suboptimal resource allocation schemes are proposed for the case when the solution of the dual problem is unavailable for constructing the optimal solution. Numerical results not only demonstrate the close-to-optimal performance of the proposed suboptimal schemes, but also unveil an interesting trade-off between the considered conflicting system design objectives.
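For reference, the weighted Tchebycheff scalarization underlying the proposed algorithm has the standard form below (generic notation, not necessarily the paper's exact symbols), where f_1, f_2, f_3 are the three conflicting objectives, f_j* their individually optimal values, and sweeping the weights λ_j traces out Pareto optimal allocations over the feasible set F:

```latex
\min_{\mathbf{x} \in \mathcal{F}} \; \max_{j \in \{1,2,3\}} \;
  \lambda_j \left| f_j(\mathbf{x}) - f_j^{\star} \right| ,
\qquad \lambda_j \ge 0, \quad \sum_{j=1}^{3} \lambda_j = 1 .
```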
|
503a6d42cfb0174ca944053372153e21fec1111c
|
Many formal models of cognition implicitly use subjective probability distributions to capture the assumptions of human learners. Most applications of these models determine these distributions indirectly. We propose a method for directly determining the assumptions of human learners by sampling from subjective probability distributions. Using a correspondence between a model of human choice and Markov chain Monte Carlo (MCMC), we describe a method for sampling from the distributions over objects that people associate with different categories. In our task, subjects choose whether to accept or reject a proposed change to an object. The task is constructed so that these decisions follow an MCMC acceptance rule, defining a Markov chain for which the stationary distribution is the category distribution. We test this procedure for both artificial categories acquired in the laboratory, and natural categories acquired from experience.
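The construction can be sketched in a few lines (illustrative only: the Barker form of the acceptance rule is shown as one valid choice consistent with the description above, and the Gaussian "category" density is a stand-in for a subject's unknown distribution):

```python
import numpy as np

# If a subject accepts a proposed object x_new over the current x with
# probability p(x_new) / (p(x_new) + p(x)) -- the Barker acceptance rule --
# the retained objects form a Markov chain with stationary distribution p.
rng = np.random.default_rng(1)
p = lambda x: np.exp(-0.5 * (x - 2.0) ** 2)    # unnormalized "category" density
x, samples = 0.0, []
for _ in range(20000):
    proposal = x + rng.normal(scale=1.0)       # proposed change to the object
    if rng.random() < p(proposal) / (p(proposal) + p(x)):  # "subject accepts"
        x = proposal
    samples.append(x)
print(np.mean(samples[2000:]), np.std(samples[2000:]))  # approx. 2.0 and 1.0
```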
|
2bab122e886271733c3be851b2b11b040cefc213
|
BACKGROUND
The main objective of this research is to identify, categorize, and analyze barriers perceived by physicians to the adoption of Electronic Medical Records (EMRs) in order to provide implementers with beneficial intervention options.
METHODS
A systematic literature review, based on research papers from 1998 to 2009, concerning barriers to the acceptance of EMRs by physicians was conducted. Four databases, "Science", "EBSCO", "PubMed" and "The Cochrane Library", were used in the literature search. Studies were included in the analysis if they reported on physicians' perceived barriers to implementing and using electronic medical records. Electronic medical records are defined as computerized medical information systems that collect, store and display patient information.
RESULTS
The study includes twenty-two articles that have considered barriers to EMR as perceived by physicians. Eight main categories of barriers, including a total of 31 sub-categories, were identified. These eight categories are: A) Financial, B) Technical, C) Time, D) Psychological, E) Social, F) Legal, G) Organizational, and H) Change Process. All these categories are interrelated with each other. In particular, Categories G (Organizational) and H (Change Process) seem to be mediating factors on other barriers. By adopting a change management perspective, we develop some barrier-related interventions that could overcome the identified barriers.
CONCLUSIONS
Despite the positive effects of EMR usage in medical practices, the adoption rate of such systems is still low and meets resistance from physicians. This systematic review reveals that physicians may face a range of barriers when they approach EMR implementation. We conclude that the process of EMR implementation should be treated as a change project, and led by implementers or change managers, in medical practices. The quality of change management plays an important role in the success of EMR implementation. The barriers and suggested interventions highlighted in this study are intended to act as a reference for implementers of Electronic Medical Records. A careful diagnosis of the specific situation is required before relevant interventions can be determined.
|
00514b5cd341ef128d216e86f2a795f218ef83db
|
In this paper, we present the design and development of a new integrated device for measuring heart rate through the fingertip, improving heart rate estimation. As heart related diseases are increasing day by day, an accurate and affordable heart rate measuring device or heart monitor is essential to ensure quality of health. However, most heart rate measuring tools and environments are expensive and do not follow ergonomics. Our proposed Heart Rate Measuring (HRM) device is economical and user friendly and uses optical technology to detect the flow of blood through the index finger. Three phases are used to detect pulses on the fingertip: pulse detection, signal extraction, and pulse amplification. Qualitative and quantitative performance evaluation of the device on real signals shows accuracy in heart rate estimation, even under intense physical activity. We compared the performance of the HRM device with Electrocardiogram reports and manual pulse measurements of the heartbeats of 90 human subjects of different ages. The results showed that the error rate of the device is negligible.
|
543ad4f3b3ec891023af53ef6fa2200ce886694f
|
Technological innovations in the field of disease prevention and maintenance of patient health have enabled the evolution of fields such as monitoring systems. Heart rate is a vital health parameter that is directly related to the soundness of the human cardiovascular system. Heart rate is the number of times the heart beats per minute and reflects different physiological conditions such as biological workload, stress at work and concentration on tasks, drowsiness and the active state of the autonomic nervous system. It can be measured either from the ECG waveform or by sensing the pulse, the rhythmic expansion and contraction of an artery as blood is forced through it by the regular contractions of the heart. The pulse can be felt in those areas where the artery is close to the skin. This paper describes a technique for measuring the heart rate through a fingertip and an Arduino. It is based on the principle of photoplethysmography (PPG), a non-invasive method of measuring the variation in blood volume in tissue using a light source and detector. While the heart is beating, it is pumping blood throughout the body, which causes the blood volume inside the finger artery to change as well. This fluctuation of blood can be detected through an optical sensing mechanism placed around the fingertip. The signal is amplified and sent to the Arduino over serial port communication, and heart rate monitoring and counting are performed with the help of processing software. The sensor unit consists of an infrared light-emitting diode (IR LED) and a photodiode. The IR LED transmits infrared light into the fingertip, a part of which is reflected back from the blood inside the finger arteries. The photodiode senses the portion of the light that is reflected back. The intensity of reflected light depends upon the blood volume inside the fingertip, so every time the heart beats, the amount of reflected infrared light changes, which can be detected by the photodiode. With a high gain amplifier, this little alteration in the amplitude of the reflected light can be converted into a pulse.
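A minimal sketch of the final counting stage (illustrative only: the function, the synthetic waveform, and the threshold are assumptions, not the paper's firmware). Beats are counted as rising-edge threshold crossings of the amplified PPG signal over a known window:

```python
import numpy as np

def estimate_bpm(samples, fs, threshold):
    """samples: 1-D PPG signal; fs: sampling rate in Hz."""
    above = samples > threshold
    beats = np.count_nonzero(~above[:-1] & above[1:])  # rising-edge crossings
    return 60.0 * beats * fs / len(samples)            # beats per minute

fs = 100                                     # assumed 100 Hz sampling rate
t = np.arange(0, 10, 1 / fs)                 # a 10-second window
ppg = np.sin(2 * np.pi * 1.2 * t)            # synthetic 72-bpm pulse wave
print(estimate_bpm(ppg, fs, threshold=0.5))  # prints 72.0
```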
|
e35c466be82e1cb669027c587fb4f65a881f0261
|
In this paper, we propose a simple wireless transmission system using a common-approach sensor platform, the wireless patient sensor platform (WSP, sensor node), which has remote access capability. The goals of the WSP are to establish a standard sensor node (system on module) and common software. The proposed platform architecture (sensor node) offers flexibility and easy customization for collecting and sending different vital parameters. A prototype has been established based on a wireless communication channel; a wireless LAN (IEEE 802.15.4) link has been used as the communication channel in our prototype (sensor node). The desired sensor information (vital parameters) can be viewed remotely, and the vital parameters can be adjusted to meet demand.
|
c2c465c332ec57a4430ce5f2093915b4ded497ff
|
Augmented reality is increasingly applied in medical education, mainly because educators can share knowledge through virtual objects. This research describes the development of a web application that enhances users' medical knowledge of the anatomy of the human heart by means of augmented reality. Evaluation is conducted in two different facets. In the first, the authors of this paper evaluate the feasibility of a three-dimensional human heart module using one investigator under the supervision of an expert. In the second, evaluation aims at identifying usability issues by means of the cognitive walkthrough method. Three medical students (naive users) are asked to perform three target tasks in the web application. Task completion is appreciated in the light of the standard set of cognitive walkthrough questions. Miss-hits in the augmented reality content are revealed by the first evaluation, in an effort to enhance the educational utility of the three-dimensional human heart. The cognitive walkthrough provides further improvement points, which may further enhance usability in the next software release. The current piece of work constitutes the pre-pilot evaluation. Standardized methodologies are utilized in an effort to improve the application before its wider piloting with proper student populations. Such evaluations are considered important in experiential learning methods aiding online education in anatomy courses.
|
d5049a49ab605a6703b0461a330e4dbbcd7307fb
|
This letter presents a novel topology of a 4 × 4 Butler matrix, which can realize relatively flexible phase differences at the output ports. The proposed Butler matrix employs couplers with arbitrary phase differences to replace the quadrature couplers in the conventional Butler matrix. By controlling the phase differences of the applied couplers, the progressive phase differences among the output ports of the proposed Butler matrix can be made relatively flexible. To facilitate the design, closed-form design equations are derived and presented. To verify the design concept, a planar 4 × 4 Butler matrix with four unique progressive phase differences (−30°, +150°, −120°, and +60°) is designed and fabricated. At the operating frequency, the amplitude imbalance is less than 0.75 dB, and the phase mismatch is within ±6°. The measured return loss is better than 16 dB, and the isolation is better than 18 dB. The bandwidth with 10 dB return loss is about 15%.
|
101c14f6a04663a7e2c5965c4e0a2d46cb465a08
| |
4d352696f60eaebf7ef941bb31173ba0a1bb9a41
| |
a16dc6af67ef9746068c63a56a580cb3b2a83e9c
|
In order to control a reaching movement of the arm and body, several different computational problems must be solved. Some parallel methods that could be implemented in networks of neuron-like processors are described. Each method solves a different part of the overall task. First, a method is described for finding the torques necessary to follow a desired trajectory. The method is more economical and more versatile than table look-up and requires very few sequential steps. Then a way of generating an internal representation of a desired trajectory is described. This method generates the trajectory one piece at a time by applying a large set of heuristic rules to a "motion blackboard" that represents the static and dynamic parameters of the state of the body at the current point in the trajectory. The computations are simplified by expressing the positions, orientations, and motions of parts of the body in terms of a single, non-accelerating, world-based frame of reference, rather than in terms of the joint angles or an egocentric frame based on the body itself.
|
5ab321e0ea7893dda145331bfb95e102c0b61a5d
|
In this paper, a horizontally meandered strip (HMS) feed technique is proposed to achieve good impedance matching and symmetrical broadside radiation patterns for a single-fed broadband circularly polarized stacked patch antenna, which is suitable for universal ultrahigh frequency (UHF) RF identification (RFID) applications. The antenna is composed of two corner truncated patches and an HMS, all of which are printed on the upper side of the FR4 substrates. One end of the HMS is connected to the main patch by a probe, while the other end is connected to an SMA connector. Simulation results are compared with the measurements, and a good agreement is obtained. The measurements show that the antenna has an impedance bandwidth (VSWR < 1.5) of about 25.8% (758-983 MHz), a 3-dB axial ratio (AR) bandwidth of about 13.5% (838-959 MHz), and a gain level of about 8.6 dBic or larger within the 3-dB AR bandwidth. Therefore, the proposed antenna can be a good candidate for universal UHF RFID readers operating at the UHF band of 840-955 MHz. In addition, a parametric study and a design guideline of the proposed antenna are presented to provide engineers with information for designing, modifying, and optimizing such an antenna. Finally, the proposed antenna is validated in RFID system applications.
|
65077651b36a63d3ca4184137df348cc8b29776a
|
Novel asymmetric-circular shaped slotted microstrip patch antennas with slits are proposed for circularly polarized (CP) radiation and radio frequency identification (RFID) reader applications. A single-feed configuration based on asymmetric-circular shaped slotted square microstrip patches is adopted to realize compact circularly polarized microstrip antennas. The asymmetric-circular shaped slot(s) along the diagonal directions are embedded symmetrically onto a square microstrip patch for CP radiation and small antenna size. The CP radiation can be achieved by making the patch slightly asymmetric (unbalanced) along the diagonal directions through the slot areas. Four symmetric slits are also embedded symmetrically along the orthogonal directions of the asymmetric-circular shaped slotted patch to further reduce the antenna size. The operating frequency of the antenna can be tuned by varying the slit length while keeping the CP radiation unchanged. A measured 3-dB axial-ratio (AR) bandwidth of around 6.0 MHz with a 17.0 MHz impedance bandwidth is achieved for the antenna on an RO4003C substrate. The overall antenna size is 0.27λ0 × 0.27λ0 × 0.0137λ0 at 900 MHz.
|
6f3ffb1a7b6cb168caeb81a23b68bbf99fdab052
|
An unbalance-fed cross aperture is developed to excite a short backfire antenna (SBA) for circular polarization. The cross aperture consists of two orthogonal H-shaped slots with a pair of capacitive stubs and is fed by a single probe that forms an unbalanced feed with a shorting pin. It is demonstrated that the cross-aperture-excited SBA can achieve an axial ratio (≤ 3 dB) bandwidth of 4.2% with a voltage standing wave ratio (VSWR) bandwidth of 6.5% (VSWR < 1.2) and a gain of 14 dBi. The antenna structure is described, and the simulation and experimental results are presented.
|
838b107445e72d903f2217946c73a5d3d1e4344e
|
This paper describes the design and testing of an aperture-coupled circularly polarized antenna for global positioning satellite (GPS) applications. The antenna operates at both the L1 and L2 frequencies of 1575 and 1227 MHz, which is required for differential GPS systems in order to provide maximum positioning accuracy. Electrical performance, low profile, and cost were equally important requirements for this antenna. The design procedure is discussed, and measured results are presented. Results from a manufacturing sensitivity analysis are also included.
|
9639aa5fadb89ea5e8362dad52082745012c90aa
|
A novel 90° broadband balun comprising a broadband 90° Schiffman phase shifter is introduced as a means of enhancing the wideband circular polarization performance of dual-fed type microstrip antennas. The proposed 90° broadband balun delivers good impedance matching, balanced power splitting and consistent 90° (±5°) phase shifting across a wide bandwidth (~57.5%). A circular patch antenna utilizing the proposed 90° broadband balun is shown to attain measured impedance (S11 < −10 dB) and axial ratio (AR < 3 dB) bandwidths of 60.24% and 37.7%, respectively, for the dual L-probe case; and 71.28% and 81.6%, respectively, for the quadruple L-probe case.
|
a6a0384d7bf8ddad303034fe691f324734409568
|
This paper reports on the findings of a survey and case study research into the understanding and application of business process management (BPM) in European companies. The process perspective is increasingly being seen as a mechanism for achieving competitive advantage through performance improvement and in response to market pressures, customer expectations for better and more reliable service and increasing competition. We reveal the level of importance European companies attach to BPM, what it means to them and what they have done in practice. The paper draws on a postal survey conducted with quality directors and business process managers in organisations which are members of the European Foundation for Quality Management (EFQM) and case studies in a number of organisations regarded as being leaders in the adoption of BPM. The study has helped to highlight some interesting approaches and reveal features which are important for BPM to be successful.
Introduction
One of the difficulties with business process management (BPM) is that of terminology. The term process can be found in many disciplines which contribute to our understanding of organisations in the management literature. An operational view is seen in quality improvement (Deming, 1986), total quality management (Oakland, 1989) and the concept of just-in-time (Harrison, 1992). Systems thinking (Jenkins, 1971; Checkland, 1981), cybernetics (Beer, 1966) and systems dynamics (Senge, 1990) give a richer meaning to the term. Organisational theorists have also talked in terms of social and organisational processes (Burrell and Morgan, 1979; Monge, 1990). A useful review of these antecedents has been provided by Peppard and Preece (1995). The domain in which the current study is centred develops out of recent approaches which seek to improve organisational effectiveness through the attachment of managerial thinking to Total Quality or Business Excellence models. These have been essentially practitioner driven and not grounded in academic theory. Examples include the European Foundation for Quality Management model (EFQM) (Hakes, 1995) and the Malcolm Baldrige National Quality Award model (MBNQA) (George, 1992). While these models espouse multi-factorial and multi-constituency models of organisational effectiveness they remain essentially goal-based (Cameron, 1986). They have also developed from a strong operational framework and often have absorbed the approaches of business process re-engineering (Hammer, 1990). They have also been influenced by strategic thinking in the strong process orientation of the value chain analysis (Porter, 1985) and they accommodate the resources based view of the firm (Grant, 1991). Use of the models may lead to questioning the design of an organisation at a strategic level by reassessing the value of functions in favour of processes which transcend functionality (Ghoshal and Bartlett, 1995; Galbraith, 1995). However, neither the EFQM nor the MBNQA provide direct guidance on how to deploy BPM. Approaches often incorporate attempts to identify business processes and to classify them as being operational, supporting or direction setting.
This activity is often facilitated by consultants using a range of methods but commonly including aspects of process mapping at least at the top level of the organisation. There is evidence that adopting the process paradigm is favoured at least by senior managers (Garvin, 1995), although it is by no means clear that this is a widely held opinion throughout organisations. While it is possible to point to aspects of good practice in BPM, at least at an operational level (Armistead, 1996), we do not know how organisations apply the notion in practice and what they have found to be the key components of a BPM approach.
Method
The aim of the research has been to develop a better understanding of BPM and how it can be applied as a route to achieving organisational effectiveness. We have been especially interested to find out how companies have used a business process perspective as a way of managing their whole organisation, rather than just the application of process improvement techniques. In particular we have sought to explore the following questions: How important is BPM for European managers? Is there a common understanding of BPM among European organisations? How are European organisations implementing BPM in practice? In addressing these questions we hope to shed light on how organisations conceptualise BPM and draw on their experiences in order to enlighten others both in terms of strategy formulation and deployment. This paper discusses the findings of the research and proposes lessons which can be learnt. During our research we have built up a rich databank of case study material. The case studies have been compiled using an open-ended interview format where senior executives (usually the quality director or business process manager) have been invited to elaborate on their organisation's approach to BPM. The interviews were recorded and transcribed and a cognitive map was developed to identify concepts. In some cases data from the interviews were supplemented by material used for internal self assessments against the EFQM model. Organisations were typically chosen because they were known to have adopted BPM approaches. This paper refers specifically to case studies with Rank Xerox, Nortel, British Telecom and TNT, all of whom have been winners in some form (either directly or through subsidiaries) of European Quality Awards. We would like to thank Bob Dart, Simon Machin and Tony Grant from Royal Mail. We also gratefully appreciate the assistance of the EFQM, Rank Xerox, British Telecom, TNT and Nortel during the various stages of our research.
|
eb448bb53372d14df4113f04fee813307f24d049
|
This paper describes the design procedure as well as the experimental performance of a 2.45 GHz 10 μW wireless energy harvester (WEH) with a maximum total efficiency of ≈ 30% at 1 μW/cm² incident power density. The WEH integrates a shunt high-speed rectifying diode with a folded dipole. A metal reflector increases the gain of the rectenna, and a quarter-wavelength differential line is used as a choke. Both a VDI WVD and a Skyworks GaAs Schottky diode are integrated with the antenna and their performance is compared.
|
21c2bd08b2111dcf957567b98e1c8dcad652e3dd
|
The factor analysis literature includes a range of recommendations regarding the minimum sample size necessary to obtain factor solutions that are adequately stable and that correspond closely to population factors. A fundamental misconception about this issue is that the minimum sample size, or the minimum ratio of sample size to the number of variables, is invariant across studies. In fact, necessary sample size is dependent on several aspects of any given study, including the level of communality of the variables and the level of overdetermination of the factors. The authors present a theoretical and mathematical framework that provides a basis for understanding and predicting these effects. The hypothesized effects are verified by a sampling study using artificial data. Results demonstrate the lack of validity of common rules of thumb and provide a basis for establishing guidelines for sample size in factor analysis.
|
994c88b567703f76696ff29ca0c5232268d06261
|
The recent implementation by some major sports-governing bodies of policies governing the eligibility of females with hyperandrogenism to compete in women's sports has attracted a lot of attention and is still a controversial issue. This short article addresses two main subjects of controversy: the existing scientific basis for the performance-enhancing effect of high blood testosterone (T) levels in elite female athletes, and the ethical rationale and considerations behind these policies. Given the recently published data about both innate and acquired hyperandrogenic conditions and their prevalence in elite female sports, we claim that high levels of androgens are per se performance enhancing. Regulating women with clinical and biological hyperandrogenism is an invitation to criticism because biological parameters of sex are not neatly divided into only two categories in the real world. It is, however, the responsibility of the sports-governing bodies to do their best to guarantee a level playing field to all athletes. In order not to cloud the discussions about the policies on hyperandrogenism in sports, issues of sports eligibility and therapeutic options should always be considered and explained separately, even if they may overlap. Finally, some proposals for refining the existing policies are made in the present article.
|
391d9ef4395cf2f69e7a2f0483d40b6addd95888
|
In this paper, we propose an approach to automatically detect sentiment in Twitter messages (tweets) that explores some characteristics of how tweets are written and meta-information about the words that compose these messages. Moreover, we leverage sources of noisy labels as our training data. These noisy labels were provided by a few sentiment detection websites over Twitter data. In our experiments, we show that since our features are able to capture a more abstract representation of tweets, our solution is more effective than previous ones and also more robust regarding biased and noisy data, which is the kind of data provided by these sources.
|
09779ea94f0035c1e5d5cf75f7dfca8c7966a17b
|
In this paper, a planar, compact, single-substrate, multiband multiple-input-multiple-output (MIMO) antenna system comprising two 2-element sets is presented. The MIMO antenna system consists of a tunable 2-element meandered and folded MIMO antenna to cover the LTE band (698 MHz-813 MHz) and a compact 2-element modified truncated cube wideband antenna to cover 754 MHz-971 MHz, 1.65-1.83 GHz and 2-3.66 GHz, respectively. The ground plane of this antenna behaves as a sensing antenna operating in 0.76-1.92 GHz and 3.0-5.2 GHz. The upper band antennas operate in the 0.728-1.08 GHz, 1.64-1.84 GHz, 2.1-3.69 GHz, and 5.01-5.55 GHz ranges to develop a complete antenna platform for cognitive radio (CR) and Internet of Things (IoT) applications. The antenna is fabricated on a low cost FR-4 substrate (εr = 4.4, tan δ = 0.02) of dimensions 65 × 120 × 1.56 mm³.
|
5b110494639f71fa8354e61af04c0cb5e8bbae70
|
In this paper, we study both the jamming capability of the cognitive-radio-based jammers and the anti-jamming capability of the cognitive radio networks (CRN), by considering multiple uncooperative jammers and independent Rayleigh flat-fading propagations. A Markov model of CRN transmission is set up for the cross-layer analysis of the anti-jamming performance. The transitional probabilities are derived analytically by considering a smart jamming attack strategy. Average throughput expression is obtained and verified by simulations. The results indicate that CRN communications can be extremely susceptible to smart jamming attacks targeting the CRN spectrum sensing and channel switching procedures.
|
b8aae299e926d8e6f547faea4b90619fc6361146
| |
36638aff184754db62547b75bade8fa2076b1b19
|
AdaBoost is a machine learning algorithm that builds a series of small decision trees, adapting each tree to predict difficult cases missed by the previous trees and combining all trees into a single model. We will discuss the AdaBoost methodology and introduce the extension called Real AdaBoost. Real AdaBoost comes from a strong academic pedigree: its authors are pioneers of machine learning and the method has well-established empirical and theoretical support spanning 15 years. Practically speaking, Real AdaBoost is able to produce readable credit scorecards and offers attractive features including variable interaction and adaptive, stage-wise binning. We will contrast Real AdaBoost with the dominant methodology for creating credit scorecards: stepwise weight of evidence logistic regression (SWOELR). Real AdaBoost is remarkably similar to SWOELR and is well positioned to serve as a benchmark for SWOELR models; it may even offer a statistical framework by which we can understand the power of SWOELR. We offer a macro to generate Real AdaBoost models in SAS.
INTRODUCTION
Financial institutions (FIs) must develop a wide range of models for marketing, fraud detection, loan adjudication, etc. Modeling has undergone a recent renaissance as machine learning has exploded, spurred by the availability of advanced statistical techniques, the ubiquity of powerful computers to execute these techniques, and the well-publicized successes of the companies who have embraced these methods (Parloff 2016). Modeling departments within some FIs face opposing demands: executives want some of the famed value of advanced methods, while government regulators, internal deployment teams and front-line staff want models that are easy to implement, interpret and understand. In this paper we review Real AdaBoost, a machine learning technique that may offer a middle ground between powerful, but opaque, machine learning methods and transparent conventional methods.
CONSUMER RISK MODELS
One field of modeling where FIs must often strike a balance between power and transparency is consumer risk modeling. Consumer risk modeling involves ranking customers by their credit worthiness (the likelihood they will repay a loan): first by identifying customer characteristics that indicate risk of delinquency, and then combining them mathematically to calculate a relative risk score for each customer (common characteristics include: past loan delinquency, high credit utilization, etc.).
CREDIT SCORECARDS
In order to keep consumer risk models as transparent as possible, many FIs require that the final output of the model be in the form of a scorecard (an example is shown in Table 1). Credit scorecards are a popular way to represent customer risk models due to their simplicity, readability, and the ease with which business expertise can be incorporated during the modeling process (Maldonado et al. 2013). A scorecard lists a number of characteristics that indicate risk and each characteristic is subdivided into a small number of bins defined by ranges of values for that characteristic (e.g., credit utilization: 30-80% is a bin for the credit utilization characteristic). Each bin is assigned a number of score points, a value derived from a statistical model and proportional to the risk of that bin (SAS 2012). A customer will fall into one and only one bin per characteristic and the final score of the applicant is the sum of the points assigned by each bin (plus an intercept). This final score is proportional to consumer risk.
The procedure for developing scorecards is termed stepwise weight of evidence logistic regression (SWOELR) and is implemented in the Credit Scoring add-on in SAS® Enterprise Miner™.
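To make the scorecard mechanics concrete, here is a hedged sketch of how a finished scorecard assigns a score; the characteristics, bin boundaries, point values, and intercept below are invented for illustration, not taken from any real model:

```python
# Each characteristic contributes the points of the single bin the customer
# falls into; the final score is the intercept plus the sum of bin points.
SCORECARD = {
    "credit_utilization": [(0.00, 0.30, 25), (0.30, 0.80, 10), (0.80, 1.01, -15)],
    "months_since_delinquency": [(0, 12, -20), (12, 36, 5), (36, 10**6, 30)],
}
INTERCEPT = 600

def score(customer):
    total = INTERCEPT
    for characteristic, bins in SCORECARD.items():
        value = customer[characteristic]
        for lo, hi, points in bins:
            if lo <= value < hi:       # exactly one bin matches per characteristic
                total += points
                break
    return total

print(score({"credit_utilization": 0.45, "months_since_delinquency": 40}))  # 640
```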
|
f89ee2c9c67858c00bd87df310994ff3a69de747
|
For this problem, however, the usual approach would be completely inadequate, since approximating θ to any reasonable degree of accuracy would require n to be inordinately large. For example, on average we would have to set n ≈ 2.7014 × 10 in order to obtain just one non-zero value of I. Clearly this is impractical, and a much smaller value of n would have to be used. Using a much smaller value of n, however, would almost inevitably result in an estimate θ̂n = 0, and an approximate confidence interval [L, U] = [0, 0]! So the naive approach does not work. We could try to use the variance reduction techniques we have seen in the course so far, but they would provide little, if any, help.
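A quick numerical illustration of this failure mode (the tail event and sample size are stand-ins, not the text's exact example): for an event whose probability is far below 1/n, every feasible-sized naive Monte Carlo run returns zero successes, so both the estimate and its naive confidence interval collapse to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
x = rng.standard_normal(n)
hits = x > 6.0            # I = 1{X > 6}; true probability is about 1e-9
theta_hat = hits.mean()
print(theta_hat)          # almost surely 0.0, giving the interval [0, 0]
```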
|
a5366f4d0e17dce1cdb59ddcd90e806ef8741fbc
| |
727a8deb17701dd07f4e74af37b8d2e8cb8cb35b
| |
e99f72bc1d61bc7c8acd6af66880d9a815846653
|
Agriculture is a major source of income for Indians and has made a big impact on India's economy. Developing crops for better yield and quality is therefore essential, and suitable conditions and suitable moisture in crop beds play a major role in production. Irrigation is mostly done by traditional methods, with stream flows from one end of a field to the other; such supply may leave varied moisture levels across the field. The administration of the water system can be enhanced using an automatic watering framework. This paper proposes an automatic irrigation system that reduces manual labour and optimizes water usage, increasing crop productivity. For the setup, an Arduino kit is used with a moisture sensor and a Wi-Fi module. Our experimental setup is connected to a cloud framework, where data acquisition is performed; the data is then analysed by cloud services and appropriate recommendations are given.
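A minimal sketch of the proposed control logic follows; the threshold, the sensor and pump functions, and the cloud hand-off are illustrative stand-ins for the Arduino, moisture sensor, and Wi-Fi module described above:

```python
import random

MOISTURE_THRESHOLD = 40.0            # percent; an assumed crop-specific setpoint

def read_moisture():                 # stand-in for the analog sensor read
    return random.uniform(20.0, 80.0)

def set_pump(on):                    # stand-in for the relay driving the pump
    print("pump", "ON" if on else "OFF")

for _ in range(3):                   # a few sampling cycles for illustration
    moisture = read_moisture()
    set_pump(moisture < MOISTURE_THRESHOLD)  # water only when the bed is dry
    # each reading would also be pushed to the cloud service for analysis
```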
|
e4e9e923be7dba92d431cb70db67719160949053
| |
797f359b211c072a5b754e7a8f48a3b1ecf9b8be
|
Wright Laboratory, at Tyndall AFB, Florida, has contracted the University of Florida to develop autonomous navigation systems for a variety of robotic vehicles, capable of performing tasks associated with the location and removal of bombs and mines. One of the tasks involves surveying closed target ranges for unexploded buried munitions. Accuracy in path following is critical to the task. There are hundreds of acres that currently require surveying. The sites are typically divided into regions, where each mission can take up to 4.5 hours. These sites are usually surveyed along parallel rows. By improving the accuracy of path following, the distance between the rows can be increased to nearly the detection width of the ground penetrating sensors, resulting in increased acreage surveyed per mission. This paper evaluates a high-level PID and a pure pursuit steering controller. The controllers were combined into a weighted solution so that the desirable features of each controller are preserved. This strategy was demonstrated in simulation and implemented on a Navigation Test Vehicle (NTV). For a test path of varying curvature, the average lateral control error was 2 cm at a vehicle speed of 1.34 m/s.
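A hedged sketch of the weighted combination (gains, the lookahead point, and the blend weight are invented for illustration, not the paper's tuned values): a PID command computed from the lateral error is blended with a standard pure-pursuit command toward a lookahead point:

```python
import math

def pid_steer(lateral_error, error_rate, error_sum, kp=1.2, ki=0.01, kd=0.4):
    return kp * lateral_error + ki * error_sum + kd * error_rate

def pure_pursuit_steer(lookahead_x, lookahead_y, wheelbase=2.0):
    # lookahead point given in the vehicle frame; curvature = 2y / d^2
    d2 = lookahead_x**2 + lookahead_y**2
    curvature = 2.0 * lookahead_y / d2
    return math.atan(wheelbase * curvature)

def blended_steer(w, pid_args, pp_args):
    """w in [0, 1]: weight on the PID command, (1 - w) on pure pursuit."""
    return w * pid_steer(*pid_args) + (1.0 - w) * pure_pursuit_steer(*pp_args)

print(blended_steer(0.5, pid_args=(0.02, 0.0, 0.0), pp_args=(4.0, 0.3)))
```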
|
0dd6795ae207ae4bc455c9ac938c3eebd84897c8
|
The $64,000 question in computational linguistics these days is: “What should I read to learn about statistical natural language processing?” I have been asked this question over and over, and each time I have given basically the same reply: there is no text that addresses this topic directly, and the best one can do is find a good probability-theory textbook and a good information-theory textbook, and supplement those texts with an assortment of conference papers and journal articles. Understanding the disappointment this answer provoked, I was delighted to hear that someone had finally written a book directly addressing this topic. However, after reading Eugene Charniak’s Statistical Language Learning, I have very mixed feelings about the impact this book might have on the ever-growing field of statistical NLP. The book begins with a very brief description of the classic artificial intelligence approach to NLP (chapter 1), including morphology, syntax, semantics, and pragmatics. It presents a few definitions from probability theory and information theory (chapter 2), then proceeds to introduce hidden Markov models (chapters 3–4) and probabilistic context-free grammars (chapters 5–6). The book concludes with a few chapters discussing advanced topics in statistical language learning, such as grammar induction (chapter 7), syntactic disambiguation (chapter 8), word clustering (chapter 9), and word sense disambiguation (chapter 10). To its credit, the book serves as an interesting popular discussion of statistical modeling in NLP. It is well-written and entertaining, and very accessible to the reader with a limited mathematical background. It presents a good selection of statistical NLP topics to introduce the reader to the field. And the descriptions of the forward-backward algorithm for hidden Markov models and the inside-outside algorithm for probabilistic context-free grammars are intuitive and easy to follow. However, as a resource for someone interested in entering this area of research, this book falls far short of its author’s goals. These goals are clearly stated in the preface:
|
82bcb524a2036676bfa4ebd3324fe76013dced54
|
Differential privacy is a precise mathematical constraint meant to ensure privacy of individual pieces of information in a database even while queries are being answered about the aggregate. Intuitively, one must come to terms with what differential privacy does and does not guarantee. For example, the definition prevents a strong adversary who knows all but one entry in the database from further inferring about the last one. This strong adversary assumption can be overlooked, resulting in misinterpretation of the privacy guarantee of differential privacy. Herein we give an equivalent definition of privacy using mutual information that makes plain some of the subtleties of differential privacy. The mutual-information differential privacy is in fact sandwiched between ε-differential privacy and (ε,δ)-differential privacy in terms of its strength. In contrast to previous works using unconditional mutual information, differential privacy is fundamentally related to conditional mutual information, accompanied by a maximization over the database distribution. The conceptual advantage of using mutual information, aside from yielding a simpler and more intuitive definition of differential privacy, is that its properties are well understood. Several properties of differential privacy are easily verified for the mutual information alternative, such as composition theorems.
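For reference, the two notions being related can be written as follows (standard notation; a paraphrase, not the paper's exact statement). A mechanism M is ε-differentially private if for all neighboring databases D, D′ and all output sets S,

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(D') \in S] ,
```

while the mutual-information form discussed above bounds the conditional mutual information between any single entry X_i and the output Y given the remaining entries X_{-i}, maximized over the distribution P of the database:

```latex
\sup_{P} \; \max_{i} \; I\left(X_i ; Y \mid X_{-i}\right) \;\le\; \epsilon .
```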
|
2c075293886b601570024b638956828b4fbc6a24
|
Parallel computing using accelerators has gained widespread research attention in the past few years. In particular, using GPUs for general purpose computing has brought forth several success stories with respect to time taken, cost, power, and other metrics. However, accelerator based computing has significantly relegated the role of CPUs in computation. As CPUs evolve and also offer matching computational resources, it is important to also include CPUs in the computation. We call this the hybrid computing model. Indeed, most computer systems of the present age offer a degree of heterogeneity and therefore such a model is quite natural. We reevaluate the claim of a recent paper by Lee et al. (ISCA 2010). We argue that the right question arising out of Lee et al. (ISCA 2010) should be how to use a CPU+GPU platform efficiently, instead of whether one should use a CPU or a GPU exclusively. To this end, we experiment with a set of 13 diverse workloads ranging from databases, image processing, sparse matrix kernels, and graphs. We experiment with two different hybrid platforms: one consisting of a 6-core Intel i7-980X CPU and an NVidia Tesla T10 GPU, and another consisting of an Intel E7400 dual core CPU with an NVidia GT520 GPU. On both these platforms, we show that hybrid solutions offer good advantage over CPU or GPU alone solutions. On both these platforms, we also show that our solutions are 90% resource efficient on average. Our work therefore suggests that hybrid computing can offer tremendous advantages at not only research-scale platforms but also the more realistic scale systems with significant performance gains and resource efficiency to the large scale user community.
|
4ad35158e11f8def2ba3c389df526f5664ab5d65
| |
58a34752553d41133f807ee37a6796c5193233f2
|
The excessive use of communication networks and the rise of the Internet of Things increase the vulnerability of important and secret information. Advanced attacking techniques and the number of attackers are increasing radically. Intrusion is one of the main threats to the Internet, and security has therefore become a big problem, so various techniques and approaches have been presented to address the limitations of intrusion detection systems, such as low accuracy, high false alarm rates, and time consumption. This paper proposes a hybrid machine learning technique for network intrusion detection based on a combination of K-means clustering and Sequential Minimal Optimization (SMO) classification. It introduces a hybrid approach that is able to reduce the false positive and false negative alarm rates, improve the detection rate, and detect zero-day attacks. The NSL-KDD dataset has been used with the proposed technique. The classification has been performed using Sequential Minimal Optimization. After training and testing the proposed hybrid machine learning technique, the results have shown that the proposed technique (K-means + SMO) achieves a detection rate of 94.48%, reduces the false alarm rate to 1.2%, and achieves an accuracy of 97.3695%.
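A hedged sketch of the two-stage pipeline using scikit-learn stand-ins (synthetic data replaces NSL-KDD, and appending the cluster label as a feature is one plausible way to combine the stages, not necessarily the paper's exact wiring; scikit-learn's SVC stands in for the SMO classifier, since libsvm trains with an SMO-type algorithm):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for NSL-KDD feature vectors and attack/normal labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: K-means clustering; append each record's cluster id as a feature.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_tr)
X_tr2 = np.column_stack([X_tr, km.predict(X_tr)])
X_te2 = np.column_stack([X_te, km.predict(X_te)])

# Stage 2: SMO-style SVM classification on the augmented features.
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr2, y_tr)
print("accuracy:", clf.score(X_te2, y_te))
```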
|
8711a402d3b4e9133884116e5aaf6931c86ae46b
| |
e2cf35d4235896ab823baf1a3801b67af2203cde
|
The precision of answers is now essential for a question answering system because of the large amount of free text on the Internet. Attempting to achieve high precision, we propose a question answering system supported by case grammar theory and based on VerbNet frames. It extracts the syntactic, thematic and semantic information from the question to filter out unmatched sentences at the semantic level and to extract the answer chunk (a phrase or a word that can answer the question) from the answer sentence. VerbNet is applied in our system to detect the verb frames in the question and candidate sentences, so that the syntactic and thematic information as well as semantic information can be obtained. Our question answering system works well, especially for answering factoid questions. The experiments show that our approach is able to filter out semantically unmatched sentences effectively and therefore rank the correct answer(s) higher in the result list.
|
2ede6a685ad9b58f2090b01ce1e3f86e42aeda7e
|
Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot's arm.
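The feature-point idea can be sketched with a spatial soft-argmax over an activation map, the operation at the heart of deep spatial autoencoders (the map below is a random stand-in for a learned convolutional channel, and all constants are illustrative):

```python
import numpy as np

def spatial_soft_argmax(response_map, temperature=1.0):
    """Return the expected (x, y) location under a softmax over one map."""
    h, w = response_map.shape
    weights = np.exp(response_map / temperature)
    weights /= weights.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((weights * xs).sum()), float((weights * ys).sum())

rng = np.random.default_rng(0)
feature_map = rng.normal(size=(32, 32))
feature_map[20, 10] = 12.0               # strong activation ("the object")
print(spatial_soft_argmax(feature_map))  # close to (10, 20): a 2-D feature point
```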
|
36a5f8e1c3ad330d321ccf5b9943c1f5fe23de74
|
The fields of learning theory and instructional design are in the midst of a scientific revolution in which their objectivist philosophical foundations are being replaced by a constructivist epistemology. This article describes the assumptions of a constructivist epistemology, contrasts them with objectivist assumptions, and then describes instructional systems that can support constructive learning at a distance.
Limitations of Distance Learning Technologies
In an effort to supplement or replace live face-to-face instruction, technologically mediated distance learning has more often than not merely replicated the ineffective methods that limit learning in face-to-face classrooms (Turoff 1995). Too often, potentially interactive technologies are used to present one-way lectures to students in remote locations. However, we believe that the most valuable activity in a classroom of any kind is the opportunity for students to work and interact together and to build and become part of a community of scholars and practitioners (Selfe and Eilola 1989; Bates 1990; Seaton 1993; Nalley 1995). A good learning experience is one in which a student can "master new knowledge and skills, critically examine assumptions and beliefs, and engage in an invigorating, collaborative quest for wisdom and personal, holistic development" (Eastmond and Ziegahn 1995, 59). Technology used in distance education should facilitate these "good learning experiences" in an "extended classroom model" rather than broadcast teacher-centered lectures and demonstrations (Burge and Roberts 1993). A significant impediment to this goal is the fact that many teachers and instructional designers come to distance education from traditional backgrounds, bringing with them assumptions about teaching and learning that are not theory-based and do not translate well to technologically mediated instruction (Schieman, Taere, and McLaren 1992).
|
5978ca8e9fdf4b900b72871a0c1e6de28294dd08
| |
0c7b67dcf86af3eb2ca4c19a713ce615e17343ab
|
Based on the nomenclature of the early papers in the field, we propose a set of terminology which is both expressive and precise. More particularly, we define anonymity, unlinkability, unobservability, and pseudonymity (pseudonyms and digital pseudonyms, and their attributes). We hope that the adoption of this terminology might help to achieve better progress in the field by preventing each researcher from inventing a language of his/her own from scratch. Of course, each paper will need additional vocabulary, which might be added consistently to the terms defined here.
|
af5a56f7d392e7c0c720f8600a5a278d132114ca
|
This paper presents the results of a structured review of the rethinking project management (RPM) literature based on the classification and analysis of 74 contributions and in addition takes a critical look at this brave new world. Through the analysis, a total of 6 overarching categories emerged: contextualization, social and political aspects, rethinking practice, complexity and uncertainty, actuality of projects and broader conceptualization. These categories cover a broad range of different contributions with diverse and alternative perspectives on project management. The early RPM literature dates back to the 1980s, while the majority was published in 2006 onwards, and the research stream appears to be still active. A critical look at this brave new world exhibits the overall challenge for RPM to become much more diffused and accepted.
|
3000e77ed7282d9fb27216f3e862a3769119d89e
|
Cloud computing promises flexibility and high performance for users and high cost-efficiency for operators. Nevertheless, most cloud facilities operate at very low utilization, hurting both cost effectiveness and future scalability.
We present Quasar, a cluster management system that increases resource utilization while providing consistently high application performance. Quasar employs three techniques. First, it does not rely on resource reservations, which lead to underutilization as users do not necessarily understand workload dynamics and physical resource requirements of complex codebases. Instead, users express performance constraints for each workload, letting Quasar determine the right amount of resources to meet these constraints at any point. Second, Quasar uses classification techniques to quickly and accurately determine the impact of the amount of resources (scale-out and scale-up), type of resources, and interference on performance for each workload and dataset. Third, it uses the classification results to jointly perform resource allocation and assignment, quickly exploring the large space of options for an efficient way to pack workloads on available resources. Quasar monitors workload performance and adjusts resource allocation and assignment when needed. We evaluate Quasar over a wide range of workload scenarios, including combinations of distributed analytics frameworks and low-latency, stateful services, both on a local cluster and a cluster of dedicated EC2 servers. At steady state, Quasar improves resource utilization by 47% in the 200-server EC2 cluster, while meeting performance constraints for workloads of all types.
|
1c667ca4a83b3db5f7b8bbf8d8ee6e5c2da5c3b9
| |
1a2c6843b9e781f2f77e875f3d073ab686f6fae3
|
In distributed geospatial applications with heterogeneous databases, an ontology-driven approach to data integration relies on the alignment of the concepts of a global ontology that describe the domain, with the concepts of the ontologies that describe the data in the distributed databases. Once the alignment between the global ontology and each distributed ontology is established, agreements that encode a variety of mappings between concepts are derived. In this way, users can potentially query hundreds of geospatial databases using a single query. Using our approach, querying can be easily extended to new data sources and, therefore, to new regions. In this paper, we describe the AgreementMaker, a tool that displays the ontologies, supports several mapping layers visually, presents automatically generated mappings, and finally produces the agreements.
|
8d69c06d48b618a090dd19185aea7a13def894a5
| |
664a2c6bff5fb2708f30a116745fad9470ef317a
|
Principal component analysis (PCA) is a popular dimensionality reduction algorithm. However, it is not easy to interpret which of the original features are important based on the principal components. Recent methods improve interpretability by sparsifying PCA through adding an L1 regularizer. In this paper, we introduce a probabilistic formulation for sparse PCA. By presenting sparse PCA as a probabilistic Bayesian formulation, we gain the benefit of automatic model selection. We examine three different priors for achieving sparsification: (1) a two-level hierarchical prior equivalent to a Laplacian distribution and consequently to an L1 regularization, (2) an inverse-Gaussian prior, and (3) a Jeffreys prior. We learn these models by applying variational inference. Our experiments verify that indeed our sparse probabilistic model results in a sparse PCA solution.
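As a concrete instance of the first prior, the standard two-level construction (generic Bayesian-lasso notation, not necessarily the paper's exact parameterization) places a Gaussian on each weight with an exponentially distributed variance; marginalizing recovers a Laplace density, whose negative log is an L1 penalty:

```latex
w \mid \tau \sim \mathcal{N}(0, \tau), \qquad
\tau \sim \operatorname{Exp}\!\left(\tfrac{\lambda^{2}}{2}\right)
\;\Longrightarrow\;
p(w) = \int_{0}^{\infty} \mathcal{N}(w \mid 0, \tau)\, p(\tau)\, d\tau
     = \tfrac{\lambda}{2}\, e^{-\lambda \lvert w \rvert},
```

so that −log p(w) = λ|w| + const, i.e., an L1 regularizer.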
|
afde48d14d4b6783b6aef376a1bb4a47ffccc071
|
A system for quantifying the physiological features of emotional stress is being developed for use during a driving task. Two prototypes, using sensors that measure the driver's skin conductance, respiration, muscle activity, and heart activity, are presented. The first system allows sampling rates of 200 Hz on two fast channels and 20 Hz on six additional channels. It uses a wearable computer to do real-time processing on the signals and has an attached digital camera which was used to capture images of the driver's facial expression once every minute. The second system uses a car-based computer that allows a sampling rate of 1984 samples per second on eight channels. This system uses multiple video cameras to continuously capture the driver's facial expression and road conditions. The data is then synchronized with the physiological signals using a video quad-splitter. The methods for extracting physiological features in the driving environment are discussed, including measurement of the skin conductance orienting response, muscle activity, pulse, and respiration patterns. Preliminary studies show how using multiple modalities of sensors can help discriminate reactions to driving events and how an individual's response to similar driving conditions can vary from day to day.
|
0853c2a59d44fe97e0d21f89d80fa2f5a220e3b9
|
Traditional machine learning algorithms for pattern recognition just output simple predictions, without any associated confidence values. Confidence values are an indication of how likely each prediction is to be correct. In the ideal case, a confidence of 99% or higher for all examples in a set means that the percentage of erroneous predictions in that set will not exceed 1%. Knowing the likelihood of each prediction enables us to assess the extent to which we can rely on it. For this reason, predictions that are associated with some kind of confidence values are highly desirable in many risk-sensitive applications, such as those used for medical diagnosis or financial analysis. In fact, such information can benefit any application that requires human-computer interaction, as confidence values can be used to determine the way in which each prediction will be treated. For instance, a filtering mechanism can be employed so that only predictions which satisfy a certain level of confidence will be taken into account, while the rest can be discarded or passed on to a human for judgement. There are two main areas in mainstream machine learning that can be used in order to obtain some kind of confidence values: the Bayesian framework and the theory of Probably Approximately Correct learning (PAC theory). Quite often the Bayesian framework is used for producing algorithms that complement individual predictions with probabilistic measures of their quality. On the other hand, PAC theory can be used for producing upper bounds on the probability of error for a given algorithm with respect to some confidence level 1 − δ. Both of these approaches, however, have their drawbacks. In order to apply the Bayesian framework, one is required to have some prior knowledge about the distribution that generates the data. When the correct prior is known, Bayesian methods provide optimal decisions. For real-world data sets, though, as the required knowledge is not available, one has to assume the existence of an arbitrarily chosen prior. In this case, if the assumed prior is incorrect, the resulting confidence levels may also be “incorrect”; for example, the predictive regions output for the 95% confidence level may contain the true label in much less than 95% of the cases. This signifies a major failure, as we would expect confidence levels to bound the percentage of expected errors. An experimental demonstration of how misleading Bayesian methods can become when their assumptions are violated can be found in (Melluish et al., 2001).
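The distribution-free alternative this line of work leads to can be sketched very compactly: an inductive conformal predictor calibrates nonconformity scores on held-out data and outputs, for each test point, the set of labels whose p-value exceeds δ. The underlying classifier, data, and score are placeholders here; labels are assumed to be integers 0..K-1.

```python
# Minimal inductive conformal predictor: prediction sets that are valid
# at confidence level 1 - delta without any prior over distributions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def conformal_sets(X_train, y_train, X_cal, y_cal, X_test, delta=0.05):
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # Nonconformity score: one minus the probability of the true label.
    cal_scores = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]
    sets = []
    for probs in clf.predict_proba(X_test):
        labels = []
        for y, p in enumerate(probs):
            score = 1.0 - p
            # p-value: fraction of calibration scores at least this large.
            pval = (np.sum(cal_scores >= score) + 1) / (len(cal_scores) + 1)
            if pval > delta:        # keep labels plausible at level 1 - delta
                labels.append(y)
        sets.append(labels)
    return sets
```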
|
1ff107c3230c51ae3cc8e0f14dced3eaebea9a8e
|
An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: (1) Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key. (2) A message can be “signed” using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in “electronic mail” and “electronic funds transfer” systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two large secret prime numbers p and q. Decryption is similar; only a different, secret, power d is used, where e · d ≡ 1 (mod (p − 1) · (q − 1)). The security of the system rests in part on the difficulty of factoring the published divisor, n.
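The arithmetic above fits in a few lines. The toy example below uses deliberately tiny primes to keep the numbers readable; real deployments use primes hundreds of digits long, and the modular inverse via three-argument pow requires Python 3.8+.

```python
# Toy walk-through of the scheme (insecure key sizes, for illustration).
p, q = 61, 53
n = p * q                   # public modulus
phi = (p - 1) * (q - 1)
e = 17                      # public encryption exponent, coprime to phi
d = pow(e, -1, phi)         # secret exponent: e * d = 1 (mod phi)

M = 65                      # message represented as a number < n
C = pow(M, e, n)            # encrypt: C = M^e mod n
assert pow(C, d, n) == M    # decrypt: C^d mod n recovers M
# Signing reverses the roles: S = M^d mod n, verified as S^e mod n == M.
assert pow(pow(M, d, n), e, n) == M
```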
|
d21f261bf5a9d7333337031a3fa206eaf0c6082c
| |
6665e03447f989c9bdb3432d93e89b516b9d18a7
| |
90a6f53bf0eb10fe53f908419c9ac644b16d6065
| |
f67acaa10ad4a0eb7130cd1f0b953478056f32af
|
D. Kissinger, Millimeter-Wave Receiver Concepts for 77 GHz Automotive Radar in Silicon-Germanium Technology.
|
97a18d0c88d72bac9fbdfe9d19485ac37175177b
|
Designing a microstrip patch antenna with circular polarization (CP) and reduced size is a challenging and complex task. Here such an antenna is proposed and experimentally studied. Deploying the meandering technique together with shorting pins at the four corners of the patch reduces back radiation, achieves CP, and considerably reduces the antenna size. It also provides inductive and capacitive loading to the patch, which in turn controls the operating frequency. In this paper, a study is conducted with different shorting-strip structures: rectangle, U-shape, and meandering. Simulations were carried out in HFSS and the results for the different structures were compared. It is found that the meandering technique gives better size reduction than the other two because the strongest currents concentrate at the meandering shorting strips. In addition, the meandering technique provides a higher front-to-back ratio than the others.
|
70ca66188f98537ba9e38d87ee2e5c594ef4196d
|
This paper describes a novel frequency-modulated continuous-wave radar concept, where methods like nonuniform sparse antenna arrays and multiple-input multiple-output techniques are used to improve the angular resolution of the proposed system. To demonstrate the practical feasibility using standard production techniques, a prototype sensor using a novel four-channel single-chip radar transceiver in combination with differential patch antenna arrays was realized on off-the-shelf RF substrate. Furthermore, to demonstrate its practical applicability, the assembled system was tested in real world measurement scenarios in conjunction with the presented efficient signal processing algorithms.
|
8da84ea04a289d06d314be75898d9aa96cdf7b55
|
The continuing progress of Moore's law has enabled the development of radar systems that simultaneously transmit and receive multiple coded waveforms from multiple phase centers and to process them in ways that have been unavailable in the past. The signals available for processing from these multiple-input multiple-output (MIMO) radar systems appear as spatial samples corresponding to the convolution of the transmit and receive aperture phase centers. The samples provide the ability to excite and measure the channel that consists of the transmit/receive propagation paths, the target and incidental scattering or clutter. These signals may be processed and combined to form an adaptive coherent transmit beam, or to search an extended area with high resolution in a single dwell. Adaptively combining the received data provides the effect of adaptively controlling the transmit beamshape and the spatial extent provides improved track-while-scan accuracy. This paper describes the theory behind the improved surveillance radar performance and illustrates this with measurements from experimental MIMO radars.
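The spatial-convolution property described above has a compact numerical illustration: every transmit/receive pair contributes one virtual phase center at the sum of the two element positions. The element positions below are illustrative, not taken from the paper.

```python
# Virtual array of a MIMO radar: pairwise sums of TX and RX positions.
import numpy as np

tx = np.array([0.0, 2.0, 4.0, 6.0])   # TX phase centers (half-wavelength units)
rx = np.array([0.0, 0.5, 1.0, 1.5])   # RX phase centers

# 4 TX x 4 RX physical elements yield up to 16 virtual elements, here a
# filled, uniformly spaced 16-element aperture.
virtual = np.add.outer(tx, rx).ravel()
print(np.unique(virtual))
```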
|
df168c45654bf1d62b8e066e68be5ba1450a976a
|
In this paper we present methods for the design of planar frequency-modulated continuous-wave (FMCW) multiple-input multiple-output (MIMO) arrays with an emphasis on the problem of moving targets in time-division multiple-access (TDMA) systems. We discuss the influence of target motion and boundaries of operation and present a method to compensate for its effects, which requires special attention in the array design and in signal processing. Array design techniques, examples including an implementation, and measurement results are also covered in this article.
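One concrete consequence of target motion in a TDMA system is a progressive phase across transmit slots, which corrupts angle estimation if uncompensated. As a hedged sketch of the compensation idea (not the authors' exact processing chain): a target with radial velocity v accrues phase 2π·(2v/λ)·ΔT between successive transmit activations, and this phase ramp can be removed once v has been estimated, e.g. from Doppler processing.

```python
# Sketch: remove the motion-induced phase ramp across TDMA transmit slots.
import numpy as np

def compensate_tdma(channels, v, lam, dT):
    """channels: complex array, shape (n_tx, n_samples), one row per TX slot;
    v: estimated radial velocity (m/s); lam: wavelength (m);
    dT: time between successive TX activations (s)."""
    n_tx = channels.shape[0]
    slot_phase = 2 * np.pi * (2 * v / lam) * dT * np.arange(n_tx)
    # Conjugate-phase multiply per slot before angle estimation.
    return channels * np.exp(-1j * slot_phase)[:, None]
```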
|
1cd8ee3bfead2964a3e4cc375123bb594949aa0b
|
This paper proposes a new algorithmic framework, predictor-verifier training, to train neural networks that are verifiable, i.e., networks that provably satisfy some desired input-output properties. The key idea is to simultaneously train two networks: a predictor network that performs the task at hand, e.g., predicting labels given inputs, and a verifier network that computes a bound on how well the predictor satisfies the properties being verified. Both networks can be trained simultaneously to optimize a weighted combination of the standard data-fitting loss and a term that bounds the maximum violation of the property. Experiments show that not only is the predictor-verifier architecture able to train networks to achieve state-of-the-art verified robustness to adversarial examples with much shorter training times (outperforming previous algorithms on small datasets like MNIST and SVHN), but it can also be scaled to produce the first (to the best of our knowledge) verifiably robust networks for CIFAR-10.
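To make the shape of the objective concrete, here is a sketch in which the paper's learned verifier is replaced by a fixed stand-in: plain interval arithmetic through a two-layer ReLU network under an L-infinity perturbation. The weighted loss structure is as described above; the bound itself is an assumption-laden simplification.

```python
# Sketch of predictor-verifier-style training with an interval bound
# standing in for the learned verifier network.
import torch
import torch.nn.functional as F

def interval_worst_case_logits(W1, b1, W2, b2, x, y, eps):
    lo, hi = x - eps, x + eps
    mu, r = (lo + hi) / 2, (hi - lo) / 2
    # Affine layer: centers map through W, radii through |W|.
    mu, r = mu @ W1.T + b1, r @ W1.T.abs()
    lo, hi = torch.relu(mu - r), torch.relu(mu + r)   # ReLU is monotone
    mu, r = (lo + hi) / 2, (hi - lo) / 2
    mu, r = mu @ W2.T + b2, r @ W2.T.abs()
    lo, hi = mu - r, mu + r
    # Worst case: true-class logit at its lower bound, others at upper.
    worst = hi.clone()
    idx = torch.arange(len(y))
    worst[idx, y] = lo[idx, y]
    return worst

def predictor_verifier_loss(logits, worst_logits, y, kappa=0.5):
    # Weighted combination of the data-fitting loss and the bound loss.
    return (1 - kappa) * F.cross_entropy(logits, y) \
           + kappa * F.cross_entropy(worst_logits, y)
```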
|
7a2fc025463d03b17a1d0fa4941b00db3ce71f26
|
We propose and compare methods for gradient-based domain adaptation of self-attentive neural machine translation models. We demonstrate that a large proportion of model parameters can be frozen during adaptation with minimal or no reduction in translation quality by encouraging structured sparsity in the set of offset tensors during learning via group lasso regularization. We evaluate this technique for both batch and incremental adaptation across multiple data sets and language pairs. Our system architecture—combining a state-of-the-art self-attentive model with compact domain adaptation—provides high quality personalized machine translation that is both space and time efficient.
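The group lasso regularizer has a very small footprint in code: each offset tensor forms one group penalized by its L2 norm, so whole tensors are driven to zero and the corresponding base parameters can stay frozen. The sketch below is schematic; `offsets` is a hypothetical list of per-layer offset parameters, not the paper's variable names.

```python
# Group lasso over domain-adaptation offset tensors (schematic sketch).
import torch

def group_lasso_penalty(offsets, lam=1e-3):
    # One group per offset tensor; the L2 norm zeroes out whole groups.
    return lam * sum(t.norm(p=2) for t in offsets)

# During adaptation the total objective would be, schematically:
#   loss = translation_loss + group_lasso_penalty(offsets)
# Groups whose norm collapses to ~0 are pruned, keeping the base frozen.
```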
|
5324ba064dc1656dd51c04122c2c802ef9ec28ce
|
Recommender systems traditionally assume that user profiles and movie attributes are static. Temporal dynamics are purely reactive, that is, they are inferred after they are observed, e.g. after a user's taste has changed or based on hand-engineered temporal bias corrections for movies. We propose Recurrent Recommender Networks (RRN) that are able to predict future behavioral trajectories. This is achieved by endowing both users and movies with a Long Short-Term Memory (LSTM) autoregressive model that captures dynamics, in addition to a more traditional low-rank factorization. On multiple real-world datasets, our model offers excellent prediction accuracy and it is very compact, since we need not learn latent state but rather just the state transition function.
|
3e090dac6019963715df50dc23d830d97a0e25ba
|
Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus.
|
652d159bf64a70194127722d19841daa99a69b64
|
This paper shows how Long Short-term Memory recurrent neural networks can be used to generate complex sequences with long-range structure, simply by predicting one data point at a time. The approach is demonstrated for text (where the data are discrete) and online handwriting (where the data are real-valued). It is then extended to handwriting synthesis by allowing the network to condition its predictions on a text sequence. The resulting system is able to generate highly realistic cursive handwriting in a wide variety of styles.
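The one-data-point-at-a-time generation loop described above is simple enough to sketch. The `model.step` API below is hypothetical shorthand for a trained LSTM that returns a predictive distribution and updated recurrent state; the temperature knob is a common sampling heuristic, not necessarily the paper's.

```python
# Schematic autoregressive sampling loop: predict, sample, feed back.
import numpy as np

def generate(model, start_symbol, steps, temperature=1.0):
    x, h, out = start_symbol, None, [start_symbol]
    for _ in range(steps):
        probs, h = model.step(x, h)        # hypothetical one-step API
        probs = probs ** (1.0 / temperature)
        probs /= probs.sum()               # re-normalize after scaling
        x = np.random.choice(len(probs), p=probs)
        out.append(x)
    return out
```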
|
2d208d551ff9000ca189034fa683edb826f4c941
|
We consider the problem of semi-supervised learning to extract categories (e.g., academic fields, athletes) and relations (e.g., PlaysSport(athlete, sport)) from web pages, starting with a handful of labeled training examples of each category or relation, plus hundreds of millions of unlabeled web documents. Semi-supervised training using only a few labeled examples is typically unreliable because the learning task is underconstrained. This paper pursues the thesis that much greater accuracy can be achieved by further constraining the learning task, by coupling the semi-supervised training of many extractors for different categories and relations. We characterize several ways in which the training of category and relation extractors can be coupled, and present experimental results demonstrating significantly improved accuracy as a result.
|
52aa38ffa5011d84cb8aae9f1112ce53343bf32c
|
We analyse the performance of several clustering algorithms in the digital peer-to-peer currency Bitcoin. Clustering in Bitcoin refers to the task of finding addresses that belong to the same wallet as a given address. In order to assess the effectiveness of clustering strategies we exploit a vulnerability in the implementation of Connection Bloom Filtering to capture ground truth data about 37,585 Bitcoin wallets and the addresses they own. In addition to well-known clustering techniques, we introduce two new strategies, apply them on addresses of the collected wallets and evaluate precision and recall using the ground truth. Due to the nature of the Connection Bloom Filtering vulnerability the data we collect is not without errors. We present a method to correct the performance metrics in the presence of such inaccuracies. Our results demonstrate that even modern wallet software cannot protect its users properly. Even with the most basic clustering technique, known as the multi-input heuristic, an adversary can guess on average 68.59% of a victim's addresses. We show that this metric can be further improved by combining several more sophisticated heuristics.
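The multi-input heuristic reduces to union-find: all addresses that spend inputs in the same transaction are assumed to belong to one wallet. A minimal sketch, assuming transactions are given as lists of input addresses:

```python
# Multi-input heuristic as union-find over per-transaction input lists.
class DSU:
    def __init__(self):
        self.parent = {}

    def find(self, a):
        self.parent.setdefault(a, a)
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster(transactions):
    """transactions: iterable of lists of input addresses."""
    dsu = DSU()
    for inputs in transactions:
        for addr in inputs[1:]:
            dsu.union(inputs[0], addr)
    return dsu

clusters = cluster([["a1", "a2"], ["a2", "a3"], ["a4"]])
assert clusters.find("a1") == clusters.find("a3")  # a1, a2, a3: one wallet
```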
|
f824415989a7863a37e581fdeec2f1d9f4d54f62
| |
4abdf7f981612216de354f3dc6ed2b07b5e9f114
|
This paper investigates a planar monopole antenna for fifth generation (5G) wireless communication networks. The proposed antenna has an ultra-wide band impedance response in the millimeter wave (mmW) spectrum, 25–39 GHz, covering the Ka band. The antenna has a unique structural layout resembling a hexagonal honeycomb and a low profile (8×7 mm²) on a 0.254 mm thick Rogers substrate, enabling the design to be incorporated into future mobile phones. This antenna provides a peak gain of 4.15 dBi along with 90% efficiency in the working band. The design is also extended to an 8×1 element array presenting a maximum gain of 12.7 dBi at the central frequency of the antenna.
|
958340c7ccd205ed7670693fa9519f9c140e372d
|
Recently, there has been a flurry of industrial activity around logo recognition, such as Ditto’s service for marketers to track their brands in user-generated images, and LogoGrab’s mobile app platform for logo recognition. However, relatively little academic or open-source logo recognition progress has been made in the last four years. Meanwhile, deep convolutional neural networks (DCNNs) have revolutionized a broad range of object recognition applications. In this work, we apply DCNNs to logo recognition. We propose several DCNN architectures, with which we surpass published state-of-the-art accuracy on a popular logo recognition dataset.
|
087337fdad69caaab8ebd8ae68a731c5bf2e8b14
|
Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional networks achieve improved segmentation of PASCAL VOC (30% relative improvement to 67.2% mean IU on 2012), NYUDv2, SIFT Flow, and PASCAL-Context, while inference takes one tenth of a second for a typical image.
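The skip architecture has a compact core: score a deep, coarse layer and a shallow, fine layer with 1x1 convolutions, upsample the coarse scores, and sum. The module below is a hedged sketch of that fusion step, not the paper's released code; channel counts and layer choices are assumptions.

```python
# Skip fusion in the spirit of FCNs: combine "what" (deep, semantic)
# with "where" (shallow, fine-grained) before final upsampling.
import torch.nn.functional as F
from torch import nn

class SkipFusion(nn.Module):
    def __init__(self, deep_ch, shallow_ch, n_classes):
        super().__init__()
        self.score_deep = nn.Conv2d(deep_ch, n_classes, 1)      # 1x1 scoring
        self.score_shallow = nn.Conv2d(shallow_ch, n_classes, 1)

    def forward(self, deep_feat, shallow_feat, out_size):
        coarse = self.score_deep(deep_feat)
        fine = self.score_shallow(shallow_feat)
        # Upsample coarse scores to the shallow layer's resolution and sum.
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:],
                                  mode="bilinear", align_corners=False)
        fused = coarse_up + fine
        return F.interpolate(fused, size=out_size, mode="bilinear",
                             align_corners=False)
```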
|
08a4fa5caead14285131f6863b6cd692540ea59a
|
In practice, there are often explicit constraints on what representations or decisions are acceptable in an application of machine learning. For example, it may be a legal requirement that a decision must not favour a particular group. Alternatively, it may be required that the representation of the data contain no identifying information. We address these two related issues by learning flexible representations that minimize the capability of an adversarial critic. This adversary tries to predict the relevant sensitive variable from the representation, so minimizing the performance of the adversary ensures that there is little or no information in the representation about the sensitive variable. We demonstrate this adversarial approach on two problems: making decisions free from discrimination and removing private information from images. We formulate the adversarial model as a minimax problem, and optimize that minimax objective using a stochastic gradient alternate min-max optimizer. We demonstrate the ability to provide discrimination-free representations for standard test problems, and compare with previous state-of-the-art methods for fairness, showing statistically significant improvement across most cases. The flexibility of this method is shown via a novel problem: removing annotations from images, from separate training examples of annotated and unannotated images, and with no a priori knowledge of the form of annotation provided to the model.
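The alternating min-max update can be sketched in one training step. This is schematic rather than the authors' code: `encoder`, `predictor`, and `critic` are placeholder modules, and `opt_model` is assumed to cover only the encoder and predictor parameters (never the critic's).

```python
# One alternating step of the minimax objective: the critic maximizes
# its ability to recover the sensitive variable s; the model minimizes
# task loss while defeating the critic.
import torch.nn.functional as F

def train_step(encoder, predictor, critic, opt_model, opt_critic,
               x, y, s, lam=1.0):
    # Max step: critic learns s from the (detached) representation.
    opt_critic.zero_grad()
    critic_loss = F.cross_entropy(critic(encoder(x).detach()), s)
    critic_loss.backward()
    opt_critic.step()

    # Min step: solve the task while *raising* the critic's loss, which
    # strips information about s from the representation.
    opt_model.zero_grad()
    z = encoder(x)
    loss = F.cross_entropy(predictor(z), y) - lam * F.cross_entropy(critic(z), s)
    loss.backward()
    opt_model.step()
```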
|
d7805eee3daef814140001a6c59fda004266b3c8
| |
988c10748a66429dda79d02bc5eb57c64f9768fb
|
Conversational machine comprehension requires a deep understanding of the conversation history. To enable traditional, single-turn models to encode the history comprehensively, we introduce FLOW, a mechanism that can incorporate intermediate representations generated during the process of answering previous questions, through an alternating parallel processing structure. Compared to shallow approaches that concatenate previous questions/answers as input, FLOW integrates the latent semantics of the conversation history more deeply. Our model, FLOWQA, shows superior performance on two recently proposed conversational challenges (+7.2% F1 on CoQA and +4.0% on QuAC). The effectiveness of FLOW also shows in other tasks. By reducing sequential instruction understanding to conversational machine comprehension, FLOWQA outperforms the best models on all three domains in SCONE, with +1.8% to +4.4% improvement in accuracy.
|
31181e73befea410e25de462eccd0e74ba8fea0b
| |
0e6f5abd7e4738b765cd48f4c272093ecb5fd0bc
| |
0501336bc04470489529b4928c5b6ba0f1bdf5f2
|
Modern search engines are good enough to answer popular commercial queries with mainly highly relevant documents. However, our experiments show that user behavior on such relevant commercial sites may differ from one website to another even when the sites carry the same relevance label. Thus search engines face the challenge of ranking results that are equally relevant from the perspective of the traditional relevance grading approach. To solve this problem we propose to consider additional facets of relevance, such as trustability, usability, design quality and the quality of service. In order to let a ranking algorithm take these facets into account, we propose a number of features capturing the quality of a web page along the proposed dimensions. We aggregate the new facets into a single label, commercial relevance, that represents the cumulative quality of the site. We extrapolate commercial relevance labels for the entire learning-to-rank dataset and use a weighted sum of commercial and topical relevance instead of the default relevance labels. For evaluating our method we created new DCG-like metrics and conducted off-line evaluation as well as on-line interleaving experiments, demonstrating that a ranking algorithm taking the proposed facets of relevance into account is better aligned with user preferences.
|
4a87972b28143b61942a0eb011b60f76be0ebf2e
|
Many important problems in computational sciences, social network analysis, security, and business analytics, are data-intensive and lend themselves to graph-theoretical analyses. In this paper we investigate the challenges involved in exploring very large graphs by designing a breadth-first search (BFS) algorithm for advanced multi-core processors that are likely to become the building blocks of future exascale systems. Our new methodology for large-scale graph analytics combines a high-level algorithmic design that captures the machine-independent aspects, to guarantee portability with performance to future processors, with an implementation that embeds processor-specific optimizations. We present an experimental study that uses state-of-the-art Intel Nehalem EP and EX processors and up to 64 threads in a single system. Our performance on several benchmark problems representative of the power-law graphs found in real-world problems reaches processing rates that are competitive with supercomputing results in the recent literature. In the experimental evaluation we show that our graph exploration algorithm running on a 4-socket Nehalem EX is (1) 2.4 times faster than a Cray XMT with 128 processors when exploring a random graph with 64 million vertices and 512 million edges, (2) capable of processing 550 million edges per second with an R-MAT graph with 200 million vertices and 1 billion edges, comparable to the performance of a similar graph on a Cray MTA-2 with 40 processors and (3) 5 times faster than 256 BlueGene/L processors on a graph with average degree 50.
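The structure being parallelized is level-synchronous BFS: the frontier expands one level per iteration, with a barrier between levels. The plain, sequential sketch below shows that skeleton; the paper's contribution lies in parallelizing the per-level edge scans with processor-specific optimizations, which this sketch omits.

```python
# Level-synchronous BFS skeleton (sequential sketch of the parallel kernel).
def bfs_levels(adj, source):
    """adj: dict mapping vertex -> list of neighbors."""
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:          # each frontier vertex ...
            for v in adj[u]:        # ... scans its edges once per level
                if v not in level:
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier    # implicit barrier between levels
    return level
```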
|
50ac4c9c4409438719bcb8b1bb9e5d1a0dbedb70
|
Unicorn is an online, in-memory social graph-aware indexing system designed to search trillions of edges between tens of billions of users and entities on thousands of commodity servers. Unicorn is based on standard concepts in information retrieval, but it includes features to promote results with good social proximity. It also supports queries that require multiple round-trips to leaves in order to retrieve objects that are more than one edge away from source nodes. Unicorn is designed to answer billions of queries per day at latencies in the hundreds of milliseconds, and it serves as an infrastructural building block for Facebook’s Graph Search product. In this paper, we describe the data model and query language supported by Unicorn. We also describe its evolution as it became the primary backend for Facebook’s search offerings.
|
94c817e196e71c03b3425f905ebd1793dc6469c2
|
The analysis of large graphs plays a prominent role in various fields of research and is relevant in many important application areas. Effective visual analysis of graphs requires appropriate visual presentations in combination with respective user interaction facilities and algorithmic graph analysis methods. How to design appropriate graph analysis systems depends on many factors, including the type of graph describing the data, the analytical task at hand, and the applicability of graph analysis methods. The most recent surveys of graph visualization and navigation techniques were presented by Herman et al. [HMM00] and Diaz [DPS02]. The first work surveyed the main techniques for visualization of hierarchies and graphs in general that had been introduced until 2000. The second work concentrated on graph layouts introduced until 2002. Recently, new techniques have been developed covering a broader range of graph types, such as time-varying graphs. Also, in accordance with ever growing amounts of graph-structured data becoming available, the inclusion of algorithmic graph analysis and interaction techniques becomes increasingly important. In this State-of-the-Art Report, we survey available techniques for the visual analysis of large graphs. Our review firstly considers graph visualization techniques according to the type of graphs supported. The visualization techniques form the basis for the presentation of interaction approaches suitable for visual graph exploration. As an important component of visual graph analysis, we discuss various graph algorithmic aspects useful for the different stages of the visual graph analysis process.
|
2748dc51ba8dd9d2a7899caadbef2e3269b8b0b9
|
We present a framework for autonomous driving which can learn from human demonstrations, and we apply it to the longitudinal control of an autonomous car. Offline, we model car-following strategies from a set of example driving sequences. Online, the model is used to compute accelerations which replicate what a human driver would do in the same situation. This reference acceleration is tracked by a predictive controller which enforces a set of comfort and safety constraints before applying the final acceleration. The controller is designed to be robust to the uncertainty in the predicted motion of the preceding vehicle. In addition, we estimate the confidence of the driver model predictions and use it in the cost function of the predictive controller. As a result, we can handle cases where the training data used to learn the driver model does not provide sufficient information about how a human driver would handle the current driving situation. The approach is validated using a combination of simulations and experiments on our autonomous vehicle.
|
42a6ae6827f8cc92e15191e53605b0aa4f875fb9
|
Software testing is all too often simply a bug hunt rather than a well-considered exercise in ensuring quality. A more methodical approach than a simple cycle of system-level test-fail-patch-test will be required to deploy safe autonomous vehicles at scale. The ISO 26262 development V process sets up a framework that ties each type of testing to a corresponding design or requirement document, but presents challenges when adapted to deal with the sorts of novel testing problems that face autonomous vehicles. This paper identifies five major challenge areas in testing according to the V model for autonomous vehicles: driver out of the loop, complex requirements, non-deterministic algorithms, inductive learning algorithms, and fail-operational systems. General solution approaches that seem promising across these different challenge areas include: phased deployment using successively relaxed operational scenarios, use of a monitor/actuator pair architecture to separate the most complex autonomy functions from simpler safety functions, and fault injection as a way to perform more efficient edge case testing. While significant challenges remain in safety-certifying the type of algorithms that provide high-level autonomy themselves, it seems within reach to instead architect the system and its accompanying design process to be able to employ existing software safety approaches.
|
64c83def2889146beb7ca2dddee2dae21d9ca6de
|
We study an algorithm that allows a vehicle to autonomously change lanes in a safe but personalized fashion without the driver's explicit initiation (e.g. activating the turn signals). Lane change initiation in autonomous driving is typically based on subjective rules, functions of the positions and relative velocities of surrounding vehicles. This approach is often arbitrary, and not easily adapted to the driving style preferences of an individual driver. Here we propose a data-driven modeling approach to capture the lane change decision behavior of human drivers. We collect data with a test vehicle in typical lane change situations and train classifiers to predict the instant of lane change initiation with respect to the preferences of a particular driver. We integrate this decision logic into a model predictive control (MPC) framework to create a more personalized autonomous lane change experience that satisfies safety and comfort constraints. We show the ability of the decision logic to reproduce and differentiate between two lane changing styles, and demonstrate the safety and effectiveness of the control framework through simulations.
|
2087c23fbc7890c1b27fe3f2914299cc0693306e
|
Neural net advances improve computers' language ability in many fields.
|
680f268973fc8efd775a6bfe08487ee1c3cb9e61
|
We explore the impact on employee attitudes of their perceptions of how others outside the organization are treated (i.e., corporate social responsibility) above and beyond the impact of how employees are directly treated by the organization. Results of a study of 827 employees in eighteen organizations show that employee perceptions of corporate social responsibility (CSR) are positively related to (a) organizational commitment with the relationship being partially mediated by work meaningfulness and perceived organizational support (POS) and (b) job satisfaction with work meaningfulness partially mediating the relationship but not POS. Moreover, in order to address limited micro-level research in CSR, we develop a measure of employee perceptions of CSR through four pilot studies. Employing a bifactor model, we find that social responsibility has an additional effect on employee attitudes beyond environmental responsibility, which we posit is due to the relational component of social responsibility (e.g., relationships with community).
|
9406ee01e3fda0932168f31cd3835a7d7a943fc6
| |
2402066417256a70d7bf36ee163af5eba0aed211
|
The natural language generation (NLG) component of a spoken dialogue system (SDS) usually needs a substantial amount of handcrafting or a well-labeled dataset to be trained on. These limitations add significantly to development costs and make cross-domain, multi-lingual dialogue systems intractable. Moreover, human languages are context-aware. The most natural response should be directly learned from data rather than depending on predefined syntaxes or rules. This paper presents a statistical language generator based on a joint recurrent and convolutional neural network structure which can be trained on dialogue act-utterance pairs without any semantic alignments or predefined grammar trees. Objective metrics suggest that this new model outperforms previous methods under the same experimental conditions. Results of an evaluation by human judges indicate that it produces not only high quality but linguistically varied utterances which are preferred compared to n-gram and rule-based systems.
|
d781b74cf002f9fffcb7f60c3c319c41797d702e
|
In aquaculture, the yields (shrimp, fish, etc.) depend on the water characteristics of the aquaculture pond. To maximize fish yields, several water parameters must be kept at optimal levels. These parameters can vary considerably over the course of a day and can change rapidly with external environmental conditions, so they must be monitored at high frequency. Wireless sensor networks are used to monitor aqua farms for the relevant parameters. The system consists of two modules, a transmitter station and a receiver station; data is transmitted via GSM to a database at the receiver station. A graphical user interface was designed to convey the data as messages to the farmers' mobile phones in their respective local languages and to alert them to unhygienic environmental conditions, so that suitable actions can be taken. Keywords: aquaculture; wireless sensor networks; IAR-Kick; pH.
|
6fb3940ddd658e549a111870f10ca77ba3c4cf37
|
We introduce a simple baseline for action localization on the AVA dataset. The model builds upon the Faster R-CNN bounding box detection framework, adapted to operate on pure spatiotemporal features – in our case produced exclusively by an I3D model pretrained on Kinetics. This model obtains 21.9% average AP on the validation set of AVA v2.1, up from 14.5% for the best RGB spatiotemporal model used in the original AVA paper (which was pretrained on Kinetics and ImageNet), and up from 11.3% of the publicly available baseline using a ResNet101 image feature extractor, that was pretrained on ImageNet. Our final model obtains 22.8%/21.9% mAP on the val/test sets and outperforms all submissions to the AVA challenge at CVPR 2018.
|
2060441ed47f6cee9bab6c6597a7709836691da3
|
The ℓ1-regularized maximum likelihood estimation problem has recently become a topic of great interest within the machine learning, statistics, and optimization communities as a method for producing sparse inverse covariance estimators. In this paper, a proximal gradient method (G-ISTA) for performing ℓ1-regularized covariance matrix estimation is presented. Although numerous algorithms have been proposed for solving this problem, this simple proximal gradient method is found to have attractive theoretical and numerical properties. G-ISTA has a linear rate of convergence, resulting in an O(log(1/ε)) iteration complexity to reach a tolerance of ε. This paper gives eigenvalue bounds for the G-ISTA iterates, providing a closed-form linear convergence rate. The rate is shown to be closely related to the condition number of the optimal point. Numerical convergence results and timing comparisons for the proposed method are presented. G-ISTA is shown to perform very well, especially when the optimal point is well-conditioned.
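A single iteration of the method is short: for the objective -log det(X) + tr(SX) + ρ‖X‖₁, the smooth part has gradient S − X⁻¹, and the proximal operator of the ℓ1 term is elementwise soft-thresholding. The sketch below shows that step only; the step-size backtracking that keeps the iterate positive definite, central to the analysis, is omitted here.

```python
# One proximal gradient (ISTA) step for sparse inverse covariance
# estimation: gradient step on the smooth part, then soft-thresholding.
import numpy as np

def soft_threshold(A, tau):
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def gista_step(X, S, rho, step):
    grad = S - np.linalg.inv(X)     # gradient of -log det(X) + tr(S X)
    return soft_threshold(X - step * grad, step * rho)
```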
|
4a20823dd4ce6003e31f7d4e0649fe8c719926f2
|
To elucidate gene function on a global scale, we identified pairs of genes that are coexpressed over 3182 DNA microarrays from humans, flies, worms, and yeast. We found 22,163 such coexpression relationships, each of which has been conserved across evolution. This conservation implies that the coexpression of these gene pairs confers a selective advantage and therefore that these genes are functionally related. Many of these relationships provide strong evidence for the involvement of new genes in core biological functions such as the cell cycle, secretion, and protein expression. We experimentally confirmed the predictions implied by some of these links and identified cell proliferation functions for several genes. By assembling these links into a gene-coexpression network, we found several components that were animal-specific as well as interrelationships between newly evolved and ancient modules.
|
25c760c11c7803b2aefd6b6ae36f15908f76b544
|
We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: It solves a 1000-node problem ( approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
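A readily available implementation of this estimator ships with scikit-learn (not the authors' original code); the usage sketch below shows the key output, a sparse precision matrix whose zeros correspond to absent edges. The data here is a toy placeholder, not the cell-signaling set.

```python
# Graphical lasso via scikit-learn: sparse inverse covariance -> graph.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))          # toy data, not the proteomics set
model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_            # estimated sparse inverse covariance
# Nonzero off-diagonal entries define the edges of the estimated graph.
edges = np.argwhere(np.triu(np.abs(precision) > 1e-8, k=1))
print(f"{len(edges)} edges in the estimated graph")
```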
|
256f63cba7ede2a58d56a089122466bc35ce6abf
|
This paper proposes a new framework for semantic segmentation of objects in videos. We address the label inconsistency problem of deep convolutional neural networks (DCNNs) by exploiting the fact that videos have multiple frames; in a few frames the object is confidently-estimated (CE) and we use the information in them to improve labels of the other frames. Given the semantic segmentation results of each frame obtained from DCNN, we sample several CE frames to adapt the DCNN model to the input video by focusing on specific instances in the video rather than general objects in various circumstances. We propose offline and online approaches under different supervision levels. In experiments our method achieved great improvement over the original model and previous state-of-the-art methods.
|
da411a876b4037434e4f47f7d14f0fca1ca0cad8
| |
127a818c2ba1bbafbabc62d4163b0dd98364f64a
|
This paper proposes a near-field communication (NFC) antenna solution for metal-cover smartphone applications. In this NFC antenna solution, a narrow slot is initially loaded into the metal cover, and the position of this slot can be altered flexibly according to the design of the smartphone’s external appearance. Next, an unconventional six-turn coil (with a six-sided irregular hexagonal shape) is designed that has a nonuniform linewidth and a nonuniform line gap between two lines, and it is partially loaded with a rectangular ferrite composite. In this design, enhanced magnetic lines of force can be realized in certain specific locations, and an excellent inductively coupled near-field receiver is achieved. Notably, the proposed NFC antenna passes the tests required for NFC Forum certification, and its performance is comparable with that of a traditional NFC antenna with a nonmetallic cover.
|
3786308bf65cde7e5c0b320ab6cc01a8ab0abfff
|
A novel structure for a near-field communication (NFC) antenna design for a tablet PC is proposed. This tablet PC has a narrow border and a full metallic back-cover. A miniaturized loop antenna design is achieved by attaching ferrite sheets on both sides of the loop antenna. The ferrite sheets reduce eddy currents induced on the adjacent metallic back-cover by the loop antenna, improving the communication range of the NFC. Only the edge of the tablet PC allows the antenna to radiate, due to the full metallic back-cover, so the NFC antenna needs to be narrow enough to be installed on the edge of the tablet PC. Therefore, we propose a miniaturized NFC antenna with dimensions of only 41.5 (L) × 7.5 (W) × 0.45 (T) mm³. Simulated magnetic field distributions are consistent with measured voltage distributions. This design has a good communication range of more than 6 cm in front of the touchscreen panel, reaching 2 cm on the other side above the metal back-cover.
|