Performance of statistical control tools in identifying defects on plungers - A review

In this paper, we carried out a study to improve the quality of the plungers used in braking systems. The main focus is on the various defects that occur in plungers during manufacturing. In this work, we analyzed the types of defects that arise in the plunger. The foremost issues are identified using tools such as the Pareto chart, cause and effect diagram, Failure Mode and Effects Analysis, trend analysis and the control chart. We identified the most frequently occurring defects and the major causes and effects of these defects in plungers. Based on the results of the analysis, corrective measures are taken to overcome the defects, which will help to improve the overall efficiency of the brake system in vehicles.

Introduction
The plunger plays a vital role in the braking system and is a crucial part of the hydraulic braking system, so plungers are required to have high accuracy and quality. Even very small defects in the plunger may lead to a major accident, so it is very important to improve the quality of the plungers. In general, the brake system of a vehicle is operated to slow down or stop the vehicle while it is running. The brake pedal is connected to a plunger, which forces hydraulic oil through a sequence of tubes and hoses to the braking components. The piston forces the brake pads against the brake disc bound to the wheel, so that the wheel slows down or stops. In this research study, the researchers attempted to identify and analyze the various types of defects that occur in the plunger during manufacturing and the root causes of these defects. The investigation of defects in plungers is based on continuous improvement.
This requires coordination among the members of the organization and the effective use of quality tools in improvement activities and the decision-making process. Many different tools are currently available, each used for a specific purpose, and they differ in format: statistical, analytical and clerical. The use of statistical quality control tools has increased at both the practical and theoretical levels. The analytical data are further used to trace the causes of defects using DMAIC, cause and effect analysis and the Pareto chart. Today, business firms work at the pace of iterative development, recording feedback from each iteration for further development; this is supported by software-based quality tools such as Pareto analysis and Six Sigma, with feedback gathered from customers used for further rectification.

Plunger
The plunger is a component inside the wheel cylinder of the braking system. The brake pedal forces the plunger within the wheel cylinder, which transports fluid to all portions of the wheel. Many researchers have indicated that a component performs better at low pressure and temperature, so these parameters need to be optimized.

Defects Identified in Plungers
The defects identified on the plunger during manufacturing are classified as: barrel diameter damaged or wrong size, collar diameter damaged or wrong size, general damage, drill depth over- or undersize, drill diameter oversize, overall diameter wrong size, seal groove diameter oversize, seal diameter poor finish, seal face chatter mark, short feed, stem diameter oversize, line marks, nick marks, wrong indexing and runout. Ion implantation has been proposed to avoid deterioration of the plunger.
Causes of Defects
According to our research and the data collected from the literature, there are several major causes of defects that affect the quality of the plunger during manufacturing and may affect its performance. The following rejection causes were identified in the first operation, arising from primary faults of the machine or the operator: material rejection, end-bit rejection, power cut, process, tryout, operator, setting and tool breakage. Defects due to heat are most common in engines and affect engine performance as well.

Research Design
In this study, the researchers used an analytical type of research. They collected secondary data on defect variance in the brake-system plunger, such as collar diameter oversize, barrel diameter oversize, collar diameter undersize, drill depth oversize, drill depth undersize, conveyor damage, etc. The root causes of the defects are analyzed and effective measures are taken to eliminate their occurrence, which helps to improve the braking system used in automobiles. The research design needs a novel approach for improving the wear-rate performance of the plunger. The data were collected from departments such as Quality Control, Production, Manufacturing Engineering and Materials in industry, which were taken into account for sampling the process. The researchers collected sample data on defects of brake plungers over a period of one month.

Limitations
Bearing in mind the time constraints for carrying out the research, only post-machining process defects are considered in this research study. Defects that occur in the machining process will be considered and analyzed in a future study using statistical control tools.

Fault investigation through trend chart
Trend analysis is the widespread practice of gathering information in sequence to spot a pattern.
Although trend analysis is frequently used to forecast upcoming events, it also shows the direction of future results. The trend analysis shows that the rejection rate varies continuously over time.

Pareto Analysis
A Pareto chart is a systematic technique for segregating issues and eradicating the factors affecting quality. The Pareto chart makes visible the parent defect that initiates other sub-defects, and it prioritizes defects by their relative consequence. The analysis separates the vital few from the trivial many defects of plunger manufacturing.

Cause and Effect Analysis
The cause and effect diagram is used to discover all the potential or actual causes, which are characterized by their effects. It is also known as the fishbone or Ishikawa diagram. The major causes are identified and ordered according to their level of importance. The chart lays out the assortment of causes for the occurrence of damage in the plunger; the foremost causes of damage are human errors.

Failure Modes and Effects Analysis (FMEA)
Failure Mode and Effects Analysis is an approach for performing risk analysis; it is a sequential process to categorize all possible failures. This approach is applied as a continuous process. It is used to describe, recognize and eliminate existing problems and probable future failures before they reach the customer. A risk factor is calculated for every process and rated from 1 (lowest risk) to 10 (highest risk). The FMEA table identifies the probable effects of the failures, their causes and their potential significance. By prioritizing the failures with the highest Risk Priority Number (RPN), actions can be decided upon to reduce risk.

Control Chart
The control chart is a graph that shows the variation in each process over time.
In a control chart, the midline is the average, the upper line is the upper control limit and the lower line is the lower control limit. By comparing the live data to these lines, we can determine whether the process is in control or not. Statistical process control is used to follow good quality control practices in the manufacturing process and to detect unnatural patterns and deviations in the process. Analysis of the various defects in plungers using statistical quality control tools helps to identify the frequency of occurrence of defects, their causes and their impacts. The interpretation of the statistical analysis helps in taking effective measures to control and eliminate the causes of the various defects in plungers during manufacturing. This in turn helps companies to reduce the rejection rates of plungers and the cost of quality; by reducing rejection rates, rework time and rework costs can be minimized. The plunger stress analysis over time is shown in figure 9, where the design was improved based on ion implantation.

Conclusion
From the above study of defects in plunger manufacturing, the researchers have identified the defects occurring most frequently and their causes. In this analytical type of research, the Pareto chart is used to identify and separate the vital few from the trivial many defects.
Trend analysis indicates that the rejection rate varies continuously over time. The cause and effect diagram is then used to recognize the mixture of causes for the occurrence of the major defect and highlights the errors produced while manufacturing the plunger. Finally, the FMEA tool is used to analyze the various effects caused by the defects and also shows the prevention and detection measures for each failure. (VCADPCA 2020, IOP Conf. Series: Materials Science and Engineering 906 012012, IOP Publishing, doi:10.1088/1757-899X/906/1/012012)
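As an illustration of the control chart described above, the sketch below computes Shewhart-style control limits from the mean and standard deviation of a series of daily rejection counts; the counts, and the conventional choice of 3-sigma limits, are illustrative assumptions rather than data or formulas from the paper.

```python
# Sketch of 3-sigma control limits for daily plunger rejection counts.
# The rejection counts below are illustrative, not data from the paper.
rejections = [12, 15, 9, 14, 22, 11, 13, 10, 16, 12]

n = len(rejections)
mean = sum(rejections) / n
# sample standard deviation
variance = sum((x - mean) ** 2 for x in rejections) / (n - 1)
sigma = variance ** 0.5

ucl = mean + 3 * sigma            # upper control limit
lcl = max(0.0, mean - 3 * sigma)  # lower control limit (counts cannot be negative)

out_of_control = [x for x in rejections if x > ucl or x < lcl]
print(f"mean={mean:.1f}, UCL={ucl:.1f}, LCL={lcl:.1f}, "
      f"out-of-control points: {out_of_control}")
```

Any point falling outside the limits would signal an unnatural pattern worth investigating; here all counts stay within them, so the (toy) process is in statistical control.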
Learning Relations by Pathfinding First-order learning systems (e.g., FOIL, FOCL, FORTE) generally rely on hill-climbing heuristics in order to avoid the combinatorial explosion inherent in learning first-order concepts. However, hill-climbing leaves these systems vulnerable to local maxima and local plateaus. We present a method, called relational pathfinding, which has proven highly effective in escaping local maxima and crossing local plateaus. We present our algorithm and provide learning results in two domains: family relationships and qualitative model building.
The Foundation Liquefaction and Permanent Deformation Analysis of Diversion Dike for Nuclear Power Plant In seismic response analysis, the total stress method is mostly adopted, but the variation of pore water pressure and the development of liquefaction are not considered in this method. Based on the two-dimensional effective stress dynamic analysis proposed by Zhu-jiang Shen, combined with Biot's dynamic consolidation theory, the foundation liquefaction analysis of the diversion dike for a nuclear power plant is performed using an effective stress dynamic analysis program; the liquefied range is given, as well as the permanent deformation. The obtained results can provide theoretical guidance for similar projects.
Numerical studies on steel-concrete composite structures The paper focuses on the seismic performance of steel-concrete composite structures made with fully encased steel-concrete composite columns and steel beams. An important objective is to study the influence of the structural steel ratio on the behaviour of composite columns. To meet the proposed objectives, a numerical study was developed on composite structures. Some comments, design recommendations and an economic study conclude the paper. Introduction The paper concentrates on the seismic performance of steel-concrete composite frames, with emphasis on the behaviour of composite columns. Concrete-encased hot-rolled steel sections are still of interest to researchers from all over the world, having been studied intensively for more than three decades. In order to improve the design recommendations of current norms, researchers are still developing extensive experimental tests, most recently in Singapore and China, especially on columns made with high-strength materials. Based on experimental tests, numerical models are implemented in different types of calculation programs (commercial or specially developed) in order to analyse a large variety of sections, types of embedded profiles and material qualities. To achieve the proposed objectives, a numerical study on composite structures was developed. The structures were designed with fully encased composite columns and steel beams. The analysis included five types of similar structures with the same floor plan but different heights. The studied structures had twelve, ten, eight, six and two levels. The columns were designed using three structural steel ratios: low, medium and high. The loads taken into consideration were identical for all types of structures. To investigate the seismic performance, two types of analysis were performed: pushover and dynamic time history, based on the numerical model developed.
Beside the seismic performance of the structures, an economic study was conducted on the chosen frames. Numerical model The numerical model used was developed in 2013 in FineLg, a finite element program implemented at Liège University, ArGenCo department. The calibration and validation of the numerical model was made using six experimental tests taken from the international literature on fully encased steel-concrete composite columns. Numerical model The finite element used was a classic beam element (Bernoulli) for concrete plane frames with steel reinforcement and embedded beams, with three nodes, as shown in figure 1, without taking into account the shear force effect. The total number of degrees of freedom corresponds to: one relative translational degree of freedom for the node situated at mid-length of the beam element, and one rotational and two translational degrees of freedom for each of the two nodes located at the beam element ends, as shown in figure 1. Nodes 1 and 3 have three degrees of freedom (u, v, θ), and node 2 has a single degree of freedom u, which allows a possible relative displacement between steel and concrete to be considered. This type of element does not capture the local buckling phenomenon of the section. Because the analysis is two-dimensional, bending phenomena outside the section plane, such as torsional bending, are not taken into account. A perfect connection between steel and concrete is assumed. Calibration and certification of the numerical model The calibration and certification of the numerical model was made using six experimental tests taken from the international literature on fully encased steel-concrete composite columns. The experimentally tested columns had different types of concrete or steel and different structural steel or reinforcing steel ratios. The columns were tested (both monotonically and cyclically) under constant axial force and lateral forces.
The subject of the paper is not the detailed presentation of the experimental tests used for the calibration and certification of the numerical model, but the study of the seismic performance of structures with composite columns, based on the case study developed with the numerical model validated using these experimental tests. Figure 4 presents the types of cross sections of the columns used for calibration and certification of the numerical model. The experimental programs used for calibration and certification of the numerical model were developed at the Technical University of Cluj-Napoca, in Romania, at NCU, Chung-Li, in Taiwan, at UC, San Diego, California, in USA, at CTU, Hsinchu, in China, and at Lakehead University, Thunder Bay, Ontario, in Canada. The experimental curves are presented in red and the numerically obtained ones in blue. The difference between experimental and numerical values was between 0 and 15 percent, with a mean value of 5%. For exemplification, figure 5 presents the comparison with a cyclically tested column in China and figure 6 with a monotonically tested column in Canada. Case study The developed case study included five similar types of composite frames, with the same floor plan as shown in figure 7 and with twelve, ten, eight, six and two levels, as shown in figure 8 for the six-level structure. The composite frames had five openings of 6 meters in the longitudinal direction and two openings of 7 meters in the transversal direction. The chosen seismic zone was the one corresponding to a peak ground acceleration of 0.40 g and a corner period of 1.6 s. After a preliminary design analysis performed with a commercial software, IPE 550 profiles were chosen for the steel beams, together with the following materials for columns: C40/50 concrete, S355 steel for the structural steel profiles embedded in concrete and S500 steel for the longitudinal reinforcement.
For each type of structure, three types of composite columns were chosen. The difference between the columns was the structural steel ratio: low, medium and high. Table 1 presents all the characteristics of the composite columns used in the analysis. The names of the structures have the following meaning: for example, for 6Fb, 6 represents the number of levels, F comes from floors and the last letter (b) represents the structural steel ratio chosen (a for low, b for medium and c for high). The columns of the twelve-, ten- and eight-level structures were also varied over the height, as shown in table 1. In the preliminary design stage, all recommendations from P100-1/2013 were considered. The ductility class chosen was DCH, with a behaviour factor q equal to 6.5 for the transversal frames and 4 for the longitudinal ones. After the preliminary design stage, two types of analysis were performed with the FineLg program, based on the numerical model presented in section 2.1, to study the seismic performance of the chosen frames: pushover and dynamic time history. In the pushover analysis, the seismic forces had a triangular distribution. The time-history analysis was performed using three artificial accelerograms, in accordance with P100-1/2013, and one real record (Vrancea 1977). The parameters followed were: the evolution of inter-storey drift at all levels, the global pushover curve and the rotation capacity. In addition, the behaviour factor q was evaluated for all analysed frames. For exemplification, figure 9 presents the pushover curves for the 2Fa structure and figure 10 the evolution of inter-storey drift at all levels for the same structure. Figure 10.
Evolution of inter-storey drift for the 2Fa structure. The inter-storey drift limitation of 0.0075h/ν is marked with a green vertical line, where ν represents the reduction factor that takes into account the lower return period of the seismic action associated with the damage limitation requirement and h is the storey height. The 0.0075 value corresponds to buildings having non-structural elements with high dissipation capacity attached to the structure, according to the seismic norm P100-1/2013. The yellow line represents the inter-storey drift limitation of 2.5%, the FEMA 356-2000 criterion for Life Safety. The pushover analysis results for all studied composite structures are summarized in table 2. Table 2 presents the displacement (dc) and corresponding force (Fb) for the 0.0075h/ν criterion, for the 2.5% drift limitation according to FEMA 356-2000 and the values at concrete failure, when the concrete strain εcu2 reaches 3.5‰. The last column presents the corresponding force when θp reaches 35 mrad, where θp represents the rotation capacity of the plastic hinge region. The six- and two-floor structures did not achieve the minimum plastic hinge rotation capacity of 35 mrad necessary to design the structure in ductility class DCH, as can be seen in table 2. From eight floors upwards, the analysed frames reached a higher rotation capacity of the plastic hinge region, from 37 mrad for the 8Fa structure to 69 mrad for the 12Fc structure. Table 3 presents the behaviour factor q obtained in the pushover analysis and in the dynamic one, using artificial accelerograms according to P100-1/2013 and real ones (Vrancea 1977). Based on the performed analysis, it is recommended that low structures (with one up to six-seven levels) be designed in the medium ductility class. Structures with more than eight levels can be designed in the medium or high ductility class, depending on architectural and/or structural restrictions.
A higher structural steel ratio leads to an important increase of structural ductility, more pronounced from low to medium than from medium to high. Given its financial importance, the structural analysis was completed with an economic study of the frames for an optimal choice of structural steel ratio. Table 4 presents the cost of each type of designed column per linear meter of element. The final price was obtained by summing the costs of all materials (structural steel, concrete and reinforcement), formwork and labour. The prices were calculated based on average offers received from local suppliers. The most economical solution is obtained by choosing a low structural steel ratio. Up to eight floors, the price difference between columns with a low structural steel ratio and a medium one is about 15%. This difference decreases substantially for higher structures, down to about 5%. In comparison with a low structural steel ratio, a medium one offers an important increase of structural ductility and slimmer cross-sections, so the 5% cost difference is considered acceptable. The values of the structural steel ratio for the chosen structures were, for low: between 0.209 and 0.32, with a mean value of 0.276; for medium: between 0.349 and 0.543, with a mean value of 0.403; and for high: between 0.506 and 0.610, with a mean value of 0.562. Conclusions Composite frames made with steel beams and fully encased steel-concrete composite columns can be an efficient solution for buildings situated in medium and high seismicity zones. From the case study developed, some notable conclusions can be drawn: small structures (up to 6-7 levels) are recommended to be designed in the medium ductility class; for higher structures, a medium or high ductility class can be adopted, the chosen solution being optimized from different points of view: cross-section dimensions, necessary rotation capacity, costs, etc.
When considering only the economical point of view, the structures with a low steel ratio offered the best results, but given the 5% cost difference (for tall buildings) between low and medium steel ratios, a medium one is recommended when designing a composite column.
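The two drift criteria used throughout the case study (the damage-limitation limit 0.0075h/ν from P100-1/2013 and the 2.5% FEMA 356-2000 Life Safety limit) can be sketched as a simple per-storey check. The storey height, drift values and the value ν = 0.5 below are assumed example numbers for illustration, not values taken from the case study.

```python
# Illustrative check of the inter-storey drift limits mentioned in the paper:
#   damage limitation: d <= 0.0075 * h / nu   (P100-1/2013)
#   Life Safety:       d <= 0.025 * h         (FEMA 356-2000)
# nu is the reduction factor for the lower return period of the
# damage-limitation seismic action; nu = 0.5 is an assumed example value.

def drift_checks(drift, h, nu=0.5):
    """Return (passes_damage_limitation, passes_life_safety) for one storey."""
    dl_limit = 0.0075 * h / nu
    ls_limit = 0.025 * h
    return drift <= dl_limit, drift <= ls_limit

h = 3.5                          # storey height in metres (assumed)
drifts = [0.030, 0.055, 0.095]   # inter-storey drifts in metres (assumed)

for d in drifts:
    dl_ok, ls_ok = drift_checks(d, h)
    print(f"drift={d:.3f} m  damage-limitation: {'OK' if dl_ok else 'FAIL'}  "
          f"life-safety: {'OK' if ls_ok else 'FAIL'}")
```

With these assumed numbers the limits come out to 0.0525 m and 0.0875 m, so the middle drift value violates only the damage-limitation criterion while the largest violates both.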
Comments on Liquid Film Atomization on Wall Edges-Separation Criterion and Droplets Formation Model In an intriguing paper [1], Maroteaux, Llory, Le Coz, and Habchi presented a separation criterion for a liquid film from sharp edges in a high-speed air flow. According to their model, a film of thickness h_f and velocity U_f separates from a sharp edge of angle α if α ≤ α_crit. The relation obtained for the critical angle depends on the amplitude ratio η/η0 of the final to the initial perturbation of the film surface. When the wave amplitude reaches a critical value, the film strips from the edge; the critical value (η/η0)_crit is set equal to 20 as the best fit to their experimental data. The frequency ω_max is defined as the growth rate of the most unstable perturbation that causes the film separation. This maximum frequency is computed from the dispersion relation of Jain and Ruckenstein (JR) [2]. The results of 12 tests with a dodecane film flowing on a springboard or straight step are reported. The geometrical edge angle α is equal to 135 deg for all tests. The maximum film thickness h_f is measured while the film velocity U_f is estimated. The occurrence of stripping is established from visual observations. If the critical angle computed from Eq. (1) takes values that are inferior to 135 deg,
'London Calling' - A spatial decision support system for inward investors SUMMARY: This paper summarises the development of a framework of geographic factors which are used to inform the development of a Spatial Decision Support System for the promotion of Inward Investment. First, a literature review identified potentially relevant theories of factors influencing regional development and competitiveness. Drawing from this review, we developed a geospatial framework which incorporates data requirements that were gathered from a user requirements study carried out with Think London, London's inward investment agency. Central to our framework is the notion of hard and soft capitals (factors) which influence competitiveness and economic development, against which all data requirements were mapped. These geospatial frameworks gave us important insights into how Think London can structure information demands, and devise a strategy to implement a GIS to pro-actively target and promote locations in London.
Project Patron: exploiting a digital library for the performing arts The School of Performing Arts at the University of Surrey, UK, draws on the resources of the Library for material such as recordings of music, dance video, music scores and dance notation. Project PATRON (Performing Arts Teaching Resources ONline) has been designed to deliver digital audio, video, music scores and dance notation across a high speed network to the desktop. PATRON provides a rich digital library resource of audio, video, images and text. It can be used in a similar way to a conventional library in that items can be searched for and retrieved, but it also enables the resource to be used in a variety of contextual situations. Appropriate multi-media tools within a web browser environment are provided to integrate different media and to enable a range of users to put them into context. The resource formats, system architecture, document structures such as HTML, and the integration with an authoring and management package are described.
Discovery of Ontologies from Implicit User Knowledge The purpose of the Semantic Web is to enable worldwide access to humanity's knowledge in a machine-processable way. A major obstacle to this has been that knowledge is often either represented in an incoherent way, or not externalized at all and only present in people's minds. Populating a knowledge graph and manually building an ontology by a domain expert is tedious work, requiring great initial effort until the result can be used. As a consequence, knowledge will often never be made available to the Semantic Web. The aim of this project is to develop a new approach for building ontologies from implicit user knowledge that is already present, but hidden in various artifacts like SQL query logs or application usage patterns.
Simplifying Drug Package Leaflets Drug Package Leaflets provide information for patients on how to safely use medicines. European Commission and recent studies stress that further efforts must be made to improve the readability and understandability of package leaflets in order to ensure the proper use of medicines and to increase patient safety. To the best of our knowledge, this is the first work that directly deals with the automatic simplification of drug package leaflets. Our approach to lexical simplification combines the use of domain terminological resources to give a set of synonym candidates for a given target term, and the use of their frequencies in a large collection of documents in order to select the simplest synonym.
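A minimal sketch of the frequency-based synonym selection described above; the synonym table and corpus counts below are toy stand-ins for the domain terminological resources and large document collection the paper relies on.

```python
# Lexical simplification sketch: for a target term, look up synonym
# candidates in a (toy) terminology resource and keep the one that is
# most frequent in a (toy) document collection. All entries and counts
# are illustrative assumptions.

# target term -> candidate synonyms (hypothetical terminology resource)
SYNONYMS = {
    "analgesic": ["painkiller", "antalgic", "anodyne"],
    "hypertension": ["high blood pressure", "raised blood pressure"],
}

# frequency of each expression in a large document collection (toy counts)
CORPUS_FREQ = {
    "painkiller": 5400, "antalgic": 12, "anodyne": 85,
    "high blood pressure": 9100, "raised blood pressure": 310,
    "analgesic": 730, "hypertension": 2900,
}

def simplify(term):
    """Replace `term` by its most frequent synonym, if any is more frequent."""
    candidates = SYNONYMS.get(term, [])
    best = max(candidates, key=lambda s: CORPUS_FREQ.get(s, 0), default=term)
    # keep the original term when no candidate beats its own frequency
    return best if CORPUS_FREQ.get(best, 0) > CORPUS_FREQ.get(term, 0) else term

print(simplify("analgesic"))     # -> painkiller
print(simplify("hypertension"))  # -> high blood pressure
```

Terms without candidates, or whose candidates are all rarer than the term itself, are left unchanged, which keeps the substitution conservative.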
A Deterministic Edge Detection Using Statistical Approach A large number of edge detectors are available in the image processing literature, where the choice of input parameters is left to the user and is made on an informal basis. In this paper, an edge detector is proposed in which thresholding is performed using statistical principles. Local thresholding is applied to each individual pixel, depending on the statistical variability of the gradient vector at that pixel. A standardization statistic based on the gradient vector at each pixel is used to determine the eligibility of the pixel to be an edge pixel. The results obtained from the proposed method are found to be comparable to those from well-known edge detectors. However, the values of the input parameters providing appreciable results in the proposed detector are found to be more stable than in other edge detectors and have a statistical interpretation. The results obtained from the proposed algorithm are compared with Canny's edge detector, which is among the most popular edge detectors. The proposed algorithm is implemented using MATLAB 7.1.
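The idea of statistically thresholding the gradient at each pixel can be sketched as a z-score test against the local distribution of gradient magnitudes; this is a generic illustration of the approach, not the authors' exact statistic or implementation.

```python
import numpy as np

# Sketch of statistically thresholded edge detection: a pixel is flagged as
# an edge when its gradient magnitude is unusually large relative to the
# local mean and standard deviation of gradient magnitudes (a z-score test).
# The window size and z-threshold are illustrative assumptions.

def edges(img, win=5, z_thresh=2.0):
    img = img.astype(float)
    gy, gx = np.gradient(img)   # finite-difference gradient components
    mag = np.hypot(gx, gy)      # gradient magnitude at each pixel

    # local mean/std of the magnitude over a win x win neighbourhood,
    # computed with a plain loop for clarity
    pad = win // 2
    padded = np.pad(mag, pad, mode="edge")
    out = np.zeros(mag.shape, dtype=bool)
    for i in range(mag.shape[0]):
        for j in range(mag.shape[1]):
            patch = padded[i:i + win, j:j + win]
            mu, sd = patch.mean(), patch.std()
            if sd > 0 and (mag[i, j] - mu) / sd > z_thresh:
                out[i, j] = True
    return out

# Toy image: a single bright pixel on a dark background
img = np.zeros((16, 16))
img[8, 8] = 255.0
edge_map = edges(img)
print(edge_map.any())  # -> True: the pixels around the bright spot are flagged
```

The z-threshold plays the role of the formal, statistically interpretable parameter the abstract argues for, replacing an ad hoc global gradient threshold.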
Electronic Dental Records System Adoption The use of Electronic Dental Records (EDRs) and management software has become more frequent, following the increase in prevalence of new technologies and computers in dental offices. The purpose of this study is to identify and evaluate the use of EDRs by the dental community in the São Paulo city area. A quantitative case study was performed using a telephone survey. A total of 54 offices were contacted and only one declined participation in this study. Only one office did not have a computer. EDRs were used in 28 offices and only four were paperless. The lack of studies in this area suggests the need for more usability and implementation studies on EDRs so that we can improve EDR adoption by the dental community.
OBJECTIVE To construct different mutants of human p53 for expression in eukaryotic cells and investigate the effects of these mutants on stress-induced cell apoptosis. METHODS Human p53 cDNA was amplified by PCR and cloned into pcDNA3/HA vector following the routine procedures. The Ser15 and Ser46 of p53 were mutated to Ala and identified by enzyme digestion and PCR, and these mutants were expressed in NIH3T3 cells and detected by Western blotting. After transfection with the plasmids of different p53 mutants, the NIH3T3 cells were double-stained with AnnexinV-FITC and propidium iodide for apoptotic analysis using flow cytometry. RESULTS The recombinant plasmids of HA-tagged wild-type p53, HA-p53(WT), and its mutants, HA-p53(S15A) and HA-p53(S46A), were successfully constructed and expressed efficiently in NIH3T3 cells. The apoptotic ratio of p53(WT)-transfected cells induced by arsenite increased and that of p53(S15A)-transfected cells decreased significantly after arsenite stimulation, but no significant changes occurred in the apoptosis of p53(S46A)-transfected cells. CONCLUSION The phosphorylation on Ser15 of p53 plays an important role in mediating arsenite-induced cell apoptosis.
ONLY FERMI LIQUIDS ARE METALS Any singular deviation from Landau Fermi-liquid theory appears to lead, for arbitrarily small concentration of impurities coupling to a non-conserved quantity, to a vanishing density of states at the chemical potential and infinite resistivity as temperature approaches zero. Applications to copper-oxide metals including the temperature dependence of the anisotropy in resistivity, and to other cases of non Fermi-liquids are discussed.
Horn growth variation and hunting selection of the Alpine ibex Selective hunting can affect demographic characteristics and phenotypic traits of the targeted species. Hunting systems often involve harvesting quotas based on sex, age and/or size categories to avoid selective pressure. However, it is difficult to assess whether such regulations deter hunters from targeting larger "trophy" animals with longer horns that may have evolutionary consequences. Here, we compile 44,088 annually resolved and absolutely dated measurements of Alpine ibex (Capra ibex) horn growth increments from 8,355 males, harvested between 1978 and 2013, in the eastern Swiss Canton of Grisons. We aim to determine whether male ibex with longer horns were preferentially targeted, causing animals with early rapid horn growth to have shorter lives, and whether such hunting selection translated into long-term trends in horn size over the past four decades. Results show that medium- to longer-horned adult males had a higher probability of being harvested than shorter-horned individuals of the same age and that regulations do affect the hunters' behaviour. Nevertheless, phenotypic traits such as horn length, as well as body size and weight, remained stable over the study period. Although selective trophy hunting still occurs, it did not cause a measurable evolutionary response in Grisons' Alpine ibex populations; managed and surveyed since 1978. Nevertheless, further research is needed to understand whether phenotypic trait development is coinfluenced by other, potentially compensatory factors that may possibly mask the effects of selective, long-term hunting pressure.
Predicting the safety impact of a speed limit increase using condition-based multivariate Poisson lognormal regression ABSTRACT Speed limit changes are considered to lead to proportional changes in the number and severity of crashes. To predict the impact of a speed limit alteration, it is necessary to define a relationship between crashes and speed on a road network. This paper examines the relationship of crashes with speed, as well as with other traffic and geometric variables, on the UK motorways in order to estimate the impact of a potential speed limit increase from 70 to 80 mph on traffic safety. Full Bayesian multivariate Poisson lognormal regression models are applied to a data set aggregated using the condition-based approach for crashes by vehicle (i.e. single vehicle and multiple vehicle) and severity (i.e. fatal or serious and slight). The results show that single-vehicle crashes of all severities and fatal or serious injury crashes involving multiple vehicles increase at higher speed conditions and particularly when these are combined with lower volumes. Slight injury multiple-vehicle crashes are found not to be related to high speeds, but instead with congested traffic. Using the speed elasticity values derived from the models, the predicted annual increase in crashes after a speed limit increase on the UK motorway is found to be 6.2–12.1% for fatal or serious injury crashes and 1.3–2.7% for slight injury, or else up to 167 more crashes.
Robust performance of periodic systems Robust performance of sampled-data systems to structured periodic and quasi-periodic uncertainty is considered, and necessary and sufficient conditions are derived. The conditions are finite dimensional, and explicitly computing them is investigated; the results are illustrated with an example. This work is readily extended to yield exact and finite dimensional robust performance conditions for structured arbitrary time-varying uncertainty.
Noncoding Sequences Near Duplicated Genes Evolve Rapidly Gene expression divergence and chromosomal rearrangements have been put forward as major contributors to phenotypic differences between closely related species. It has also been established that duplicated genes show enhanced rates of positive selection in their amino acid sequences. If functional divergence is largely due to changes in gene expression, it follows that regulatory sequences in duplicated loci should also evolve rapidly. To investigate this hypothesis, we performed likelihood ratio tests (LRTs) on all noncoding loci within 5 kb of every transcript in the human genome and identified sequences with increased substitution rates in the human lineage since divergence from Old World Monkeys. The fraction of rapidly evolving loci is significantly higher near genes that duplicated in the common ancestor of humans and chimps compared with nonduplicated genes. We also conducted a genome-wide scan for nucleotide substitutions predicted to affect transcription factor binding. Rates of binding site divergence are elevated in noncoding sequences of duplicated loci with accelerated substitution rates. Many of the genes associated with these fast-evolving genomic elements belong to functional categories identified in previous studies of positive selection on amino acid sequences. In addition, we find enrichment for accelerated evolution near genes involved in establishment and maintenance of pregnancy, processes that differ significantly between humans and monkeys. Our findings support the hypothesis that adaptive evolution of the regulation of duplicated genes has played a significant role in human evolution. Introduction The genetic basis for human-specific traits is of great interest.
Despite striking phenotypic divergence in the Hominini (members of the human-chimp lineage), genome sequence data suggest a slowdown in the rate of nucleotide substitutions in humans and our primate relatives (Wu and Li 1985). Furthermore, orthologous human and chimpanzee proteins differ by only two amino acid substitutions on average, and nearly a third of proteins are identical between the two species (The Chimpanzee Sequencing and Analysis Consortium 2005). Low amino acid divergence in the "hominin" (human-chimp) lineage lends support to the hypothesis that divergence between closely related species is accompanied by evolution of the gene regulatory network (King and Wilson 1975; Levine and Tjian 2003; Carroll 2005). Structural variation in the genome is another mutational mechanism that contributes significantly to genomic divergence. An increased rate of structural genomic rearrangements (such as gene duplications) has been observed in primates. Structural variation has been recognized as a major contributor to genomic diversity, with gene duplication serving as an evolutionary mechanism for functional innovation (Ohno 1999; Zhang 2003). Also, gene turnover in the form of rapid expansion or contraction of gene families has been put forward as a possible explanation of phenotypic divergence. Additionally, evidence for excess positive selection on the coding sequences of genes in families that expanded rapidly in primates corroborates the hypothesis that gene duplication can lead to functional innovation. [© The Author(s) 2010. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/2.5), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.]
The beta subunit of the glycoprotein hormone chorionic gonadotropin (CGB), for example, is believed to have arisen by duplication of luteinizing hormone beta (LHB) about 35-50 million years ago. This duplication was followed by one deletion and two insertions in the coding sequence that led to the appearance of a carboxy-terminal peptide in CGB. Additional mutations in the promoter induced an expression shift from the pituitary gland to the placenta (Maston and Ruvolo 2002; Henke and Gromoll 2008). The evolution of CGB illustrates the process of gene duplication followed by neofunctionalization. Notably, the emergence of new function involved changes in both the coding sequence and nearby noncoding sequence, which provided a new regulatory context. This example highlights a particular type of gene evolution whereby adaptation occurs in regulatory noncoding sequences after a duplication event. We hypothesize that this form of divergence played a significant role in human evolution. The ever-increasing richness of genomic sequence and functional data provides a foundation for empirical studies of duplication-mediated divergence. Although there have been some large-scale studies on the evolution of duplicated loci in human (Lynch and Conery 2000; Lynch and Conery 2003), these analyses have typically focused on coding rather than noncoding sequences. Bioinformatic challenges related to sequence assembly and alignment, as well as interpretation of evolutionary analyses, are largely responsible for the paucity of genome-wide studies of noncoding sequences to date. In light of the mounting evidence that regulatory divergence has played a major role in hominin evolution, however, it is desirable to understand how noncoding sequences in and near duplicated loci evolve. These considerations motivated us to perform a systematic genome-wide search for signatures of accelerated sequence evolution and functional innovation in noncoding sequences associated with duplicated genes.
We focus on the hominin lineage since divergence from the common ancestor with Old World Monkeys (represented by the macaque genome). The hominin lineage is very relevant to our understanding of human evolution, and statistical tests on this lineage have greater power than tests on the much shorter human lineage since divergence from the chimp-human ancestor. For this analysis, we identified all noncoding sequences within 5 kb of a human Ensembl transcript. We then performed molecular evolutionary tests to highlight regions of unusually high substitution rates in the Hominini. Next, we assessed the likely impact of human-macaque sequence differences on transcription factor (TF) binding, thereby identifying noncoding regions likely to have affected transcriptional regulation. Finally, we asked whether noncoding sequences associated with genes that duplicated in the hominin lineage show stronger evidence of divergence than noncoding sequences near nonduplicated genes. We find strong enrichment for accelerated substitution rates and transcription factor-binding site (TFBS) divergence in noncoding sequences associated with duplicated genes. Sequence Data and Orthologous Sequence Blocks We downloaded 28-way alignments in multiple alignment format (MAF) from the genome browser maintained by the University of California-Santa Cruz (UCSC) (hg18, NCBI assembly version 36). MAF-formatted alignments are partitioned into consecutively aligned sequence blocks that can be viewed as orthologous units. A MAF sequence block is a local alignment where each row represents consecutive (though potentially gapped) sequence from one species, and there are no gap-only columns. Moving along the human chromosomes, a new block starts when there is a change in orthology (e.g., a species drops in or out of the alignment). We used this "natural" partition of whole-genome multiple sequence alignments in our analyses.
In cases of duplications, both of the duplicated sequences in the Hominini are considered orthologous to single-copy regions in the outgroups. Thus, the orthologous outgroup sequences may appear in more than one MAF block. We included only the genomes with data-use policies allowing genome-wide analysis (human, chimp, macaque, mouse, rat, dog, opossum, platypus, chicken, zebrafish, fugu, and medaka) in our analyses. Not all species are present in all MAF blocks. We delineated all noncoding MAF blocks that are located within 5 kb of a human Ensembl transcript (version 41), that is, a region spanned by the transcriptional unit plus 5 kb of upstream and downstream flanking sequence. We trimmed off coding sequence from any block overlapping a coding exon. We defined exonic 5′ UTRs as the sequences between the transcription start site (TSS) and the coding start site (CSS) that are annotated as exons; intronic 5′ UTRs contain all other sequence between the TSS and CSS. Analogously, we defined exonic 3′ UTRs as sequences between coding stop and transcription stop that are annotated as exons. Intronic 3′ UTRs are all other sequences between coding stop and transcription stop. We clustered overlapping transcripts and annotated each sequence block with a unique genic location category using the following hierarchy: exonic UTR > intronic UTR > 5′ > 3′ > first intron early > first intron late > intron > flanking sequence. For example, if a block overlaps both exonic 5′ UTR and early first-intron sequence (due to overlapping transcripts), we annotated it as exonic UTR. Flanking regions were annotated as 5′ (upstream of TSS), 3′ (downstream of transcription stop site), or ambiguous (if not uniquely 5′ or 3′ due to overlapping or nearby transcripts).
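The genic-location hierarchy above amounts to a priority lookup: a block gets the highest-ranked category it overlaps. This is a sketch with illustrative category names and a helper of our own, not code from the paper's pipeline.

```python
# Priority order from the text: exonic UTR > intronic UTR > 5' > 3' >
# first intron early > first intron late > intron > flanking sequence.
HIERARCHY = [
    "exonic_UTR", "intronic_UTR", "5prime", "3prime",
    "first_intron_early", "first_intron_late", "intron", "flanking",
]

def pick_location(overlapping_categories):
    """Return the highest-priority genic category a block overlaps."""
    for category in HIERARCHY:
        if category in overlapping_categories:
            return category
    return None

# A block overlapping both exonic 5' UTR and early first-intron sequence
# is annotated as exonic UTR, as in the example above.
label = pick_location({"first_intron_early", "exonic_UTR"})
```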
We refer to contiguously transcribed genomic regions on either strand as "transcript clusters." Alignment Quality Filters To produce a data set with high-quality syntenic alignments, we used the following filtering criteria to exclude certain alignment blocks from further analysis:
- Blocks not containing chimp and macaque plus at least one other placental mammal with no more than 50% nongap characters were excluded.
- Blocks with more than 1/3 of bases (chimp or macaque) inserted or deleted with respect to human were excluded.
- Blocks with more than 1/2 gap characters in human, chimp, and macaque were excluded.
- Blocks with more than 25% of all nongap bases differing between human and chimp (or 35% between human and macaque) were excluded.
- Blocks were masked if more than 1/2 of their bases were repeat masked in human.
- Blocks with more than 1/2 of their bases overlapping annotated pseudogenes (Ensembl version 54) were excluded.
- Blocks that were not syntenic between human and chimp were excluded.
Additionally, quality scores for chimp and macaque were taken from the UCSC genome browser databases rheMac2 and panTro2 (table "quality"), and bases with a score less than 40 were masked in both species. Synteny was derived from human-chimp alignments: syntenic net alignment files were downloaded from UCSC, and syntenic regions were defined as top-level chain alignments of at least 5-Mb length; gaps in this chain were filled with syntenically aligned chains from lower levels. Repeat masking was performed on the basis of the rmskRM327 track downloaded from UCSC. This quality filtering produced a data set of 4,699,477 high-quality MAF blocks with median length of 64 bp (range 10-2,580 bp). These blocks cover 410,274,564 bp of the human genome. Alignment filtering might introduce various biases in patterns of sequence composition and conservation, such as biased retention of more conserved blocks over less conserved blocks.
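The fraction-based filters above can be sketched as a single predicate. The field names on `block` and the helper itself are our own illustrative assumptions; the synteny and base-quality masking steps are omitted.

```python
def passes_filters(block):
    """Sketch of the block-level quality filters (fraction-based subset only)."""
    # Must contain chimp and macaque plus at least one other placental mammal.
    if not {"panTro2", "rheMac2"} <= set(block["species"]) or len(block["species"]) < 4:
        return False
    # <= 1/3 of chimp/macaque bases inserted or deleted relative to human.
    if block["indel_frac_vs_human"] > 1 / 3:
        return False
    # <= 1/2 gap characters in human, chimp, and macaque.
    if block["primate_gap_frac"] > 0.5:
        return False
    # <= 25% human-chimp and <= 35% human-macaque mismatching nongap bases.
    if block["hc_diff"] > 0.25 or block["hm_diff"] > 0.35:
        return False
    # <= 1/2 repeat-masked or pseudogene-overlapping bases in human.
    if block["repeat_frac"] > 0.5 or block["pseudogene_frac"] > 0.5:
        return False
    return True
```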
We addressed this issue by using calibrated chromosome-specific models for unconstrained sequence in our LRTs, which were then rescaled using the data from each block to produce a local null model (see below). We believe that the benefits of more trustworthy alignments outweigh the potential drawbacks of a higher false-negative rate (i.e., filtered-out blocks that are not tested) and a bias toward analysis of more conserved blocks. Likelihood Ratio Test for Accelerated Substitution Rates To test for acceleration in the rate of nucleotide substitutions in the Hominini, we subjected each alignment block to a one-sided LRT using phylogenetic models. Phylogenetic continuous time Markov models are parameterized by a tree T (topology and branch lengths), a rate matrix Q, and equilibrium base frequencies p. The tree T can be viewed as consisting of a subtree T_H, spanning the hominin lineage, and a subtree T_R, spanning the other species: T = (T_H, T_R). We used a general time-reversible parameterization (REV) of the rate matrix Q. All model fitting was performed using maximum likelihood estimation with the "phyloFit" program from the PHAST package (http://compgen.bscb.cornell.edu/phast/). To assess the robustness of our results, we also calculated test statistics after deleting chimp from T_H, so that T_H consisted of the human lineage alone (from the macaque-human ancestor to modern humans). To keep the two analyses comparable, we performed this analysis on the same set of filtered MAF blocks as before. Leaving chimp in the filtering rules creates a bias toward well-aligned blocks (conservative with respect to identifying accelerated substitution rates), whereas excluding chimp from the LRT analysis guards against false positives presumably due to misalignment and/or erroneous assembly of the low-coverage shotgun-sequenced chimp genome. Test Statistics.
The LRT statistic for an alignment block B is based on the ratio of the likelihood of the sequence alignment under two different models: 1) a null model and 2) an alternative model with acceleration in the hominin lineage that allows a faster rate of substitutions on the human and chimp branches, as described in Pollard et al. In more detail, the null and the alternative models were estimated starting from a chromosome-specific model M = (Q, T, p) for unconstrained sequence. The null model was then obtained by rescaling T by a constant c to allow for a faster or slower overall rate of substitutions across the whole tree (i.e., in all species), thereby maintaining the same relative branch lengths (i.e., substitution rates) across lineages. This rescaling step accounts for local substitution rate variation. The alternative model also adjusts for local rate variation but includes an additional parameter q > 1, which allows the hominin lineage to have a faster rate of substitutions relative to the rest of the tree. Thus, the rate of substitutions in the hominin lineage compared with the rest of the tree is increased, whereas relative rates of substitutions across lineages in the rest of the tree are maintained. The models can be represented as M_0 = (Q, c·T, p) and M_A = (Q, c·(T_R, q·T_H), p), where the parameters c and q are estimated for each model by maximum likelihood, using the alignment data for block B. The null hypothesis being tested is H_0: q = 1 (no acceleration in Hominini) versus the alternative hypothesis H_a: q > 1 (acceleration in Hominini). Chromosome-Specific Initial Models. We detected lineage-specific increases in substitution rates by comparing the alternative model M_A with the null model M_0 (see previous paragraph). Because both of these models are rescaled versions of a model M for unconstrained sequence, it is important to pick M such that rescaling its tree T provides a reasonable estimate of the local substitution process at block B.
On the one hand, it is desirable that the base frequencies p and the relative rates of various types of substitutions in the rate matrix of M are as appropriate for B as possible. On the other hand, the better the initial model M fits the data in block B, the more difficult it is to reject the null hypothesis. This presents a dilemma as to how "local" (with respect to B) the initial model should be. We examined a range of choices, from a single genome-wide model to a rescaled local model fit on several megabases of nearby sequence. Based on this analysis, we chose to estimate separate initial models for each chromosome (see supplementary section SR3, Supplementary Material online). In this way, we attempt to accommodate chromosome-specific biases (e.g., sequence composition and substitution patterns) and condition on the fact that chromosomes, as a whole, are not accelerated. Scaling the null model M_0 by c ensures that more local rate variation is also accounted for in the test. To estimate chromosome-wide models, we performed maximum likelihood estimation with starting values for parameter optimization obtained from a genome-wide REV model fit to 4-fold degenerate sites from 28-way alignments (obtained from UCSC). We estimated c and q, as well as equilibrium frequencies, using all blocks on a given chromosome, by maximum likelihood. These chromosome-specific models were then subsequently used as the initial models (M) for the LRT analysis described above. Statistical Significance. To determine significantly accelerated blocks, we calculated P values for the observed LRT scores using the asymptotic distribution of a 50:50 mixture of a point mass at zero and a χ² distribution with one degree of freedom (Self and Liang 1987). As our data contain LRT scores from blocks of different lengths, we checked the correlation between block length and LRT score. We found only a minimal association (r = 0.051).
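The significance machinery from these paragraphs can be sketched compactly: the one-sided LRT statistic, a P value from the 50:50 point-mass/χ²₁ mixture, and the Benjamini-Hochberg FDR cutoff used downstream. Function names are ours; in the actual analysis the log-likelihoods come from phyloFit and the FDR step used the R multtest package.

```python
import math

def lrt_statistic(loglik_null, loglik_alt):
    """One-sided LRT: 2 * (log L_alt - log L_null), floored at 0
    (the alternative nests the null at q = 1)."""
    return max(0.0, 2.0 * (loglik_alt - loglik_null))

def lrt_pvalue(lrt):
    """P value under a 50:50 mixture of a point mass at zero and a
    chi-square with 1 df (Self and Liang 1987); for chi2 with 1 df,
    P(X >= x) = erfc(sqrt(x / 2))."""
    if lrt <= 0.0:
        return 1.0
    return 0.5 * math.erfc(math.sqrt(lrt / 2.0))

def bh_cutoff(pvalues, fdr=0.10):
    """Largest p value declared significant by Benjamini-Hochberg at the
    given FDR (0.0 if nothing passes)."""
    m = len(pvalues)
    cutoff = 0.0
    for i, p in enumerate(sorted(pvalues), start=1):
        if p <= fdr * i / m:
            cutoff = p
    return cutoff
```

Halving the χ²₁ tail reflects that, under the null, the statistic is exactly zero half the time because q is constrained to be at least 1.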
From the P values, we derived an LRT score cutoff controlling the false discovery rate (FDR) at 10% using the Benjamini-Hochberg procedure (R package multtest; http://www.bioconductor.org). Blocks with an LRT score larger than the cutoff were termed "accelerated." Duplication Data Duplicated Genes. We used gene tree reconciliation to determine a set of Ensembl (version 41) peptides that underwent duplication on the lineage between the macaque-hominin ancestor (common ancestor of human, chimp, and macaque) and the chimp-human ancestor. We constructed neighbor-joining trees for each family defined by Ensembl using the peptide sequences from human, chimpanzee, macaque, rat, mouse, and dog. We reconciled the resulting gene trees with the species tree of the six species using NOTUNG. Duplications that occurred on the lineage leading to Hominini after the split with macaque but before the human-chimpanzee split were identified as Hominini specific. These duplicated genes were required to have synonymous divergence less than 0.064 (twice the average distance back to the human-macaque ancestor). This approach yielded 716 unique Hominini-specific Ensembl (version 41) peptides. We refer to these as duplication peptides. After alignment filtering (see above), we retain noncoding blocks in 5-kb neighborhoods of 449 duplication peptides. We call these duplication-associated blocks or "DA blocks." Duplicated sequences are often located a long distance from the locus they are copied from, especially in mammals. When paralogous genes are located far apart, we can define one as the "parent" and one as the "daughter"; these correspond to the paralog in the original location and the paralog in a new location, respectively. We used a likelihood method to define parent and daughter duplicates (when possible) based on the length of shared synteny between the human copies of the hominin-specific duplicates and the single-copy genes in macaque.
In total, we were able to uniquely polarize 95 peptides. These correspond to 56 parents and 39 daughters (multiple parents may exist when the original locus is duplicated both in tandem and to a distant location). Duplication Status of DA Blocks. Not all DA blocks are themselves duplicated (fig. 1). To delineate the duplication status of noncoding blocks, we used human-macaque alignment nets (UCSC hg18.netRheMac2). We annotated noncoding blocks based on the alignment chain they reside on compared with the exons of duplication peptides within 5 kb. We find that the majority of transcripts of duplication peptides have exons that align to a single unique macaque alignment net (66%). DA blocks within 5 kb of those peptides and on the same chain as the exons were annotated as "chained." Specifically, a "chained DA block" is (A) a DA block that is part of a multiple sequence alignment (MAF block) that meets our quality filters and is located within 5 kb of a recently duplicated peptide (human coordinates and annotation), and (B) in the syntenic net alignment files, it is on the same chain as the duplicated peptide. DA blocks that are not chained are referred to as "nonchained" DA blocks (see, e.g., fig. 1). This annotation does not directly reflect the duplication status of noncoding blocks, but chained blocks are much more likely to share the evolutionary history of the exons of a nearby transcript compared with nonchained blocks. Chained DA blocks near daughter peptides are likely to have been duplicated along with the duplicated coding sequence, whereas nonchained DA blocks are more likely to be nonduplicated or to have duplicated separately from the nearby duplicated peptide.
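The chained/nonchained call reduces to comparing chain identifiers. A minimal sketch, assuming the net/chain lookups have already been reduced to an ID per block and per peptide (the IDs below are illustrative stand-ins for the UCSC net alignment data):

```python
def annotate_da_blocks(block_chains, peptide_chain):
    """Label each DA block 'chained' if it lies on the same macaque
    alignment chain as the exons of the nearby duplicated peptide.

    block_chains: dict mapping block id -> chain id for that block.
    peptide_chain: chain id of the duplicated peptide's exons.
    """
    return {
        block: ("chained" if chain == peptide_chain else "nonchained")
        for block, chain in block_chains.items()
    }

labels = annotate_da_blocks({"b1": "chr5.net1", "b2": "chr7.net3"}, "chr5.net1")
```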
This approach enables us to roughly estimate the duplication status of noncoding blocks, even though reliable phylogeny-based duplication information is currently only available on a genome-wide scale for coding sequences. Because parent loci are less likely than daughter loci to have been affected by genomic rearrangements, we expect to see an enrichment of chained blocks near parent peptides compared with daughter peptides. This is indeed the case: 49 of the 56 parent peptides (88%) have chained noncoding blocks nearby, whereas this is only the case for 6 of the 39 daughter peptides (15%). Genes Near Accelerated DA Blocks. To determine genes nearby DA blocks, we mapped the Ensembl version 41 peptides near each accelerated (i.e., with a significant LRT; see above) DA block to Ensembl version 56 using the BioMart data management system (http://www.ensembl.org/biomart). [Figure 1 caption, partially recovered:] A peptide (blue) is duplicated in the Hominini, and paralogous copies are present at two loci. The two red noncoding blocks and the peptide align to the same genomic regions in macaque (chained blocks, see Materials and Methods for definition). In contrast, the green noncoding block aligns to a different region in macaque compared with the peptide (nonchained block, see Materials and Methods). There are other scenarios that generate both types of DA blocks, but these are common examples. Panel (B) depicts the presence of both types of DA blocks (chained and nonchained) near a single peptide. This is the case for 17% of the 459 duplicated genes (transcript clusters, see Materials and Methods) in our study; 49% have exclusively chained blocks nearby, whereas 34% have only nonchained blocks in their proximity. Note that panel (A) depicts a chained non-DA block. This is for illustration purposes, and nonchained non-DA blocks are also possible, although they do not play an explicit role in our analysis.
Testing for Association with Fisher's Exact Test From our analyses, we have a variety of block-level annotations (accelerated, duplication status, genic location, etc.). We test for association between pairs of annotations by calculating the corresponding contingency table (either on all or on a subset of blocks) and applying Fisher's exact test (FET) for independence. The FET is significant if there is evidence that an annotation is enriched within one category of another annotation, for example, acceleration is more common among duplicated versus nonduplicated loci. We also compute the corresponding odds ratio (OR), which measures the direction and magnitude of the association. An OR greater than 1 reflects a positive correlation, whereas an OR less than 1 indicates a negative association. Transcription Factor-Binding Site Turnover We developed a method to estimate the number of TFBS lost and gained on the hominin lineage since the hominin-macaque ancestor. Our approach scores human and macaque sequence variants in an MAF block for TF-binding potential and compares predicted binding sites between the two species (Kostka D, Holloway AK, Pollard KS, manuscript in preparation). Briefly, we downloaded binding motifs and annotation for 11 TF families from the JASPAR FAM database (Wasserman and Sandelin 2004) and regularized their weight matrices by adding 0.01 to each entry. These matrices can be used to scan genome sequences for matches to the TF-binding motif. For each family, a significance threshold for matches (i.e., predicted TFBS) was computed using a method that balances Type I and Type II errors. For each alignment block and TF family, we predicted TFBS in the human and macaque sequences, after removing gaps from the alignment. We then calculated P values for the difference in binding site predictions in the two species under a model of two correlated Bernoulli processes.
Specifically, for each block and TF, we model the prediction of TFBS in a single sequence as a Bernoulli trial. These trials are correlated between human and macaque because their sequences are related through homology. The table below presents the probabilities for the different outcomes. The probability of a prediction in one species but not the other, denoted c, is constrained to be smaller than min(p, 1 − p), where p is the probability of a match (i.e., a TFBS prediction for that TF). The number of trials is the average gap-free sequence length of human and macaque minus the length of the TF motif plus 1. We estimate p for each TF from genome-wide data: it is the total number of predictions divided by the total number of trials. Conditional on this estimate, we obtain the maximum likelihood estimate for c, which is based on the observed number of differences between human and macaque in TFBS predictions per block. Then, for each block and TF, we calculate a P value based on the estimates of p and c. More details can be found in Kostka D, Holloway AK, Pollard KS (in preparation). Finally, we correct the P values for multiple testing using the FDR-controlling procedure of Benjamini and Hochberg. This method allowed us to identify alignment blocks with TFBS turnover (gain or loss) while controlling the FDR at 10%. Our approach naturally takes the different block lengths into account, which allows for a meaningful comparison between different alignment blocks. We expect that two facts help to mitigate common problems (like high false-positive rates) generally encountered in single-species TFBS prediction: 1) JASPAR FAM family motifs are of high quality, and 2) we focus on differential predictions between human and macaque.
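One plausible reading of the correlated-Bernoulli model can be sketched as follows (the estimator details are our interpretation of the description, not the published code): a human/macaque prediction difference occurs with probability 2c per trial, so the per-block count of differences is Binomial(n, 2c), and the P value is the upper tail of that distribution.

```python
import math

def estimate_c(n_trials, n_diff, p):
    """MLE of the cross-species discordance probability c, capped at its
    constraint min(p, 1 - p); p is the genome-wide match probability."""
    return min(n_diff / (2.0 * n_trials), min(p, 1.0 - p))

def binom_sf(k, n, q):
    """P(X >= k) for X ~ Binomial(n, q)."""
    return sum(math.comb(n, x) * q**x * (1.0 - q) ** (n - x)
               for x in range(k, n + 1))

def turnover_pvalue(n_trials, n_diff, c):
    """P value for observing >= n_diff human/macaque prediction
    differences when each trial differs with probability 2c."""
    return binom_sf(n_diff, n_trials, 2.0 * c)
```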
[Table displaced in the source, belonging to the TFBS turnover model above; reconstructed joint per-trial probabilities of a binding-site prediction in each species:]

                        Human: prediction    Human: no prediction    Total
Macaque: prediction          p - c                    c                p
Macaque: no prediction         c                  1 - p - c          1 - p
Total                          p                    1 - p              1

Gene Ontology Term Enrichment and Depletion Tests We performed gene ontology (GO) enrichment and depletion analyses to determine whether different block annotation groups (e.g., duplicated, accelerated) were significantly enriched for any GO functional categories. GO terms were mapped from transcripts to all blocks within 5 kb. Thus, a block may receive multiple GO terms from each of multiple transcripts. For each enrichment test, we first defined a reference "universe" of blocks from which the "target" annotation group is drawn. For example, the target group of syntenic blocks is drawn from the universe of all DA blocks. The role of the universe is to provide an appropriate null distribution of GO term frequencies. Enrichment of each GO term in the target set compared with the universe was assessed using standard one-tailed hypergeometric tests. Note that both the universe and target sets are groups of MAF blocks, not groups of peptides. Enrichment testing on the block level corrects for the fact that the number of MAF blocks per transcript is variable, which can create bias in gene-level enrichment tests (Taher and Ovcharenko 2009). When comparing the target group of DA blocks to the universe of all blocks, we restrict ourselves to reporting the GO Slim subset (downloaded from http://www.ebi.ac.uk/GOA) of GO terms (see tables 1 and 2). Accelerated Substitution Rates in the Hominini To identify the fastest evolving regulatory sequences in the Hominini, we scored all noncoding regions associated with a human gene for evidence of accelerated substitution rates since divergence from the macaque-hominin ancestor. Specifically, we used whole-genome multiple sequence alignments of up to 12 vertebrates to identify short alignments of orthologous sequence within 5 kb of all human Ensembl transcripts.
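The block-level GO test described above amounts to a one-tailed hypergeometric tail probability over block sets. A sketch with names of our own choosing (the GO term below is just an example):

```python
import math

def hypergeom_sf(k, N, K, n):
    """P(X >= k) for X hypergeometric: n draws from a universe of N
    blocks, K of which carry the GO term."""
    denom = math.comb(N, n)
    return sum(math.comb(K, x) * math.comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / denom

def go_enrichment_pvalue(term, block_terms, universe, target):
    """One-tailed enrichment of `term` in the target block set versus the
    universe; block_terms maps block id -> set of GO terms inherited from
    transcripts within 5 kb."""
    K = sum(term in block_terms[b] for b in universe)
    k = sum(term in block_terms[b] for b in target)
    return hypergeom_sf(k, len(universe), K, len(target))
```

Testing on blocks rather than genes, as the text notes, keeps genes with many blocks from dominating the null distribution.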
After strict filtering to ensure high-quality syntenic alignments (see Materials and Methods), we obtained a set of ~4.7 million alignments covering ~410 Mb of the human genome (median length 64 bp). We call these regions "blocks" because they are derived from the putatively orthologous sequence alignment blocks in MAF files. This approach to identifying candidate regions for evolutionary analysis allows alignability and conservation to define orthologous regions of variable length, in contrast to windows of arbitrary fixed size or restriction to a predefined set of genomic elements. Also, candidate regions are not required to be evolutionarily conserved in (a subset of) the species in our analysis; this is different from previous work focusing on conserved noncoding elements (Pollard, Salama, et al.; Kim and Pritchard 2007). Next, we performed an LRT on each block to detect lineage-specific acceleration in substitution rate in the Hominini since the macaque-hominin ancestor (see Materials and Methods). Controlling the FDR at 10% (see Materials and Methods), we found 3,805 blocks (~0.081%) with significant evidence of accelerated substitution rate in the Hominini. These accelerated blocks cover 611,318 bp of the human genome (0.15% of the noncoding bp analyzed). Accelerated blocks tend to be slightly longer than nonaccelerated blocks, as expected because the power of our test is higher in longer blocks. But this trend does not translate into a strong correlation between block length and LRT statistic (see Materials and Methods). Accelerated blocks have roughly the same GC content as the average MAF block (43% accelerated vs. 40% average), but they tend to be in gene-rich regions. Although the average noncoding block in our data set is within 100 kb of 2.4 genes (i.e., transcript clusters; see Materials and Methods), accelerated blocks are within 100 kb of 3.1 genes on average. Noncoding Regions Near Duplicated Genes Evolve Rapidly.
We hypothesized that adaptive evolution favoring gene expression divergence after duplication may have generated an excess of accelerated blocks near duplicated genes. To explore this idea, we employed gene tree to species tree reconciliation, based on Ensembl peptide and gene family annotations, to identify duplication events in the mammalian phylogeny. Using these duplication histories, we defined ''expanded'' gene families as sets of homologs with more members in the Hominini than in macaque (see Materials and Methods). We refer to the noncoding regions within 5 kb of a peptide in an expanded gene family as ''DA loci,'' and we call the 26,283 blocks in these loci ''DA blocks.'' Note that proximity to a peptide in an expanded family does not necessarily imply that the block itself was duplicated. To functionally characterize this set of DA blocks, we conducted GO enrichment and depletion analyses. GO analyses were performed using a novel method that maps GO terms to noncoding elements and performs statistical analysis on the elements themselves, rather than the genes (see Materials and Methods). This approach adjusts for the different distributions of noncoding elements around different categories of genes (Taher and Ovcharenko 2009). We found that terms related to signal transduction, response to stimulus, and metabolic processes are enriched among DA blocks compared with all MAF blocks (tables 1 and 2). Using FET and ORs, we investigated evidence of acceleration in DA blocks compared with non-DA blocks. Table 3 presents an overview of all the tests we conducted. Each row corresponds to a test for association, and the columns contain the type of blocks considered in the comparison, the attributes compared, FET P value, and OR with a 95% confidence interval (CI). Contingency tables for each comparison can be found in the supplement (supplementary results SR5, Supplementary Material online). We identified 171 significantly accelerated DA blocks (see table S2 in the supplement).
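The FET/OR comparisons summarized in table 3 can be reproduced in outline from the counts quoted in the text (~4.7 million blocks in total, 3,805 accelerated, 26,283 DA blocks, 171 of them accelerated). The CI construction below is a standard Wald interval on the log odds scale, an assumption, since the text does not state how its CIs were computed:

```python
import math
from scipy.stats import fisher_exact

# 2x2 table reconstructed from counts quoted in the text (totals approximate).
a = 171                                  # DA, accelerated
b = 26283 - 171                          # DA, not accelerated
c = 3805 - 171                           # non-DA, accelerated
d = 4_700_000 - 26283 - (3805 - 171)     # non-DA, not accelerated

# One-sided Fisher's exact test for enrichment of acceleration in DA blocks.
odds_ratio, p_value = fisher_exact([[a, b], [c, d]], alternative="greater")

# Wald 95% CI for the odds ratio (assumed method, on the log scale).
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], P = {p_value:.2g}")
```

The resulting odds ratio of roughly 8.4 matches the value of 8.41 reported in the text, suggesting the quoted counts are internally consistent.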
Thus, acceleration is roughly ten times more common in DA blocks than in non-DA blocks (171/26,283 ≈ 0.65% compared with 0.078% in non-DA blocks). This enrichment of accelerated blocks near duplicated genes is highly significant (FET: P < 1 × 10^-15, OR = 8.41; see row one in table 3). The 48 genes with accelerated DA blocks nearby are an interesting set of candidates for functional divergence in Hominini (table 4). Many of these genes are paralogous members of the same families and/or belong to related pathways (see below). Performing GO enrichment tests to compare accelerated with nonaccelerated DA blocks, we found that accelerated DA blocks are enriched in GO terms related to transferase activity (glycosyl and hexosyl groups), metabolism (steroid and estrogen), G-protein-coupled receptor (GPCR) activity (including olfactory receptors), and visual perception (table 5). Notably, enriched terms also include female gamete generation, whereas depleted terms include spermatogenesis (table 6).

Noncoding Regions Near Daughter Genes Are More Accelerated than Their Parents

Having established a strong association between duplicated loci and accelerated substitution rates, we next attempted to delineate patterns of accelerated evolution under different duplication scenarios (fig. 2). A subset of DA blocks can be polarized to be associated uniquely with either a daughter peptide (new genomic location) or a parent peptide (preduplication genomic location; see Materials and Methods; table 3). Thus, for the subset of duplicated peptides where we can infer a parent-daughter relationship, we find much faster noncoding evolution in the regulatory regions of the newly formed daughter gene. This asymmetry parallels the pattern seen in the protein sequences of parent and daughter duplicates.
To assess if acceleration of DA blocks happened before or after the duplication of peptides on the hominin lineage, we compared LRT statistics from polarized blocks near parent and daughter peptides. We find minimal differences (table 3). This suggests that noncoding sequences that are close to, but not included in, duplication events evolve more rapidly than noncoding sequences that are either 1) far away from duplication events or 2) duplicated alongside a gene. In light of these results, we asked whether our previous result that acceleration is enriched in DA blocks still holds for chained DA blocks. We find that this is indeed the case (FET: P < 1 × 10^-15, OR = 4.05; see row two in table 3). We note that accelerated substitutions in nonchained DA blocks have a distinct interpretation from the same phenomenon in chained DA blocks. Although the latter induce changes in duplicated sequences, the former affect ''ancestral'' sequences close to duplicated loci. That is, noncoding sequence that did not previously regulate any gene has been co-opted to (presumably) regulate a newly duplicated locus placed nearby. This type of change has been associated with the gain of transcriptional regulation of retrotransposed duplicated genes, which are not copied with any flanking noncoding sequences. The division of DA blocks into chained and nonchained sets also allows us to explore whether the association we observed between acceleration and daughter peptides (see above) is driven by nonchained blocks. Focusing only on chained DA blocks, we still find significantly faster evolution in daughter compared with parent loci (FET: P = 0.002, OR = 23.69; see row five in table 3). Assuming co-occurrence on an alignment chain indicates duplication of the noncoding sequence with the gene, this finding suggests that the derived (i.e., daughter) noncoding sequence is more likely to diverge than the ancestral copy.

Fast-Evolving Blocks Are Enriched in Flanking Regions and Exonic 5′ UTRs
We developed a bioinformatics pipeline to annotate each block with respect to human gene structure (e.g., UTRs, introns, flanking sequences; see Materials and Methods). Using these transcript-based annotations, we investigated whether or not acceleration occurs uniformly across different noncoding genic location categories. Figure 3 shows the log odds score for each annotation category compared with its complement, together with a 95% CI. A positive log odds score indicates an enrichment of accelerated blocks in the respective category, whereas a negative score indicates depletion. We find enrichment for acceleration in 5′- and 3′-flanking regions, as well as in exonic 5′ UTRs. In contrast, introns are relatively depleted of accelerated blocks. To further investigate these results, we performed an equivalent analysis using log-linear models. The main advantages of this approach are that 1) we account for all pair-wise correlations between variables simultaneously and 2) we correct for possibly confounding factors, such as GC content, alignment length, and alignment depth. This analysis yielded qualitatively similar results to the log odds scores in figure 3 (supplementary results SR1 and supplementary figure S3, Supplementary Material online). Next, we investigated whether these distributions of accelerated blocks are similar in DA blocks. Although the enrichment patterns are not as clear-cut (supplementary fig. S2, Supplementary Material online), we do find acceleration to be enriched in 3′-flanking regions and weakly in exonic 5′ UTRs but not in 5′-flanking regions.

Patterns of Acceleration Are Not Driven by Changes on the Chimpanzee Lineage

To account for potential false positives introduced by sequencing, assembly, or alignment errors in the 6X chimp genome, we repeated all of the above tests involving acceleration without chimp sequence in the alignments.
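The log odds scores plotted in figure 3 (each category versus its complement, with a 95% CI) reduce to a few lines; the counts below are hypothetical, chosen only to show the computation, and the Wald-style CI is an assumption about the method:

```python
import math

def log_odds_enrichment(acc_in, tot_in, acc_out, tot_out):
    """Log odds of acceleration in a category vs. its complement,
    with a Wald 95% confidence interval on the log scale."""
    a, b = acc_in, tot_in - acc_in      # category: accelerated / not
    c, d = acc_out, tot_out - acc_out   # complement: accelerated / not
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return log_or, (log_or - 1.96 * se, log_or + 1.96 * se)

# Hypothetical counts: 120 accelerated of 50,000 blocks in one genic
# category vs. 3,685 accelerated in the remaining 4.65 million blocks.
log_or, (ci_lo, ci_hi) = log_odds_enrichment(120, 50_000, 3_685, 4_650_000)
print(f"log OR = {log_or:.2f}, 95% CI [{ci_lo:.2f}, {ci_hi:.2f}]")
```

A log OR above zero (CI excluding zero) would plot as an enriched category in figure 3; below zero, as depleted.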
This filtering did not qualitatively change our results (supplementary results SR2, Supplementary Material online).

Turnover of TFBS

To provide a complementary and functionally oriented analysis of divergence, we assessed the predicted impact of substitutions on TF-binding affinity in our data set of ~4.7 million noncoding blocks within 5 kb of a human gene. Specifically, we scored human and macaque sequences using motifs for 11 families of TFs from the JASPAR database (Wasserman and Sandelin 2004) to identify predicted binding sites in each species. Using a novel approach (see Materials and Methods), we assessed the statistical significance of total binding site gain and loss (''TFBS turnover'') in each block. At an FDR of 10%, we identified 13,067 blocks (~0.3%) with significant TFBS turnover between human and macaque. We refer to these as ''turnover blocks.'' Using FETs and ORs, we examined the association between TFBS turnover and 1) accelerated substitution rates (significant LRTs) and 2) duplication status.

Accelerated Blocks Exhibit High TFBS Turnover

First, we asked whether TFBS turnover occurs at a higher rate in blocks that show accelerated substitution rates in the Hominini. We find that accelerated blocks have much higher rates of TFBS turnover compared with nonaccelerated blocks (FET: P < 1 × 10^-15, OR = 8.04; see row seven in table 3). To some extent such a correlation is expected because accelerated blocks have, on average, higher substitution rates than nonaccelerated blocks (supplementary fig. S5, Supplementary Material online). Higher substitution rates, in turn, mean a higher probability of destroying or creating a TFBS. On the other hand, higher substitution rates are not sufficient to explain TFBS turnover. This is illustrated by the fact that the vast majority of turnover blocks are not accelerated.
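The motif-scanning step behind the TFBS turnover analysis, scoring each species' sequence against a position weight matrix and comparing best scores to a cutoff, can be illustrated with a toy matrix. This is not an actual JASPAR motif, and it is a simplification of the statistical turnover test described above:

```python
import numpy as np

# Toy log-odds position weight matrix for a 4-bp motif; rows are A, C, G, T.
# Illustrative only, not taken from JASPAR.
pwm = np.array([
    [ 1.2, -1.0, -1.0,  1.0],   # A
    [-1.0,  1.5, -1.0, -1.0],   # C
    [-1.0, -1.0,  1.5, -1.0],   # G
    [ 1.0, -1.0, -1.0,  1.2],   # T
])
IDX = {"A": 0, "C": 1, "G": 2, "T": 3}

def best_site_score(seq, pwm):
    """Score every window of the sequence and return the best PWM score."""
    w = pwm.shape[1]
    scores = [
        sum(pwm[IDX[base], j] for j, base in enumerate(seq[i:i + w]))
        for i in range(len(seq) - w + 1)
    ]
    return max(scores)

# A simple gain call: the site passes the cutoff in human but not in macaque
# (a single C->T substitution destroys the predicted site here).
human, macaque, cutoff = "TTACGTAA", "TTATGTAA", 4.0
gain = best_site_score(human, pwm) >= cutoff > best_site_score(macaque, pwm)
print(gain)
```

Summing such gain and loss calls per block, and assessing their significance against a null model, is conceptually what the block-level turnover test does.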
Nevertheless, associations between TFBS turnover and accelerated blocks are to some degree inherent, and FET P values have to be taken with a large grain of salt. Keeping the above in mind, we find that DA blocks are enriched for TFBS turnover compared with non-DA blocks (P = 0.03, OR = 1.26; see row eight in table 3) and that the association of acceleration and turnover remains significant if we focus on DA blocks exclusively (FET: P = 3.12 × 10^-5, OR = 11.00; see row nine in table 3). In fact, among accelerated blocks, DA blocks have higher odds of TFBS turnover than non-DA blocks, although this trend is not significant (FET: P = 0.27, OR = 1.68; see row ten in table 3). We note that the reported associations are largely descriptive. More sophisticated analyses are needed to unambiguously disentangle the correlation between acceleration and TFBS turnover arising purely because of accelerated substitution rates from a biological signal. TFBS turnover DA blocks are enriched in many of the same GO terms as accelerated DA blocks, but lack the GPCR and olfactory receptor-related terms (table 7). Also, turnover blocks are enriched in RNA-related GO terms (binding and transport), as well as in terms related to regulation of development (including embryonic and mammary gland). The only GO term we find depleted in TFBS turnover DA blocks is signal transduction. Overall, our results concerning TFBS turnover are in line with our findings on accelerated substitutions. Our data support the hypothesis that higher rates of substitution result in more binding site turnover, potentially contributing to changes in the transcriptional regulation of nearby genes.

FIG. 2. Categories of noncoding blocks. First, noncoding blocks are divided according to whether they are DA (i.e., within 5 kb of a duplicated gene, DA blocks) or not. DA blocks are then further split into chained and nonchained sets. Additionally, a subset of each of these sets is said to be polarized, that is, the peptides close to the blocks can be classified as either daughter or parent with respect to the duplication event on the hominin lineage. The number of blocks in each category is given in parentheses. Overall, there is an abundance of chained compared with nonchained blocks. Polarized parent blocks tend to be chained, whereas polarized daughter blocks tend to be nonchained. See Materials and Methods section for details regarding definitions.

Fast-Evolving Noncoding Sequences Are Associated with Pregnancy-Related Genes

Because GO enrichment analysis has a known set of limitations (Khatri and Draghici 2005; Taher and Ovcharenko 2009), we manually analyzed the genes near accelerated DA blocks (table 4) with respect to their functions as annotated in public databases and the literature. We find three PRAME genes and five olfactory receptor genes. Both families experienced positive selection on the protein level in the human lineage. Additionally, table 4 contains three genes from the UDP glycosyltransferase superfamily, which is known to exhibit copy number variations in humans. Our results suggest that functional changes in these families may have occurred through divergence in both protein structure and gene regulation. We also find genes related to immunity and to metabolism, both of which are functional categories that have been identified in the context of positive selection and duplicated genes. Additionally, we find two chorionic gonadotropins (CGs: CGB5 and CGB7) and a chorionic somatomammotropin (CSHL1). Both CGs and CSHL1 are expressed in the placenta and play a crucial role in pregnancy. Motivated by this observation, we asked whether other genes in table 4 might also be associated with pregnancy by looking for placental expression and/or pregnancy-related functional annotation.
CGs regulate endometrial functions by influencing progesterone, a hormone that is catalyzed to its inactive form by another gene in table 4, AKR1C1, which encodes an aldo-keto reductase. AKR1C1 utilizes NAD and/or NAD(P)H as cofactors. NAMPT (also in table 4) is an NAD(P) biosynthetic enzyme; NAD(P)H is active in the placenta, and there is evidence that it is a modulator of antioxidant stress response in early pregnancy. Another gene we find, CYP4A22, a cytochrome P450 superfamily member, is part of the PPAR-gamma signaling pathway. PPAR-gamma, in turn, is essential for placental development. UGT2B15 and UGT2B28 (both in table 4) are part of the androgen and estrogen metabolism pathway, and estrogen is prominently involved in regulation of the menstrual cycle and pregnancy. Also, copy number variations in UGT2B28 may influence fetal development and gestation length. In addition, there is an abundance of potentially pregnancy-related terms among GO terms enriched for accelerated DA blocks (e.g., female gamete generation, estrogen metabolism, hormone activity; table 5) and TFBS turnover DA blocks (embryonic development, mammary development, and cell fate determination; see table 7).

FIG. 3. Enrichment and depletion for acceleration in different genic locations. The panel shows log ORs for acceleration for each genic location category (compared with its complement). A positive log OR indicates enrichment for accelerated blocks in that category, whereas a negative log OR indicates depletion of acceleration. Bars correspond to 95% CIs. The number of blocks in each category is given above each pair of bars. Analyses excluding the chimp genome sequence are shown in orange. Nonambiguous 5′- and 3′-flanking regions and 5′ exonic UTRs are enriched for accelerated blocks, whereas intronic sequences are depleted for acceleration.
Together, these findings constitute compelling if circumstantial evidence that noncoding sequence evolution near duplicated loci played a role in the lineage-specific evolution of pregnancy and reproduction.

Discussion

We conducted a high-resolution genome-wide scan for accelerated substitution rates in noncoding sequences within 5 kb of all human genes. Genes belonging to families that expanded through gene duplication in the hominin lineage show enrichment for accelerated evolution in associated noncoding sequences. Noncoding elements that most likely duplicated along with the coding sequence of the associated gene (i.e., chained blocks of daughter genes) are particularly enriched for acceleration. Flanking sequences and exonic 5′ UTRs are enriched for elevated substitution rates, especially compared with introns, which are relatively depleted of accelerated elements. Rapid evolution of 5′ UTR elements could affect transcription and is consistent with a recent study that correlates changes in the TSSs of recently duplicated genes with expression changes (Park and Makova 2009). Because 5′ UTR and flanking regions are enriched for regulatory elements, their particularly rapid divergence suggests the possible action of positive selection to modify the expression patterns of duplicate genes. However, we emphasize that our analyses cannot distinguish positive selection from neutral mutational processes that might affect substitution rates in a lineage-specific manner. To further pursue the link between noncoding sequence evolution and gene expression, we investigated noncoding elements associated with human genes for the effects of substitutions on predicted TFBS. We found that duplicated loci have more noncoding elements in which sequence differences between human and macaque are predicted to affect TF binding.
Together, our findings are consistent with the hypothesis that modification of the regulation of duplicated genes is an important mechanism for the evolution of hominin-specific traits. We took several precautions to control for false positives and ensure the quality of our data. Because duplicated noncoding regions are particularly difficult to align and incorrect alignment can lead to false inference about evolutionary events, conservative quality filtering of sequence alignments was an essential component of our analysis. It is nonetheless possible that alignment errors contributed to our estimates of substitution rates in some regions. However, we do not expect that such bias would lead to an inference of accelerated evolution in the Hominini in particular. For instance, we performed our analyses twice, once with the chimp sequence and once without it. Although it could be hypothesized that the lower coverage shotgun-sequenced chimp genome would lead to false signals of acceleration, we find qualitative agreement between the two analyses (supplementary results SR2, Supplementary Material online). Also, although we cannot rule out that the enrichment of accelerated blocks in exonic 5′ UTRs is due to hypervariable methylated CpG dinucleotides decaying to CA or TG, we find that accelerated exonic 5′ UTRs on average have roughly the same GC content as nonaccelerated exonic 5′ UTRs (58.2% accelerated vs. 58.0% nonaccelerated). We focus on the Hominini because previous studies found accelerated gene duplication, sometimes accompanied by amino acid divergence, in the ape lineage. From the point of view of understanding human evolution, many biologically important human traits are shared with chimpanzee and other great apes. Hence, the fast-evolving noncoding sequences that we identified are candidates for understanding the genetic basis of human-specific biology.
Furthermore, by studying evolution over tens of millions of years, we have more power to detect changes in substitution rates than we would if we focused on events that took place in the ~6 million years since the human-chimp ancestor. Our approach could of course be used to study noncoding sequence evolution in loci that duplicated on the human lineage or other lineages of interest. Further studies will determine if a propensity toward accelerated evolution in noncoding sequences is a universal characteristic of duplicated loci. Several previous publications have focused on predicted functional noncoding sequences with accelerated substitution rates in the human branch (e.g., Pollard, Salama, et al.). In this study, we extended that approach in two ways. First, we expanded the set of candidate regions by considering all noncoding sequence in the vicinity of all known genes, not just deeply conserved elements. Noncoding sequences near genes are known to harbor regulatory elements, and sequence changes in these regions have the potential to modify the expression of the associated gene. Second, by including information about gene duplication, our method aims to identify regions that are able to take on new functions by two complementary evolutionary mechanisms: gene expression divergence and protein sequence divergence. On one hand, we find some agreement between these two levels of evolution in the sense that both appear to occur more often in the daughter copies of recently duplicated genomic loci. Interestingly, however, our data show a nonsignificant negative correlation between accelerated rates of protein and regulatory sequence evolution. This observation suggests the hypothesis that relatively disjoint subsets of proteins have evolved at the regulatory versus protein-coding level in the hominin lineage. But exceptions to this rule are known (e.g., CGB, see Introduction).
We performed GO term enrichment analysis of genes with fast-evolving regulatory regions. We note that enrichment analysis of noncoding sequences using GO is prone to ascertainment bias (Taher and Ovcharenko 2009). In this study, we account for ascertainment bias by performing enrichment tests on the block level, mapping GO terms from genes to the associated noncoding sequences and performing tests on the set of noncoding blocks. Using this approach, we highlight functional categories, such as reproduction, host defense, and metabolism. Many of these terms have been mentioned before in the context of positive selection at the protein level, but our analysis also highlights several processes and pathways that have not been emphasized in studies of single-copy genes. For instance, some of the genes we identified with hominin-specific acceleration in their regulatory regions are connected to placentation. Although there are multiple differences between human and macaque pregnancies (de Rijk and van Esch 2008), it has been argued that some of these differences are not very large, especially when factors such as body size are taken into account (Martin 2007). However, one particularly notable difference between the pregnancies of humans and other primates involves the formation of the trophoblastic shell by cytotrophoblasts. In macaques and baboons, the shell is continuous and sharply delineated from the endometrium. In humans, on the other hand, extravillous trophoblast cells invade the uterine stroma (Carter 2007). CG is necessary for the invasion of cytotrophoblasts into the endometrium during embryo implantation (Henke and Gromoll 2008). Interestingly, CG genes are highlighted by our genomic approach, making them and other genes in our list excellent targets for functional studies of human-macaque differences in pregnancy. 
Unfortunately, it is challenging to contrast commonalities of human and chimp pregnancies with those of macaques, as placentation in chimpanzees remains poorly studied (Carter 2007). This is the first genome-wide study to address the question of whether genetic divergence in noncoding sequences might contribute to functional divergence of duplicated genes in the hominin lineage. Consistent with the hypotheses that 1) divergence between closely related species occurs through changes in gene regulation and 2) duplicated regions are enriched for genetic and functional divergence, we find a strong propensity for rapid sequence evolution in noncoding elements near duplicated genes. We quantify this rapid evolution in terms of substitution rates and predicted TFBS turnover. Using both metrics, we find an excess of fast-evolving elements associated with duplicated genes. Together with evidence of accelerated evolution in the coding sequence of young duplicates, our results support the view that two sources of genetic variation, structural rearrangements and point mutations, synergistically contribute to the evolution of new traits.
Ferroelectric soft phonons, charge density wave instability, and strong electron-phonon coupling in BiS2 layered superconductors: A first-principles study. Very recently a new family of layered materials containing BiS2 planes was discovered to be superconducting at temperatures up to Tc = 10 K, raising questions about the mechanism of superconductivity in these systems. Here, we present state-of-the-art first-principles calculations that directly address this question and reveal several surprising findings. The parent compound LaOBiS2 possesses anharmonic ferroelectric soft phonons at the zone center with a rather large polarization of $\approx 10 \mu C/cm^2$, which is comparable to the well-known ferroelectric BiFeO3. Upon electron doping, new unstable phonon branches appear along the entire line Q=(q,q,0), causing Bi/S atoms to order in a one-dimensional charge density wave (CDW). We find that BiS2 is a strong electron-phonon coupled superconductor in the vicinity of competing ferroelectric and CDW phases. Our results suggest new directions to tune the balance between these phases and increase Tc in this new class of materials.
Vitamin D receptor gene polymorphism in men and its effect on bone density and calcium absorption. OBJECTIVE: Previous studies have suggested that polymorphism of the alleles of the vitamin D receptor (VDR) gene may account for the major part of the heritable component of bone density in women, possibly mediated in part by impaired calcium absorption from the bowel. In view of the increasing importance of osteoporosis in men, we have now investigated the association between common allelic variations in the vitamin D receptor gene, calcium absorption, and bone density in men.
Driver Drowsiness Detection System: With an average of 1.4 billion vehicles on the road worldwide, the saturation rate has increased by around 18 percent per year globally. With this number of vehicles, an average person commutes at least 15 to 20 km per day. Moreover, due to the increased use of roadways by businesses such as logistics and transport, the commute distance may extend to thousands of kilometers per day, increasing the traffic level on highways worldwide. With all these aspects in mind, there is a strong chance that drivers must drive for long hours and distances, leading to drowsiness among drivers. To overcome this situation and to avoid human casualties in the future, I am trying to set up a small mechanism to alert the driver and their loved ones whenever a driver becomes drowsy while driving. The Driver Drowsiness System (D3S) is a mechanism that reads the facial expression of the individual sitting in the driver's seat and detects whether the driver is sleeping or about to sleep. On determining this expression, the system generates a loud alarm which wakes the driver and avoids mishaps on the road.
Digital discrimination: Political bias in Internet service provision across ethnic groups The global expansion of the Internet is frequently associated with increased government transparency, political rights, and democracy. However, this assumption depends on marginalized groups getting access in the first place. Here we document a strong and persistent political bias in the allocation of Internet coverage across ethnic groups worldwide. Using estimates of Internet penetration obtained through network measurements, we show that politically excluded groups suffer from significantly lower Internet penetration rates compared with those in power, an effect that cannot be explained by economic or geographic factors. Our findings underline one of the central impediments to liberation technology, which is that governments still play a key role in the allocation of the Internet and can, intentionally or not, sabotage its liberating effects.
Polydopamine-functionalized nanographene oxide: a versatile nanocarrier for chemotherapy and photothermal therapy For releasing both drug and heat to selected sites, a combination of chemotherapy and photothermal therapy in one system is a more effective way to destroy cancer cells than monotherapy. Graphene oxide (GO) with high drug-loading efficiency and near-infrared (NIR) absorbance has great potential in drug delivery and photothermal therapy, but it is difficult to load drugs with high solubility. Herein, we develop a versatile drug delivery nanoplatform based on GO for integrated chemotherapy and photothermal therapy by a facile method of simultaneous reduction and surface functionalization of GO with poly(dopamine) (PDA). Due to the excellent adhesion of PDA, both low and high solubility drugs can be encapsulated in the PDA-functionalized GO nanocomposite (rGO-PDA). The fabricated nanocomposite exhibits good biocompatibility, excellent photothermal performance, high drug loading capacity, an outstanding sustained release property, and efficient endocytosis. Moreover, NIR laser irradiation facilitates the release of loaded drugs from rGO-PDA. These features make the rGO-PDA nanocomposite achieve excellent in vivo synergistic antitumor therapeutic efficacy.
The Relationship of Tobacco, Alcohol, and Betel Quid with the Formation of Oral Potentially Malignant Disorders: A Community-Based Study from Northeastern Thailand

This study's objective was to describe the relationship between the main risk factors for oral cancer, including tobacco (in the form of cigarettes, smokeless tobacco (SLT), and secondhand smoking (SS)), alcohol, and betel quid (BQ), and the occurrence of oral potentially malignant disorders (OPMDs). A community-based case-control study was conducted with a population of 1448 adults aged 40 years or above in northeastern Thailand. Patients aged 60 years or above (OR 1.79, p < 0.001) and female patients (OR 2.17, p < 0.001) had a significantly higher chance of having OPMDs. Our multivariate analysis showed that the most potent risk factor for OPMD occurrence was betel quid (adjusted OR 4.65, p < 0.001), followed by alcohol (OR 3.40, p < 0.001). Even former users were at risk of developing OPMDs. A synergistic effect between these main risk factors was significantly shown in the group exposed to SLT, SS, BQ, and alcohol. The most potent synergistic effect was found in the group exposed to SLT, BQ, and alcohol, with an OR of 20.96.

Introduction

Oral cancer was ranked 16th among the 36 cancers in 2018. It was one of the leading causes of death worldwide, with 177,384 deaths and an estimated 354,864 new cases. A study conducted in Thailand in 2013 found that the female population in the area chews betel nut, smokes, and drinks alcohol. In that study, betel quid (BQ) chewing was a significant risk factor for the development of oral cancer within that geographical region. Oral potentially malignant disorders (OPMDs) are widely known for their potential to transform into oral cancer. Moreover, many studies have revealed that the risk factors for OPMDs are similar to those for oral cancer (oral squamous cell carcinoma).
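The adjusted odds ratios quoted above come from a multivariate model. As a general illustration of confounder adjustment (not the study's actual model, which was logistic regression), a Mantel-Haenszel odds ratio stratified by a hypothetical age grouping can be computed directly from 2x2 tables:

```python
def mantel_haenszel_or(strata):
    """Odds ratio adjusted for a stratifying confounder (e.g., age group)
    via the Mantel-Haenszel estimator. Each stratum is (a, b, c, d):
    exposed cases, exposed controls, unexposed cases, unexposed controls."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Synthetic counts, illustrative only (not the study's data):
# stratum 1 = under 60, stratum 2 = 60 and over.
strata = [(20, 80, 10, 190), (30, 50, 15, 105)]
print(f"adjusted OR = {mantel_haenszel_or(strata):.2f}")
```

Stratifying in this way removes confounding by the stratification variable; a logistic model generalizes the same idea to several covariates at once.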
The likelihood of OPMDs transforming into oral cancer depends on many factors, including the location of the lesion, its clinical characteristics and size, the age of the patient, the duration of the onset of the lesion, and the patient's behaviors. Although OPMDs are less commonly found in females [4], the Indian study by Silverman et al. in 1976 showed that the female population had a higher malignant transformation rate compared to males. This requires gender factors to be further considered in the development of lesions. A 2014 study by Wang et al. found that the overall rate of malignant transformation was 4.3%. In addition, the malignant change was significantly greater for lesions characterized by mucosal dysplasia. It was found that OPMDs on the tongue and floor of the mouth were less likely to be found but had a greater chance of turning into malignancies [5]. Hence, understanding the risk factors for both conditions is crucial and plays an important role in oral cancer screening and prevention. According to a population-based study of an oral cancer screening program in Taiwan, delayed diagnosis and mortality were reduced by 21% and 26%, respectively. It also found a 45% greater survival rate in the screened group compared to the unscreened group. Concerning risk factors for both conditions, the use of tobacco and tobacco products (e.g., snuff) and betel quid (BQ) chewing are widely accepted as very potent risk factors, and alcohol drinking is also a commonly associated risk factor. Previous studies have shown an association between each individual risk factor and OPMDs or oral cancer. Nevertheless, studies of the synergistic effects among these common risk factors are still limited. A statistically significant association between OPMDs and habits has been demonstrated in many studies, although regional differences exist.
In Asia, leukoplakia is known to be associated with BQ (pan, areca quid) chewing and smoking (bidi, cigarette), whereas in Western countries, it is associated with cigarette smoking, snuff, and alcohol consumption. Therefore, this study was performed to analyze the relationship between these common risk factors and the occurrence of OPMDs, as well as their synergistic effects when patients are exposed to more than one risk factor. This community-based case-control study was undertaken as part of an oral cancer screening project in Thailand Health Region 9 (the provinces of Nakhon Ratchasima, Chaiyaphum, Buriram, and Surin). The aim of the study was to evaluate the relationship between the main risk factors for oral cancer, including tobacco (in the form of cigarettes, smokeless tobacco (SLT), secondhand smoking (SS)), alcohol, and betel quid (BQ), as well as their synergistic effects, and the occurrence of oral potentially malignant disorders (OPMDs). Materials and Methods Based on a review of the literature and related research by Kumar et al., the incidence of OPMD lesions was 28.4% in smokers and 8.4% in non-smokers. The sample size was calculated using the formula for binary logistic regression. In order to reduce data discrepancies, the sample size of each group was increased by 10% as compensation. Therefore, the sample size was 85 samples for each group, giving a final total of at least 170 samples. The calculated n was the minimum number needed for statistically valid results. Nevertheless, as part of the leading research project (Development of Disease Management Model of Oral Cancer with an Integration Network of Screening, Surveillance, and Treatment in Nakorn-Chai-Bu-Rin), which involved a wide-ranging study area, a much larger population inevitably participated.
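The sample-size reasoning above can be illustrated with a generic two-proportion formula. This is only a sketch: the authors state they used a formula for binary logistic regression, which is not given in the text, so the generic formula below yields a smaller n than the 85 per group they report.

```python
from math import sqrt, ceil

def n_per_group(p1, p2, z_alpha=1.959964, z_power=0.841621):
    """Classic two-proportion sample size (normal approximation),
    two-sided alpha = 0.05 and power = 0.80 by default.
    Note: a sketch, not the binary-logistic-regression formula the study used."""
    p_bar = (p1 + p2) / 2
    num = z_alpha * sqrt(2 * p_bar * (1 - p_bar)) \
        + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return ceil((num / (p1 - p2)) ** 2)

# OPMD incidence from Kumar et al.: 28.4% in smokers, 8.4% in non-smokers
n = n_per_group(0.284, 0.084)
n_adjusted = ceil(n * 1.1)   # +10% compensation, as described in the study
```

With these inputs the generic formula gives 58 per group (64 after the 10% compensation), illustrating why the study's logistic-regression-based calculation, which gave 85 per group, is the more conservative choice.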
This study, therefore, decided to collect patient data with completeness up to the community hospital level, at which specialized dentists diagnosed the lesions; this approach met the inclusion and exclusion criteria of the research. Data collection began with an initial screening at the village level (S1) with an oral cancer risk screening questionnaire administered by healthcare volunteers. Subsequently, patients with risk factors were referred to the residential sub-district hospital (S2) for further examination by an oral hygienist. The oral hygienists screened for lesions and recorded all details of the patients' risk factor exposure histories. Based on the sub-district screening, data were recorded in the online research operating system database for the oral cancer project. Screening continued at the community hospital (S3), where dentists cooperated with the research team from Mahidol University and the Center of Excellence in Oral Cancer Maharat Nakhon Ratchasima Hospital, working together on oral examination, diagnosis, and treatment. According to the S3-level screening form, it was possible to identify vulnerable patients with OPMDs by providing diagnoses based on clinical features. The data were obtained from August 2019 to February 2021. A total of 1448 patients aged 40 years or older were enrolled in this study. Inclusion criteria were data from the leading research project screened by specialized dentists at the S3 level. Exclusion criteria were data from patients who had a history of head and neck cancer, patients who used medications or had systemic conditions related to abnormal oral manifestations, and data from dropout patients or incomplete records. The study was approved by the Ethics Committee. Smokers were defined as those who had smoked at least one cigarette per day for six months continuously. Frequency was recorded as cigarettes per week, and the duration of the habit was also recorded.
Alcohol use was defined as drinking alcoholic beverages at least once a week continuously, including beer, hard liquor, and herbal liquor; wine was uncommon in the study area. BQ chewing was defined as chewing BQ for at least six months continuously. Dropouts occurred at the junction of each level of screening; the numbers of participants and dropouts are presented in Figure 1. The data were analyzed using IBM SPSS Statistics for Windows, version 25.0 (IBM Corp., Armonk, NY, USA). Baseline characteristics were analyzed with descriptive statistics: frequency and percentage for categorical data, and mean, standard deviation, median, and interquartile range for continuous data. Continuous variables were compared using the independent t-test or Mann-Whitney U test, and categorical variables were compared using the Chi-squared test or Fisher's exact test, as appropriate. The risk factors associated with OPMDs were selected in the logistic regression analysis, performed by univariate and multivariate analysis. Odds ratios (95% CI) are presented, and a p-value < 0.05 was considered statistically significant. Kaplan-Meier survival analysis with a log-rank test was used to compare OPMD presentation related to duration of exposure for each risk factor. Results Overall, 72.7% of patients were exposed to or had a history of exposure to one or more main risk factors (Table 1, Figure 2), including 14.6% smokers, 5.5% former smokers, 18.9% SS, 6.4% SLT users, 1.0% former SLT users, 36.3% BQ chewers, 2.4% former chewers, 16.3% alcohol drinkers, and 6.8% former drinkers. Our study had more female participants (N = 992) than males (N = 456). The results also showed differences in risk factor exposure between genders: smoking and drinking habits were mainly found in male patients, whereas most female patients exposed to tobacco smoke were exposed to SS.
Female patients used SLT and chewed BQ more commonly than male patients. A univariate analysis (Table 2) showed that OPMDs were found significantly more often in females (2.17 times more than in males). Patients aged 60 years or older had a 1.7-fold greater risk of lesions compared to the younger group. Exposure to one, two, or three risk factors increased the occurrence of lesions 3.04-, 5.40-, and 24.82-fold, respectively. In the tobacco group, there were no statistically significant results in current smokers. However, in the group of former smokers, the result showed a statistical association with OPMD occurrence (OR 0.26, p < 0.001). In SLT users, there was a significant result in current users, who were at risk of having OPMDs (OR 3.98, p < 0.001). In the BQ group, the results showed that not only current chewers (OR 6.91, p < 0.001) but also former chewers (OR 6.89, p < 0.001) had a strong association with OPMD occurrence. Those who chewed BQ for more than 30 years were 1.88 times more likely to have lesions than those who chewed for less than 30 years (OR 1.88, p = 0.001). For alcohol drinkers, the risk of OPMDs was significantly associated with the group of current drinkers (OR 1.49, p = 0.007). All univariate analyses between single risk factor exposure and OPMD occurrence are described in Table 3. Variables with a p-value of less than 0.2 were then selected for the multivariate analysis. The synergistic effect among risk factors was raised as an issue in our study, and an analysis of the groups exposed to more than one factor was performed (Table 5). Significant synergistic effects were found for combinations of the risk factors, but not in the tobacco group. The combinations of SLT + alcohol and SLT + BQ resulted in synergistic effects of 13.79-fold and 4.65-fold, respectively. Current BQ chewers with alcohol drinking showed an increased chance of developing a lesion of 9.33-fold, and BQ chewers with SS, 2.97-fold.
For a person exposed to SS who also consumed alcohol, the occurrence of lesions was increased 3.41-fold. A group analysis of exposure to three risk factors was also described in this study: significant results were found in the groups of BQ chewers with alcohol in combination with SLT or SS, at increased ratios of 20.96- and 7.30-fold, respectively. Discussion Our study shows that age affects the incidence of OPMDs. A significant difference was found between the groups under 60 years of age and 60 years or older: in the group aged 60 years or above, there was a 1.79-fold higher rate of OPMD occurrence. The finding of lesions among the elderly is consistent with past studies. In the 1977 study by Bánóczy et al. in Hungary, it was found that the peak incidence of leukoplakia was in the sixth decade, but the highest rates of transformation were in the seventh decade (7.1%) or in patients over 71 (8.2%). Large studies from India and the developing world support the view that lesions are more likely to develop and progress in older individuals. Our study found that female patients had 2.17 times more lesion presentation than males, unlike the past study by Lind et al. in 1987 in the Norwegian population, which found lesions in 102 males and 55 females, while oral cancer developed in eight males (7.8%) and six females (10.9%). Studies in large populations found that females were much less likely to have lesions. A study in Thailand by Anchisa et al. in 2019 also found that the ratio of males with lesions was greater. It is important to realize that past studies also reported a higher chance of malignant transformation in female patients than in males. The proactive fieldwork of our research team has allowed us to reach the population more thoroughly than in the past. Many populations have entered the screening system, making it possible to find lesions in groups normally missed by other screening programs.
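The odds ratios and Wald 95% confidence intervals reported throughout the Results follow the standard 2×2-table calculation. A minimal sketch, using hypothetical cell counts for illustration only (the study's actual counts are in its tables):

```python
from math import log, exp, sqrt

def odds_ratio_ci(a, b, c, d, z=1.959964):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, not taken from the paper
or_, lo, hi = odds_ratio_ci(10, 20, 5, 40)
```

For these counts the OR is (10 × 40)/(20 × 5) = 4.0; the interval excludes 1, so the association would be statistically significant at the 0.05 level, mirroring how the significant ORs in Tables 2 and 3 are read.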
The favored form of tobacco use can vary across geographic areas and cultures. Cigarettes, cigars, and pipes are the major types of tobacco smoking, while chewing tobacco and snuff are the most common forms of SLT. Smoking has long been implicated in the etiology of oral cancer and OPMDs, and many studies [6,16,19,25,26] have shown a positive association between smoking and OPMDs. In our study, conducted in northeastern Thailand, cigarette smokers outnumbered SLT users by a ratio of approximately 3:1. SLT use has been implicated as a risk factor for both oral leukoplakia and oral cancer. Furthermore, in our study, there was a statistically significant relationship between current SLT use and the occurrence of OPMDs (3.98-fold). According to a study from Puerto Rico in 2011, tobacco smoking was strongly associated with the risk of OPMDs: there was a more than four-fold increased risk among the current versus the never-exposed group; however, the risk was notably attenuated among former smokers. These findings are consistent with numerous previous studies of OPMDs [6,16,19,25,26] and oral cancer. In our study, we found that the duration and number of cigarettes smoked increased the risk of OPMDs, but these results were not statistically significant. However, those who quit smoking had a reduced chance of OPMD occurrence compared with current smokers. Alcohol is a risk factor for many cancers, including cancers of the oral cavity. The type of alcoholic beverage and frequency of consumption have an effect on cancer risk, and the risk is increased when alcohol is used with tobacco products. Oral cancer risk is likely related to overall alcohol consumption (number of years drinking) rather than the amount of drinking per day. Alcohol is strongly associated with the development of oral cancer.
It has also been proven that prolonged use of alcohol can lead to atrophy of the oral mucosa, as well as chemical stimulation of the mucosa leading to greater susceptibility to carcinogens (and alcohol itself is a carcinogen), although the association with OPMDs is currently unclear. Alcoholic beverages consumed by our population included both hard liquor and beer in combination. In contrast to many past studies, our results showed an association between alcohol drinking and OPMD occurrence; other similar results have reported a positive association with OPMDs [19,25,32,37,43]. BQ chewing is most common in Southeast Asia. The BQ contains betel leaf, betel nut, and khaini; in Thailand, turmeric is generally added. Another important component of betel is burnt tobacco, boiled tobacco, or tobacco and molasses. In 2012, in northeastern Thailand, the chewing rate of BQ was high, especially among elderly women. Several studies have shown that areca nut and betel ingredients are associated with the development of oral cancer lesions, and several have addressed betel's carcinogenicity: BQ contains substances that cause genetic changes. In an in vitro study with fibroblasts from the oral epithelium, it was shown that the key components of BQ exhibited genotoxicity, cytotoxicity, and cell-division properties related to oral cancer pathophysiology. A high risk of developing OPMDs was observed with daily chewing of BQ in our study, similar to many previous studies. Moreover, our results showed that patients who chewed BQ for 30 years or more had a 1.88-fold chance of developing lesions compared to the control group. Our study also found that exposure to one, two, or three risk factors increased the occurrence of lesions 3.04-, 5.40-, and 24.82-fold, respectively. This result supports our hypothesis of a possible synergistic effect among risk factors.
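One conventional way to quantify the synergy hypothesized above is Rothman's synergy index, which compares the joint effect to the sum of the individual effects; S > 1 indicates a more-than-additive (synergistic) joint effect. The study does not state which interaction measure it used, so this is only an illustrative check, applied to ORs quoted in the Results (SLT 3.98, current alcohol 1.49, SLT + alcohol 13.79):

```python
def rothman_synergy_index(or_a, or_b, or_ab):
    """Rothman's synergy index:
    S = (OR_ab - 1) / ((OR_a - 1) + (OR_b - 1)).
    S > 1 suggests the joint effect exceeds the sum of the individual effects."""
    return (or_ab - 1) / ((or_a - 1) + (or_b - 1))

# ORs quoted in this study's Results (univariate and combined-exposure values)
s = rothman_synergy_index(3.98, 1.49, 13.79)
```

Here S ≈ 3.7, i.e., well above 1, consistent with the paper's description of the SLT + alcohol combination as synergistic.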
Adding tobacco to BQ is a common practice in Southeast Asian countries. For current smokers, there was no significant effect on OPMDs; however, when smoking was combined with BQ chewing, a significant association was found. Our study also found that SLT use was significantly associated with the development of lesions, and if SLT was combined with other risk factors, it had a significant association with OPMDs. The use of SLT with BQ chewing increased the occurrence of OPMDs 4.48-fold, and when combined with alcohol consumption, there was an 8.56-fold greater risk compared to the group using only SLT. Moreover, when SLT was used with BQ chewing and alcohol drinking, the risk of OPMDs was significantly higher, at 20.96-fold. This is in line with a study by Amarasinghe et al. in Sri Lanka in 2010. The smoking and drinking interaction was mentioned in many past studies, highlighting an association with the occurrence of oral cancer and OPMDs. Our analyses revealed no evidence that alcohol consumption modified the effect of smoking in terms of OPMD risk. Concurrently, the group exposed to tobacco, BQ, and alcohol presented an absence of OR due to the limited population; however, all patients in this group presented OPMDs. Interestingly, our study found that the SS group alone did not have any association with disease, but when SS was combined with BQ chewing and/or alcohol drinking, patients were significantly prone to OPMDs. This issue has not been mentioned in previous studies; therefore, it should be incorporated into health promotion programs for oral cancer prevention. A limitation of our study was the distribution of the risk factors among our population. We found that the most extensive risk factor was betel quid. Concerning tobacco products, our study showed different consumption between genders: the male population mainly smoked cigarettes but rarely used SLT.
Females, in contrast, used much more SLT than cigarettes. These behaviors are partly found in some Asian countries but do not normally exist on other continents, where the use of betel quid and SLT is not ubiquitous. Using the community network in combination with multilevel dental care is a highly effective model. After proving its effectiveness, this model will be applied in future research in other health regions and could be used as a national public health policy for oral cancer screening in Thailand or elsewhere. This study proved a strong association of the main risk factors with the occurrence of OPMDs across all exposure characteristics, especially for those exposed to combined risk factors. The results of this study might essentially draw healthcare practitioners' attention to helping their patients avoid or stop risky behaviors. This study is part of a leading research project that is the largest proactive oral cancer screening project conducted in Thailand to date. Conclusions Direct exposure to tobacco products, BQ, and alcohol is associated with the occurrence of OPMDs. The presentation of OPMDs was closely associated with BQ chewing and alcohol consumption. The synergistic effect of common risk factors is well demonstrated in this study. The associations of risk factors, as well as the durations of exposure reported in this study, can be used as clues to find OPMD lesions in routine oral examinations by dentists and health care workers. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: All the associated data are available within the manuscript. Acknowledgments: We are thankful to Prasan Tangjaturonrasme, Harin Clypuing, Angkana Wisutthajare, and the research team from Mahidol University and the Center of Excellence in Oral Cancer Maharat Nakhon Ratchasima Hospital for their cooperation in all stages of this study.
Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript: OPMDs, oral potentially malignant disorders; SLT, smokeless tobacco; BQ, betel quid; SS, secondhand smoking; OR, odds ratio.
New and future developments in ultrasonic imaging. In the first part of the review, recent developments in medical imaging technology are described. Developments in transducer materials and matching, leading to improvements in bandwidth and sensitivity, are discussed. Improvements in dynamic range due to increased transducer sensitivity, lower electronic noise levels, and more efficient filtering are then considered. The benefits of applying digital signal processing (DSP) techniques to radiofrequency (RF) echo signals are described, including more precise filtering and beam forming, synthetic aperture, and parallel receive beam forming. Finally, the current situation regarding 1.5-D arrays, 3-D scanning, ultrasound computed tomography (UCT), harmonic imaging with contrast agents, and elastography is discussed. In the second part, some predictions for future developments are made. These will be possible largely due to the power of DSP. Parallel transmissions will make more efficient use of time, allowing greater spatial and temporal resolution, and greater accuracy in Doppler imaging. Adaptive transmission tailoring will be used, where the pulse characteristics for each part of the image field are independently optimized, as will adaptive receive processing, in which echo sequences from each part of the image are independently and optimally processed. An important potential development will be automatic feature recognition, making possible accurate compound scanning with high spatial resolution and quantitative information about the spatial distribution of acoustic speed. Compound scanning will provide more complete visualization of all structures and, particularly when incorporated into intravascular probes, should greatly aid the investigation of arterial plaque morphology. Feature recognition will also make it possible to have UCT systems (array-based in future) that require less than 360 degrees of access.
Harmonic imaging without contrast agents, based simply on the inherent non-linearity of sound propagation in tissue, will become common. 2-D phased-array transducers will permit symmetric beam focusing and scanning throughout a solid cone, greatly facilitating the development of 3-D scanning applications. Large 2-D arrays would have the potential to produce a five-fold increase in the spatial resolution of a limited volume of tissue, or to measure the variation of backscatter with angle as an aid to tissue characterization. Finally, ultrasound will be increasingly used to measure the elastic and dynamic properties of local regions of tissue.
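As a concrete illustration of the receive beam forming discussed in this review, here is a minimal delay-and-sum sketch: each element's focusing delay compensates its path-length difference to the focal point, and the delayed channel signals are summed. The array geometry and sound speed (c ≈ 1540 m/s in soft tissue) are illustrative assumptions, not values from the review.

```python
from math import hypot

C = 1540.0  # assumed speed of sound in soft tissue, m/s

def focus_delays(element_x, focus_x, focus_z):
    """Per-element receive delays (s) for delay-and-sum focusing.
    Elements lie on the x-axis at depth z = 0; focus at (focus_x, focus_z).
    Delays are referenced to the farthest element so all are >= 0."""
    dists = [hypot(x - focus_x, focus_z) for x in element_x]
    d_max = max(dists)
    return [(d_max - d) / C for d in dists]

def delay_and_sum(signals, delays, fs):
    """Sum channel signals after shifting each by its delay
    (nearest-sample shift, sample rate fs in Hz)."""
    shifts = [round(d * fs) for d in delays]
    n = min(len(s) - k for s, k in zip(signals, shifts))
    return [sum(s[k + i] for s, k in zip(signals, shifts)) for i in range(n)]

# 5-element linear array, 0.3 mm pitch, focus 30 mm deep on axis
elements = [i * 0.3e-3 for i in range(-2, 3)]
delays = focus_delays(elements, 0.0, 30e-3)
```

For an on-axis focus the delays are symmetric about the array center, with the center element (shortest path) delayed the most; real beamformers apply these delays dynamically per depth, which is what the DSP-based receive processing above enables.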
Determination of the Injection Molding Process Parameters in Multicavity Injection Molds This study presents a method for simulating the flow balance of a multicavity injection mold (MCIM) system. Flow imbalance is caused by shear-rate segregation at a branch point in the runner system. Five parameters are considered, including the material and process parameters for multicavity injection molding. The mass fraction index (MFI) and flow balance index (FBI) are proposed to describe the flow imbalance of the injection mold. A lower melt temperature, a smaller runner diameter, and a higher glass transition temperature of the material all correspond to a greater flow imbalance in a multicavity injection mold system.
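The abstract above does not define the MFI or FBI. As a hedged illustration only, a flow-balance measure for a multicavity mold can be sketched as the normalized spread of per-cavity fill masses; this is an assumed definition for illustration, not necessarily the authors' index:

```python
def flow_balance_index(cavity_masses):
    """Illustrative balance index: 1.0 = perfectly balanced cavities,
    lower values = greater cavity-to-cavity imbalance.
    (Assumed definition; the paper's MFI/FBI may be defined differently.)"""
    mean = sum(cavity_masses) / len(cavity_masses)
    spread = (max(cavity_masses) - min(cavity_masses)) / mean
    return 1.0 - spread

# Hypothetical 8-cavity mold where cavities near the sprue fill heavier
masses = [1.02, 1.05, 0.98, 0.95, 1.02, 1.05, 0.98, 0.95]
fbi = flow_balance_index(masses)
```

Under this assumed definition, the trends in the abstract (lower melt temperature, smaller runner diameter, higher glass transition temperature) would show up as a larger mass spread and hence a lower index value.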
Calculation of Band Offsets of Mg(OH)2-Based Heterostructures: The band alignment of Mg(OH)2-based heterostructures is investigated based on first-principles calculation. MgO/Mg(OH)2 and wurtzite ZnO/Mg(OH)2 heterostructures are considered. The O 2s level energy is obtained for each O atom in the heterostructure supercell, and the band edge energies are evaluated following the procedure of core-level spectroscopy. The calculation is based on the generalized gradient approximation, with the on-site Coulomb interaction parameter U considered for Zn. For MgO/Mg(OH)2, the band alignment is of type II, and the valence band edge of MgO is higher by 1.6 eV than that of Mg(OH)2. For ZnO/Mg(OH)2, the band alignment is of type I, and the valence band edge of ZnO is higher by 0.5 eV than that of Mg(OH)2. Assuming the transitivity rule, it is expected that Mg(OH)2 can be used in certain types of heterostructure solar cells and dye-sensitized solar cells to improve performance. Introduction Magnesium hydroxide Mg(OH)2 has normally been regarded as an insulator, and its applications have so far been limited to chemistry fields. However, there have been several attempts to apply Mg(OH)2 to solar cells. It was reported that the performance of dye-sensitized solar cells (DSSC) was improved by an Mg(OH)2 coating on the TiO2 particles. Mg(OH)2 was also used as a buffer layer of Cu(InGa)Se2 (CIGS)-based heterostructure solar cells. The most common buffer-layer material is CdS, but Cd is toxic and not abundant. In contrast, Mg is nontoxic and earth-abundant; thus, Mg(OH)2 is advantageous for domestic solar cell application. In these electronics applications, if Mg(OH)2 were completely insulating, devices would not work. Thus, the successful applications to solar cells indicate that Mg(OH)2 has some conductivity.
Mg(OH)2 has a wide bandgap of 5.7 eV, but materials with comparable bandgaps have begun to be used in electronics as ultra-wide bandgap (UWBG) semiconductors. For example, diamond, with a bandgap similar to that of Mg(OH)2, has been extensively investigated for electronic device applications. Ga2O3, with a bandgap of approximately 5 eV, has also attracted much attention, and (AlxGa1−x)2O3, having a bandgap even larger than that of Ga2O3, is considered indispensable for heterostructure devices based on Ga2O3. Thus, it is natural to consider Mg(OH)2 as another UWBG semiconductor. It was reported that chemically deposited Mg(OH)2 (nominally undoped) is semiconducting, and that Cu-doped Mg(OH)2 fabricated by electrochemical deposition can have both n-type and p-type conductivity depending on fabrication conditions. Recently, first-principles calculations were carried out to evaluate impurity and defect levels and to discuss the possibility of controlling the conduction type and conductivity of Mg(OH)2. In addition, the possibility of bandgap reduction by anion doping has been theoretically investigated. It was also reported that the resistivity was much reduced, to the order of 10−2 Ω·cm, by heavy carbon doping. Additional essential information for designing heterostructure devices is the band alignment. To analyze the performance of both DSSC and heterostructure solar cells including an Mg(OH)2 layer, one needs to consider carrier transport across the heterointerface with Mg(OH)2. The band alignment critically influences the carrier transport across the heterointerface. For many kinds of semiconductor heterostructures, band alignment has been investigated. Core-level spectroscopy is the most popular technique for evaluating the band offset experimentally. Theoretical research has also been carried out for various heterostructures.
Recently, band structures of two-dimensional (2D) heterostructures based on Mg(OH)2 were theoretically investigated for various partners. In those previous works, only 2D Mg(OH)2 (a single monolayer of Mg(OH)2) was considered, and the interface bonding was assumed to be due to the van der Waals interaction. To my knowledge, the band alignment of heterostructures based on bulk Mg(OH)2 (with covalent bonding at the interface) has not been investigated so far. In this work, band alignment at Mg(OH)2-based heterostructures is investigated by first-principles calculations. MgO and ZnO are selected as the partners of the heterostructure. MgO has the NaCl structure, and ZnO the wurtzite structure. The arrangement of oxygen atoms in the plane of MgO and the plane of ZnO is the same as that of the plane of Mg(OH)2; thus, one can construct heterointerfaces with an Mg(OH)2 plane. ZnO is a popular buffer-layer material of heterostructure solar cells, and band alignment has been investigated for various heterostructures based on ZnO. Therefore, once the band offset with ZnO is evaluated, band offsets can be estimated for other heterostructures with various materials by assuming the transitivity rule. Calculation The supercells used in the calculation are shown in Figure 1: the hetero-interface plane for MgO (Figure 1a) and for wurtzite ZnO (Figure 1b). The lattice constant parallel to the interface was fixed at the average of the constituent compounds, weighted by the respective thickness, and the vertical atom spacings were initially set the same as those of the respective compounds. All of the atoms were allowed to relax with the supercell size fixed. The GDIIS (geometry optimization by direct inversion in the iterative subspace) algorithm was adopted, and the convergence criterion was 5 × 10−2 eV/Å. (The lattice constants and atom positions after relaxation are given in Tables A1 and A2.)
MgO has the NaCl structure, and its lattice is fcc; the stacking of atom planes is thus denoted ABC, where A, B, and C represent different atom positions. For Mg(OH)2, the atom position is the same for each OH-Mg-OH monolayer, and the stacking of O atom planes can be denoted ACAC. In the supercell of MgO/Mg(OH)2, the thickness of MgO is set at 9 monolayers and that of Mg(OH)2 at 4 monolayers, and the stacking of O atoms is as follows: ABCABCABC (MgO) ACACACAC (Mg(OH)2). As shown in Figure 1a, the Mg atom at the interface is bonded to O atoms on the MgO side and to OH groups on the Mg(OH)2 side. The unstrained O-O distance is 0.297 nm for MgO and 0.314 nm for Mg(OH)2, and thus the lattice mismatch is not very large. ZnO has the wurtzite structure, and its lattice is hcp, with ABAB atom stacking along the c-axis. The calculation in this work is based on the density-functional theory (DFT). PHASE code (ver. 11.0, University of Tokyo, Tokyo, Japan) was used.
The pseudopotential method was adopted with the generalized-gradient approximation (GGA). Ultrasoft pseudopotentials were used for O and Zn, and norm-conserving pseudopotentials for H and Mg. The kinetic-energy cutoff of the basis set was 272 eV (20 Rydberg). The effect of the on-site Coulomb interaction U for the d states of Zn was included in the calculation (GGA + U), using U = 5.0 eV. The band offset was evaluated by a procedure similar to core-level spectroscopy. The local density of states was obtained for each constituent atom in the supercell, and the O 2s level was used as the inner core level. It was assumed that the difference between the O 2s level and the valence band maximum Ev is preserved. It is known that the bandgap is underestimated by DFT calculations; the calculated bandgap of Mg(OH)2 is approximately 4 eV, considerably smaller than the experimental value (5.7 eV). Thus, the energy of the conduction band minimum Ec was determined from the calculated Ev and the experimentally determined bandgap. Results and Discussion The energies of O 2s are plotted in Figure 2 for the O atoms in the MgO/Mg(OH)2 supercell (the squares). The O 2s level is lower in Mg(OH)2 than in MgO by approximately 2.8 eV. It was reported that the binding energy of O 1s obtained in X-ray photoelectron spectroscopy (XPS) is larger in Mg(OH)2 than in MgO by 1.8 eV; thus, the XPS results also indicate that the O levels are lower in energy in Mg(OH)2 than in MgO. The band edge energies Ev and Ec are also plotted in Figure 2. The bandgap of MgO is considered to be 7.8 eV. The band alignment is of type II, i.e., both Ec and Ev are lower in energy in Mg(OH)2.
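The core-level alignment procedure described above reduces to simple arithmetic: the valence band offset is the difference of the bulk (Ev − E_O2s) separations plus the O 2s shift across the interface, and Ec follows from the experimental bandgaps (a "scissor" correction for the DFT gap underestimation). In the sketch below, the +2.8 eV O 2s shift and the 7.8/5.7 eV gaps are taken from the text, but the two bulk separations are placeholder values chosen only to reproduce the quoted 1.6 eV offset; they are not values from the paper.

```python
def valence_band_offset(ev_minus_core_a, ev_minus_core_b, core_shift_a_minus_b):
    """Core-level band alignment:
    Ev_a - Ev_b = (Ev - E_core)_a - (Ev - E_core)_b
                  + (E_core_a - E_core_b at the interface)."""
    return ev_minus_core_a - ev_minus_core_b + core_shift_a_minus_b

def conduction_band_offset(vbo, gap_a, gap_b):
    """Ec_a - Ec_b from the VBO and the *experimental* bandgaps."""
    return vbo + gap_a - gap_b

# Placeholder bulk (Ev - E_O2s) separations, illustrative only;
# the +2.8 eV interface O 2s shift (MgO minus Mg(OH)2) is from the text.
vbo = valence_band_offset(16.0, 17.2, 2.8)   # Ev(MgO) - Ev(Mg(OH)2), eV
cbo = conduction_band_offset(vbo, 7.8, 5.7)  # Ec(MgO) - Ec(Mg(OH)2), eV
```

With both offsets positive, Ev and Ec are both lower in Mg(OH)2 than in MgO, which is exactly the type II alignment stated above.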
ZnO has the wurtzite structure, and its lattice is hcp, with ABAB atom stacking along the c-axis. In the supercell of ZnO/Mg(OH)2, to avoid energetically unfavorable stacking (such as AA) and to keep periodicity, the ZnO thickness was set at 7 monolayers (Figure 1b).
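The supercell bookkeeping described above (the thickness-weighted in-plane lattice constant and the O-plane stacking sequences) can be sketched as follows. Weighting by monolayer count is an assumption on my part; the text says only that the average is "weighted by the respective thickness".

```python
# Sketch: thickness-weighted in-plane lattice constant and O-plane stacking
# for the MgO(9 ML)/Mg(OH)2(4 ML) supercell described in the text.
# Weighting by monolayer count is an assumption.

def weighted_lattice_constant(a1, n1, a2, n2):
    """In-plane lattice constant fixed at the thickness-weighted average."""
    return (a1 * n1 + a2 * n2) / (n1 + n2)

def stacking(seq, n):
    """Repeat a stacking motif out to n atomic planes (e.g. 'ABC' -> 9 planes)."""
    return (seq * (n // len(seq) + 1))[:n]

# Unstrained O-O distances from the paper: 0.297 nm (MgO), 0.314 nm (Mg(OH)2)
a_par = weighted_lattice_constant(0.297, 9, 0.314, 4)
print(round(a_par, 4))  # ~0.302 nm

# O-plane stacking: 9 MgO planes (ABC motif), then 8 O planes in 4 ML Mg(OH)2 (AC motif)
print(stacking("ABC", 9) + stacking("AC", 8))  # ABCABCABCACACACAC
```

The concatenated string reproduces the ABCABCABCACACACAC sequence given for the MgO/Mg(OH)2 supercell; each Mg(OH)2 monolayer contributes two O planes, which is why 4 monolayers yield 8 planes.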
Results and Discussion
The energies of O 2s are plotted in Figure 2 for the O atoms in the MgO/Mg(OH)2 supercell (the squares). The O 2s level is lower in Mg(OH)2 than in MgO by approximately 2.8 eV. It was reported that the binding energy of O 1s obtained in X-ray photoelectron spectroscopy (XPS) is larger in Mg(OH)2 than in MgO by 1.8 eV; thus, the XPS results also indicate that the O levels are lower in energy in Mg(OH)2 than in MgO. The band edge energies Ev and Ec are also plotted in Figure 2. The bandgap of MgO is considered to be 7.8 eV. The band alignment is of type II, i.e., both Ec and Ev are lower in energy in Mg(OH)2. The results are summarized in Figure 4. The Ev of MgO is positioned at a higher energy than the Ev of Mg(OH)2, and MgO has the larger bandgap. Thus, for MgO/Mg(OH)2, the band alignment is of type II, and the conduction band offset ∆Ec is very large. On the other hand, for ZnO/Mg(OH)2, the band alignment is of type I, with the larger band offset on the conduction band side. As noted in the introduction, CdS is the most common buffer-layer material in CIGS-based heterostructure solar cells, but Cd is toxic and not abundant. ZnO has been considered as an alternative buffer-layer material. In the ZnO/CIGS heterostructure, the band alignment is of type II and the Ec of ZnO is lower by 0.16 eV. A lower Ec in the buffer layer reduces band bending and increases the recombination of majority carriers, decreasing the output voltage. Thus, it is expected that alloying with MgO could shift the Ec of ZnO upward and improve solar-cell performance. Alternatively, assuming the transitivity rule, the band alignment is of type I for Mg(OH)2/CIGS, and thus a higher output voltage can be expected than for ZnO/CIGS.
However, ∆Ec at Mg(OH)2/CIGS may be too large (about 1.7 eV), so the output current would be reduced. In fact, in ref., the efficiency of an Mg(OH)2/CIGS solar cell was reported to be low. ZnO/Cu2O is another heterostructure attracting attention for solar-cell applications. It is generally agreed that the band alignment is of type II, although different values of the band offsets have been reported (the reported values of ∆Ec range from 0.5 to 1.77 eV). Thus, to improve performance, oxides with a larger bandgap (such as Zn1−xMgxO and Ga2O3) have been employed, so that the Ec of the buffer becomes higher than the Ec of Cu2O. According to the present calculation, the Ec of Mg(OH)2 is positioned significantly higher than that of ZnO. Thus, replacing ZnO with Mg(OH)2 in the Cu2O-based solar cell would result in a type-I band alignment with a moderate ∆Ec value and could therefore increase the output voltage and power. Mg(OH)2 has been used for the coating of TiO2 in DSSC, as noted in the introduction. In DSSC, photo-excited electrons are injected from the dye into TiO2, but a part of those electrons is lost because of backflow to the dye or to ions in the electrolyte. It is known that the band offset between ZnO and TiO2 is small for both bands. Thus, the band offset at Mg(OH)2/TiO2 will be similar to that at Mg(OH)2/ZnO, according to the transitivity rule. Then, ∆Ec at Mg(OH)2/TiO2 could be large, and therefore the Mg(OH)2 coating will block the backflow of photo-generated electrons from TiO2, increasing the output. However, it may also prevent the injection of electrons from the dye into TiO2. According to previous works, a thin Mg(OH)2 coating on TiO2 led to an increase in output voltage without a significant decrease in output current, but thicker coatings resulted in a decrease in the current and efficiency.
Since the LUMO (the energy of excited electrons) in the dye is higher than the Ec of TiO2, the energy barrier of the Mg(OH)2 coating is smaller for carrier injection from the dye than for backflow from TiO2. Thus, if the Mg(OH)2 coating thickness is properly adjusted, it could block the backflow from TiO2 without significantly blocking carrier injection from the dye, leading to an increase in photovoltaic output. It should be noted that a calculation based on a small supercell is not applicable to heterostructures with Cu2O or TiO2 because of the different arrangement of O atoms; we have therefore discussed the properties of those heterostructures based on the transitivity rule. However, the rule does not hold when the effects of the interface dipole are significant. For a more conclusive discussion, the band offsets need to be measured experimentally. In the present calculation, a perfect interface without any defects was assumed. For Mg(OH)2, a cation (Mg) vacancy is expected to act as an acceptor, and an anion (OH) vacancy as a donor, as in metal oxides.
Another possible disorder is the inclusion of a hydroxide-like interface (e.g., Zn-O-H-H-O-Mg). Defects at the interface could modify the charge distribution near the interface and affect the band alignment. However, it is difficult to predict the effects of those disorders; to take them into account in the calculation, a much larger supercell would need to be used. Finally, the present results are compared with the theoretical results for the 2D ZnO/Mg(OH)2 heterostructure by Ren et al. They predicted a type-II band alignment for ZnO/Mg(OH)2, with the Ev of ZnO lower than that of Mg(OH)2, while a type-I alignment was predicted in the present work. In their calculation, 2D ZnO was considered, i.e., Zn and O atoms were arranged on a single atomic plane. Thus, the bonding configuration is different from the tetrahedral bonding in actual bulk ZnO. This could be the main reason for the qualitative discrepancy. For Mg(OH)2 to be applied to devices, control of the conduction type and conductivity will be necessary. Although there have been some preliminary attempts at valence control of Mg(OH)2, as noted in the introduction, doping techniques need to be established for device application.
Conclusions
The band alignment of Mg(OH)2-based heterostructures was investigated based on first-principles calculations. The O 2s level energy was obtained for each O atom in the heterostructure supercell, and the band edge energies were evaluated following the procedure of core-level spectroscopy. For MgO/Mg(OH)2, the band alignment is of type II, and the Ev of MgO is higher by 1.6 eV than that of Mg(OH)2. The band alignment of ZnO/Mg(OH)2 is of type I, and ∆Ev is 0.5 eV. Assuming the transitivity rule, it is expected that an Mg(OH)2 layer can increase the output voltage of heterostructure solar cells and DSSC if its thickness is properly adjusted.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Conflicts of Interest: The author declares no conflict of interest.
Appendix A
The structure of the supercells: both supercells are hexagonal, and the lattice constants and atom positions after relaxation (internal coordinates) are as follows:
Robustness of interleaving switched power converters in multisystem locomotives
The paper deals with the reconfigurable converter input stages used in multisystem locomotives and high-speed trains employed on interoperable European railways. Interleaving the converters, i.e., phase-shifting their switching, can be a solution to mitigate the power-quality problems introduced by high current ripples. This paper examines the robustness of the interleaving technique with respect to the THD of the absorbed current under control-parameter variations, showing that it remains effective even if the control parameters are not well tuned. The paper focuses on the railway application, but the interleaving procedure can be implemented in other real conversion systems, such as the interface converters of renewable generators.
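As a minimal numerical sketch of why interleaving helps (not the paper's converter model), the snippet below sums N ideal, identical sawtooth current ripples phase-shifted by 1/N of the switching period and compares the resulting peak-to-peak ripple with the in-phase (non-interleaved) case; the leg count and unit ripple amplitude are illustrative assumptions.

```python
import numpy as np

# Sketch: N identical converter legs, each with a unit sawtooth current ripple.
# Interleaving shifts leg k by k/N of the switching period; the summed ripple
# then repeats at N times the switching frequency with a much smaller amplitude.

def total_ripple(n_legs, interleaved, n_samples=4000):
    t = np.arange(n_samples) / n_samples          # one switching period
    shifts = np.arange(n_legs) / n_legs if interleaved else np.zeros(n_legs)
    legs = [np.mod(t + s, 1.0) for s in shifts]   # unit sawtooth per leg
    return np.sum(legs, axis=0)

def peak_to_peak(x):
    return float(x.max() - x.min())

pp_sync = peak_to_peak(total_ripple(4, interleaved=False))  # ~4.0
pp_int = peak_to_peak(total_ripple(4, interleaved=True))    # ~1.0
print(pp_sync, pp_int)
```

With four legs, the peak-to-peak ripple of the total current drops by roughly the number of legs, which is the mechanism exploited here to reduce the THD of the absorbed current.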
The Dynamic-to-Static Conversion of Dynamic Fault Trees Using Stochastic Dependency Graphs and Stochastic Activity Networks
In this paper, a new modeling framework for the dependability analysis of complex systems is presented and related to dynamic fault trees (DFTs). The methodology is based on a modular approach: two separate models are used to handle the fault logic and the stochastic dependencies of the system. Thus, the fault schema, free of any dependency logic, can be easily evaluated, while the dependency schema allows the modeler to design new kinds of non-trivial dependencies that are not easily captured by traditional holistic methodologies. Moreover, the use of a dependency schema allows building a purely behavioral model that can be used for various kinds of dependability studies. The paper shows how to build and integrate the two modular models and convert them into a Stochastic Activity Network. Furthermore, based on the construction of the schema that embeds the stochastic dependencies, a procedure to convert DFTs into static fault trees is shown, allowing DFTs to be solved very efficiently.
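As context for the dependency-free fault schema mentioned above: a static fault tree with independent basic events can be evaluated bottom-up with the standard AND/OR probability rules. The tiny sketch below is a generic illustration of that evaluation, not the paper's framework; the tree layout and probabilities are made up.

```python
# Sketch: bottom-up evaluation of a static fault tree with independent
# basic events. AND gate: product of child probabilities; OR gate:
# complement of the product of complements. Tree and values are illustrative.

from math import prod

def failure_prob(node):
    kind = node["type"]
    if kind == "basic":
        return node["p"]
    child_ps = [failure_prob(c) for c in node["children"]]
    if kind == "AND":
        return prod(child_ps)
    if kind == "OR":
        return 1.0 - prod(1.0 - p for p in child_ps)
    raise ValueError(f"unknown gate type: {kind}")

# Top event fails if the power supply fails OR both redundant pumps fail.
tree = {"type": "OR", "children": [
    {"type": "basic", "p": 0.01},            # power supply
    {"type": "AND", "children": [
        {"type": "basic", "p": 0.1},         # pump A
        {"type": "basic", "p": 0.1},         # pump B
    ]},
]}
print(failure_prob(tree))  # 0.0199
```

This simple closed-form evaluation is exactly what becomes possible once the dynamic dependencies have been factored out into a separate schema, as the paper proposes.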
Survey of young consumers' attitudes using the food sharing attitudes and behaviors model
Purpose: Given the importance of food waste in households, the purpose was to identify the attitudes of young consumers towards the food sharing (FS) phenomenon in its cognitive, emotional and behavioral dimensions, and to verify the reliability of the FAB (food sharing attitudes and behaviors) model used as a research tool.
Design/methodology/approach: The study was conducted in 2021 using the computer-assisted web interview (CAWI) method. The FAB model was based on the ABC (affect, behavior and cognition) model of attitudes, which includes three components: affect, behavior and cognition. Questions on the phenomenon of FS were scaled on a 5-point Likert scale. A total of 469 correctly completed forms were obtained. To assess the reliability of the FAB model, Cronbach's alpha was used. SPSS Statistics 27 was used for the statistical analysis.
Findings: Young consumers have a positive attitude towards the idea of FS and the initiative of FS points. Gender is a significant factor in FS attitudes. The FAB model has proven to be a reliable tool for exploring consumer attitudes towards FS. A set of activities was proposed to promote the idea of FS on university campuses and among other potential stakeholders.
Originality/value: To contribute to the body of knowledge on FS, the authors proposed the FAB model. The results of this study are relevant for reducing food waste; they promote sustainable food consumption and the European Green Deal (EGD).
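For reference, the reliability statistic used here, Cronbach's alpha, is computed from the item variances and the variance of the summed scale: alpha = k/(k-1) · (1 − Σvar(item)/var(total)). The snippet below is a generic sketch with made-up 5-point Likert responses, not the study's data.

```python
# Sketch: Cronbach's alpha for a k-item scale,
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# The response matrix is made-up Likert data, one row per respondent.

import statistics

def cronbach_alpha(rows):
    k = len(rows[0])                          # number of items
    items = list(zip(*rows))                  # column-wise item scores
    item_vars = [statistics.variance(col) for col in items]
    total_var = statistics.variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

responses = [
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [3, 3, 4],
    [5, 5, 5],
]
print(round(cronbach_alpha(responses), 3))  # 0.932
```

Values near 1 indicate high internal consistency; a common rule of thumb treats alpha above roughly 0.7 as acceptable for attitude scales like the FAB components.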
Isolation and characterization of infected and uninfected cells from soybean nodules: role of uninfected cells in ureide synthesis
The distribution of organelles and associated enzymes between cells containing bacteroids and uninfected cells from nodules of Glycine max L. Merr. cv Amsoy 71 was investigated by separation of protoplasts on a sucrose step-gradient. Infected protoplasts were much larger, irregular in shape, and more dense than uninfected protoplasts. The peroxisomal enzymes, uricase and catalase, were present at much higher specific activity in the uninfected cell fraction. Allantoinase, an enzyme of the endoplasmic reticulum, had a greater specific activity in the uninfected cell fraction. Several enzymes whose products are required for purine biosynthesis, including phosphoglycerate dehydrogenase, aspartate aminotransferase, 6-phosphogluconate dehydrogenase, and glucose-6-phosphate dehydrogenase, exhibited a higher specific activity in the uninfected cell fraction. Isozymes of aspartate aminotransferase were separated on native gels and located by an activity stain. The soluble isozyme was predominantly found in the uninfected cell fraction. These data suggest that peroxisomes, containing uricase and catalase for conversion of uric acid to allantoin, are present only in the uninfected cells of soybean nodules. The uninfected cells also appear to be the site of the allantoinase reaction.
Stonewalling in the Brick City: Perceptions of and Experiences with Seeking Police Assistance among LGBTQ Citizens
Extant research has documented interactions between the police and racial and ethnic minority populations, including negative perceptions of and experiences with the police; police corruption and misconduct; and the deleterious effects of negative relationships with the police, such as reduced legitimacy and mistrust. Comparatively, exchanges between lesbian, gay, bisexual, trans, and queer (LGBTQ) populations and the police have received limited attention. This is despite work suggesting that LGBTQ citizens face an elevated risk of victimization, and a possible reticence in reporting their victimization, resulting from negative perceptions of police, fear of mistreatment, or even experiences of harassment and abuse by police. To extend the research in this area, I analyze 12 focus groups with LGBTQ participants (N = 98) in an urban setting to examine the circumstances in which LGBTQ people would seek assistance from the police, when they would avoid doing so, and their justifications for avoiding or contacting the police. I also consider intersectionality in shaping police-citizen interactions involving sexual and/or gender minority citizens of color, as the sample was almost exclusively LGBTQ persons of color. I conclude by discussing implications for policing practices and policies.
Introduction
Over the past several decades, the relative status of LGBTQ people in the United States has undergone a significant evolution, with a gradual and consistent movement towards increased equality and expanded integration into mainstream society (McCarthy 2019). Though progress has been achieved, LGBTQ identities have been largely overlooked in the field of criminology (Buist and Stone 2014).
Outside of studies on anti-LGBTQ bias crimes, LGBTQ intimate partner violence (IPV), and the bullying of LGBTQ students in school settings, significant gaps persist concerning sexual orientation and gender identity (SOGI) minorities in criminological contexts (Woods 2014). Though scholars have begun to bridge this gap (Panfil and Miller 2014), the relevance of sexual orientation and/or gender identity has not yet permeated the criminal justice system via consistent, culturally competent, and inclusive training, policies, and day-to-day operations. The approach taken by the criminal justice system in relationship to SOGI minorities is problematic for several reasons. First, LGBTQ citizens represent a significant proportion of the United States' population. More conservative estimates have indicated about 5.8 million LGBTQ people are in the U.S., comparable to the populations of states such as Wisconsin, Colorado, or Minnesota; others have suggested upwards of 25 million U.S. LGBTQ people, a figure comparable to the size of Florida or New York (Deschamps and Singer 2017; United States Census Bureau 2019). Second, LGBTQ people are also more likely to identify as racial and ethnic minorities relative to their heterosexual counterparts (Deschamps and Singer 2017). These sociodemographic patterns are cause for concern because of the already well-documented disparate treatment of people of color in the criminal justice system (Tonry 2010), which may be compounded by status as a SOGI minority. Third, LGBTQ people are overrepresented throughout the criminal justice system. As a result, SOGI minorities have increased odds of contact with law enforcement personnel who likely lack appropriate training to interact with them sensitively. Thus, culturally competent approaches are essential to produce increased trust and improved victimization reporting among LGBTQ communities.
Law enforcement is the most visible part of the American criminal justice system and is the first point of contact for those in need of assistance. As part of their professional and ethical obligations to the communities they serve, it is incumbent upon police officers to engage equitably with LGBTQ citizens, including in situations such as anti-LGBTQ bias crimes or IPV. However, research has consistently shown that LGBTQ citizens have poor perceptions of the police (Satuluri and Nadal 2018), report personal experiences with police harassment and misconduct (Hodge and Sexton 2018; NCVAP 2017a, 2017b; Wolff and Cokely 2007), and experience problematic police responses to IPV (Guadalupe-Diaz 2016; Messinger 2017) and anti-LGBTQ bias crimes (Stotzer 2014a). In addition, research demonstrates a pattern of adverse police treatment of transgender people (Guadalupe-Diaz 2016; Miles-Johnson 2020; Miles-Johnson 2016; Stotzer 2014b), who also hold particularly pronounced negative views of the police (Serpe and Nadal 2017). Research also finds unsatisfactory police interactions with LGBTQ youth (Dwyer 2007; Holsinger and Hodge 2016) and disparate treatment of LGBTQ people of color (Amnesty International USA 2005; Center for American Progress 2016; Gaynor and Bassett 2020). The findings of prior research are significant, as poor regard for the police and/or adverse prior experiences can undermine trust and cement an unwillingness to report victimization (Guadalupe-Diaz 2016) that may be elevated among LGBTQ people (Miles-Johnson 2013b). Furthermore, LGBTQ subpopulations such as gay men (Herek 2009), trans people (Stotzer 2014a), and people of color (Dunbar 2006; Kuehnle and Sullivan 2001; Meyer 2010; NCVAP 2017a, 2017b) are especially vulnerable to victimization and thus are in particular need of supportive police interactions.
Hence, there is an ongoing need to better understand the attitudes LGBTQ people hold towards the police and the nature of their contacts with police, particularly with respect to vulnerable sub-populations. This study aims to make a timely contribution to the literature by investigating why LGBTQ citizens may be dissuaded from seeking police assistance. I rely on an intersectional framework to examine these nuances by utilizing focus groups with a sample comprised of multiply-marginalized groups from an urban, under-resourced, and crime-prone setting. I discuss my findings through participants' narratives of their perceptions, experiences, and reporting behaviors with police officers. I conclude with the policy implications raised by the study and make recommendations for future research.
Victimization and Reporting Behaviors among LGBTQ Citizens
Among the general population in the United States, it appears that a significant proportion of victims opt out of reporting crimes to the police. According to estimates constructed by the National Crime Victimization Survey (NCVS), the majority of crimes (58%) went unreported from 2006 to 2010, including half (52%) of all violent crimes. Most commonly, people who did not disclose their victimization reported that they handled the incident outside of the justice system and/or believed it was private in nature (34%), or felt their victimization was too minor to warrant police involvement (18%). They also doubted that the police would be capable of providing effective assistance (16%) or conveyed concerns related to the offender's punishment and their potential to enact revenge (13%).
Extrapolating from this work, these concerns will likely also be present, and perhaps even more salient, among LGBTQ people based on various documented adverse interactions with the police (Amnesty International USA 2005; Hodge and Sexton 2018; Lambda Legal 2015; Stotzer 2014b; Wolff and Cokely 2007) that may prevent them from contacting police initially (Miles-Johnson 2013b) or if victimized once more. Indeed, work conducted with LGBTQ people specifically has suggested unique reservations that shape their decision-making regarding crime reporting, such as feelings of fear, self-blame, and practical barriers to reporting (Briones-; Peel 1999). In a comparative analysis, Miles-Johnson (2013b) also found LGBTQ people were less likely to seek assistance from the police compared to their heterosexual counterparts and that the anticipation of homophobic treatment functioned as a significant deterrent in doing so. The LGBTQ community experiences various forms of victimization that should, in theory, warrant police intervention, including general crimes, bias crimes, and IPV. Fundamentally, it appears SOGI minorities are victimized more frequently than heterosexual people (Katz-Wise and Hyde 2012). Prior work has estimated that up to 55 percent of LGB people have experienced verbal, physical, or sexual victimization (Berrill 1993; Berrill and Herek 1992; Herek 2009), while as many as 50 percent of transgender people have experienced physical or sexual victimization (Stotzer 2009). Furthermore, official estimates indicate one-fifth to one-quarter of bias-related crimes are due to sexual orientation or gender identity; most of these offenses (73%) are violent in nature, including physical assaults and forcible rape (Federal Bureau of Investigation 2018a; Oudekerk 2019; Masucci and Langton 2017).
Other work has estimated that the majority (roughly 60 percent) of trans people have been violently victimized based solely on their SOGI status and are over two times more likely to be sexually attacked relative to lesbian women, gay men, and bisexual cisgender people. IPV is also prevalent in the LGBTQ community, resembling or even exceeding rates found among heterosexual couples (Courvant and Cook-Daniels 1998; Messinger 2017; Turell 2000), with LGB people of color, transgender people, and bisexual people at particular risk (Guadalupe-Diaz and Yglesias 2013; NCVAP 2017b). Despite victimization trends among LGBTQ people, some work has indicated there may be gaps in their reporting behaviors (Miles-Johnson 2013b; Peel 1999). For instance, although victimization among SOGI minorities may be especially prominent (e.g., Katz-Wise and Hyde 2012), it appears reporting rates may be lower among this population in relation to bias-related victimizations (Herek et al. 2002; Stotzer 2014a) and IPV (Aulivola 2004; Miles-Johnson 2020; NCVAP 2017b). Furthermore, the National Coalition of Anti-Violence Programs (NCVAP) recently estimated that the majority of anti-LGBTQ bias crimes (58%) and IPV incidents (59%) were unreported to police (NCVAP 2017a, 2017b), despite a higher prevalence amongst LGBTQ people of color relative to their white counterparts. According to Stotzer's (2014b) review, trans victims are especially hesitant to contact the police, with non-reporting rates ranging from 40% to as high as 90%. Overall, these patterns suggest significant gaps in reporting among LGBTQ citizens, closing an important mechanism for addressing LGBTQ victimization. To facilitate increased reporting rates, officers must be willing to respond to such victims in a culturally competent manner rather than with disdain, indifference, or various forms of misconduct (e.g., NCVAP 2017a, 2017b; Wolff and Cokely 2007).
LGBTQ Communities and Perceptions of the Police
Broadly speaking, LGBTQ communities appear to harbor distrust towards law enforcement (Hodge and Sexton 2018; Berrill 1993) that outpaces that of heterosexual (Satuluri and Nadal 2018) and cisgender (Serpe and Nadal 2017) people. In one analysis, LGBTQ participants were less likely than heterosexual people to view the police as friendly, impartial, and non-discriminatory (Satuluri and Nadal 2018). Dario et al. found that LGBTQ participants were more likely than heterosexual participants to report unfavorable perceptions of police legitimacy. Similarly, Owen et al. reported that LGBTQ people more frequently felt police treated their community unfairly and were less satisfied with police efficiency and services than their heterosexual counterparts. In both studies, trans participants reported particularly low regard for the police compared to cisgender people (see also Miles-Johnson 2013a; Serpe and Nadal 2017). Prior work also suggests LGBTQ citizens are concerned their complaints may not receive serious attention from officers (Bernstein and Kostelac 2002; Guadalupe-Diaz and Jasinski 2017; Kuehnle and Sullivan 2001) or will not be recorded at all (Wolff and Cokely 2007). Research has also gauged anti-LGBTQ attitudes and behaviors among law enforcement by directly surveying and interviewing police officers (e.g., Bernstein and Swartwout 2012; Colvin 2012, 2009a, 2009b; Panter 2018). It appears police officers are routinely exposed to culturally entrenched homophobia, anti-gay stereotypes, and discriminatory behaviors endorsed and/or performed by their co-workers and superiors (Colvin 2009a, 2009b). Mallory et al. revealed that anti-LGBTQ discrimination within police departments is ubiquitous, producing reduced opportunities for promotion, potential dismissal, and verbal, physical, and/or sexual harassment enacted by fellow officers.
Among a sample of heterosexual officers, one-third agreed gay men "are disgusting" (, p. 9). Similarly, in a survey of police chiefs, half (49%) disclosed discomfort at the prospect of a gay male co-worker. Bernstein and Kostelac also found evidence of anti-LGBTQ sentiments among police officers, as roughly half of heterosexual officers believed LGBTQ citizens received differential treatment and that LGBTQ-related incidents were treated less seriously. These patterns have likely lingered due to the "masculinized" nature of policing (Collins and Rocco 2018, p. 1) and cultural prescriptions that officers should embrace these stereotypes (Colvin 2014). Panter asserted that policing culture prizes a traditional gender binary, wherein masculinity is a measuring rod for perceived success as a police officer; as a result, transgender and gender-nonconforming police officers may be especially impacted by these aspects of policing and face transphobic treatment. The impact of policing culture is an important consideration, as it may have ramifications for how officers behave during interactions with citizens.
The LGBTQ Community and Experiences with the Police
Reporting behavior is inextricably bound not just with perceptions of the police, but with police officers' actual behaviors. For instance, negative regard for the police may stem from prior adverse interactions (Moran and Sharpe 2004; Serpe and Nadal 2017; Stotzer 2014a; Wolff and Cokely 2007). The dismal perceptions of police within LGBTQ communities appear to have merit, as available research suggests these communities are routinely exposed to police mistreatment. Studies have documented the exposure of LGBTQ citizens to discourtesy, indifference, hostility, homophobia and/or transphobia, and verbal, sexual, and/or physical harassment from officers, as well as other barriers to service such as the refusal to file a report (Lambda Legal 2015; NCVAP 2017a, 2017b; Wolff and Cokely 2007).
According to the NCVAP, during 2017, a significant portion of anti-LGBTQ bias or IPV victims described responding officers as "indifferent" (55%) or "hostile" (20%) (NCVAP 2018, p. 56). Additionally, Wolff and Cokely found LGBT-police interactions were negative more often than not, with one in four of these interactions involving police misconduct, including harassment, threats, and physical violence, while one-third of officers failed to document citizens' complaints. More recently, Hodge and Sexton generated similar findings, as roughly half of their sample (49%) reported harassment from officers. Interactions between the trans community and the police appear to be particularly troubled (Hodge and Sexton 2018). Stotzer's (2014b) review of research on trans people and their victimization by the police uncovered various forms of police misconduct, including biased treatment as well as verbal, physical, or sexual abuse. Furthermore, trans women who are simply navigating public spaces or performing targeted outreach work (e.g., distributing free condoms) regularly face the threat of arrest for solicitation-an action that illustrates "transwomen are routinely profiled" as sex workers by law enforcement (Amnesty International USA 2005; Carpenter and Marshall 2017; Center for Constitutional Rights 2012, pp. 12-13; Panter 2018), particularly if they are women of color (). Thus, it is unsurprising that trans victims of IPV may be less likely to seek assistance from the police based on poor prior experiences with officers (Guadalupe-Diaz 2016). Unfortunately, it appears tensions with police officers dissuade trans women from seeking the assistance of law enforcement, even when it is needed most.
Intersectionality and Interactions between Police and LGBTQ Citizens

A range of studies have indicated the troubled relationship between the LGBTQ community and the police may be particularly salient for SOGI people who are also racial and ethnic minorities (Gaynor and Bassett 2020; Graham 2014; Guadalupe-Diaz and Yglesias 2013; Hodge and Sexton 2018; Kuehnle and Sullivan 2001; Lambda Legal 2015; NCVAP 2017a, 2017b; Panfil 2018; Serpe and Nadal 2017). These challenges may translate into a further diminished likelihood of turning to the police when victimized. Fundamentally, an intersectional perspective argues that the combined impact of membership across multiple historically marginalized groups serves as an avenue to heighten, amplify, and multiply the social inequalities already separately experienced via individual identities (Crenshaw 1991a, 1991b). Namely, statuses associated with being a SOGI minority of color are not phenomena that exist independently in a vacuum, as "all social relations are racialized" (Burgess-Proctor 2006, p. 83). Difficulties due to multiply marginalized identities exist throughout macro and micro social structures and processes, including the criminal justice system and its actors. Thus, intersectionality can serve as a bridge in understanding the attitudes, beliefs, and experiences of LGBTQ populations and their interactions with the criminal justice system generally and the police specifically.

Study Purpose and Research Questions

Using an intersectional lens, this study aims to better understand police-citizen interactions with LGBTQ citizens of color situated in an economically disadvantaged, urban setting through a qualitative analysis of focus groups. Specifically, I provide an in-depth examination of LGBTQ participants' accounts of when they choose to rely on law enforcement and, perhaps more importantly, when they do not.
By analyzing participants' perceptions of law enforcement, I seek to understand why LGBTQ citizens may be reluctant to contact police to report victimization or seek assistance from the police. I also endeavor to expand the literature regarding intersectional analysis by exploring how membership in multiple marginalized identities impacts police-citizen interactions and how these interactions differ between trans women, lesbian women, and gay men of color. Finally, this research has important implications for policy and practice, as it seeks to improve the relationship between LGBTQ communities of color and law enforcement by providing recommendations for targeted training and efforts to increase officers' cultural competency. Thus, I address the following research questions:
1. What are the circumstances and contexts in which participants describe that they would or would not seek assistance from the police?
2. What are participants' articulated reasons and rationales for avoiding or interacting with the police?

Study Setting and Background

This study was conducted in Newark, New Jersey. With a population of roughly 282,000, Newark is fairly economically deprived, with a median household income of only USD 35,000 and more than a quarter (27%) of its citizens living below the poverty line (United States Census Bureau 2019). The vast majority of the city's residents are African American (50%) or Hispanic (36%) (United States Census Bureau 2019). Newark is also known for its concentration of crime. During 2018, for example, its violent crime rate was over seven times the national average (Federal Bureau of Investigation 2018b). In terms of services oriented towards the LGBTQ community, Newark is also unique, as it is home to approximately six community-based organizations that specifically target and serve LGBTQ youth, adults, and people of color (Essex LGBT Reaching Adolescents in Need 2017).
Newark has also instituted and maintained an LGBTQ Advisory Commission that coordinates with Newark's mayor, Ras Baraka, who has supported other LGBTQ-related policies (Cagnassola 2020; The Citizens Campaign 2019; Rutgers School of Public Health 2019). Given its population composition and characteristics, Newark is a particularly well-suited setting to investigate experiences with and perceptions of the police among LGBTQ citizens of color. To fully gauge possible contextual characteristics that may shape citizens' evaluations of the Newark Police Department (NPD), it is important to understand its basic structure as well as the controversies it has faced. The NPD is among the 50 largest police departments in the United States (Kershner 2020) and was staffed by approximately 1100 sworn officers as of 2019 (Rahman 2019). It is also an active department, as it annually responds to more than half a million calls (Newark Department of Public Safety 2018). Relative to its population, Newark currently employs about 3.9 officers per 1000 residents (United States Census Bureau 2019), exceeding comparably sized cities that are served by 3.4 officers per 1000 citizens (Federal Bureau of Investigation 2020). It should be noted that factors closely related to satisfaction with the police, such as response times (Larsen and Blair 2009), may be significantly impacted by diminished staffing levels (McCabe and O'Connell 2017) as well as local political trends (Levitt 2002; Stucky 2005). Thus, though satisfaction with the police is related to the performance and behaviors of individual officers (e.g., Avdija 2010; Larsen and Blair 2009), because the economic resources available to police departments can significantly alter their day-to-day practices, they may, by proxy, also impact citizens' regard for the police. On this note, Smith and Cooper of the Vera Institute of Justice recently examined 72 large municipalities to determine staffing and budgetary trends.
Their analysis revealed Newark's policing expenditures totaled roughly USD 208 million and comprised nearly one-third (29%) of its budget as of 2019, matching the study's mean budgetary allocation (29%). Notably, though its staffing for policing personnel was slightly lower than average (341 police employees per 100,000 citizens vs. 358, respectively), Newark's spending per citizen for law enforcement services was nearly two times higher (USD 737 vs. USD 403). Based on the available data, it appears somewhat unclear if the NPD is appropriately funded and staffed, as such decision-making can be quite complex and there are a variety of methods to determine staffing needs (Wilson and Weiss 2014). Others have argued that nationally, cities are generally inappropriately staffed with respect to law enforcement (Chalfin and McCrary 2016). Recently, in concert with calls nationwide to reduce police-related spending and downsize departments (Levin 2020), Newark has decided to redirect a small portion of its funding for law enforcement toward other city-wide services (Kieffer 2020). However, the NPD has served as a lightning rod for controversy due to its established history of alleged and verified departmental misconduct (Ross 2019). In 2011, serious allegations leveled against the NPD spurred an investigation by the Department of Justice (DOJ). Concluding its investigation in 2014, the DOJ's findings were a harsh indictment of the NPD, describing its actions as a model of "constitutional violations" (DOJ 2014, p. 1). It asserted officers had systematically engaged in a variety of unethical behaviors, including inappropriate "stop and arrest practices" and "use of force" as well as "theft by officers" (DOJ 2014, p. 1). Of particular concern, many of these practices disproportionately impacted black citizens relative to their white counterparts.
The DOJ concluded these trends were likely exacerbated by a lack of oversight by administrators and a lack of accountability through internal affairs (Gerhardt 2016). As a result of the DOJ's findings, Newark has been subjected to federal monitoring through a consent decree that has been in place since 2016 (Newark Police Division 2020). The NPD has also amassed repeated allegations of improper conduct against LGBTQ citizens, as high-profile incidents featuring LGBTQ citizens precipitated the DOJ's investigation. For example, in 2010, the NPD was sued by the American Civil Liberties Union (ACLU) on behalf of Diana Taylor, a transgender woman who alleged officers verbally abused and humiliated her because of her gender identity during a stop (Cavaliere 2010). Furthermore, Defarra Gaymon, a 48-year-old man, was shot and killed by an undercover police officer in 2010 in a park known for anonymous gay sex; in the wake of his death, outrage was directed towards the NPD for its repeated deployment of risky and unnecessary undercover sting operations to target gay men seeking anonymous sex (Wilson and Kovaleski 2010). More recently, the NPD has been accused of failing to aggressively investigate and sensitively handle homicide cases where the victim is a transgender woman (Tracy 2020). Thus, along with the revelation of more generalized patterns of misconduct enacted against Newark's citizens, the DOJ's report also noted "anecdotal evidence that the NPD has engaged in discriminatory policing practices based on sexual orientation or gender identity" (United States v. City of Newark 2016, p. 48). In the aftermath of the DOJ's investigation, Newark has taken steps to ameliorate its documented deficiencies. In doing so, it has endeavored to meaningfully modify the NPD's policies (e.g., use of force, searches, and stops) (Shearn 2020).
Newark has also instituted a range of LGBTQ-related policies and trainings designed to close the divide that has formed between the NPD and Newark's LGBTQ community. In addition to educating officers about the LGBTQ community through training, the NPD has also instituted new LGBTQ-specific policies; these changes include revised rules concerning interactions with transgender and gender nonconforming people and the appointment of an LGBTQ liaison officer to "serve as the contact point" between the LGBTQ community and the police (Ambrose 2019, p. 9; O'Dea 2019). As these reforms are still relatively nascent and evolving, their long-term impact remains to be seen and truly understood, particularly in the face of differing perspectives on what constitutes true success in relation to federal monitoring. For instance, critics have doubted Newark's ability to successfully enact significant reform, pointing out that it has repeatedly lagged behind the DOJ's timeline to implement officer trainings (Ross 2019). In contrast, in tandem with other public officials (Di Ionno 2020), Newark's current mayor has asserted several successful outcomes-such as fewer violent crimes and reports of police misconduct alongside increased oversight through internal affairs-are proof that the reforms implemented by the NPD have been impactful (Shearn 2020).

Participants and Procedures

A total of 12 focus group interviews were conducted with 98 participants, with a range of 5 to 12 participants per group. Interested parties were eligible to participate if they resided in the metropolitan Newark area 2, identified as LGBTQ, and were over 18 years of age. Focus group sessions ranged from one to two hours and participants received a USD 25 stipend as compensation for their time. Each session was audio-recorded. To maintain confidentiality, participants selected and used pseudonyms during each group.
2 Participants from bordering municipalities were included due to their close geographic proximity and resemblance to Newark's sociodemographic and crime characteristics. Additionally, LGBTQ people from surrounding areas often spend time in Newark due to its various LGBTQ-related services and centers that are unavailable in their communities; as a result, they may develop close-knit, chosen fictive kinship networks (Muraco 2006). The study sample included 13 participants who lived in surrounding areas but indicated regular involvement with Newark's LGBTQ community and services; these areas included East Orange, Irvington, Elizabeth, Jersey City, Plainfield, Bloomfield, and Maplewood.
Given challenges accessing Newark's LGBTQ population, participants were recruited using a snowball sampling strategy that relied on several sources embedded within the local LGBTQ community. Recruiters included staff at LGBTQ-related agencies 3, key stakeholders and community leaders, and word of mouth. Eligible participants were invited to participate in focus groups held at various community centers throughout the city of Newark. Subsequent to the first two focus groups (N = 13), study participants also completed a demographic questionnaire (N = 85) 4. Surveyed participants ranged in age from 18 to 65, with an average age of 25. Of the 83 participants who disclosed their racial and ethnic identities, 65 identified as African American or Black (77%), followed by 12 who identified as multi-racial (14%), 3 as Hispanic (4%), and an additional 3 who identified as "other" (4%). Though most participants were born in the United States, 12 (14%) indicated they were immigrants. The sample included participants with the following sexual orientations and gender identities: lesbian women (N = 28), gay men (N = 20), bisexual men and women (N = 8), transgender men (N = 4) and women (N = 17), and other sexual orientations and gender identities (N = 9) 5.
Several moderators, including the author and a team of research assistants 6, oversaw the focus groups. We guided participant discussions through a set of semi-structured, open-ended questions about a range of relevant topics (see Appendix A for the complete focus group interview guide). First, to build rapport and open up the group to discussion, participants were asked to give a broad overview of the LGBTQ community in Newark by asking, "Can you tell me a bit about the LGBTQ community in Newark?" Next, participants were asked to discuss their openness regarding sexual orientation and gender identity in their respective neighborhoods. Following these questions, participants were asked about situations in which they would and would not contact NPD. Participants were also asked about who they would first contact if they were the victim of an anti-LGBTQ bias crime as well as their perceptions of police treatment based on sexual orientation and gender identities. They were then asked to recall specific accounts of interactions with NPD through the questions, "Have you had any experiences, good or bad, with the Newark police?" and "What happened and what was your experience?" Finally, participants discussed ways NPD can improve its relationship with the LGBTQ community. They were encouraged to dialogue with one another throughout the process. Table 1 shows the compositions of each focus group. While two groups (6 and 10) included only trans women, most included a mixture of participants with respect to sexual orientation and/or gender identity. Three groups (1, 8, and 9) involved two participant categories, with two groups featuring one dominant category (e.g., group 8, which featured 5 lesbian women and one bisexual woman). Five groups were more diverse (2, 3, 4, 5, and 7) and represented three or more groups of participants. 
Total ** 91 28 21 4 4 21 4 9
* Participants from two initial pilot focus groups (N = 13) are excluded from this table, as they did not complete a demographic questionnaire and thus, this information is unavailable.
** Additionally, five participants took part in multiple focus groups; four of these took part in two focus groups, while one was present for three focus groups. Thus, the total number of participants presented in Table 1 (N = 91) is higher than the total number of participants referred to elsewhere in the manuscript (N = 85), as Table 1 reflects the distribution of the sample within each focus group, including the five participants who took part in multiple groups (two gay men and three trans women).

Analytic Strategy

After the data were transcribed verbatim, I performed a content analysis wherein participant narratives were first coded line-by-line to identify larger patterns, tracking the use of keywords, phrases, and themes that were repeatedly mentioned during each focus group. At this stage, participants' narratives were assigned codes in the margins of each group transcript, based on categories such as participants' evaluations of the performance of the police, general stereotypes and attitudes towards the police, and LGBTQ-specific experiences with mistreatment; these codes were then grouped and renamed. For example, the circumstances in which participants would contact the police were counted and compared across the respective SOGI categories described above. Additionally, the number of experiences participants reported with the police, as well as the overall nature and tone of these interactions based on participants' descriptions, were tabulated and analyzed.
For instance, participants' experiences with the police were categorized based on whether these experiences were negative or positive, with themes within these groupings then refined (i.e., the police were discourteous, indifferent, did not resolve their complaint, or engaged in other problematic behaviors; the police were friendly and/or participants expressed satisfaction with the interaction and why). To extend and affirm the patterns revealed in the initial content analysis, a domain analysis (Spradley 1979) was also performed to further understand and categorize the types of situations in which participants would summon the police, when they would elect not to do so, and their reasoning for their decisions. To identify prominent semantic relationships (i.e., X is a reason for doing Y; see Spradley 1979) and domains, the types of situations in which participants would contact the police were tallied and categorized for the entire sample. These domains included situations participants repeatedly identified as appropriate for contacting the police, such as violent crimes and serious forms of harassment and threats, along with repeated mentions of when not to contact the police. Upon identification of these patterns, representative quotes were chosen to demonstrate these semantic relationships. An analysis of deviant cases was also performed; for instance, in a departure from most of the sample, a handful of participants felt comfortable contacting the police under any circumstance and held positive regard for the police. Accordingly, these cases were analyzed in relation to the larger patterns to identify what might explain their differences. Both broader patterns and exceptional cases are discussed below.
Results

The existing literature poses a uniquely troubling conundrum: as a product of negative perceptions of and/or experiences with law enforcement, some LGBTQ people refrain from contacting the police, even during precarious, unstable, or outright dangerous situations. Discomfort reporting victimization to law enforcement can jeopardize victims' well-being and allow the perpetrator(s) to face no legal accountability. Across the 12 focus groups, participants were asked to describe circumstances in which they would and would not ask the police for help. Their reporting behaviors fell into two categories: participants who would rarely or never seek help from the police and those who felt comfortable seeking help from police. There were no observable differences across LGBTQ identities in when participants would or would not contact police. Additionally, when participants described actually having made reports to the police, they did so for similar incidents. However, with the exception of one participant 7, those who reported IPV (N = 9) were exclusively cisgender (N = 6) and transgender women (N = 2).

When to Call Police

Among the 43 participants who discussed whether and when they would contact the police, one-third indicated they would never seek police assistance. Of the remaining two-thirds, most indicated they would summon the police only under a narrow set of conditions. These included medical emergencies (32%), violent incidents (28%), and harassment and threats (24%). Others gave vague responses like "it depends" (16%). A number of participants viewed the police as an absolute last resort in life-or-death situations. For instance, Dana, an 18-year-old lesbian, stated she would only contact the police "If I'm dying."
Valentina, an 18-year-old queer woman, pointed out that unless mortally wounded, she would pursue help outside of the police: Aight, if I got shot but I could still walk, like, I didn't get shot in either of my legs, I'm going to walk myself to a hospital. I'm not going to ask a cop for help so I could get, like, a thousand dollars in like, medical bills from an ambulance ride... Like (pause), like I got to be dying if I ask for help. Notably, during a focus group exclusively with transgender women, participants said if they fought with a cisgender woman, their trans identity would become a focal point, warranting a need to preemptively reach out to the police. Ingrid Their dialogue suggests that preemptively contacting the police-rather than escalating to a physical altercation or allowing the cisgender woman to call the police-was important, as the police would fail to recognize them as women and instead treat them based on their gender assigned at birth. While the majority of study participants were generally reticent to call the police, only 4 participants indicated they would be comfortable doing so under "any" circumstances, while a total of 12 participants reported positive interactions with the police. Natasha, a 19-year-old woman who identified as 'questioning,' explained, "I like Newark Police... because whenever I had a problem, they helped me." Lamar, a 25-year-old gay man, emphatically stated, " the cops for everything. If I feel some type of way, I'm calling. If you look at me a certain type of way, I'm calling. If you cough a certain type of way, bitch, I'm calling." What distinguished this small group from the majority of study participants seemed to be their connections to the police through social networks such as family, friends, or co-workers. Quinn, a 21-year-old transgender woman whose mother worked with Newark police, stated, "I feel comfortable... So if something happens, I call them." 
However, these participants still demonstrated awareness of reporting barriers and problems with the police. Brandy, an 18-year-old transgender woman, called the police her "girlfriends." But she still displayed reservations, stating, "I will press charges, I will file a restraining order or whatever, but to call the cops if you need help? No." Lamar agreed, noting, " would take their time... depending on how deep in Newark or what ward you're in." Compared with their hypothetical discussions, around a third of the sample (37%) reported having actually summoned the police. Strikingly, this included only one medical emergency, despite a number of participants stating they would seek out the police in this situation. Roughly half of the calls made were for violent incidents, including IPV, robbery, and other physical altercations (50%). Approximately a fifth of calls were for serious harassment and/or threats (19%), while roughly a quarter of these incidents were for general assistance (23%), such as car accidents, being stranded in a vehicle, and requesting a police escort 8. Notably, study participants described nine incidents in which they called the police for IPV; all but one were made by women (six cisgender women and two transgender women). The outcomes of nearly all of these calls (84%) were negative, with participants noting police did not respond in a timely manner, were discourteous, operated in homophobic or heterosexist ways, engaged in harassment, or did not successfully resolve the incident. However, the majority of these police-citizen interactions (64%) were not specifically related to participants' identities as SOGI minorities (i.e., they were not victimized in the context of IPV with a same-sex partner or due to anti-LGBTQ bias and/or their SOGI status was not highlighted by the responding officer).
Similar to reported IPV incidents, police-citizen encounters in which a participant's LGBTQ identity was a focal point of the interaction 9 (N = 15) were also more often reported by cisgender women (N = 4) and transgender women (N = 5) than by cisgender men (N = 2) and transgender men (N = 1). Some participants described situations that were roundly dismissed as trivial and, by extension, unworthy of reporting, such as verbal street harassment that did not involve express threats or more serious forms of harassment (e.g., harassment perpetrated by a group). Micah, a 21-year-old gay man, said non-violent verbal harassment did not warrant calling the police, as "you can ignore it until they decide to put hands on you. That's where you draw the line." However, some LGBTQ people-particularly transgender women and men-may not equate verbal harassment with "violence," as this type of victimization can become normalized and perfunctory through regular exposure to it (Jauk 2013, p. 812). Participants also identified several other types of incidents that they did not consider serious enough to warrant police involvement, including noise complaints, car accidents or disabled vehicles, shoplifting, the retrieval of property, and requesting police escorts. Strikingly, when asked who they would initially contact in the event that they were the victim of an anti-LGBTQ bias crime (see Appendix A), nearly all of the participants who answered this question (N = 56) indicated the police would not be their first choice and that they would prefer to reach out to others instead (91%). Of the participants who indicated they would not reach out to the police (N = 51), most stated they would seek assistance from their social networks via family (47%) or friends (22%).
Less commonly, participants disclosed they would engage in self-help by fighting back (12%), while others (19%) gave miscellaneous responses, such as "it depends," "someone else," or "not the police." When they discussed their decisions, participants were emphatic about their preference to seek help from informal sources over the police. Khalilah, an 18-year-old lesbian, pointedly asserted her preference to contact her friends, as she felt they "... could get shit done better than the police could." Despite their vulnerability as prospective bias crime victims as LGBTQ people of color (e.g., NCVAP 2017a; Meyer 2010), it is telling that participants would feel more comfortable operating through informal channels versus risking an uncomfortable or unhelpful response from the police.

Rationales for Avoiding the Police

Participants were also asked to articulate the reasons behind their decisions to avoid contacting the police. Broadly speaking, these rationales were grouped into two distinct categories: generalized concerns about the police not specifically linked to participants' LGBTQ identities, and LGBTQ-related concerns directly tied to participants' status as sexual and/or gender minorities. Generalized concerns reflected themes commonly observed in urban contexts (see Brunson and Gau 2014), including doubts about the efficacy and helpfulness of the police, prior negative contacts with the police, and a fear of negative consequences arising from police interactions. Concerns that were specifically and overtly anchored in participants' identities as LGBTQ people related to fears of insensitive or discriminatory treatment based on their SOGI status, feelings that the police would not be helpful or take their complaint seriously in situations where sexuality and/or gender identity was integral (e.g., bias crimes and IPV), and negative prior contacts with the police related to their SOGI identity, including derogatory, harassing, or otherwise homophobic treatment.
Trans participants were particularly fearful of discrimination, and their identities were especially salient in their consideration of avoiding the police.

The Police Are Not Helpful in Urban Communities

Consistent with prior work about the perceived effectiveness of the police in urban, economically depressed communities (e.g., Brunson and Miller 2006), each focus group contained participants who expressed skepticism and mistrust towards police. Many underscored slow response times and believed that even if the police did respond, they either could not or would not be of effective assistance. The belief that police would not respond to calls for assistance in a timely manner cemented participants' perceptions that the police were uncaring. As a result, participants perceived contacting the police as an exercise in futility. Some drew on their beliefs about police efficacy, while others drew directly on past negative experiences. Brian, a 22-year-old gay man, said the police "take forever to come because they really don't give two shits because you're in the hood where they really don't care what you're doing." According to Eli, a 22-year-old man 10, "The police need to be more efficient.... mad long for the cops to come." Sierra, a 21-year-old transgender woman, bitterly recalled calling the police during a domestic dispute in which she was "thinking I'm in danger and stuff." She said the police took over 90 minutes to respond and as a result, she "would rather die" than call them in the future. Similarly, Alexis, a 19-year-old gay man, and Vivan, a 20-year-old lesbian, elaborated on their disillusionment with the police as a result of slow response times:
Vivan: My mom and her ex-boyfriend was fighting and it got to the point where somebody was bleeding really bad... So we called the cops... and it took them, like, at least forty minutes to get to the damn thing.
Other facets of participants' belief that the police were unconcerned with their welfare related to police procedures. Participants said that even if the police did respond in a timely manner, they still would not be of help due to the flawed process of reporting crimes and the potential ineffectiveness of police investigations. An exchange between Loretta, an 18-year-old trans woman, and Cedric, a 21-year-old gay man, illustrated their shared belief that calling the police would be fruitless. Loretta explained, "I'm not going to call the cops because the cops won't-what are they going to do?... They going to ask me to come to the law station, fill out a report, then they'll get back to you." In response, Cedric added, "And then nothing is going to get done." Similarly, Jace, an 18-year-old lesbian, posited, "I probably wouldn't even call . Because what's the point?... They like, really don't help you." Veronica (a 23-year-old transgender woman), Brandy, Lyndsey, and Sierra were also doubtful that the police would be able to track down a perpetrator:
Ingrid: I mean, they come, but they come slow. By the time they get down there I been handle it. Honey, dust it off, and washed my hands and-
Brandy: Especially when you get robbed and they be like, "Alright, file a police report." Girl, I'm not doing all that.
Lyndsey: Yeah, what the fuck is that?
Sierra: I don't even remember how they look like.
Brandy: They are not going to find them... The only thing I may be able to tell them is he was tall, dreads, light skin, cute. And be like, "Okay".
Thus, participants felt that bureaucratic inconveniences, such as filing reports, combined with the unlikelihood that the police could or would identify unknown perpetrators, meant that summoning the police would be pointless.
The Police Are Not Helpful for LGBTQ Citizens in Urban Communities

In addition to the concerns outlined above, participants also expressed a range of concerns specifically centered on their SOGI identity. For instance, some participants felt the police would not take their complaint seriously due to their LGBTQ identity and would treat them differently than heterosexual and cisgender victims. Xander, a 21-year-old gay man, noted, "I'll call the cops-I've seen gay people and the cops dealing with them, they take it as a joke, laughing at them, making fun of them. And like, it's serious... They need to respect us." Lashawn, a lesbian in her twenties, expressed concerns that the police would behave in an outright derisive manner towards LGBTQ victims:

If it's a gay situation, they, when they get there, they're going to crack jokes, they're going to have their own personal biases towards you and they're not even going to hide it. They're not going to wait until they get in the cop car to make these remarks. No, they're going to pull up to the scene, snicker when they see that, like, two men or two women who are romantically involved, they'll start snickering like, "We're here for this?" You can see, like, their whole... disposition just changes and their attitude towards helping you and asking you questions.

Similarly, Jaden, a 52-year-old lesbian, who called herself an A.G. 11, described a situation in which she called the police during a violent incident involving her relative and their same-sex partner. Six officers responded to Jaden's call, two of whom she described as "part of [the] community." She said these two officers "took [it] very serious," but was irritated by the response of the four heterosexual officers: "When they arrived... the four thought it was a joke, you know.... Like, I had to correct those four officers. Like... 'This is real. This is serious.'" Billie, a 51-year-old lesbian, offered, "I think the consequences... for us as LBGT folk are greater....
We would hesitate to call the police because... they're not supportive." Lauren, an outreach worker, recalled that in the course of her professional role working with LGBTQ youth:

We've had several incidents... where some of the kids have been attacked by people... in the community... but they choose not to say anything because we feel like the cops are not going to do anything.... They'll probably be like, "Oh, this is just a faggot. We don't care." And that's exactly how they approach it.

Lashawn witnessed an incident that confirmed these expectations:

It was two guys who were romantically involved. They were in a relationship and... they were getting ready to like, have a little altercation and somebody else jumped in so they ended up, like, beating the other guy. So then they called the cops, and the cops, you know, at first he just thought it was two, like, straight guys going at it but then when they saw that it was two gay guys who, like, beat this other guy up, it was like, "You let these two fags beat you up? We shouldn't even take this report." Like making a joke of it.

With respect to same-sex romantic relationships, Latasha pointed out the police "don't take same-sex relationships the same as um, straight.... Like, when we argue or fight... they just look at it and be like, 'Oh, well, y'all need to go separate.'" Thus, LGBTQ IPV victims may hold perceptions that the police would engage in differential treatment regarding their victimization by behaving in a cavalier, uncaring manner. In a similar vein, some participants expressed a desire for the police to treat them with basic humanity in order to affirm the dignity of LGBTQ people and take their victimization seriously. Kobe, a 24-year-old gay man, said Newark's LGBTQ community desires "to be seen as just being people and... human beings." Arleen also wished officers would treat her as "an individual" rather than "this, you know, gay kid."
Furthermore, Latasha requested that officers not speak to LGBTQ citizens like "we not a piece of trash," while Deiondra asserted that officers should not "talk to us like animals." Charlene, a 50-year-old trans woman, elaborated: They don't perceive us as people, human beings, mammals of the earth, normal, you know, every day people going through life. They don't even see that.... All gay men, trans women: they're promiscuous, they're hookers. Uh, everything but human beings. Many participants described the police as distant and uncaring, leading to perceptions that they are not a reliable source of assistance. Brayton stated, "a lot of people feel uncomfortable going to the police because they are gay" and believe they will not be treated "in a respectful manner." Andy, a 24-year-old lesbian, said "I can't call the police for everything. I can't call the police for most things," and noted she preferred to handle incidents on her own. Elaborating, Billie, a 51-year-old lesbian, said the police "don't care" about the LGBTQ community and surmised that multiple stereotypes seep into how the police interpret complaints from LGBTQ citizens, deterring them from reporting their victimization: The police officers generally are not going to approach the situation in a neutral way. They're approaching the situation with all these layers of stereotypes that they have around being brown people, being people of other than heterosexual experience. And so, I don't think that... police officers are a safe bet for us. Taken together, such apprehensions and experiences dissuade many LGBTQ victims from reporting their victimization to the police. Indeed, prior research finds that SOGI minority victims who anticipate biased treatment by police may feel discouraged from reporting (Briones- ). 
The Distinct Experiences of Trans and Gender Nonconforming Participants

Participants who identified as trans or as outside of binary gender presentations experienced or anticipated mistreatment due to stereotypes and misgendering. Erica, a 28-year-old trans woman, stated, "[They] don't respect the transgender community.... Whether you're wrong, or indifferent, they're always gonna downplay you because you're transgender." Jace, an 18-year-old lesbian, offered, "if I call the cops... and they hear that I'm a female but when they come I'm dressed like a guy... everything changes.... 'she's a man, so why does she call me?'" Octavia, a 56-year-old lesbian, recounted that police repeatedly called her "sir," adding, "'You have my driver's license in your hand. You don't know I'm a woman? Did you look at it?'" Dion, a 43-year-old lesbian, described similar difficulties and noted, "If I take my cap off or I take my hat off, I don't have that problem. None at all." Consequently, Charlene, a 50-year-old transgender woman, surmised, "I don't like the way [they] treat the trans community." A few trans participants also mentioned fears of being labeled as sex workers. Ayana reported being called "a prostitute" by an officer while walking in a public space. Loretta described an incident in which police attempted to arrest her for sex work, explaining, "One day, you know, I was just (pause) parading around (laughs) and they tried to get me." Loretta added:

The Newark cops, it's like, the Newark cops have something against us transgenders. They have something against us.... They be chasing us, child. (laughs).... They don't like us, so that's why people don't really, like, call them for help. We really won't, like, it has to be like, hell on ice for us to have to really like, get up and go to the police station because they won't help us.

Cisgender participants also noted the difficulties faced by the trans community.
Alexis, a 19-year-old gay man, said, "people get up in drags and walk downtown, then they're accused for prostituting and voguing 12." During an exchange with Loretta, Lamar, a 25-year-old gay man, declared, "when Loretta and the other transgender girls out working, um, the police take aggressive force because they don't know how to handle a transgendered woman." Furthermore, Brian, a 22-year-old gay man, disclosed, "And if you are transgender or you... full-term walk as a woman, and you don't look like a woman, you get attacked by anybody. You get harassed by cops." Thus, it appeared transgender and gender nonconforming participants had especially adverse experiences with insensitive or antagonistic officer behaviors, leading to a shared perception that officers are not safe to interact with, even when needed.

Consequences of Seeking Help from Police

Participants mentioned a range of potential consequences from contacting the police for assistance, including the possibility of negative police interactions (e.g., harassment or arrests), as well as reprisals from the perpetrator(s) implicated, or being labeled a snitch. These concerns were consistent with common misgivings about police contacts among citizens in urban settings. Other themes were directly tied to participants' LGBTQ identities, as they were concerned they would experience or actually had experienced homophobic or transphobic police behaviors.

General Consequences

Some participants were fearful that contacting the police for help would result in their own arrest and/or that force might be enacted against them. Brian, a 22-year-old gay man, stated, "you'll be trying to call somebody for help and they wind up being the type to get you in trouble and lie on you." Blake, a 19-year-old gay man, had such an experience. He noted, "I called the cops because I was getting attacked and I got locked up." He explained, "even if you never had an experience with ...
you're going to think they going to do some bullshit to you." Tristan, a 24-year-old gay man, recalled, "I don't want to call the cops because they quick to put anybody in handcuffs. Even persons who ain't got nothing to do with the situation." Kobe, a 24-year-old gay man, added that he felt police are "too aggressive." He described having been "severely, like, patted down, thrown around, and stuff like that" as well as being "cuffed up." Tristan, Kobe, and Micah (a 21-year-old gay man) each separately discussed concerns about forceful treatment, including the potential for such encounters to culminate in an arrest, even when summoning officers for assistance. According to Micah, "a lot of the community has... their issues with the cops. Their warrants and et cetera.... It might not have been the cause of the situation, but they don't want knowing they have this in their background." Micah's discussion highlights that fear of arrest for unrelated activities can deter LGBTQ community members from relying on the police. Other participants relayed concerns about retribution associated with perceived snitching from the perpetrator, their friends, or other neighborhood residents. Spencer, a 19-year-old gay man, noted that even if he were to reach out to the police and identify a perpetrator, there could be harmful-even deadly-ramifications for doing so:

If I get Dayday from down the hill locked up, Dayday from down the hill only get locked up for thirty days. He could've beat me, he could've lit me on fire, but in thirty days Dayday will be outside waiting for me and beating me up again.... Y'all just going to keep him in there and Lord knows, he probably... already been to jail fifty times so he don't really care to go back because he know that if he hit me it doesn't matter, like he's going to go back and he's going to get out.... Nobody is going to stop harassing you because you called the police. Because Dayday also know Tayday from around the corner (laughs)....
And Tayday from around the corner is just as ignorant as him. Additionally, Lyndsey, a 20-year-old trans woman, explained, "The cops just make it even worse... Once the cops go, you still live in that household." Kass, a 24-year-old trans woman, affirmed, "if they see you as a snitch, they're going to beat you up. They're going to, you know, jump you... If you call the cops and you live in the 'hood, you could get shot for calling the cops.... Real fast."

The Impact of LGBTQ Identities on Interactions with Police

Some participants described misgivings about, and experiences of, negative treatment when summoning the police as LGBTQ people. Participants' fears and experiences of biased treatment rooted in heterosexism fell into three areas: police misconduct stemming from homophobia and/or transphobia, the treatment of IPV among LGBTQ couples, and the disparate treatment of the transgender and gender nonconforming community. Participants described overtly homophobic instances of police misconduct, including the use of homophobic slurs and other derogatory language, sexual harassment and sexual propositioning, and physical abuse. Sometimes these incidents occurred outside of calls for service, when study participants were navigating the streets of Newark. Such encounters are troubling, resulting in the collective assessment that calling the police is an unsafe gamble, at best (Brunson 2007); the mere anticipation of potential homophobic misconduct makes contacting the police an undesirable prospect. With respect to police misconduct, participants most frequently described homophobic verbal harassment. Latasha recalled a time when Ayana, a transgender woman and fellow participant, was called a "faggot" repeatedly by police officers, while Brayton witnessed officers refer to citizens as "fag boys." Ayana also reported the police barred her from using the women's restroom in a public space after telling her, "well, you're a guy."
Aaron, a gay man in his late fifties, recalled that officers called his co-worker a "McGreevey," referencing former New Jersey Governor Jim McGreevey, a gay man. Additionally, Vivan, a 20-year-old lesbian, was asked by an officer about her sexual orientation, and the officer responded, "'Oh, you too pretty to be a lesbian.'" While "up in drags," Alexis was stopped and told by an officer, "'Oh, if you was a girl, we would have sex.'" Harmony also described an incident in which she was harassed by officers:

The cop that pulled me over, it was like, well, "Why are you out here? You need to be in the house." You know, um, "only people that's out is your kind." I'm like, "Well, what do you mean my kind? Like, specify that." So, he's like, "Well, you know what I mean. Y'all he-she's," quote unquote.

Other participants described officers engaging in misconduct when arresting them. Shakia, a 23-year-old transgender woman, described that when officers arrested her and two others, they "just started like, stomping us," and one of the officers referred to her as "a fuckin' fag" during the assault. Erica, a 28-year-old trans woman, recalled when a friend "turnt a date" and the incident devolved into a physical fight that led to police intervention. While being interviewed, Erica said the detective "literally snatched my hair" and also verbally harassed her, stating, "you're a fucking male and this and this, what were you doing, prostituting?" Dana, an 18-year-old lesbian, described an officer arresting her saying, "'Yo, this girl look just like a boy.... She's going to make all the girls go crazy in .'" Some participants also recalled involuntary contacts with the police steeped in transphobic and/or homophobic mistreatment that operated through misgendering.
In one such encounter, Wanda, a 20-year-old lesbian who identified as masculine, described an incident where she was stopped and subsequently searched by male officers:

They're like, "Get on the wall," and I'm talking to them... "Last time I checked, y'all can't check me." He's like, "Oh, why can't we check you?" "I have breasts and a vagina. Just because I have on baggy clothes that don't mean that I'm a guy," but they still checked me and they didn't care.... I was trying to be funny. I was like, "I'm on my period, so be careful." So, he was like, "Oh, you're going to get blood on me or something?" And then he really thought I was playing that I was a female until... I took my zipper down and everything because they was like, "Take everything off, like, your, your coat or whatever." And I'm like, "I'm a female. Like, I don't have to lie to you."

Finally, several outreach workers experienced police harassment when attempting to do outreach activities (see also ). Emma, a transgender woman, described handing out condoms wearing an official ID from her agency. An officer approached her, stating, "'You know you're soliciting sex, right?'" Similarly, Brayton noted "we had five police cars come, they took pictures of us, they took our names... like we were soliciting and everything." Charlene, a 50-year-old trans woman, echoed their experiences, as she recalled, "like sometimes residents would call the cops and they would describe my vehicle and say, 'Well this vehicle is parked outside my house and they're soliciting.'" Aaron attempted to combat this problem for his organization by applying for a permit to perform outreach work in spaces where workers were being regularly accosted by police. Aaron stated, "we actually try to... follow the letter of the law.... They tell us we needed a permit... to do what we do." However, he was met with significant resistance during the process.
After being turned away twice, Aaron exasperatedly declared, "so we have been given the run-around about what we're trying to do." Concerning IPV incidents, only women participants in the study discussed the negative impacts of police responses. These experiences were generally tinged negatively by officers' reliance on heteronormative stereotypes regarding typical IPV incidents featuring cisgender, heterosexual couples, where femininity is equated with victimization and masculinity with perpetration (see Guadalupe-Diaz and Jasinski 2017). Officers seemed to expect stereotypical gender dynamics to occur, even in the context of same-sex couples (see Hassouneh and Glass 2008). These assumptions produced negative outcomes for masculine-presenting women, as they were identified as the initiators of IPV incidents based on their gender presentation. As one example of this dynamic, Bernice, a 30-year-old lesbian, felt she was targeted by the police due to her masculine appearance after her feminine partner battered her:

I called the police, for my safety, and at this point I had, like, a black eye or two and, like, a missing tooth and everything. And when the police came, they actually just went straight to me and I ended up going to jail without even having asked questions at all and, like... they automatically thought that I was a man so the, the officers that were there, they were male officers that searched me for everything and that, um, patted me down and that actually arrested me. But then even once they realized I was not a male, I was still the person that went to jail.

In other instances of IPV, the couple was simply threatened with dual arrest due to the inability of officers to correctly determine the identity of the perpetrator.
This tendency is consistent with a recent analysis that revealed disparate outcomes for IPV events between same-sex romantic partners, as officers were significantly more likely to enact dual arrests relative to IPV between heterosexual couples (Hirschel and McCormack 2020). In one such instance, Andy, a 24-year-old lesbian, called the police after being choked by her partner. As both women were masculine-presenting, the officer's identification of the aggressor appeared to rely on stereotypes related to typical gender-based victim and perpetrator patterns during IPV incidents. Unable to identify the perpetrator, the officer threatened to simply arrest both women:

Like, they didn't know what to do. So it's kind of like they were rolling up on a situation of, like, two gay guys and they're just like, "Alright, y'all both going to jail. Y'all can't figure this out, we'll send you both to jail." Because by the time they got there... we were calmed down.... They don't know who the aggressor is, like, they don't want to ask, you know what I'm saying? Because they're going up and they're just making assumptions on your appearance, you know?... Like, so, it was something like, it was my word against hers and it was, like, how I was presenting versus how she was presenting, you know what I'm saying?

With respect to the trans community and their experiences with the police, some trans women feared their identity would significantly and negatively impact these interactions. Several described the police as insensitive and/or disrespectful due to their status as trans women. Yvette, a 33-year-old trans woman, stated the police "harass rather than help." Transgender participants were concerned that responding officers would, instead of responding to the incident itself, focus on their transgender status, resulting in a problematic interaction rooted in prejudice and stereotypes against the transgender community.
An exchange between Shakia, Charlene, and Erica, all trans women, illustrated these perceptions:

Shakia: You could've been the one calling them and they'll make it about you-
Charlene: You, mmhmm.
Erica: Mmhmm.
Shakia: -and your transition. What does that have to do with me calling you? I called you to help me.

Furthermore, Jenine, a 32-year-old trans woman, was physically attacked by a group of youths, but described the police as slow to respond. When the police did arrive, they focused on her status as a transgender woman: "Once they got there, they were like, 'oh, um, do you think it was because you're gay?' That was the first question. And I'm like, 'what makes you think I'm gay?... I'm a woman.'" As a consequence, Jenine said, "next time that somebody tries to bash me, I'm gonna bash their head open and then I'm gonna call you.... You never helped me before, this is what you get now."

Discussion

In an attempt to extend the literature in this area, the current study examined the range of concerns, fears, and negative police experiences that may impact and suppress reporting behaviors among LGBTQ people. By focusing on when and why participants would seek help from the police, this study sought to build upon existing scholarship that has consistently suggested a concentration of negative experiences with and perceptions of law enforcement among the LGBTQ community (;Hodge and Sexton 2018;Satuluri and Nadal 2018;;;;Wolff and Cokely 2007). In doing so, this study embraced an intersectional perspective (Crenshaw 1991a, 1991b) by specifically targeting the experiences of LGBTQ participants of color.
Not only are racial and ethnic minorities overrepresented within the LGBTQ community (Deschamps and Singer 2017), but relative to their white counterparts, LGBTQ people of color are also likely disproportionately subjected to inappropriate treatment by the police (Amnesty International USA 2005; Center for American Progress 2016; ;) and experience overrepresentation in the criminal justice system as offenders () despite their vulnerability as prospective victims of crime (Dunbar 2006;Kuehnle and Sullivan 2001;NCVAP 2018;Meyer 2010). Overall, the patterns observed amongst this sample are consistent with and extend the burgeoning literature related to policing of LGBTQ people and the various problems LGBTQ citizens might encounter-or anticipate encountering-when interacting with the police to seek their assistance. Participants expressed sentiments that the police should be avoided except under select circumstances (if they are to be contacted at all), held a generally low opinion of the police, amassed mostly negative encounters with the police, and recounted instances of misconduct and harassment from officers. Their reasons for avoiding the police were both rooted in more generalized concerns observed in urban, over-policed environments as well as rationales specifically linked to their identities as SOGI minorities. When asked when they would hypothetically contact the police, one-third of responding participants indicated they would not reach out to the police regardless of the situation, while two-thirds expressed that they would only do so for especially serious incidents, such as violent encounters, serious harassment, and medical emergencies, signaling a pronounced hesitation to seek help from the police barring potentially life-threatening events. This slightly departed from the situations in which participants actually did report calling the police, which were more varied and only included one medical emergency. 
Most participants who voluntarily contacted the police were met with a poor experience, meaning that the police were untimely, discourteous, ineffective, and/or engaged in harassment or other problematic behaviors. The importance of the tone of police-citizen interactions cannot be overstated, as an accumulation of negative experiences with the police may suppress reporting behaviors (;Miles-Johnson 2013b;) while positive interactions may improve citizens' perceptions of the police and lead to increased cooperation (). Even the viewpoints of those who have not directly interacted with the police can be altered by vicarious negative experiences relayed through their social networks (), ultimately producing a shared negative view of the police (Brunson 2007) that may discourage them from reporting crimes. Some of participants' rationales for police avoidance were grounded in general concerns, commonly voiced by citizens, about the performance and effectiveness of the police, as has been noted in studies of racial and ethnic minorities (e.g., Weitzer and Tuch 2005). Participants relayed frustrations with poor response times and other barriers to service, such as the perceived difficulties associated with reporting crimes, resulting in the perception that calling the police was largely pointless. Furthermore, participants believed that even if the police did respond in a timely manner, they would not be particularly effective in their assistance, might be discourteous, or might engage in more serious forms of misconduct. Participants also relayed concerns about the potential consequences of contacting the police based both on previous negative experiences and/or expectations that the police would behave ineffectively or even aggressively.
Consistent with work that has noted a concentration of police harassment and misconduct in economically depressed communities with a concentration of people of color (e.g., Brunson and Miller 2006;Rengifo and Fratello 2015), some participants feared these same dynamics might be present during their own interactions with the police, while others directly experienced these consequences. Other themes outlined by participants were directly related to their SOGI status, including lack of sensitivity among the police when responding to calls about specific LGBTQ issues, such as bias-related victimizations and IPV, and/or situations in which participants' LGBTQ identities were highlighted at some point during the interaction (e.g., officers engaging in misgendering, referring to a participant's SOGI status, or making assumptions about the dynamics of same-sex romantic relationships). Some participants felt the police would be homophobic or transphobic in their disposition towards SOGI minorities, with some participants drawing from direct experiences that lent credence to their misgivings. These sentiments are consistent with the literature that has studied the nature of police interactions with LGBTQ citizens and have detected a troubling level of insensitivity, harassment (Hodge and Sexton 2018;NCVAP 2018;Wolff and Cokely 2007), and anticipated harassment (Briones- ). Further, in line with previous work with transgender people that has revealed an especially low regard for the police and frequent exposure to police misconduct (;Guadalupe-Diaz 2016;Miles-Johnson 2016;Miles-Johnson 2020;Stotzer 2014b;), transgender participants were concerned their gender identity would become the central focus of their encounters with the police and produce a poor outcome. 
Unfortunately, some participants also felt the police would treat their complaints as insignificant and would not take them seriously, as has been mirrored in prior work with LGBTQ populations and their interactions with the police (Bernstein and Kostelac 2002;;NCVAP 2017a, 2017b;Wolff and Cokely 2007). A sensitive response from the police is especially crucial for offenses that are underreported and tied to feelings of shame, including IPV and sexual assault, as LGBTQ citizens who anticipate discriminatory behaviors on the part of the police may be less likely to report their victimization (Briones- ) despite an increased risk of sexual assault (), the presence of IPV within the LGBTQ community (e.g., Guadalupe-Diaz 2016; Messinger 2017), and the dangers posed by anti-LGBTQ bias-related crimes interlaced with violence (e.g., Herek 2009). In particular, the police need to be perceived as an approachable, trustworthy, and safe source of assistance for anti-LGBTQ bias crimes due to their inherent stigma and reduced likelihood of disclosure to the police (Herek 1989;). Overall, the patterns observed among the data should be viewed from the vantage point of intersectionality, as under its assumptions, LGBTQ individuals of color will have particularly adverse interactions with the police due to multiple identities traditionally associated with societal marginalization (Burgess-Proctor 2006;Crenshaw 1991a;Crenshaw 1991b;Gaynor and Bassett 2020). This study offers a unique look at these dynamics due to the composition of its sample, which is primarily comprised of people of color. It also extends the literature concerning the role of interlocking identities and their potential to generate negative ramifications during interactions with the police (e.g., Panfil 2018).
Analogous to other work that has examined intersectionality's effects upon multiply-marginalized LGBTQ people (;Robinson 2020), participants offered legitimate concerns about interacting with the police due to distrust and a desire to avoid them. Participants' experiences can also be viewed as an extension of the racial, ethnic, and class-based characteristics present in Newark, as it is an urban environment featuring the elevated police surveillance experienced by people of color in similar settings (e.g., Brunson and Miller 2006). The role of intersectionality is also illustrated by transgender and gender nonconforming participants, as they described being subjected (or feared being subjected) to stereotypes espoused by officers; given the existing work surrounding transgender women of color and their negative police-citizen interactions (e.g., Graham 2014;;;), these concerns are not surprising. Consistent with other work that has pointed to possible underreporting among LGBTQ people of color (Guadalupe-Diaz 2016; Kuehnle and Sullivan 2001), many participants indicated they would not seek police assistance in the event of anti-LGBTQ bias victimization; this is especially concerning in light of the elevated victimization rates experienced by SOGI minorities who are also people of color (Dunbar 2006; Meyer 2010; NCVAP 2017a, 2017b). Taken together, these results highlight the importance of increasing the comfort of those with intersectional identities during encounters with the police. Indeed, it is apparent that increased cultural competency and sensitivity among law enforcement are sorely needed (see ;Hodge and Sexton 2018;Satuluri and Nadal 2018;;) in Newark as well as in other similar locales that aim to improve interactions between the LGBTQ community and the police. Newark represents a particularly suitable environment to examine these issues. Alongside "anecdotal" accounts of police mistreatment towards LGBTQ citizens in Newark (United States v. City of Newark 2016, p.
48), only two other cities-Baltimore and New Orleans-have verified systematic anti-LGBTQ bias through Consent Decrees (DOJ 2017; United States v. Police of Baltimore City 2017; United States v. City of New Orleans 2013). Like Newark, Baltimore and New Orleans are urban, metropolitan areas with above-average crime and poverty rates that are also comprised mostly of people of color (United States Census Bureau 2017). As a result, the findings of this study may be applicable to similar urban contexts that have historically fostered a troubled relationship with the police. Thus, it can provide insights about how to better police LGBTQ communities in these locales. In the aftermath of Newark's Consent Decree with the Department of Justice (Newark Department of Public Safety 2018), the city has taken early steps to attempt to improve its treatment of LGBTQ citizens. For instance, the NPD recently directed officers to "not question [a] person's gender identity" and to separate transgender people from the general population while they are in custody (Nelson 2019a, para. 1). Unfortunately, however, the reforms included in the NPD's Consent Decree to address Newark's policing practices do not formally include the LGBTQ community and are still a work in progress (Nelson 2019b). As noted by Dwyer, it can be difficult to ascertain the best way to improve the competency of the police with regard to LGBTQ issues and people through targeted training. In the current study, participants offered a range of suggestions 13 to better assist Newark's LGBTQ community through four overarching areas that can be used to guide future research and policy-making; on a positive note, some of these suggestions are currently being undertaken by the city of Newark, lending credence to the need to address these issues.
First, participants requested mandatory, department-wide training to promote increased cultural competence, sensitivity, and professionalism towards the LGBTQ community; ideally, this would include educating officers about the wide spectrum of sexual orientations, gender presentations, and gender identities present in the LGBTQ community, the proper usage of LGBTQ-related terminology and pronouns, and guidance regarding appropriate responses to LGBTQ victims of IPV and bias-related crimes. Second, participants mentioned a need to appoint trained NPD officers as liaisons who serve LGBTQ residents and crime victims. Third, participants requested a more prominent and visible presence of "out" LGBTQ NPD officers. Finally, participants also expressed a desire for the police to immerse themselves in Newark's LGBTQ community through participation in Newark's LGBTQ-related events (e.g., Newark's annual pride festival), hosting events such as meet-and-greets, and demonstrating their support of local LGBTQ-oriented grassroots organizations and community leaders. To repair the divide that has formed between the police and LGBTQ citizens, establishing trust and feelings of comfort is crucial in order to increase reporting rates and, ultimately, better protect and serve this vibrant and important segment of the population. Informed Consent Statement: Informed consent was obtained from all participants involved in this study. Data Availability Statement: The data are not publicly available because of their extreme sensitivity. Due to the specific and detailed qualitative narratives provided by participants-some of whom hold social significance through their open involvement in Newark's LGBTQ community-it is possible that they may be publicly identified through these data and may face repercussions as a result.
Response to "Can We Really Control the Inframammary Fold (IMF) in Breast Augmentation?" In a recent letter to the Aesthetic Surgery Journal, "Can We Really Control the Inframammary Fold (IMF) in Breast Augmentation?", Dr Swanson made claims regarding the inability to control the IMF during primary breast augmentation.1 We were surprised by the comment and almost decided not to respond; however, we decided to offer the following. Controlling IMF position and nipple-to-fold length is the single most important aspect of breast surgery and is often disregarded. The surgeon who pays no attention to this philosophy is sure to have poor results, and the surgeon who makes this a priority will enhance results; however, controlling the IMF can never be 100% successful, as many variables exist, the most unpredictable being living tissue. Our clinical practice has demonstrated an easily adaptable, low-risk, three-point IMF suture
Characterization of L-arginine transport in adrenal cells: effect of ACTH. Nitric oxide synthesis depends on the availability of its precursor L-arginine, which could be regulated by the presence of a specific uptake system. In the present report, the L-arginine transport system in mouse adrenal Y1 cells was characterized. L-arginine transport was mediated by the cationic/neutral amino acid transport system y+L and the cationic amino acid transporter (CAT) system y+ in Y1 cells. These Na+-independent transporters were identified by their selectivity for neutral amino acids in both the presence and absence of Na+ and by the effect of N-ethylmaleimide. Transport data correlated with the expression of genes encoding CAT-1, CAT-2, CD-98, and y+LAT-2. A similar expression profile was detected in the rat adrenal zona fasciculata. In addition, cationic amino acid uptake in Y1 cells was upregulated by ACTH and/or cAMP, with a concomitant increase in nitric oxide (NO) production.
Re: Meat, fish, and colorectal cancer risk: the European Prospective Investigation into Cancer and Nutrition. In a large cohort comprising 10 populations in the European Prospective Investigation into Cancer and Nutrition, Norat et al. (1) reported that processed and red meat intake was associated with elevated rates of colorectal cancer and its subtypes. Although the authors considered several study limitations, they may have omitted one that is key: the possibility that confounding by socioeconomic position may be responsible for the diet-disease gradients. In populations drawn from some of the countries featured in the article, markers of socioeconomic position have been shown to be associated with self-reported dietary characteristics, including meat consumption (2). Thus, persons who are socioeconomically disadvantaged are more likely to report higher intake than their affluent counterparts (2). A raised risk of colorectal cancer has also been found in persons from deprived social groups, as indexed by lower levels of educational attainment (3). In exploring the relationship between meat consumption (indeed, most indicators of food intake) and colon cancer (indeed, most chronic disease outcomes), surprisingly few investigators adjust for socioeconomic indices, so judging the impact of this covariate on the diet-disease relationship is problematic. However, a suggestion that socioeconomic deprivation may have a role as a confounder in the meat-colon cancer relationship can be found in a study that appears to comprise a socioeconomically homogeneous group of women. As cited by the authors (1), but not discussed in the present context, an early report from the Nurses' Health Study (4) found a positive relationship of both unprocessed meat (beef, pork, or lamb) and processed meat intake with incident colon cancer.
This association was essentially lost in a later follow-up study of the same population (5) containing over four times the number of cases (n = 670) and therefore greater statistical precision than the earlier report. In a series of articles (3, 6, 7), the European Union Working Group on Socioeconomic Inequalities in Health has reported that the methodologic issues of comparing the relationship between mortality and socioeconomic indices (i.e., education and occupational social class) across culturally disparate European settings can be surmounted. Assuming that similar data are available in at least some of the cohorts included in the present report (1), as they should be, the potentially confounding role of socioeconomic position in the meat-colon cancer relationship could presumably be explored in a subgroup of study participants and reported by the authors.
Interleaver design for turbo codes by distance spectrum shaping Interleavers play a critical role in the performance of turbo codes, and they are best designed given the structure of the code. In this paper, a new methodology for systematically designing the interleaver for a parallel-concatenated convolutional code (PCCC) is presented that exploits the structure of the code, and some novel approaches that result from this methodology are introduced. The methodology can serve as a basis for constructing algorithms to design very good interleavers for turbo-like codes; the resulting algorithm seeks to maximize the minimum distance of the code. The performance of the interleavers designed with this methodology is compared with that of previous methods, and its superiority is illustrated by simulation results.
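The paper's distance-spectrum-shaping procedure is not reproduced in this abstract, but the general idea of improving a turbo code's distance spectrum through interleaver structure can be illustrated with the classic S-random interleaver, a common baseline design; this sketch shows that baseline, not the algorithm proposed in the paper. The S-random constraint forces input indices that are close together to be mapped at least S positions apart, which tends to break up low-weight error events.

```python
import random

# Sketch of the classic S-random interleaver (an illustrative baseline,
# not the distance-spectrum-shaping algorithm of the paper): positions
# within S of each other must map to values at least S apart.
def s_random_interleaver(N, S, seed=0, max_restarts=500):
    rng = random.Random(seed)
    for _ in range(max_restarts):
        pool = list(range(N))
        rng.shuffle(pool)           # random candidate order
        pi = []
        while pool:
            for idx, v in enumerate(pool):
                # candidate must differ by >= S from the last S placed values
                if all(abs(v - pi[-d]) >= S
                       for d in range(1, min(S, len(pi)) + 1)):
                    pi.append(pool.pop(idx))
                    break
            else:
                break               # stuck: restart with a fresh shuffle
        if len(pi) == N:
            return pi
    raise RuntimeError("no S-random interleaver found; try a smaller S")

# Example: block length 64, spreading factor 4 (rule of thumb S <~ sqrt(N/2))
pi = s_random_interleaver(64, 4)
```

In practice the spreading factor S is chosen close to sqrt(N/2); larger S gives better spreading but the greedy construction may fail to complete, hence the restarts.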
Digital Occlusal Load Analysis and Evaluation of Oral Health Quality of Life of Mandibular Complete Dentures Retained by an Ultra Suction System Background: Occlusal imbalance is considered a major challenge for complete denture wearers. It can affect functional intraoral stability and reduce patient satisfaction with complete dentures as a definitive treatment plan. Materials and Methods: Twelve mandibular complete dentures were divided into two groups according to the retention protocol: group (C), the control group, received conventional heat-cured acrylic resin mandibular complete dentures; group (U), the test group, received conventional heat-cured acrylic resin mandibular complete dentures retained by an Ultra Suction device. The T-Scan occlusal load analysis system was used to evaluate the existence of balanced occlusion on the complete dentures, followed by administration of the OHIP questionnaire to the patients to evaluate patient satisfaction. The effect of time over the six-month follow-up for each group was studied by paired t-test, and comparisons between group (C) and group (U) were performed using the independent t-test at a significance level of ≤ 0.05. Results: Considering the balanced occlusion of both groups, they showed a significant difference between both sides posteriorly before adjustments (46.5%, 39.9% and 45%, 37.7%) and an insignificant difference between both sides posteriorly after adjustments (38.3%, 41.6% and 49.6%, 49.3%) after one month, except for group (C), which showed a significant difference after six months (38.3%, 41.6% and 44.5%, 39.7%) posteriorly between both sides. The investigated domains showed a significantly higher score for group (C) regarding oral health's effect on quality of life (42.8±11.69). Conclusion: Ultra Suction retained mandibular complete dentures revealed better-balanced occlusion and patient satisfaction than ordinary mandibular complete dentures.
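The two tests named in the Methods (paired t-test for the effect of time within a group; independent t-test for group C vs. group U) can be computed directly from their textbook definitions. A minimal sketch follows; the sample values are made-up illustrations, not the study's measurements.

```python
import math

# Textbook t statistics for the two tests named in the Methods.
# All sample values below are hypothetical illustrations.

def paired_t(x, y):
    """Paired t statistic for two equal-length samples (within-group, over time)."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance of differences
    return mean / math.sqrt(var / n)

def independent_t(x, y):
    """Student's independent two-sample t statistic with pooled variance."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    sy = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp = ((nx - 1) * sx + (ny - 1) * sy) / (nx + ny - 2)  # pooled variance
    return (mx - my) / math.sqrt(sp * (1 / nx + 1 / ny))

baseline  = [10.0, 12.0, 14.0, 16.0]   # hypothetical scores at baseline
follow_up = [11.0, 12.0, 15.0, 18.0]   # hypothetical scores at six months
t_paired = paired_t(baseline, follow_up)

group_c = [42.0, 44.0, 41.0, 45.0]     # hypothetical OHIP totals, group C
group_u = [35.0, 33.0, 36.0, 34.0]     # hypothetical OHIP totals, group U
t_indep = independent_t(group_c, group_u)
```

The resulting t statistic would then be compared against the t distribution with the appropriate degrees of freedom at the 0.05 significance level used in the study.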
Regulatory science is, broadly speaking, the effort to ensure that the products of our advanced technological civilization are developed in harmony with human needs. More specifically, regulatory science can be described as the science of evaluating the safety, efficacy, and quality of these products. An unbiased assessment of these aspects is necessary for the proper regulation of food, drugs, the environment, and agricultural chemicals, as well as the countless new materials available to the public every year. Evaluation does not interfere with product development; indeed, it often hastens the appearance of beneficial products in the public sector. Evaluation criteria should be established through consensus among industry, academia, and government, and only after a thorough scientific discussion grounded in the basic principle of protecting the welfare of society's citizens. Even more important than broad-ranging knowledge is the need to develop new evaluation strategies and methodologies. Numerous problems confronting the world today can surely benefit from the evaluative techniques of regulatory science. Since research in the academic sphere often fails to address many of these issues, I want to reiterate the need for our National Institute to play a more prominent role in coordinating regulatory policy and pursuing these issues, based on my firm belief that such activity is indispensable for human survival.
Dynamics of viscous slugs fall in dry capillaries The dynamics of viscous slug fall in vertical dry capillaries is investigated by extending a published model (perfect wetting case) to account for the film left behind the slug as it falls and for the Laplace pressures at both ends of the slug in the momentum balance. The present investigation provides and uses an advancing contact-angle correlation determined from a published theoretical work. The results are found to be in excellent agreement with published experimental values for falling slugs. The present model does not require any fitting parameter in the perfect wetting case, and it is extended to include the non-perfect wetting case along with the unsteady-state dynamics using the quasi-steady-state approximation.
Impact of COVID-19 type events on the economy and climate under the stochastic DICE model The classical DICE model is a widely accepted integrated assessment model for the joint modeling of economic and climate systems, where all model state variables evolve over time deterministically. We reformulate and solve the DICE model as an optimal control dynamic programming problem with six state variables (related to the carbon concentration, temperature, and economic capital) evolving over time deterministically and affected by two controls (carbon emission mitigation rate and consumption). We then extend the model by adding a discrete stochastic shock variable to model the economy in the stressed and normal regimes as a jump process caused by events such as the COVID-19 pandemic. These shocks reduce the world gross output, leading to a reduction in both the world net output and carbon emission. The extended model is solved under several scenarios as an optimal stochastic control problem, assuming that the shock events occur randomly on average once every 100 years and last for 5 years. The results show that, if the world gross output recovers in full after each event, the impact of the COVID-19 events on the temperature and carbon concentration will be immaterial even in the case of a conservative 10\% drop in the annual gross output over a 5-year period. The impact becomes noticeable, although still extremely small (the long-term temperature drops by $0.1^\circ \mathrm{C}$), in the presence of persistent shocks of a 5\% output drop propagating to the subsequent time periods through the recursively reduced productivity. If the deterministic DICE model policy is applied in the presence of stochastic shocks (i.e. when this policy is suboptimal), then the drop in temperature is larger (approximately $0.25^\circ \mathrm{C}$); that is, the lower economic activities owing to shocks imply that more ambitious mitigation targets are now feasible at lower costs.
Introduction The impact of the COVID-19 pandemic on the global economy is more severe than the impact of the 2008 global financial crisis (see, e.g., International Monetary Fund), and the projection of the COVID-19 impact on the economy and climate is a major concern. In this paper, we study the impact of COVID-19 type events on the economy and climate using the dynamic integrated climate-economy (DICE) model, extended to include stochastic shocks to the economy. The DICE model introduced by Nordhaus 1 is an extremely popular integrated assessment model (IAM) for the joint modeling of economic and climate systems. It has been regularly revised over the last three decades, with the first version dating back to Nordhaus et al. and the most recent revision being DICE-2016 (Nordhaus 2017) 2. The DICE model is one of the three main IAMs (the other two are FUND and PAGE) used by the United States government to determine the social cost of carbon; see Interagency Working Group on Social Cost of Greenhouse Gases. It balances parsimony with realism and is well documented, with all model equations published; in addition, its code is publicly available, which is an exception rather than the rule for IAMs. At the same time, it is important to note that IAMs, and the DICE model in particular, have significant limitations (in the model structure and model parameters), which have been criticized and debated in the literature (see the discussions in Ackerman et al.; Pindyck; Grubb et al.; Weitzman). Despite the criticism, the DICE model has become the iconic reference point for climate-economy modeling, and it is therefore used in our study. The DICE model is a deterministic approach that combines a Ramsey-Cass-Koopmans neoclassical model of economic growth (also known as the Ramsey growth model) with a simple climate model.
It involves six state variables (atmospheric and upper and lower ocean carbon concentrations; atmospheric and lower ocean temperatures; and economic capital) evolving in time deterministically, two control variables (savings and carbon emission reduction rates) to be determined for each time period of the model, and several exogenous processes (e.g., population size and technology level). Uncertainty about the future of the climate and economy is then typically assessed by treating some model parameters as random variables (because we do not know the exact true values of the key parameters) using a Monte Carlo analysis (see Nordhaus; Ackerman et al.). Modeling uncertainty owing to the stochastic nature of the state variables (i.e., owing to the process uncertainty that is present even if we know the model parameters exactly) requires the development and solution of the DICE model as a dynamic model of decision-making under uncertainty, where we calculate the optimal policy response under the assumption of continuing uncertainty throughout the time frame of the model. This is a much more difficult problem that requires more computational and mathematical sophistication, whereas the deterministic DICE model can be solved using an Excel spreadsheet or GAMS (a high-level programming language for mathematical modeling, https://www.gams.com/). Only a few attempts have been made to extend the DICE model to incorporate stochasticity in the underlying state variables and solve it as a recursive dynamic programming problem. For example, Kelly and Kolstad and Leach formulated the DICE model with stochasticity in the temperature-time evolution and solved it as a recursive dynamic programming problem. These studies are seminal contributions to the incorporation of uncertainty in the DICE model (although their numerical solution approach is difficult to extend to higher-dimensional state spaces and finer time discretizations). Cai et al.
formulate DICE as a dynamic programming problem with a stochastic shock on the economy and climate. In addition, Traeger developed a reduced DICE model with a smaller number of state variables, whereas Lontzek et al. studied the impact of climate tipping points. There are other studies that approached optimal strategies addressing climate change through a simple minimization of the damage function to find the optimal timing for an investment, such as Conrad and Luo and Shevchenko; this is quite different from the DICE modeling approach and is not pursued in our paper. In our study, we extend the DICE model by adding a discrete stochastic shock variable, shifting the economy into a stressed regime owing to events such as COVID-19. This is similar to the model formulation in Cai et al. but with different types of jump processes for the shocks. The economy after our shocks is allowed to recover, whereas the jump shock considered in Lontzek et al. and Cai et al. is an irreversible climate tipping point event. One of the scenarios we consider allows for stochastic shocks affecting productivity, which leads to a persistent impact on the economy. This is somewhat similar to the tipping point modeling in Lontzek et al. and Cai et al. However, it is important to note that the shocks considered in our paper reduce both the world net output and emissions through the shock reduction of the gross output, while tipping point models assume a shock on the net output and no shock on the emissions. Thus our shocks lead to a reduction in policy stringency, while tipping point shocks lead to the opposite effect. In addition, our base model is the more recent DICE-2016, whereas Lontzek et al. (2015) and Cai et al. use older DICE versions.
COVID-19 has spread across the globe, with over 75 million confirmed cases and 1.6 million deaths from December 30, 2019, to December 20, 2020, according to the Weekly Epidemiological Update from the World Health Organization on December 22, 2020 3 (with over 4.6 million new cases and 79,000 deaths since the previous weekly update). Large amounts of emergency loans are needed around the world to develop therapeutic agents and vaccines, as well as to implement various interventions to prevent the spread of infections, such as "stay-at-home" policies, and to provide financial support for them. However, in recent years the effects of global climate change have become more serious, and the "Paris Agreement" to limit the global temperature increase 4 was adopted in 2015. This has resulted in more public and private funds being provided for green projects involving renewable energy and energy conservation, as the world works to prevent the effects of global climate change. In a pandemic of an infectious disease such as COVID-19, it is important to consider the economic impacts of both the pandemic and global warming at the same time. One of the unique features of the COVID-19 pandemic is the extreme and widespread disruption to the global economy when compared to other global pandemics, such as the 1918 Influenza Pandemic ("Spanish Flu") or the Hong Kong Flu of 1968 5. On the official government website of the Bureau of Economic Analysis of the United States (see BEA), it was reported on September 30, 2020 that the real gross domestic product (GDP) decreased at an annual rate of 31.4% in the second quarter of 2020. According to a news report on September 2, 2020 in The Japan Times, Japan's April-June GDP was expected to be revised after posting an annualized 27.8% drop on a preliminary basis, the largest contraction in the post-World War II period.
However, at the time of revision of this paper, the reported drop of the real GDP in the United States in 2020 (compared to 2019) was only 3.4% (https://www.measuringworth.com accessed on 9 October 2021). The amounts of the stimulus packages released by governments in many countries to limit the human and economic impacts of the COVID-19 pandemic have been unprecedented. The International Monetary Fund policy tracker 6 presented a summary of the key economic responses around the world (e.g., the Coronavirus Aid, Relief, and Economic Security Act introduced in the United States in March 2020 has been estimated at 2.3 trillion USD (around 11% of the nation's GDP)). Although the impact of COVID-19 on the economy, human capital, and well-being in the long run is unknown, the historical experience of global pandemics and global recessions can provide valuable insight. Arthi and Parman provide an excellent review of the long-run effects on health, labor, and human capital from both historical pandemics and historical recessions. It has been argued that, from a historical perspective, the impact of COVID-19 has been similar to that of the "Spanish Flu" in terms of direct effects on the health and well-being of individuals, and similar to the Great Depression in terms of economic disruption. In this study, we consider the impact of COVID-19 type events on the DICE model outputs. We reformulate the DICE model as an optimal control problem and solve it using dynamic programming involving six state variables evolving over time deterministically and affected by two controls (emission control and savings rates). We then extend the model by adding a discrete stochastic shock variable to the gross world output to shift the economy into a stressed regime owing to events such as COVID-19, assuming that the economy recovers in full after the stressed period.
The extended model is solved as an optimal stochastic control problem under different scenarios for the world gross output drop owing to these shock events. (4 The "Paris Agreement" aims to limit the global temperature increase to 2°C (above the pre-industrial levels) by 2100; United Nations Treaty Collection. 5 The real GDP was not significantly affected during previous global pandemics; see for example GDP data for the United States available from https://www.measuringworth.com. 6 www.imf.org/en/Topics/imf-and-covid19/Policy-Responses-to-COVID-19.) With reference to the Great Depression and the "Spanish Flu", for our scenarios we assume that shocks occur on average once during a 100-year period and last for 5 years. In addition, the world economic output during a stressed regime decreases by 5%-10%. We note that during the Great Depression, the real GDP in the United States dropped for 6 years compared to the pre-depression level in 1929 (averaging a 17% drop per annum over that period) 7; thus, our assumption for the shock magnitude is somewhat less conservative. Under all considered conservative scenarios, the impact of COVID-19 type events on the long-term temperature and carbon concentration appears to be quite small. The results show that if the world's gross output recovers in full after each event, the impact of COVID-19 on the temperature and carbon concentration will be immaterial even in the case of a conservative 10% drop in the annual gross output over a 5-year period. The impact becomes noticeable, although remaining extremely small (i.e., a long-term temperature drop of 0.1°C), if the shocks are persistent 5% drops in productivity, leading to a 5% drop in output propagating to the subsequent time periods.
If the deterministic DICE model policy is applied to the stochastic model (i.e., a suboptimal policy is applied in the case of stochastic shocks), then the drop in temperature will be larger (approximately 0.25°C); that is, the lower economic activities owing to the occurrence of a shock imply that more ambitious mitigation targets are now feasible at lower costs, which is qualitatively consistent with the results presented in Meles et al. The remainder of this paper proceeds as follows. The model is defined in Section 2. Section 3 describes the numerical method used to solve the model. The results are presented in Section 4, and some concluding remarks are given in Section 5. DICE model The DICE model maximizes the utility of consumption (social welfare) over an infinite time horizon with a tradeoff between consumption, investment, and CO$_2$ abatement. Let t = 0, 1, ... be discrete time measured in steps of $\Delta$ years (e.g., t = 2 corresponds to $2\Delta$ years). Using the DICE-2016 model as the foundation 8, the stochastic DICE model can be formulated as the maximization of the expected discounted utility of consumption, subject to the state vector $X_t = (K_t, M_t, T_t, I_t)^{\top}$ evolving over time 9. Here, the superscript $\top$ denotes transposition, $\rho$ is the utility discount rate 10, $\delta_K$ is the annual rate of depreciation of capital, and $(\Phi_M, \Phi_T, \xi_1)$ are parameters for the carbon and temperature transitions from $t$ to $t+1$. Other variables and functions are as follows. $U(c_t, L_t)$ is the utility function, the standard power (CRRA) utility $U(c_t, L_t) = L_t \frac{(c_t/L_t)^{1-\alpha} - 1}{1-\alpha}$, where $\alpha \ge 0$ is the risk-aversion parameter ($\alpha = 1$ corresponds to a logarithmic utility) and $L_t$ is the world population in billions at time $t$. $(\epsilon^K_t, \epsilon^M_t, \epsilon^T_t, \epsilon^I_t)$ are independent and identically distributed random disturbances for $t = 1, 2, \ldots$. The random disturbance $\epsilon^I_t$ corresponds to the transition probability of the shock process $I_t$. The other random disturbances correspond to the uncertainties of the world's net output, carbon concentration, and temperature, and can be modeled using, e.g., a Gaussian distribution.
In this study, all numerical results are presented for the case in which all random disturbances are set to zero, except for $I_t$. $Q_t(K_t, T_t, \mu_t)$ is the world's net output (output net of the damages and abatement) divided between consumption and investment, $E_t(K_t, \mu_t)$ is the carbon emission (in billions of tons per year), and $F_t(M^{AT}_t)$ is the radiative forcing. Here, $\eta(I_t)$ is the impact of COVID-19 type shocks on the Cobb-Douglas production function, $A_t$ is the total productivity factor, and $\Lambda_t(\mu_t, \sigma_t, T^{AT}_t)$ is the damage and abatement cost factor. The damage function as a fraction of the gross output is quadratic in the atmospheric temperature, $\psi_2 (T^{AT}_t)^2$. The model parameter values and the deterministic functions $F^{EX}_t$, $E^{Land}_t$, $A_t$, and $\sigma_t$ are specified in Table 1. Note that $Y(A_t, K_t, L_t)$ is the annual gross world output (output before damage and abatement costs) affected by the shocks through $\eta(I_t)$; thus, the shocks affect both the net output $Q_t$ and the carbon emissions $E_t$. The carbon price (USD per ton) is also calculated within the model, and the typically quoted savings rate output from the DICE model is defined as $(1 - c_t/Q_t)$. Given that $(X_t)_{t \ge 0}$ is a Markov process, the solution to the stochastic DICE model is a standard optimal stochastic control problem for a controlled Markov process (the transition of $X_t$ to $X_{t+1}$ is affected by the controls $\mu_t$ and $c_t$). For a good textbook treatment of such problems in finance, see Bäuerle and Rieder. This type of problem can be solved using dynamic programming performed recursively backward in time for $t = N-1, \ldots, 0$ through the backward induction Bellman equation, from which the optimal strategy can be found. Note that the optimal strategy (the optimal decisions for the carbon emission reduction $\mu_t$ and consumption $c_t$) depends on the information available at time $t$, that is, on the state variable $X_t$.
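Written out explicitly, the backward induction takes the standard Bellman form. The following is a sketch using conventional DICE-style symbols ($\rho$ for the utility discount rate, $\Delta$ for the step size in years, $\mu_t$ and $c_t$ for the controls); these symbol choices are assumptions of this reconstruction rather than a quotation of the paper's equation:

$$
V_t(x) \;=\; \max_{\mu_t,\, c_t}\Big\{\, U(c_t, L_t)\,\Delta \;+\; \frac{1}{(1+\rho)^{\Delta}}\,\mathbb{E}\big[\, V_{t+1}(X_{t+1}) \,\big|\, X_t = x \,\big] \Big\},
\qquad V_N(\cdot) \equiv 0,
$$

with the optimal strategy obtained as the maximizer, $(\mu_t^*(x), c_t^*(x)) = \operatorname{arg\,max}_{\mu_t, c_t}\{\cdot\}$; the expectation is taken over the random disturbances entering $X_{t+1}$, including the next shock state $I_{t+1}$.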
In addition, note that to solve the DICE model under the infinite time horizon numerically, one should use a sufficiently large number of time steps N (it should be confirmed by the sensitivity of the numerical solution that N is large enough and thus that its impact on the solution for the period of interest is immaterial). If the random disturbances and the impact from the random shock $I_t$ are all set to zero, then the above model reduces to the standard deterministic DICE-2016. The dynamic programming solution is still valid in this case and can be used to solve the model. Note that the standard DICE-2016 solution is a brute-force maximization with respect to $(c_0, \ldots, c_{N-1}, \mu_0, \ldots, \mu_{N-1})$ and their constraints simultaneously (a total of 200 parameters plus their constraints when N = 100). In one of the scenarios presented in this paper, to introduce a persistent shock to the gross output $Y_t(A_t, K_t, L_t)$, we consider the total productivity $A_t$ affected by the shock variable $I_t$. Then, $A_t$ becomes an additional state variable, and the new state vector is $X_t = (A_t, K_t, M_t, T_t, I_t)$ with an additional state transition equation in which the impact of the shock $I_t$ on productivity leads to a persistent shock on the annual economic net output and emission through the recursion for $A_t$. Remark 2.1 Utility discounting can be interpreted as the relative weighting given to the well-being of various generations. The choice of an appropriate utility discount rate is a controversial subject in the literature on global warming models. Some economists have argued that a small or zero utility discount rate should be used to weight different generations. This topic is discussed in detail in the Stern Review (Stern, 2007, Chapter 9). In DICE-2016, the utility discount rate is set to 1.5% per year, and the risk-aversion parameter is 1.45.
These parameters are set to generate consumption rates and real returns on capital consistent with observations (see the discussions in Nordhaus). This approach to setting the discount rate in the DICE model is called a "descriptive approach." Under this approach, the real return on capital r is not an exogenous but an endogenous variable determined through the Ramsey equation $r = \rho + \alpha g^*$, where $\rho$ is the utility discount rate, $\alpha$ is the risk-aversion parameter, and $g^*$ is the rate of growth of consumption; see (Nordhaus, 2008, Chapter 3). Thus, we assume that the economy shock events do not affect the utility discount rate, although the real return on capital r implied by the affected consumption and risk aversion can change. Numerical solution The stochastic DICE model can be solved using the Bellman equation, and the optimal decisions $\mu^*_t(X_t)$ and $c^*_t(X_t)$ can be found by applying it backward in time through numerical deterministic dynamic programming (and then, if needed, we can simulate random trajectories of $X_t$ forward in time based on the calculated optimal decisions to assess the uncertainty). The logical steps of this numerical procedure are presented in Algorithm 1. This type of algorithm is often referred to in the literature as value function iteration. Hereafter, $\mathcal{T}(\cdot)$ denotes the transition function for the evolution of the state variables, $X_{t+1} = \mathcal{T}(X_t, \mu_t, c_t, \epsilon_{t+1})$, implied by the state processes, where $\epsilon_{t+1}$ is the vector of random disturbances of the state variables. Algorithm 1 is the standard approach for solving dynamic programming problems numerically. Its performance depends on problem-specific details, such as the type of interpolation across grid points and the type of method used to calculate the required expectations. For example, Cai et al. utilize Chebyshev nodes for grid points and the Chebyshev polynomial approximation for interpolation. Cubic spline interpolation is also a possible choice.
In our numerical experiments, we observed that even the simplest linear interpolation works extremely well for the DICE model (it is not the most efficient but is the simplest and quickest way to implement the algorithm). Algorithm 1 Dynamic programming with deterministic grid 1: Discretize the state variable space to obtain nodes $x_j$, $j = 1, \ldots, J$. This discretization can be different for different time slices, t. The state variable vector may include discrete and continuous variables (in this case, only the continuous variables should be discretized). 4: Interpolate across $V_{t+1}(x_j)$, $j = 1, \ldots, J$ to obtain the approximation $V_{t+1}(x)$ for any x. This step is not necessary when t = T − 1 because the maturity condition can be found for any x without interpolation. 18: end for 19: end for The calculation of the expectation in Algorithm 1 can be accomplished through simulation or quadrature integration methods with respect to the continuous state random variables, and by simple summation with respect to the discrete state random variables. In our study, we consider only one discrete random variable, representing the shock of COVID-19 type events on the world gross output. Thus, the expectation is simply the sum over the states of the shock variable $I_t$. In addition, note that the interpolation on line 4 in Algorithm 1 is required across the grid points of the continuous state variables only. In the case of many stochastic state variables (i.e., if we want to account for stochasticity in all state variables), we can use the least squares Monte Carlo with control randomization proposed in Kharroubi et al., with some special adjustments to handle expected utility problems, introduced in Andréasson and Shevchenko. This goes beyond the purpose of this study and is the subject of our ongoing research project. For numerical calculations, we implemented Algorithm 1 in the statistical computing programming language R 11 and then in Fortran because of the long computational times for some scenarios 12.
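As an illustration of the backward induction in Algorithm 1 (deterministic grid, linear interpolation across grid nodes, and the expectation computed as a sum over a two-state shock), the following is a minimal toy sketch. The one-dimensional capital state, log utility, and all parameter values are simplifications chosen for illustration and are not the DICE-2016 calibration:

```python
import numpy as np

# Toy backward induction (value function iteration) in the spirit of
# Algorithm 1: one continuous state (capital K) on a deterministic grid,
# one two-state shock I, linear interpolation across grid nodes, and the
# expectation taken as a sum over shock states.  All values illustrative.
delta_t = 5.0                               # years per time step
beta = 1.0 / (1.0 + 0.015) ** delta_t       # per-step utility discount factor
keep = (1 - 0.1) ** delta_t                 # capital kept after depreciation
eta = [1.0, 0.95]                           # output factor: normal / stressed
P = np.array([[0.95, 0.05],                 # shock transition probabilities
              [1.00, 0.00]])                # stressed regime reverts to normal

K_grid = np.linspace(1.0, 50.0, 60)         # step 1: discretize the state space
c_frac = np.linspace(0.05, 0.95, 40)        # candidate consumption fractions
N = 40                                      # time steps approximating infinity

V = np.zeros((2, K_grid.size))              # terminal condition V_N = 0
for t in range(N - 1, -1, -1):              # backward in time
    V_new = np.empty_like(V)
    for i in (0, 1):                        # current shock state
        for j, K in enumerate(K_grid):
            Q = eta[i] * K ** 0.3           # toy gross output
            c = c_frac * Q                  # candidate consumption levels
            K_next = np.clip(keep * K + delta_t * (Q - c),
                             K_grid[0], K_grid[-1])
            # expectation over the next-period shock, interpolating V_{t+1}
            EV = sum(P[i, s] * np.interp(K_next, K_grid, V[s]) for s in (0, 1))
            V_new[i, j] = np.max(np.log(c) * delta_t + beta * EV)
    V = V_new
```

The inner maximization here is a crude grid search over consumption fractions; the paper instead uses numerical maximization over the controls, but the interpolation-plus-summation structure of the expectation is the same.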
We used the following settings. Each of the six continuous state variables is discretized using equally spaced points (9 points for K_t and 5 points for each of the other variables), and we use a two-state discrete shock variable I_t; i.e., in total, there are 2 × 9 × 5^5 points in the deterministic grid in Algorithm 1. We verified that increasing the number of discretization points does not have a material impact on the results. In the case of productivity A_t affected by shocks (see the corresponding equation), we discretize A_t using 9 points. To approximate the infinite time horizon, we use N = 80 (i.e., a 400-year time horizon) and then report the results for t = 0, 1, ..., 40 (i.e., up to 200 years). We verified that increasing the time horizon did not materially change the results. The range for the K_t state variable is selected to be time-varying because this variable changes from K_0 = 223 to approximately 8,000 at t = 40. We denote the solution of the standard DICE-2016 model for capital K_t as K̄_t, and the range is then set around this deterministic solution. Here, T̄_max, M̄_min, and M̄_max are the maximum temperature, minimum concentration, and maximum concentration of the standard DICE-2016 solution, respectively. It was verified that increasing the bounds did not cause any material difference. In the case of stochastically affected productivity A_t, the range is set around Ā_t, where Ā_t is the deterministic productivity function used in the standard DICE-2016 (see Table 1). The optimal values of the control variables (μ_t, c_t) are not calculated at t = 0 but are set to the values produced by the standard DICE-2016, because t = 0 corresponds to the year 2015, which is already in the past. For the other time periods, optimal controls are calculated in Algorithm 1 on lines 6 and 13 using numerical maximization with the same bounds on μ_t as in the standard DICE-2016.
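The grid size quoted above (9 points for K_t, 5 points for each of the other five continuous variables, and a two-state shock) can be checked directly:

```python
# Number of nodes in the deterministic grid of Algorithm 1:
# 9 points for K_t, 5 points for each of the other five continuous state
# variables, and 2 states for the discrete shock variable I_t.
points_per_dim = [9, 5, 5, 5, 5, 5]
n_continuous = 1
for n in points_per_dim:
    n_continuous *= n
total_nodes = 2 * n_continuous
print(total_nodes)  # 2 * 9 * 5**5 = 56250
```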
We verified that when the stochasticity is set to zero, our numerical dynamic programming solution leads to virtually the same results as the original deterministic DICE model. We also set I_0 = 0 and I_1 = 1 for all trajectories in Algorithm 1, to reflect the fact that there is no shock in 2015 and a shock at the beginning of 2020. To allow for a random change from the normal economy regime to the stressed regime in each time period owing to COVID-19 like events, one can consider the shock variable I_t with two states such that I_t = 0 corresponds to the normal regime and I_t = 1 corresponds to the stressed regime. To enforce the change from the stressed back to the normal regime, the matrix of transition probabilities can be defined as Pr = [ q, 1 − q ; 1, 0 ]. Here, q is the probability of moving from I_t = 0 to I_{t+1} = 0, and (1 − q) is the probability of moving from I_t = 0 to I_{t+1} = 1. If the annual probability of a COVID-19 type event is p, then the transition probability q over a ∆-year time step can be approximated as q = (1 − p)^∆. Note that the modeling of climate tipping point shocks, such as in Cai et al., can be achieved using the above setup with one important difference: the probability of shock (1 − q) should become a function of temperature T^AT_t, and the second row of the transition probability matrix should be changed from (1, 0) to (0, 1). That is, the shock of the tipping point event is irreversible, whereas in the case of COVID-19 type shocks, the stressed regime is forced to be followed by the normal regime. Scenario B, calculated and discussed in the next section, assumes persistent shocks somewhat similar to tipping point modeling; note, however, that tipping point modeling assumes shocks to the net output only and no shocks to the emissions.
Results
To study the impact of the COVID-19 type events on the world economy and climate under the DICE model, we calculated the following four scenarios.
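A minimal sketch of the two-state regime transition matrix and of the approximation q = (1 − p)^∆, here with ∆ = 5 years and p = 0.01 (the values used in the scenarios below):

```python
import numpy as np

# Two-state shock regime: I_t = 0 (normal), I_t = 1 (stressed). The stressed
# regime is forced back to normal, hence the second row (1, 0). With annual
# event probability p and a Delta-year time step, q = (1 - p)**Delta.
def transition_matrix(p: float, delta_years: int) -> np.ndarray:
    q = (1.0 - p) ** delta_years
    return np.array([[q, 1.0 - q],
                     [1.0, 0.0]])

Pr = transition_matrix(p=0.01, delta_years=5)
print(np.round(Pr, 4))
```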
We set the recovery duration, the decrease in gross world output, and the frequency of such events with reference to the Great Depression for the economic impact and to the "Spanish Flu" for the frequency of the events.
Scenario A1) Random shocks reduce the gross world output by 5%, and it takes 5 years to recover in full. That is, in the equation for gross output, we set the shock magnitude to 0.05 if I_t > 0, and to zero otherwise. These events occur on average once in a 100-year period (i.e., the annual event probability is p = 0.01).
Scenario A2) Random shocks reduce the gross world output by 10%, and it takes 5 years to recover in full. That is, in the equation for gross output, we set the shock magnitude to 0.1 if I_t > 0, and to zero otherwise. These events occur on average once in a 100-year period (i.e., the annual event probability is p = 0.01).
Scenario B) Random shocks reduce productivity A_t by 5%; i.e., in the productivity equation we set the shock magnitude to 0.05 if I_t > 0, and to zero otherwise. We also set the gross-output shock to the same magnitude, so that a persistent drop in gross output starts at time t. This leads to a persistent drop in the net output and emissions.
Scenario C) The same shock parameters as in Scenario B are used (i.e., both shock magnitudes are 0.05 for I_t > 0); however, we assume that the control decisions μ_t and c_t undertaken are the same as in the deterministic DICE model. That is, when simulating trajectories, in Algorithm 1 on line 13, we do not calculate the optimal stochastic control for the stochastic DICE model but use the controls found by the deterministic DICE. This also means that we undertake suboptimal decisions.
Figures 1, 2, and 3 show the DICE outputs under the four scenarios described above. Figure 1 presents numerical results for the carbon emission mitigation rate μ_t, the savings rate (1 − c_t/Q_t), the economic capital K_t, and the net world product Q_t, corresponding to the plot titles MIU, S, K, and Ynet.
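The scenario shocks can be sketched as a multiplicative reduction of gross output. The Cobb-Douglas production form and the input values below are assumptions for illustration only; the text specifies only the shock magnitudes (5% or 10% while I_t > 0):

```python
# Illustrative multiplicative shock on gross output. The Cobb-Douglas form
# and the parameter values are assumptions for this sketch; the text only
# specifies the shock magnitudes (5% or 10% while I_t > 0).
def gross_output(A, K, L, gamma=0.3, shock=0.0):
    return (1.0 - shock) * A * K ** gamma * L ** (1.0 - gamma)

y_normal = gross_output(A=5.0, K=223.0, L=7.4)
y_shocked_a1 = gross_output(A=5.0, K=223.0, L=7.4, shock=0.05)  # Scenario A1
y_shocked_a2 = gross_output(A=5.0, K=223.0, L=7.4, shock=0.10)  # Scenario A2
print(round(y_shocked_a1 / y_normal, 2))  # 0.95
```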
Figure 2 plots the results for the temperatures T^AT_t and T^LO_t, the carbon price P_t, and the fraction of output lost owing to a temperature increase (the damage term quadratic in T^AT_t; see the damage equation), corresponding to the plot titles TATM, TOCEAN, Cprice, and DamFct. All plots show the results under the standard DICE model (i.e., the case of no random shocks) using a dashed line. For the stochastic DICE model, to show the uncertainty/range of outcomes introduced by the shock process (I_t)_{t≥0}, all plots show the 95% probability intervals (indicated by the gray area in the plots); these intervals are calculated by simulating 1000 random trajectories and computing the 2.5% and 97.5% quantiles over the trajectories at each t = 1, ..., N. These trajectories were simulated forward in time using the optimal controls μ*_t(X_t) and c*_t(X_t) obtained by solving the stochastic DICE model, except for Scenario C, where the controls are taken from the deterministic DICE solution. The results for Scenario A1 show no material change from the deterministic case for all DICE outputs; only the net output Ynet has a small visible gray area owing to the stochastic shocks. The gray area is below the deterministic DICE solution for Ynet, as expected, because the shocks reduce the net output Q_t. The capital state variable K_t also has a very small gray area below the deterministic DICE solution, consistent with the saving rate S being virtually unaffected by the stochastic shocks while the output Ynet is reduced by them. Scenario A2 leads to a more visible (compared to Scenario A1) impact on the economic variables K and Ynet. The gray area for these variables is below the deterministic solution and corresponds to approximately 10% variation over 200 years. This is expected because the shock size in Scenario A2 is 10%, larger than the 5% shock under Scenario A1. The saving rate S is also slightly but visibly affected, with most trajectories below the deterministic case.
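The 95% probability intervals described above are computed pointwise in time from the simulated trajectories; a sketch with synthetic random-walk data standing in for the simulated DICE paths:

```python
import numpy as np

# Pointwise 95% probability band: 2.5% and 97.5% quantiles across 1000
# simulated trajectories at each time step. The trajectories here are
# synthetic random walks standing in for simulated DICE state paths.
rng = np.random.default_rng(0)
n_traj, n_steps = 1000, 41
trajectories = np.cumsum(rng.normal(size=(n_traj, n_steps)), axis=1)
lower = np.quantile(trajectories, 0.025, axis=0)
upper = np.quantile(trajectories, 0.975, axis=0)
# The gray band plotted at each t is the interval [lower[t], upper[t]].
```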
However, the impact of the shocks in this scenario on the climate variables (temperature, carbon concentration, emission control rate) is immaterial. Still, we can note a very small decrease in the emission control rate MIU and, as a result, a small decrease in the carbon price Cprice, as well as a tiny drop in TATM and in the resulting DamFct. This is not surprising either, because the shocks reduce not only the net output but also the carbon emissions. Scenario B clearly leads to larger and material impacts on the economic variables K and Ynet compared to Scenarios A1 and A2, because the shocks on the economic output are persistent (they propagate recursively to all subsequent time periods). Under Scenario B, there is a material impact on the emission control MIU and the carbon price Cprice, a small but visible impact on the temperature TATM, and a small but visible impact on the concentrations MAT and MU. Other variables, such as ML and TOCEAN, are not affected. There is a small impact on the savings rate S, larger than under Scenario A1 but smaller than under Scenario A2. The gray area of the stochastic DICE trajectories is below the deterministic DICE solution for most of the trajectories across all plotted variables. More specifically, the stochastic trajectories of MIU and Cprice are always below those under the deterministic DICE solution between now and about 100 years from now (corresponding to an approximately 25% drop in Cprice in about 100 years); then, for longer time horizons, there is no difference between the trajectories of these variables and their deterministic DICE solutions. This means that if we account for stochastic persistent shocks, then the policy for carbon emission reduction can be less demanding than in the deterministic case. This is because persistent shocks reduce emissions (leading to a reduction in concentration and temperature) more than the simple shocks in Scenarios A1 and A2.
Trajectories for K and Ynet are also below those under the deterministic DICE, which is explained by the persistent shocks on the net output. In the case of the atmospheric temperature TATM and the concentrations MAT and MU, most of the gray area is below the deterministic DICE solution, though a few trajectories rise slightly above the deterministic solution, and only after approximately 200 years. Finally, the results for Scenario C show the case of applying the decisions MIU and S from the deterministic DICE model to the trajectories of the stochastic DICE model. That is, we apply suboptimal decisions that are optimal under the deterministic DICE but suboptimal under the stochastic DICE. Thus, all trajectories under the stochastic DICE for MIU, Cprice, and S are the same as in the deterministic DICE solution. Trajectories for all other variables appear to be consistently below the corresponding deterministic DICE solution. Under this scenario, we see an even more material impact on temperature; all trajectories for TATM in the gray area are below the deterministic DICE solution. On average, TATM is approximately 0.25°C below the deterministic DICE solution when the temperature peaks after about 150 years, with the gray area corresponding to a range of about 0.15°C. The same holds for the carbon concentration MAT, where all trajectories are below the deterministic solution (on average, a 10% drop in concentration at its peak in about 100 years). In other words, this scenario shows that more ambitious mitigation targets are now feasible at lower costs, or that mitigation targets will be achieved faster if the policy is unchanged (i.e., not adapted to the environment with stochastic shocks). This is qualitatively the same as the results of the analysis conducted in Meles et al. Again, this outcome is somewhat expected, because the persistent shocks reduce not only the net output Ynet but also the carbon emissions.
Conclusion
In this paper, we studied the impact of COVID-19 type events on the carbon concentration, temperature, economic capital, and other outputs of the DICE model extended to include corresponding stochastic shocks on the world gross annual output. We solved the extended model under different scenarios as an optimal stochastic control problem, assuming that shock events occur randomly, on average once during a 100-year period. The results show that if the world gross output recovers in full after each event, then the impact of the COVID-19 events on the temperature and carbon concentration will be immaterial, even in the case of a conservative 10% decrease in the annual gross output over a 5-year period. The impact becomes noticeable but small (the long-term temperature in the atmosphere drops on average by 0.1°C) if a 5% decrease in the gross output owing to a shock is allowed to propagate over time (i.e., allowed to be a persistent shock). Finally, if the deterministic DICE model policy is still applied in the case of stochastic shocks (i.e., a suboptimal policy in this case), then the drop in temperature will be larger (approximately 0.25°C). That is, the lower economic activity owing to the occurrence of a shock implies that more ambitious mitigation targets are now feasible at lower costs, which is qualitatively consistent with the results presented in Meles et al. The shocks considered in our study reduce both the world net output and emissions through the shock reduction of the gross output, while tipping point modeling studies such as Lontzek et al. and Cai et al. assume a shock on the net output only and no shock on emissions. Thus, our shocks lead to a reduction in policy stringency, while tipping point shocks lead to the opposite effect. In general, the incorporation of uncertainty in integrated climate-economy assessment models, such as the DICE model, is an under-developed research topic.
Typically, the uncertainty is assessed by recalculating the models under perturbed parameters, and state-of-the-art stochastic control methods are not really used. This can be partly explained by the difficulty of implementing dynamic programming algorithms, and probably also by the large number of state variables, which calls for the use of Monte Carlo simulation methods, while until recently Monte Carlo techniques had not been used for optimal control problems that involve controlled processes. A relatively recent development in this area is the least-squares Monte Carlo approach with the control randomization technique developed in Kharroubi et al. However, to solve stochastic control problems maximizing the expected utility (as in the DICE model), some special adjustments are required for this technique, as discussed in Andrasson and Shevchenko. Implementing this approach for the DICE model incorporating stochasticity in all state variables is the subject of our research project in progress. Of course, although the DICE model is a typical reference point for many climate-economy studies, it is important to remember that IAMs, and the DICE model in particular, have significant limitations (in the model structure and model parameters) and have been criticized and debated in the literature.
The composition of lipids in intestinal digesta of young pigs receiving diets containing tallow and tallow fatty acids. Three semipurified diets containing a low level of fat or 10 percent of either beef tallow or beef tallow free fatty acids were fed to young pigs. Jejunal digesta was sampled 1.5, 2.5, 3.5, and 4.5 h after feeding by aspiration through tubes leading from the jejunal lumen to the exterior. The samples were forced through Millipore filters (1 × 10⁻⁷ m pore size) to separate aqueous phase and oil phase lipid. The total and aqueous phase lipid was separated into triglyceride, monoglyceride, and free fatty acid, and the fatty acid composition of each fraction was determined. The concentration of aqueous phase lipid was not influenced by diet, although the concentration of the oil phase lipid was generally increased by the addition of fat to the diets; the increase was greater for the beef tallow free fatty acid diet than for the beef tallow diet. Free fatty acids were the predominant component of the aqueous phase lipid, along with some monoglyceride and traces of triglyceride. The major component of the oil phase lipid was free fatty acid, along with appreciable proportions of triglyceride and monoglyceride. These must have been derived from endogenously secreted lipid in the case of the tallow fatty acid diet. Thus, the lower digestibility of completely hydrolyzed beef tallow compared with conventional beef tallow was not due to an absence of monoglyceride in the intestinal lumen. The proportion of stearic acid in the jejunal digesta was greater than in the dietary lipid, whereas there were lower proportions of palmitic and oleic acids in the jejunal digesta than in the diet, the effect being most pronounced for the tallow free fatty acid diet. The ratio of oleic to palmitic acid in the aqueous phase was less than in the lipid phase, suggesting preferential uptake of oleic acid from the micelle by the intestinal mucosa.
Quantum vibrational analysis and infrared spectra of microhydrated sodium ions using an ab initio potential. We present a full-dimensional potential energy surface and a dipole moment surface (DMS) for the hydrated sodium ion. These surfaces are based on an n-body expansion of both the potential energy and the dipole moment, truncated at the two-body level for the H₂O-Na⁺ interaction and also for the DMS. The water-water interaction is truncated at the three-body level. The new full-dimensional two-body H₂O-Na⁺ potential is a fit to roughly 20,000 coupled-cluster CCSD(T)/aug-cc-pVTZ energies. Properties of this two-body potential and of the potentials describing (H₂O)_n Na⁺ clusters, with n up to 4, are given. We then report anharmonic, coupled vibrational calculations with the "local-monomer model" to obtain infrared spectra and also 0 K radial distribution functions for these clusters. Some comparisons are made with the recent infrared predissociation spectroscopy experiments of Miller and Lisy.
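The truncation scheme described above (two-body for the water-ion interaction, up to three-body for water-water) can be sketched schematically; the energy functions here are hypothetical placeholders, not the fitted CCSD(T)-based surfaces:

```python
from itertools import combinations

# Schematic many-body expansion for an (H2O)_n Na+ cluster:
# one-body terms, two-body water-Na+ and water-water terms, and
# three-body water-water-water terms. The functions e1, e2_wna, e2_ww,
# and e3_www are hypothetical stand-ins for fitted potential surfaces.
def cluster_energy(waters, na, e1, e2_wna, e2_ww, e3_www):
    E = e1(na) + sum(e1(w) for w in waters)
    E += sum(e2_wna(w, na) for w in waters)                    # H2O-Na+ pairs
    E += sum(e2_ww(a, b) for a, b in combinations(waters, 2))  # water pairs
    E += sum(e3_www(a, b, c) for a, b, c in combinations(waters, 3))
    return E

# Toy usage with constant interaction energies, n = 4 waters:
E = cluster_energy([1, 2, 3, 4], "Na+",
                   e1=lambda m: 1.0,
                   e2_wna=lambda w, na: 0.5,
                   e2_ww=lambda a, b: 0.25,
                   e3_www=lambda a, b, c: 0.1)
print(round(E, 2))  # 5*1.0 + 4*0.5 + 6*0.25 + 4*0.1 = 8.9
```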
Characterization of Compost-Like Outputs from Mechanical Biological Treatment of Municipal Solid Waste
Abstract Throughout the world, most municipal solid waste consists of biodegradable components. The most abundant biological component is cellulose, followed by hemicellulose and lignin. Recycling of these components is important for the carbon cycle. In an attempt to reduce the environmental impacts of biodegradable wastes, mechanical biological treatments (MBTs) are being used as a waste management process in many countries. MBT plants attempt to mechanically separate the biodegradable and nonbiodegradable components. The nonbiodegradable components are then sent for reprocessing or landfilled, whereas the biodegradable components are reduced in biological content through composting or anaerobic digestion, leaving a compost-like output (CLO). The further use of these partially degraded residues is uncertain, and in many cases it is likely that they will be landfilled. The implications of this for the future of landfill management are causing some concern, because there is little evidence that the long-term emissions tail will be reduced. In this study, the CLOs from four different biological treatment processes were characterized for physical contamination through visual inspection and for biological content using a sequential digestion analysis. The results indicate that the composition of the incoming waste, dependent on the way the waste was collected/segregated, was the factor that most influenced biological content, with the length of the treatment process the second most important factor.
Interaction Study of an Amorphous Solid Dispersion of Cyclosporin A in Poly-Alpha-Cyclodextrin with Model Membranes by 1H-, 2H-, 31P-NMR and Electron Spin Resonance
The properties of an amorphous solid dispersion (ASD) of cyclosporine A, prepared with an alpha-cyclodextrin copolymer (POLYA) and cyclosporine A (CYSP), were investigated by ¹H-NMR in solution, and its membrane interactions were studied by ¹H-NMR in small unilamellar vesicles and by ³¹P- and ²H-NMR in phospholipidic dispersions of DMPC (dimyristoylphosphatidylcholine), in comparison with those of POLYA and CYSP alone. ¹H-NMR chemical shift variations showed that CYSP really interacts with POLYA, with possible adduct formation, dispersion in the solid matrix of the POLYA, and also complex formation. A coarse approach to the latter mechanism was tested using the continuous variations method, indicating an apparent 1:1 stoichiometry. Calculations gave an apparent association constant of log Ka = 4.5. A study of the interactions with phospholipidic dispersions of DMPC showed that only limited interactions occurred at the polar head group level (³¹P). Conversely, by comparison with the expected chain rigidification induced by CYSP, POLYA induced an increase in the fluidity of the layer, while ASD formation led to these effects being almost overcome at 298 K. At higher temperature, while the effect of CYSP seems to vanish, a resulting global increase in chain fluidity was found in the presence of ASD.
Introduction
One challenging task in the manufacturing process is to improve the bioavailability of poorly water-soluble drugs. In recent decades, numerous potentially bioactive active pharmaceutical ingredients (APIs) have been found to have only low aqueous solubility. As a result, oral delivery of poorly water-soluble drugs often results in low bioavailability. Poorly water-soluble drugs cannot achieve dissolution and therefore have great difficulty passing through digestive fluid to contact absorbing mucosa and be absorbed.
If the drug molecules' dissolution process is slow, due to inherent physicochemical properties of the molecules or to formulation factors, then dissolution may be the rate-limiting step in absorption and will influence drug bioavailability. This is the case with class II drugs (according to the Biopharmaceutics Classification System (BCS)), for example, cyclosporine A. Cyclosporin A (CYSP), a hydrophobic cyclic peptide, is widely used as an immunosuppressant drug for transplant therapy. For this specific kind of drug, many enabling technologies are available for the formulator to consider, including lipids, cosolvents, surfactants, nanoparticles, cyclodextrin complexes, and amorphous solid dispersions. The suitability of a particular formulation approach depends largely on the physicochemical properties of the active pharmaceutical ingredient (API). Among these methods, the preparation of amorphous solid dispersions (ASD) with a cyclodextrin copolymer (POLYA) is particularly attractive for many poorly water-soluble drug candidates, because these formulations offer many of the advantages of more conventional solid oral dosage forms while also providing faster dissolution rates and higher drug concentrations in the gastrointestinal milieu. However, CYSP's use is limited by its toxicity. Among several mechanistic hypotheses, several studies addressed possible interactions of CYSP with biological membranes. The first ESR studies of CYSP's interactions with model membranes failed to identify any dynamic or structural consequences resulting from the presence of CYSP. By way of contrast, small-angle X-ray diffraction and differential scanning calorimetry (DSC) studies of the effect of CYSP's interactions with model membranes composed of dimyristoylphosphatidylcholine (DMPC) bilayers showed that CYSP affected the fatty acyl chains in the bilayer, especially the part of the chain proximal to the head group.
These results were in good agreement with other, more recent work performed on different phospholipid (dipalmitoylphosphatidylcholine (DPPC)) bilayers using other spectroscopic methods (²H-NMR). The goal of the present paper was to investigate the membrane interactions of this ASD in comparison with POLYA and with previous studies on CYSP. As a first step, the stoichiometry and apparent association constant were estimated; then, its interactions with membranes were investigated using synthetic membranes in combination with ³¹P-, ²H-NMR and ESR methods.
Model Membranes. Multibilayers (MLV): DMPC liposomes for ³¹P experiments were prepared by successive freeze/thaw cycles until a homogeneous milky sample was obtained. The suspensions were degassed under nitrogen gas, then introduced into NMR tubes and sealed. The final lipid concentration was 50 mM, while CYSP/DMPC in the mixed systems was 6% M/M, as described in previous studies. Various W/W proportions of DMPC to POLYA (from 3 to 12) and of POLYA-CYSP complexes (from 3 to 15) were tested. The results presented here used 4/50 complex-to-DMPC and 3/50 POLYA-to-DMPC weight ratios. The same procedures were used to prepare multilayers for ²H-NMR experiments, except that 25% DMPC with perdeuterated chains (DMPC-d54) was used to prepare the liposomes.
Methods
2.3.1. NMR Experiments. ¹H-NMR experiments were recorded at 295 K on a Bruker AVANCE III-400 spectrometer using presaturation of the water resonance and a spectral width of 10 ppm. As preliminary relaxation studies gave T1 values around 0.6 s, a recycling delay of 2.5 s between pulses was used with π/3 pulses (4.8 μs). The chemical shifts were referenced by setting the water resonance at 4.75 ppm. ¹H-NMR attribution was made in reference to natural alpha-cyclodextrin and controlled by standard correlation spectroscopy experiments.
The first recordings of the POLYA/CYSP complex showed chemical shift variations with respect to POLYA, suggesting that a molecular association operating under fast-exchange kinetics conditions was present. Using this very coarse approximation of complex formation, the classical method described by Job was used to extract an apparent macroscopic stoichiometry of the complex, while the SIMPLEX mathematical determination method (EXPREX or MURIEL-X algorithms, generously provided by Bruno Perly, CEA Saclay, France) gave estimations of the apparent association constant. ³¹P-NMR experiments were performed at 162 MHz. Phosphorus spectra were recorded using a dipolar echo sequence (π/2-τ-π-τ) with a τ value of 12 μs, a recycling delay of 2.5 s, and composite proton decoupling. Phosphoric acid (85%) was used as an external reference. ²H-NMR experiments were performed at 61 MHz. Deuterium spectra were recorded using a quadrupolar echo sequence (π/2-τ-π/2-τ) with a τ value of 15 μs and a 10 s recycling delay. The free induction decay was shifted in fractions of the dwell time to ensure that the effective time for the Fourier transform corresponded to the top of the echo. The sample temperature was regulated to within 1°C by a BVT-1000 unit. ²H-NMR spectra treatment: in order to extract suitable quadrupolar splitting measurements (Δν_Q), the spectra were de-Paked according to the Seelig procedure. This allowed a fluidity profile to be built and calculation of the carbon-deuterium bond segmental order parameter S_CD using the classical relation S_CD = ⟨3 cos²θ − 1⟩/2, where θ is the average angle between the carbon-deuterium bond and the direction perpendicular to the bilayer normal.
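The continuous-variation (Job) analysis mentioned above can be illustrated with the closed-form 1:1 binding equilibrium; the total concentration (2 mM) and log Ka = 4.5 come from this study, while the rest of the setup is an illustrative sketch, not the EXPREX/MURIEL-X implementation:

```python
import math

# Continuous-variation (Job) analysis for an assumed 1:1 complex.
# Closed-form concentration of the complex HG for H + G <=> HG:
#   [HG] = ((h0 + g0 + 1/Ka) - sqrt((h0 + g0 + 1/Ka)**2 - 4*h0*g0)) / 2
# The total concentration (2 mM) and log Ka = 4.5 are taken from the text;
# everything else is an illustrative sketch.
def complex_conc(h0, g0, Ka):
    s = h0 + g0 + 1.0 / Ka
    return 0.5 * (s - math.sqrt(s * s - 4.0 * h0 * g0))

Ka = 10.0 ** 4.5            # apparent association constant (M^-1)
C = 2e-3                    # fixed total concentration, 2 mM
fracs = [i / 20 for i in range(1, 20)]
hg = [complex_conc(x * C, (1 - x) * C, Ka) for x in fracs]
x_max = fracs[hg.index(max(hg))]
print(x_max)  # maximum at mole fraction 0.5, consistent with 1:1 stoichiometry
```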
This value can be drawn for a given CD bond from a measurement of the quadrupolar splitting Δν_Q (kHz), that is, the frequency separation of the two deuterium resonances on the spectrum, by using the relation Δν_Q = (3/4)(e²qQ/h) S_CD, where (e²qQ/h) is the quadrupolar coupling constant, equal to 170 kHz for aliphatic carbon-deuterium bonds.
ESR Experiments. The DMPC dispersions were prepared as for the ³¹P-NMR experiments. Each 100 μL sample of this suspension (with or without CYSP, POLYA, or ASD) was then labeled with 2 μL of a nitroxide spin-label probe solution (10⁻² M in dimethylsulfoxide); the probe was 5-DOXYL-stearic acid (5NS). After labeling, the sample was transferred by capillary action into a 20 μL Pyrex capillary tube and incubated for 10 minutes. These tubes were placed in a 3 mm diameter quartz holder and inserted into the cavity of a Bruker ESP 380 spectrometer (Karlsruhe, Germany) operating at 9.79 GHz. Complete membrane incorporation of the spin labels was ascertained by the absence from the spectra of highly resolved EPR lines corresponding to freely rotating markers. The spectra were recorded at temperatures below (292 K), around (297 K), and above (308 K) the temperature transition under the following conditions: microwave power 20 mW, modulation frequency 100 kHz, modulation amplitude 2.868 G, and time constant 327 ms. The parameters measured were the hyperfine splitting constants (2T∥ and 2T⊥), allowing for calculation of the order parameter S, with the correction factor a = 1.4 − 0.053 (T∥ − T⊥). 2T∥ is related to the molecular organization surrounding the probe and serves as an order indicator: if 2T∥ increases, then the order increases at this level of the membrane, that is, in the outer hydrophilic moiety of the layer.
Characterization of Amorphous Solid Dispersion (ASD). ASD was prepared by the classical slow evaporation method at a total concentration of 2 mM, with the POLYA/CYSP molar ratio scaled from 1/9 to 9/1 M/M.
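The two order-parameter relations above can be combined in a short sketch. Note that the ²H prefactor below is a convention choice (3/4 for the 90° powder-pattern edges; fully de-Paked 0° spectra carry a factor 3/2), and only the correction factor a of the ESR expression is reproduced in the text:

```python
# Order parameters from the splittings discussed above.
# 2H: |S_CD| = dnu_Q / ((3/4) * (e2qQ/h)), a common powder-pattern convention
#     (an assumption here; de-Paked 0-degree spectra use a 3/2 prefactor).
# ESR: the empirical correction factor a = 1.4 - 0.053*(T_par - T_perp)
#     follows the expression given in the text.
QCC_KHZ = 170.0  # quadrupolar coupling constant e2qQ/h, aliphatic C-D bonds

def s_cd(dnu_q_khz):
    """|S_CD| from a 2H quadrupolar splitting given in kHz."""
    return dnu_q_khz / (0.75 * QCC_KHZ)

def esr_a(t_par, t_perp):
    """Empirical correction factor used with the 5NS hyperfine splittings."""
    return 1.4 - 0.053 * (t_par - t_perp)

# Example: the residual doublets of 6, 10, and 24 kHz reported for Figure 4(b)
print([round(s_cd(d), 3) for d in (6.0, 10.0, 24.0)])  # [0.047, 0.078, 0.188]
```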
The ¹H-NMR spectrum of POLYA (D₂O, 297 K) is presented as the bottom trace of Figure 2(a). As described previously, the method of synthesizing POLYA yields polymers of alpha-cyclodextrin connected by citric acid building blocks, with a mean molecular mass of 240,000 and a polydispersity index of 8. This means that, in addition to the main macromolecular assembly, smaller objects are also present, even if in small amounts. The corresponding ¹H-NMR spectrum thus consists of relatively broad lines (6 Hz) that could be assigned by comparison with natural alpha-cyclodextrin and/or by recording standard basic COSY experiments. In the coarse study of the association between CYSP and POLYA, the POLYA resonances were considered as a whole, while a CYSP molecular mass of 2000 was assumed. From this, different apparent molecular ratios R = CYSP/POLYA were prepared using the slow evaporation method of complexation described classically, with the total concentration kept constant at 2 mM. The result is presented as the top trace of Figure 2. This led us to select a 1/1 preparation for the following experiments using the spray-dried dispersion method. As, on the one hand, POLYA was supposed to enhance the bioavailability of CYSP and, on the other, the interactions of water-insoluble CYSP with membranes had been investigated in previous studies, it was of interest to explore such interactions of POLYA, and especially of the POLYA/CYSP complex itself, with membranes. This study is proposed in the next section.
Interactions with Membranes. Homogeneously prepared systems consisting of synthetic phospholipid dispersions (MLV) offer a suitable tool with which both the structural and the dynamic consequences of drug-membrane interactions can be observed. The results are presented in this section, using ³¹P- and ²H-NMR spectroscopy and ESR spectroscopy on CYSP-, POLYA-, and 1/1 complex (ASD)-containing MLV of DMPC.
The Polar Head Group Level: ³¹P-NMR Experiments.
As shown in the insert in Figure 3, the ³¹P-NMR spectrum of the pure DMPC dispersion (MLV) was typical of an axially symmetric powder pattern, with a chemical shift anisotropy of 58 ppm, typical of DMPC bilayers in their liquid crystalline phase (298 K). The chemical shift difference between the low-field and high-field edges of the ³¹P-NMR spectrum is called the chemical shift anisotropy (CSA, in ppm) and is directly related to the reorientational mobility at the polar head level, where the phosphorus nuclei are located. Hence, a mobile phosphorus group gives a single narrow resonance (a few Hz), as detected in a true solution or with small structures (micelles), while solid-state phosphorus gives extremely broad contributions (greater than 100 ppm). Note that membrane fluidity increases (and CSA decreases) with temperature, with a jump at the transition temperature between the gel phase and the liquid crystal structure (around 297 K for DMPC). Thus, the plot of CSA as a function of temperature provides a good overview of the membrane dynamics at the polar head level. Such plots are presented in Figure 3 for pure DMPC dispersions and for MLV containing CYSP, POLYA, and the 1:1 complex (ASD). As expected, a decrease in CSA (of around 18 ppm) was observed between the low (295 K) and high temperatures (313 K), with a transition-related jump at around 297 K. Such a temperature dependence was also found for CYSP-, POLYA-, and ASD-containing MLV. However, in the case of the CYSP-containing system, the transition temperature was slightly lower (by up to 1 K), while the amplitude was lowered by 10 ppm, in agreement with an interaction with the polar head group, even if of relatively weak importance, possibly related to an enhanced fluidity below the transition temperature.
In addition, the curves built with POLYA and ASD were very similar and close to that constructed with DMPC alone, with the same transition temperature and only a limited reduction in CSA at low temperature, indicating only minor interactions at the polar head level at the concentration used. Moreover, no isotropic contribution was found in the spectrum, precluding any solubilization or detergent effect. However, by using higher POLYA/DMPC or ASD/DMPC weight ratios, R, a broad isotropic component was detected immediately for R = 1/5 or, following some passage of time, when R exceeded 6/50 (see Figure 4(a)). Due to its 600 Hz linewidth, such a structure had to be distinguished from a solubilization, which would give a resolved line only a few tens of Hz wide at the same position; it corresponds rather to membrane destruction into smaller heterogeneous fragments. This point is supported by the line shape of the corresponding ²H-NMR spectrum (Figure 4(b)). Hence, even if a strong isotropic line is detected at the isotropic position (with a line width of 1 kHz), residual doublets (of 6, 10, and 24 kHz) still remain observable, revealing that some structure (membrane fragments, etc.) is present. Nevertheless, this feature cannot be explained at this point. Spin labeling and the ESR method were therefore used to observe the membrane chain sides close to the polar head group. The Acyl Chain Level Close to the Polar Head: ESR 5-NS Experiments. As described in Section 2.3, an estimate of the order parameter can be extracted from ESR spectra (measurements of the hyperfine splittings 2A∥ and 2A⊥), as shown in the inset of Figure 5. This allows the temperature dependence to be observed, as well as the transition temperature, just below the surface, at the carbon-5 level where the spin label is grafted on the stearic acid. The typical trace and transition temperature (297 K) were found for DMPC.
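The order-parameter estimate from the two measured splittings is conventionally computed as below; the single-crystal tensor values in the comment are typical literature figures for doxyl labels, assumed here rather than taken from the source:

```latex
% Effective order parameter of a doxyl spin label (e.g., 5-NS) from the
% outer (2A_par) and inner (2A_perp) hyperfine splittings of the spectrum:
\[ S \;=\; \frac{A_{\parallel} - A_{\perp}}
              {A_{zz} - \tfrac{1}{2}\left(A_{xx} + A_{yy}\right)} \]
% with single-crystal values of roughly A_xx ~ A_yy ~ 6 G and A_zz ~ 32 G
% for doxyl stearates; S -> 1 for a rigid environment, S -> 0 for fast
% isotropic motion, so a drop in S near 297 K marks the phase transition.
```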
This was also roughly the case for CYSP-containing systems; however, even though the transition is respected, one can note that overall order is increased above this temperature while, conversely, fluidity is increased below this point. The result is a smoothing of the transition. When POLYA is present, a transition is no longer clearly observable and a loss of local order is apparent over the whole temperature range, consistent with fluidization at this level. The curves built from ASD-containing systems demonstrate an intermediate situation, that is, a recovery of the transition, even if smoothed, with an intermediate fluidity profile between the POLYA system on the one hand and DMPC or CYSP-containing MLV on the other. At this stage, these different aspects appear consistent with the phosphorus results, even if more markedly observed in the present case. From this, it was of interest to perform further investigations at deeper levels of the layer, that is, the whole acyl chain, which was realized by recording ²H-NMR spectra of chain-perdeuterated DMPC (DMPD) under the same conditions. The Overall Acyl Chain Level: ²H-NMR. Figure 6(A) shows the spectrum of a pure DMPC-d₅₄ (dimyristoylphosphatidylcholine with perdeuterated chains) dispersion. This spectrum is typical of phospholipid bilayers in the liquid crystal phase (temperature of 298 K). Such a spectrum appears as a superimposition of symmetrical doublets, each doublet corresponding to a CD₂ group of the acyl chain; thus, for a given doublet, the quadrupolar splitting (Δν_Q) is directly related to the local chain fluidity (see Section 2.3). This splitting can be used in a first approximation as an order parameter. As the acyl chain fluidity decreases from the terminal methyl group (CD₃) to the methylenic groups close to the polar head of the lipids (the so-called "plateau region," from C-2 to C-8), the resulting spectrum consists of (i) an inner doublet with a quadrupolar splitting of 4 kHz attributed to the CD₃ group, (ii) doublets with increasing quadrupolar splitting assigned to successive CD₂ groups from C14 to C9, and (iii) an external edge doublet, attributed to the deuterium in the C2-C8 plateau region, where a 29 kHz quadrupolar splitting is measured. In the presence of CYSP (Figure 6(B)), where the overall trace looks very similar, one can notice an increase in quadrupolar splitting at the plateau region level (31 kHz) and also all along the different doublets, although this increase becomes smaller and smaller as one moves along the chain, down to the CD₃ doublet, where the difference is almost negligible (4.2 kHz). The situation is quite different when POLYA (R = 1/5) is present in the MLV; here, a homogeneous diminution in quadrupolar splitting is observed for all resonances (e.g., from 4 to 3.6 kHz for the CD₃ doublet and from 29 to 26.6 kHz for the plateau contribution), indicating overall fluidization of the bilayer at 298 K (Figure 6(C)). In addition, the use of a preformed complex in MLV (R = 1/5), while almost restoring the splitting at the plateau level (28 kHz), induced an increase in CD₃ splitting (to 4.4 kHz), as shown in Figure 6(D). These observations are also visible in the fluidity profile shown in Figure 7. The data used to obtain the top traces were also used to build, for all CD groups, histograms of the relative local fluidity variation by plotting, for each resonance in a given system X, the relative splitting variation 100 × (QS_X − QS_DMPC)/QS_DMPC, where QS_X is the quadrupolar splitting of the system X and QS_DMPC that of the corresponding resonance in the DMPC reference MLV (bottom histograms of Figure 7).
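The first-approximation link between a measured splitting and a C-D order parameter invoked above is the standard one; the 167 kHz static coupling constant is the usual literature value for aliphatic C-D bonds, assumed here rather than quoted from the source:

```latex
% Quadrupolar splitting of a CD2/CD3 doublet in a fluid bilayer powder
% spectrum, in terms of the local C-D bond order parameter S_CD:
\[ \Delta\nu_{Q} \;=\; \frac{3}{4}\,\frac{e^{2}qQ}{h}\,\lvert S_{\mathrm{CD}}\rvert \]
% With e^2 q Q / h ~ 167 kHz for aliphatic C-D bonds, the 29 kHz plateau
% splitting corresponds to |S_CD| ~ 0.23, and the 4 kHz CD3 doublet to
% |S_CD| ~ 0.03, reproducing the fluidity gradient along the chain.
```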
Such a plot, while confirming the previous results, also shows that the most significant rigidification induced by CYSP takes place in the middle of the chain, even if it is also present close to the carbonyl group in the plateau region. Similarly, the fluidizing properties of POLYA appear to be present at both ends of the chain, while the presence of the complex almost overcomes the effects of CYSP. Temperature dependence: as mentioned in the previous section, the dynamics of DMPD multilayers are characterized by a phase transition from a gel state to a liquid crystal state at a given temperature. This transition temperature in DMPC-d₅₄ is also 297 K, with a dramatic reduction in quadrupolar splitting (QS) noted around that temperature. This transition temperature was not significantly modified between the different samples used (not shown). However, by increasing the temperature, besides the expected reduction in the QS values (reflecting an increase in fluidity), the fluidity profiles and relative local fluidity modifications appear quite different (e.g., see Figure 7 in the right column at 308 K). With regard to DMPD, the CYSP effects appear nearly negligible, while the fluidizing effect of POLYA was more pronounced and homogeneous. Furthermore, the presence of the complex results in an overall homogeneous rigidification at all chain levels. Discussion. The goal of the present paper was to investigate the interactions of a preformed cyclosporine complex with a copolymer of alpha-cyclodextrin (ASD) and to study its interactions with membranes by comparison with those of CYSP and POLYA alone. Due to the polydispersity of POLYA (polydispersity index of 8), the first step was to select the experimental conditions, that is, the concentration and complex stoichiometry of ASD, to use when in the presence of membranes.
Previous studies evoked different mechanisms of interaction: either a true inclusion of CYSP in the cyclodextrin cavity, as suggested by ¹³C HRMAS NMR spectra, or a solid dispersion in the POLYA matrix, favored by the SDD mode of preparation. The same author also mentioned that such a dissolution was thought to be related to the absence of crystallinity and the improved wettability of CYSP A. Another possible mechanism would be a solubilization of CYSP by an interaction with POLYA without any inclusion, mediated by hydrogen bonds. After recording several preliminary ¹H-NMR spectra of POLYA/CYSP under different W/M ratios and concentrations, and as chemical shift differences were noted in the POLYA resonances, we decided to use complexation study methods on this model. Higuchi and Connors solubility diagrams did not give clear information (rapid steady state, no unambiguous curvature of the line slope), so we decided to use the continuous variations method. After constructing such a Job plot, it in fact proved unrealistic to propose a stoichiometry as well, due to the polydispersion of the POLYA. It is unlikely that the interactions of CYSP A with POLYA, either in oligomeric assemblies or with a 30,000 MW assembly, would be similar. This coarse approach did not lead to an accurate determination of the stoichiometry or affinity constant; however, an estimate was proposed considering the following. (ii) Heterogeneous/randomly substituted cyclodextrin complexation studies have been performed in the past (e.g., poly randomly methylated cyclodextrins, RAMEB) and published. Conversely, natural products such as natural phospholipids, which are always mixtures of various chain lengths and degrees of unsaturation, have been investigated in the presence of cyclodextrins. (iii) The historical Job paper was designed to build such plots by using any observable variable, which may mean fluorescence frequency, absorbance, a DSC or IR band, or, as here, chemical shift variations.
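As a reminder of how the continuous-variations construction locates a stoichiometry, the standard result is sketched below; it is textbook material, not a derivation from the source:

```latex
% Continuous variations (Job) method: at constant total concentration C,
% the two partners are mixed at CYSP mole fraction x, and an observable
% proportional to the complex concentration (here, weighted chemical-shift
% variation) is plotted against x. For an m:n (CYSP:POLYA) complex the
% extremum of the plot falls at
\[ x_{\max} \;=\; \frac{m}{m+n} \]
% so a maximum near x = 0.5 is consistent with an apparent 1:1 complex.
```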
Nevertheless, although the maxima of all traces were close to x = 0.5 (apparent stoichiometry of 1), with calculations giving an apparent association constant of 4.5, it cannot be assumed that inclusion in the cyclodextrin cavity is the exclusive mechanism. Hence, such a mechanism would have given mainly or exclusively chemical shift variations in the H3 and H5 resonances, located well inside the torus structure. Conversely, simple adducts would have modified the external proton resonances. As assumed by Boukhris et al., different mechanisms would in fact be present. However, the macroscopic result led us to use 1/1 preparations of CYSP/POLYA by SDD for the membrane studies. Preliminary ¹H-NMR experiments in small unilamellar vesicles (SUV) of lecithins using classical paramagnetic broadening methods (not shown) had shown that all three species truly interacted with membranes and that these interactions were probably not at the level of the choline groups. These results were not in agreement with older works performed by ESR in large unilamellar vesicles of DMPC, where no significant interaction was found. This discrepancy led us to study these interactions further by using a membrane system more adapted to structural and dynamic studies, that is, MLV of DMPC, in combination with static solid-state NMR techniques. As a control, the experiments described by Stuhne-Sekalec and Stanacev were also replicated. CYSP interactions with membranes had been suspected early on and were investigated from the late 90s onward. According to these studies, ³¹P-NMR in MLV confirmed that the overall interactions of CYSP, POLYA, or ASD with the phosphorus in the head group were weak, except when high concentrations of POLYA were present. Membrane damage was then identified, suggesting a limit to the amount of POLYA that it is reasonable to use (molar ratios should not exceed 6/50).
ESR results also confirmed a limited smoothing and lowering of the transition in the presence of CYSP, in agreement with a superficial interaction with the polar head. The result is an increased fluidity at low temperature and a rigidification above the transition temperature. It is noteworthy that such a feature is not observed in the presence of a preformed complex. At the concentration used, the fluidizing properties of POLYA are not apparent and cannot overcome the CYSP-induced rigidification; a geometric hindrance appears to be the most probable hypothesis. In addition, a competition for CYSP between the membrane and POLYA also has to be considered. Looking at the chain level in the membrane, CYSP was found to increase the order parameter all along the chain (²H-NMR), especially close to C10 at 298 K, but with a limited amplitude in the plateau region. A previous study by Wiedmann et al. used dipalmitoylphosphatidylcholine; a longer chain length would modify the mutual relationships between the chain and CYSP. Similarly, they detected only minor effects at the polar head group level where the phosphorus is located. This does not run counter to the broadening of the chemical shift anisotropy previously observed in the presence of ethanolamine phospholipids, suggesting that the nature of the polar head group would also play a role in the interactions. Conversely, POLYA exhibits increased fluidity all along the chain; indeed, POLYA was designed to be soluble and able to solubilize hydrophobic molecules, and its amphiphilic properties and good solubility in water (1 mg/mL) are all in favor of interactions with membranes. As shown in Figure 7, such an increase in fluidity appears to be sufficient to overcome the CYSP-induced rigidification when the complex is formed. The effects are also present below and especially close to the transition temperature (at 298 K, shown in Figure 7).
When the temperature rises (308 K, see Figure 7), closer to biological conditions, the membrane interactions of CYSP almost completely vanish, while POLYA- and ASD-induced fluidization appears to become more effective. If it is considered that only the OH of the hydrophobic molecule CYSP is appended as a lateral group (on MeBmt-1) to the main ring structure, then the molecule can both be embedded in the layer and form a hydrogen bond close to the carboxylic groups of the chains, in agreement with the very limited interactions at the polar head level. This is also supported by several papers that consider CYSP as being loaded in the membrane interior with the MeBmt-1 amino acid folded over the molecule itself, assuming a globular shape. Any fluidizing reagent (POLYA), temperature jump, or hiding of this hydroxyl via complex formation would minimize CYSP-chain interactions, in accordance with the data recorded at 308 K. Conclusions. Finally, this work shows that POLYA can truly solubilize CYSP; this is probably achieved by forming a complex. The dispersion of hydration water in POMR experiments on the different systems would also probably show the role of wettability in such interactions. In addition, POLYA interacts with membranes, directly by fluidizing effects at the chain level (especially at biological temperatures) and by overcoming the rigidifying effect of CYSP just above the transition temperature of DMPC. Discrepancies with some published studies still remain, such as the precise location of the CYSP, ASD, and POLYA interactions with the membranes. This will require studying different head groups and also chain lengths. These conclusions also have to be validated in biological models (e.g., in red blood cells using ESR methods) and finally in terms of biocompatibility, to identify the mechanism of the membrane damage that occurs at high POLYA or ASD concentrations. These experiments are now in progress.
Investigating verbal and nonverbal indicators of physiological response during second language interaction. Abstract: Second language (L2) researchers have long acknowledged that affective variables (e.g., anxiety, motivation, positive emotions) are essential in understanding L2 learner psychology and behavior, both of which influence communication and have implications for language learning. However, there is little research investigating affective variables during L2 interaction, particularly from a dynamic rather than a static, trait-oriented perspective. Therefore, this study examined 60 L2 English speakers' affective responses in real time during a paired discussion task, using galvanic skin response sensors to capture the speakers' anxiety. Analyses focused on the speakers' speech, their behavioral reactions, and the content of their discussion while experiencing anxiety episodes (high vs. low arousals). Findings revealed that speakers glanced away, blinked, and used self-adaptor gestures (touching face, hair-twisting) significantly more frequently during high arousals than low arousals, whereas head nods were found to occur significantly more often during low arousals. In comparison to low arousals, a larger proportion of high arousals occurred while discussing personal topics. Implications are discussed in terms of the role of affective variables in communication processes.
DNA Barcodes of Arabian Partridge and Philby's Rock Partridge: Implications for Phylogeny and Species Identification. Recently, DNA barcoding based on mitochondrial cytochrome c oxidase subunit I (COI) has gained wide attention because of the simplicity and robustness of these barcodes for species identification, including in birds. The current GenBank records show the COI barcodes of only one species, the chukar partridge (Alectoris chukar), of the Alectoris genus. In this study, we sequenced a 694 bp segment of the COI gene of two species of the same genus, the Arabian partridge (Alectoris melanocephala) and Philby's rock partridge (Alectoris philbyi). We also compared these sequences with earlier published barcodes of the chukar partridge. The pair-wise sequence comparison showed a total of 53 variable sites across all the 9 sequences from the 3 species. Within-species variable sites were found to be 4 (Alectoris chukar), 0 (Alectoris philbyi) and 3 (Alectoris melanocephala). The genetic distances among the 9 individuals varied from 0.000 to 0.056. Phylogenetic analysis using COI barcodes clearly discriminated the 3 species, while Alectoris chukar was found to be more closely related to Alectoris philbyi. Similar differentiation was also observed using 1155 bp mitochondrial control region (CR) sequences, suggesting the efficiency of the COI gene for phylogenetic reconstruction and interspecific identification. This is the first study reporting the barcodes of the Arabian partridge and Philby's rock partridge. Introduction. Partridges are non-migratory birds of dry, open and often hilly terrains and belong to the Alectoris genus of the Phasianidae family. These are rotund birds, with a light brown or grey back, grey breast and buff belly. They have red legs and their face is either white or whitish with a dark gorget.
There are seven species of partridges: the Arabian partridge (Alectoris melanocephala), Philby's partridge (Alectoris philbyi), chukar partridge (Alectoris chukar), Przevalski's partridge (Alectoris magna), rock partridge (Alectoris graeca), barbary partridge (Alectoris barbara), and red-legged partridge (Alectoris rufa). Their representatives inhabit southern Europe, North Africa, Arabia, and Asia from Pakistan to Tibet and western China. Some species, notably the chukar and red-legged partridge, have been introduced to the United States, Canada, New Zealand and Hawaii, while hybrids between the two widely introduced species are also common in some countries, such as Great Britain. DNA barcoding using mitochondrial cytochrome c oxidase subunit I (COI) sequences has enormous potential for discriminating closely related species across diverse phyla in the animal kingdom. 1,2 Using a large dataset of North American birds, it has been concluded that DNA barcoding can be effectively applied across the geographical and taxonomic expanse of bird species. 3 Even a single DNA barcode has been suggested as a rapid tool to discover monophyletic lineages within a metapopulation that might represent undiscovered cryptic species. 4 For evaluating the discriminatory power of a barcode, it would be more appropriate that all members of a genus be examined, rather than a random sample of imprecisely defined close relatives, and that taxa be included from more than one geographic region. 5 In recent years, DNA barcoding has been utilized for species identification of birds from different regions of the world. 3,6-10 However, the barcodes of Saudi Arabian birds have not been firmly established. In the present investigation, we have sequenced a 694 bp region of the COI gene of the Arabian partridge and Philby's rock partridge and compared these sequences with previously published sequences of another species (chukar partridge) of the same genus.
The Arabian partridge is a resident, endemic to the mountain areas of Saudi Arabia, Oman and Yemen, and is found at elevations of 250-2800 m. Philby's partridge is native to the mountain areas (1500-3000 m) of southwestern Arabia and northern Yemen. This species is related to the chukar, red-legged and barbary partridges and is sometimes considered to be a race of the rock partridge. Although COI barcodes of the chukar partridge are available in the GenBank, this is the first study reporting the barcodes of the Arabian partridge and Philby's rock partridge. Materials and Methods. Blood samples were collected from 3 specimens of the Arabian partridge and 2 specimens of Philby's rock partridge. The taxonomic classification of these birds is as follows: Kingdom-Animalia, Phylum-Chordata, Class-Aves, Order-Galliformes, Family-Phasianidae, Genus-Alectoris, Species-Alectoris melanocephala (Arabian partridge) and Alectoris philbyi (Philby's rock partridge). All these 5 birds belonged to the captive breeding program of the National Wildlife Research Center (NWRC) at Taif, Saudi Arabia. These birds were brought to NWRC from the local animal market, so their primary origin is unknown. DNA was extracted from the blood samples using the DNeasy Blood and Tissue Kit (Qiagen GmbH, Germany) according to the manufacturer's instructions. The extracted DNA was finally dissolved in 200 µl of elution buffer and stored at -20 °C. COI sequences were amplified using the primer pair of BirdF1 and BirdR1 3 and FideliTaq PCR master mix (GE Healthcare) in a reaction volume of 30 µl. The PCR conditions included a denaturation step (1 min at 94 °C) followed by six cycles of 1 min at 94 °C, 1.5 min at 45 °C, and 1.5 min at 72 °C, followed in turn by 35 cycles of 1 min at 94 °C, 1.5 min at 55 °C, and 1.5 min at 72 °C, and a final extension for 5 min at 72 °C. The PCR products were electrophoresed on a 1% agarose gel stained with ethidium bromide.
The PCR products were purified using MicroSpin S300 columns (GE Healthcare) before being sequenced using the BigDye Terminator Cycle Sequencing Kit (Applied Biosystems, USA) on a 3130XL genetic analyzer (Applied Biosystems). For each sample, two sets of sequencing reactions were performed using the forward and reverse primers for high accuracy. All the nucleotide sequences obtained in this study have been deposited in the GenBank with the following accession numbers: Arabian partridge specimens 1 to 3 (HQ168027 to HQ168029) and Philby's rock partridge specimens 1 and 2 (HQ168030 and HQ168031). For a comparative evaluation of the barcode sequences of this study against previously published sequence data of other species from the Alectoris genus, we searched the GenBank nucleotide database and found only 6 barcode records of a single species, Alectoris chukar. Of these, we omitted the short sequences of 2 records (606 and 679 bp) and downloaded 4 sequences (Accessions: GQ481313 to GQ481316) with 694 nucleotides. We also compared the COI-based phylogenetic inference with another suitable mitochondrial marker, the control region (CR), which often evolves faster than the rest of the mitochondrial genome and is highly variable in birds. 11 This variability has led to the expanding usage of CR sequences to examine questions ranging from population structure to phylogenetic relationships. We downloaded 9 records (3 for each species) of the mitochondrial control region (CR) of Alectoris melanocephala (GenBank Accessions: AJ222734-AJ222736, all 1155 bp), Alectoris philbyi (AJ005574, AJ222737, AJ222738, all 1153 bp) and Alectoris chukar (FM203234, 1154 bp; FM203235, 1152 bp; FM203236, 1154 bp). The sequences were aligned by ClustalW 12 and the alignment file was saved in appropriate formats (MEGA and PHYLIP). The aligned sequence data were subjected to the unweighted pair group method with arithmetic mean (UPGMA) and maximum likelihood (ML) methods for phylogenetic inference.
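The distance-based step of the analysis (pairwise distances computed from the alignment, then fed to a tree builder such as UPGMA) can be sketched as follows; the three short fragments below are illustrative toy sequences, not the deposited GenBank records:

```python
# Minimal sketch: uncorrected pairwise p-distance from aligned sequences.
# The toy fragments below stand in for the real 692 bp alignment.

def p_distance(a, b):
    """Proportion of differing sites between two equal-length aligned sequences."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    diffs = sum(1 for x, y in zip(a, b) if x != y)
    return diffs / len(a)

aligned = {
    "melanocephala": "ATGCCTACGGTATCCA",
    "philbyi":       "ATGCCTACGGTGTCCA",
    "chukar":        "ATACCTACAGTGTCCA",
}

# Lower triangle of the distance matrix, as a UPGMA builder would consume it.
names = list(aligned)
for i, n1 in enumerate(names):
    for n2 in names[:i]:
        print(f"{n1} vs {n2}: {p_distance(aligned[n1], aligned[n2]):.3f}")
```

In practice this matrix would be computed over the full 692 bp alignment; MEGA and libraries such as Biopython provide equivalent built-in distance calculators and UPGMA constructors.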
The UPGMA analysis was performed using MEGA4 software, and the bootstrap consensus trees inferred from 1000 replicates were taken to represent the evolutionary history of the taxa analyzed. 13,14 The software Tree-Puzzle was used for the ML analysis 15 and the resulting phylogenetic trees were viewed with TreeView software. 16 Results. Although all the 9 sequences of the COI gene segment (5 from this study and 4 from GenBank) were of equal length (694 bp), the alignment showed a disparity of 2 nucleotides at the ends of the sequences. The A. chukar sequences downloaded from the GenBank had 2 extra nucleotides at the 5′ end. Thus, to avoid gaps at both ends, we trimmed 2 nucleotides each from the 5′ ends of the A. chukar sequences and the 3′ ends of our sequences, resulting in a data set of 9 sequences of 692 bp each. The sequence alignment also confirmed that there were no gaps in between. All the noncoding sequences of the mitochondrial CR were subjected to alignment without any prior modification. Phylogenetic analysis using the 692 bp nucleotide segment of COI clearly discriminated the 3 species of the Alectoris genus, while Alectoris chukar was found to be more closely related to Alectoris philbyi than to Alectoris melanocephala (Fig. 3). Use of the CR revealed the same phylogenetic pattern with comparable bootstrap support (Fig. 4). The interspecific differentiation among the taxa appeared to be similar for both analysis protocols, UPGMA and ML (Figs. 3 and 4). The translation of the nucleotide sequences (codon starting at position 2) revealed identical amino acid sequences for all the taxa except one individual (Alectoris chukar, GQ481316), which showed a single variation (Ile to Thr) at position 176 of the total 230 amino acids. This variation in the protein sequence was caused by a transition from T to C at position 528 of the nucleotide sequence, resulting in a codon change from AUC (Ile) to ACC (Thr).
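The mapping from the nucleotide change at position 528 to amino acid 176 can be checked arithmetically; this sketch assumes a 1-based coordinate system with the reading frame starting at nucleotide 2, as stated in the text:

```python
# Sketch: locate which codon contains a given nucleotide position when the
# reading frame starts at (1-based) position 2, and confirm the Ile -> Thr
# change for the AUC -> ACC codon swap described in the text.

FRAME_START = 2  # codon 1 occupies nucleotides 2-4

def codon_number(nt_pos, frame_start=FRAME_START):
    """1-based codon index containing the 1-based nucleotide position."""
    return (nt_pos - frame_start) // 3 + 1

# A two-entry codon table suffices for this check (full tables exist in
# Biopython etc.; hard-coding keeps the sketch dependency-free).
CODON_TABLE = {"AUC": "Ile", "ACC": "Thr"}

print(codon_number(528))   # prints 176, matching the reported position
print(CODON_TABLE["AUC"], "->", CODON_TABLE["ACC"])
```

The design choice here mirrors the paper's convention exactly: because translation starts at nucleotide 2, codon k spans positions 2+3(k-1) to 4+3(k-1), so position 528 falls in the second position of codon 176, consistent with a U-to-C change at the middle codon position.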
The phylogenetic tree based on protein sequences clearly separated this particular individual from the remaining samples (Fig. 5). Discussion. The results of this study clearly demonstrated the discriminatory power of COI barcodes for species identification. The sequences from the two samples of Philby's rock partridge were found to be identical, whereas only 3 and 4 within-species variable sites were observed in the samples of the Arabian partridge and chukar partridge, respectively (Fig. 1). Yang et al 24 amplified the COI barcode from a blood stain and confirmed the identification of the bird involved in a birdstrike incident as a red-rumped swallow. Hebert et al 6 determined COI barcodes for 260 species of North American birds and found that all the species had a different COI barcode, while the differences between closely related species were about 18-fold higher than the differences within species. Kerr et al 7 have determined intraspecific sequence divergences in eastern Palearctic birds using COI barcodes. Yoo et al 8 have utilized COI barcodes for the unambiguous discrimination of 92 species of Korean birds. Several investigators have also used mitochondrial CR sequences for molecular diversity and phylogenetic analyses of bird species. Escalante et al 29 have studied the phylogenetic relationships based on the CR together with two other mitochondrial genes to trace the ancestry of the avian genera Oporornis and Geothlypis. Sequencing of the CR has also supplied valuable information on the ancestry of chukar partridge populations. 30 Huang et al 31 have analyzed the CR from 180 individuals from 13 populations of the partridges and suggested the existence of a geographical structure among Chinese bamboo partridge populations, resulting from the synergistic effect of Pleistocene climatic variations.
Qu et al 32 have observed a close relationship between CR-based phylogeny and phylogeographical structuring, while the levels of genetic diversity in all the avian populations studied were associated with wide ecological distributions and niche variation. Huang et al 33 have utilized the mitochondrial CR for studying population differentiation in the context of the phylogeographic structure of rusty-necklaced partridges. The mitochondrial CR sequences have suggested the existence of a phylogeographical structuring among rock partridge populations, resulting from genetic divergence in southern refugia and subsequent postglacial colonization of northern mountain areas. 34 Barbanera et al 35 have used the CR in combination with cytochrome b to determine the genetic structure of Mediterranean chukar populations to aid management decisions. Moreover, female-mediated gene flow determined by CR haplotyping could be an important consideration for captive-breeding programs for threatened birds. 26 In our data, COI and CR showed comparable levels of variation in comparison, indicating that both the genes evolved at similar speeds, at least for the taxa of our study. This is not surprising, as a previous study looking at the CR of 68 species of birds found that even cytochrome b evolved at an equal speed or faster than the CR in the majority of cases. 17 There was a symmetrical distribution of variable sites in the COI barcode (Fig. 1) as compared to a skewed distribution in the CR, where most of the variable sites occurred at the ends while the middle region (400-800 bp) contained most of the conserved sites (Fig. 2). Phylogenetic analysis using nucleotide sequences of the COI gene separated the three species into three different clades with high bootstrap support (Fig. 3); a similar phylogenetic inference was also observed using the CR (Fig. 4). However, the COI protein tree did not appear to be phylogenetically informative, apparently due to the use of closely related species (Fig. 5).
Nevertheless, protein trees are generally more advantageous for discrimination among distantly related organisms. We used two different methods for creating the phylogenetic trees: a distance method (UPGMA) and a character-based method (ML). Earlier, we noticed that UPGMA could be a better alternative to the maximum parsimony (MP) method for phylogenetic inference using mitochondrial sequences. 18 Moreover, ML and NJ methods have been shown to be nearly equally efficient and generally more efficient than the MP method. 19 Due to their faster evolutionary rates compared to ribosomal RNA genes, the mitochondrial protein-coding genes (such as COI) and the noncoding CR are regarded as powerful markers for genetic diversity analysis at lower categorical levels, including families, genera and species. 20 However, COI barcodes are able to identify entities below the species level that may constitute separate conservation units or even species units. 21 Recently, COI barcodes have been utilized for various purposes. Fleischer et al 22 have conducted DNA analysis of seven museum specimens of the endangered North American ivory-billed woodpecker (Campephilus principalis) and three specimens of the species from Cuba to document their molecular diversity. The sequences of these woodpeckers have been shown to provide an important DNA barcoding resource for the identification of these critically endangered and charismatic woodpeckers. Tavares et al 23 have used 686 bp of the mitochondrial DNA COI barcode for investigating the population structure of water rails at the genetic level. Although partridges have not acquired threatened status at the global level, it is likely that locally vulnerable populations exist within the species' ranges, mainly due to habitat loss and hunting, which may warrant special attention. 36 Barilani et al 37 have reported widespread introgressive hybridisation, suggesting that released captive-bred partridges have reproduced and hybridized in nature, polluting the gene pool of wild rock partridge populations in Greece.
A clear understanding of evolutionary history and phylogeography provides valuable information about the influence of physical barriers and habitat preference on gene flow in birds. 38 High levels of mitochondrial genetic diversity in combination with genetic differentiation among subgroups within regions and between regions highlight the importance of local population conservation to preserve maximal levels of genetic diversity. 26 Genetic data have also suggested that patterns of speciation and population diversification of Przewalski's rock partridge have been affected by the stability of the climate, natural selection, and human intervention. 39 Thus, application of COI barcoding for molecular diversity and phylogenetic analysis of partridges could provide valuable information about species identification, population structure, evolutionary history and molecular conservation. In conclusion, COI barcoding is a powerful tool for species identification and phylogenetic inference. The nucleotide sequence of a partial segment of the COI gene effectively discriminated 3 species of the genus Alectoris. This is the first study reporting the COI barcodes of the Arabian partridge and Philby's rock partridge.
The manner of the Prophet concealed, found and regained ABSTRACT This article continues an earlier investigation in this journal of the synthesis of Sufism and Tantrism in a corpus of texts from Aceh between the 16th and 19th centuries. The revisiting was stimulated by the rapid development of scholarship on the interaction of Sufism and Tantra/yoga in the Islamic oikoumene and by the discovery of new, more detailed texts in Malay on this subject. The most important among them is Bustān al-sālikīn (Garden of wayfarers), found in the MS PNI Jakarta Ml. 110 (ff. 2v–30r). A loosely structured themed anthology, Bustan consists of ten chapters containing treatises that provide a comprehensive idea of the Sufi-Tantric branch of Islamic mysticism in Aceh and, mutatis mutandis, in the Malay-Indonesian world. Although summarising the entire Bustan, the article concentrates on ʿilm al-nisāʾ (the science of women) from the text's early chapters and examines it within the context of the Acehnese Sufi-Tantric corpus and the Sufi-yogic/Tantric works of the Islamic world. The author of Bustan made every effort to legitimise the science of women as a genuinely Islamic doctrine of spiritual wedding with unio mystica as its final goal. Allegedly created and practised by the Prophet Muḥammad himself, ʿilm al-nisāʾ includes the practices of mystical gazing, breathing, touching and coition. The article scrutinises their Sufi and Tantric aspects, revealing the synthesis underpinning them. Against the background of early forms of Islam in the Malay-Indonesian world, this synthesis, by facilitating the mutual translatability of the old and the new religion, was instrumental in the peaceful Islamisation of the region.
B Decays with Large CP Violation and No Final State Phase Ambiguities The B modes, D∓X±, generated by the quark process b → c + ū + d, have a large CP asymmetry within the Cabibbo-Kobayashi-Maskawa (CKM) model. This asymmetry depends only on a ratio of CKM elements and not on final state phases. The CKM model predicts the same asymmetries for the D∓X±, ψK_S, and D+D− modes. We therefore advocate measuring the asymmetries of the modes D∓X± and ψK_S, D+D− separately, because a difference in them represents a violation of the CKM model. Since this note sums over many states and since opposite CP parities flip the sign of the asymmetry, a general prescription for deriving the CP parities of two-body modes is presented utilizing the helicity formalism.
Pantalar Arthrodesis for Post-Traumatic Arthritis and Diabetic Neuroarthropathy of the Ankle and Hindfoot Background: Pantalar arthrodesis is an important salvage option for stabilizing the hindfoot and salvaging the limb following trauma or collapse. This report evaluates the healing rates and complications that occurred in diabetic and post-traumatic patients. Materials and Methods: Twenty patients presenting with post-traumatic arthritis of the ankle-hindfoot (twelve) or with Type II or Type IIIA Charcot arthropathy (eight) were managed with a pantalar fusion. Follow-up averaged 46 months. Patients were evaluated using the Short Form-36 (SF-36), the American Orthopaedic Foot and Ankle Society (AOFAS) Ankle-Hindfoot score, the Short Musculoskeletal Function Assessment (SMFA) and the Visual Analog Pain Scale (VAS). Results: There were no amputations in either group. Casting averaged 14.9 weeks, full weightbearing was achieved at 25.1 weeks and time to union averaged 44.1 weeks. Average age was 56.3 years and BMI averaged 34.2. Fourteen patients (70%) had their surgery performed in multiple stages. Acceptable outcomes were noted for all patients on the SF-36, AOFAS and SMFA scores. VAS scores averaged 2.2. There were ten complications (50%); four patients (two in each group) required additional surgery. Conclusions: Pantalar arthrodesis is a reasonable salvage option for patients with severe post-traumatic arthropathy and neuropathic arthropathy. Patients should be informed of the increased risks as well as the long periods of postoperative immobilization and nonweightbearing. We believe a pantalar arthrodesis can produce acceptable outcomes regardless of the cause of disability, with a staged or single approach, and whether the surgery is performed with plates and screws or an intramedullary device. Level of Evidence: IV, Retrospective Case Series
Comparison of real time image transfer in wireless multimedia sensor networks Wireless Multimedia Sensor Networks (WMSN) allow the development of many applications that address areas such as mobile healthcare, environmental monitoring and traffic monitoring. In these applications, real-time data transmission is very important in terms of system security and usability while the multimedia data is processed and transferred. Namely, data taken from a sensor must be transferred to another sensor or to a base station as soon as possible. The most time-consuming part is the image compression algorithm. In this study, the computation speeds of two basic image compression algorithms, the Discrete Cosine Transform (DCT) and the Embedded Zero-tree Wavelet (EZW), are compared. The computation is performed on the sensors, and the transmission time is not added to the computation time. The algorithms are compared via MATLAB. This study may shed light on the efficiency of algorithms derived from the compared algorithms in real-time image transmission.
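As a rough illustration of the kind of computation being timed, here is a naive 2D DCT-II on an 8×8 block. The study itself used MATLAB; this pure-Python sketch and its constant toy block are only illustrative:

```python
import math

def dct2(block):
    """Naive O(N^4) 2D DCT-II of a square block (list of lists of floats)."""
    n = len(block)
    def alpha(k):
        # orthonormal scaling factors
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A constant 8x8 block concentrates all energy in the DC coefficient out[0][0].
coeffs = dct2([[1.0] * 8 for _ in range(8)])
```

On a resource-limited sensor node, the O(N^4) cost of this direct form is exactly why fast separable DCT implementations, or wavelet alternatives such as EZW, matter for real-time transfer.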
Hospitalization for community-acquired pneumonia in Alberta First Nations Aboriginals compared with non-First Nations Albertans 1Department of Medicine and 2Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton; 3Research and Evidence, Alberta Health and Wellness, Edmonton; 4Department of Public Health Sciences, University of Alberta, Edmonton, Alberta Correspondence: Dr David Johnson, #30 The designation of racial origin in Canadian administrative data is limited. Aboriginals identified by Indian and Northern Affairs Canada are referred to as status or First Nations Aboriginals and were identified in the administrative data. In the present study, the course of hospitalization for community-acquired pneumonia for First Nations Aboriginals and an age- and sex-matched group of non-First Nations Albertans was compared. It was hypothesized that the frequency and outcomes of hospitalizations for community-acquired pneumonia in the First Nations Aboriginal group differed from those of the matched non-First Nations group. METHODS Two administrative health service databases were used and the analysis was done within the protected environment of Alberta Health and Wellness, which is governed by provincial legislative guidelines on the confidentiality of health information. These data capture nearly the entire population and included a unique anonymous personal identifier, allowing linkage between databases. The databases were the Canadian Institute for Health Information's Inpatient Discharge Abstract Database of hospital abstracts for the province of Alberta for 1997/1998 to 1998/1999, and the Alberta Health Insurance Plan Registry File for 1997 to 2000. Community-acquired pneumonia was defined as the most responsible diagnosis, or any of the other 15 diagnosis codes defined to be Type 1 (pre-admit comorbidity) by the International Classification of Diseases, 9th revision, Clinical Modification (ICD-9-CM), found in the hospital abstracts.
Exclusion criteria Exclusion criteria were: not an Alberta resident or not treated in an Alberta acute care facility; adjacent diagnosis related group defining hospitalization for a surgical procedure; and any previous hospitalization within 10 days of the incident pneumonia case. Identifying First Nations Aboriginal status First Nations status (recorded as federal band registry identification with Indian and Northern Affairs Canada) was recorded in the Alberta Health Insurance Registry database, which was then linked to the hospital abstracts. The comparison group was created by matching each First Nations hospitalization by age, sex and year of hospitalization to non-First Nations pneumonia cases. Three controls for each First Nations Aboriginal case were used. Length of stay Length of stay was calculated as the number of days between the discharge and admission dates. Transfers were attributed to the index admission and cumulative hospital length of stay was calculated over all admissions. Alternate level of care referred to patients who remained in hospital but no longer required acute care. Alternate (not acute care) level of care days were subtracted from the length of stay. All active hospital acute care beds in each year per resident region were surveyed and maintained in the provincial databases. Severity Severity of pneumonia illness was defined as any of the following: transfer to hospital from a nursing home, long-term care institution or continuing care institution; transfer from another acute care facility, as defined by readmission to hospital for the diagnosis of pneumonia within 48 h of previous discharge; special care unit admission (defined by each hospital); diagnosis code of respiratory failure or arrest (ICD-9-CM 518.81, 799.1); diagnosis code of hypotension or shock (ICD-9-CM 458.xx, 758.5x); procedure code for ventilation for greater than 96 h (ICD-9-CM 96.72); or procedure code for dialysis (ICD-9-CM 39.95, 54.98).
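The length-of-stay rule described above (days between admission and discharge, less alternate-level-of-care days) can be sketched as follows; the helper name and the example dates are illustrative, not taken from the study:

```python
from datetime import date

def acute_length_of_stay(admit, discharge, alc_days=0):
    """Acute-care length of stay in days: discharge date minus admission date,
    with alternate-level-of-care (non-acute) days subtracted."""
    los = (discharge - admit).days - alc_days
    return max(los, 0)   # guard against data errors producing negative stays

# Hypothetical admission: 11 calendar days in hospital, 3 of them non-acute.
los = acute_length_of_stay(date(1998, 3, 1), date(1998, 3, 12), alc_days=3)
```

For transferred patients, the study attributes each stay to the index admission, so cumulative length of stay would simply sum this value over all linked admissions.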
Defining hospital type by patient volume Hospitals admitting patients with community-acquired pneumonia were categorized into five groups on the basis of the average number of hospital discharges per year over the study period, geographic location and medical school proximity. Rural hospitals were categorized by the number of pneumonia cases (50 and 108 representing the 50th and 75th percentiles, respectively): rural hospitals with fewer than 50 cases/year (77 hospitals) and rural hospitals with 50 to 108 cases/year (27 hospitals). Regional hospitals (five hospitals) were designated for each of the five nonmetropolitan regional health care cities (67 to 251 cases/year); one high-volume rural hospital (221 cases/year) was also added to this group. Metropolitan hospitals (seven hospitals) were located in the metropolitan health regions of Calgary and Edmonton (92 to 813 cases/year). Medical school metropolitan hospitals (two hospitals) were located adjacent to medical schools, one in each of the two metropolitan centres (493 and 610 cases/year). Travelling distances Each case was mapped to the centre of a postal code, and distances 'as the crow flies' between the centroids were calculated. The nearest hospital and actual admitting hospital distances to the resident postal code were obtained for all nonurban residents (not residing in the Calgary or Edmonton health regions). Urban resident distances were zero. Hospital costs Inpatient cost per resource group number was calculated using the provincially approved methodology. Total costs combine the direct and indirect costs associated with an inpatient hospitalization from the time a patient was admitted to the hospital to the time of discharge. All costs were estimated for 1998/1999 and assumed similar for all the study years.
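'As the crow flies' distances between centroids are conventionally computed with the haversine great-circle formula; a sketch, with approximate Edmonton and Calgary coordinates standing in for postal-code centroids (the study does not specify its exact formula):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in kilometres."""
    r = 6371.0                       # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Approximate city-centre coordinates, used here as stand-in centroids.
d = haversine_km(53.5461, -113.4938,   # Edmonton
                 51.0447, -114.0719)   # Calgary
```

Urban residents are assigned a distance of zero, so this calculation applies only to the nonurban cases.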
The quality of the data reporting of costs in Alberta has been highly ranked by the Canadian Institute for Health Information, which avoids methodological issues such as those that arise around the collection of cost data in the United States, in part due to the use of prices rather than costs. Outcomes Outcomes were defined as: hospital discharge rate for the First Nations Aboriginal and matched non-First Nations groups; length of hospital stay per hospital discharge; any rehospitalization from zero to 30 days after the index pneumonia hospital discharge date, excluding rehospitalization for pneumonia from zero to two days after discharge (considered to be a hospital transfer); and median daily hospital cost per hospital discharge. Statistical analysis Age-sex-adjusted provincial hospital discharge rates and their 95% CIs were calculated. The indirect standardization method was used, with the two-year average, age-sex specific rates as the standard rates (18 to 34 years old, 10-year increments thereafter up to 74 years old, and 75 and over, for each sex). To calculate the standard error for the standardized rates, the patient's age at a fixed date was necessary because a patient may have had more than one hospital discharge in a year. Age at the fiscal year end was used for the rates. Hospital discharge rates were compared using the Student's t test. If the Student's t test was significant, individual rate differences were compared using the overall 95% CI after adjusting for multiple comparisons. To analyze the case-control data, the conditional logistic analysis tool LogXact V. 2.0 (Cytel Software Corporation, USA) was used. The data on continuous outcomes (costs and length of stay) were dichotomized using their median.
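The indirect standardization step can be sketched as follows: expected events are computed by applying the standard population's stratum-specific rates to the study group's person-years, and the standardized rate is the standardized morbidity ratio (SMR) times the standard population's crude rate. The stratum labels echo the age bands above, but all numbers are invented for illustration:

```python
def indirect_standardized_rate(observed, person_years, standard_rates,
                               crude_standard_rate):
    """Indirect standardization.
    observed: total events in the study group
    person_years: {stratum: person-years in the study group}
    standard_rates: {stratum: event rate in the standard population}
    crude_standard_rate: crude rate in the standard population"""
    expected = sum(person_years[s] * standard_rates[s] for s in person_years)
    smr = observed / expected            # standardized morbidity ratio
    return smr * crude_standard_rate

# Hypothetical two-stratum example (not the study's data).
rate = indirect_standardized_rate(
    observed=50,
    person_years={"18-34": 1000, "35-44": 500},
    standard_rates={"18-34": 0.01, "35-44": 0.02},
    crude_standard_rate=0.015)
```

Here expected events are 10 + 10 = 20, so the SMR is 2.5 and the standardized rate is 2.5 × 0.015.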
Covariates were age, sex, year of hospitalization, per capita number of acute care beds per resident region in each study year, hospital type as defined above, exported case (ie, service region not equal to recipient region), urban or nonurban resident region, urban or nonurban service region, nursing home transfer to hospital, transfer to another hospital, special/intensive care unit admission, diagnosis code of respiratory failure/arrest, diagnosis code of hypotension/shock, procedure code for ventilation greater than 96 h, procedure code for dialysis, and number of comorbid diagnoses (zero, one, two, or more than two). Significance was set at P<0.05. RESULTS During the two years of the study there were 1230 First Nations Aboriginal hospitalizations for 976 unique patients and 3691 non-First Nations hospitalizations for 3581 unique patients. Compared with the non-First Nations group, First Nations Aboriginals (Table 1) admitted to hospital with community-acquired pneumonia were more likely to reside in a rural region and were more likely to be hospitalized in a small rural hospital. The frequency of comorbidity in the two groups was similar but the case mix was different (ie, more diabetes and less malignancy in the First Nations Aboriginals). First Nations Aboriginals had less severe pneumonia (measured by frequency of admission to a special/intensive care unit, hypotension/shock, respiratory failure/arrest and ventilation for greater than 96 h). A larger proportion of First Nations Aboriginals resided in a rural region and, on average, rural First Nations Aboriginals travelled a greater distance to a hospital compared with other rural residents (Table 2). More rural First Nations Aboriginals had services provided in a health region different from their region of residence and more were transferred to another acute care hospital after the initial hospital admission.
The First Nations Aboriginal hospital discharge rate for community-acquired pneumonia was five times greater than the Alberta provincial average (Table 3). The number of First Nations Aboriginals with alcohol-related problems was twice that of the non-First Nations group (Table 1). The First Nations Aboriginal hospital discharge rate for aspiration pneumonia was also five times greater than that of the Alberta population (Table 3), but the proportion of aspiration pneumonia to all pneumonia was similar in both groups. Length of hospital stay for First Nations Aboriginals was nearly two days shorter than that for the non-First Nations group. The rate of all-cause readmissions for First Nations Aboriginals was approximately 25% greater than that of the non-First Nations group (Table 4). The average hospital cost per day was greater for First Nations Aboriginals (Table 4). Cumulatively, one hospital accounted for 11.5%, two hospitals accounted for 20% and 10 hospitals accounted for 52% of discharges. After accounting for comorbidity and severity of pneumonia (Tables 5-7), in-hospital mortality and hospital length of stay were lower for First Nations Aboriginals compared with the matched non-First Nations group (odds ratio 0.49; 95% CI 0.37 to 0.66 and odds ratio 0.87; 95% CI 0.79 to 0.97, respectively), and the rate of 30-day hospital readmission was higher for First Nations Aboriginals compared with the non-First Nations group (odds ratio 1.42; 95% CI 1.21 to 1.68). The cost per hospital admission for First Nations Aboriginals was 94% of the average cost of the matched non-First Nations group (CDN$4,206). However, their daily average cost was 1.25 times higher (95% CI 1.14 to 1.36) than that of the matched non-First Nations group (Table 8). DISCUSSION The hospital discharge rate for community-acquired pneumonia is approximately five times greater in First Nations Aboriginals compared with the Alberta population.
It is unlikely that the high rate of hospitalizations in First Nations Aboriginals is due to more severe pneumonia because, clinically, these hospitalizations were shorter in length and involved less specialized medical care (such as special/intensive care unit admission and mechanical ventilation greater than 96 h). After adjustment (see regression methods), in-hospital mortality was lower among First Nations Aboriginals. As well, it is unlikely that the high rate of hospitalizations in First Nations Aboriginals was due to a generally unhealthy population because their comorbidity was similar to that of the matched non-First Nations group. Hospital costs during the first days of admission are greater due to investigations, greater severity of illness and the more intensive care required. The greatest morbidity and mortality burden for pneumonia falls disproportionately on the elderly. First Nations Aboriginals hospitalized for community-acquired pneumonia were younger than the Albertan population hospitalized for community-acquired pneumonia. We used age- and sex-matched cases in the modelling to minimize differences in the two populations. The non-First Nations group was more likely to have an urban residence compared with First Nations Aboriginals. Rural First Nations Aboriginals were located farther from their local hospital compared with those in the rural non-First Nations group. Physicians may have been more likely to admit patients who did not reside in close proximity to the admitting hospital. However, a greater distance to the local hospital did not account for the greater frequency of transfer to another acute care hospital (despite less comorbidity and severity of illness). As well, 31% of rural First Nations Aboriginals were admitted to a service region different from their resident region, over twice the frequency of the rural non-First Nations group.
The distribution of hospitals used demonstrates that despite their more nonurban residency, rural First Nations Aboriginals were admitted to urban hospitals more frequently than those in the rural non-First Nations group. Thus, not only did the rate of hospitalization differ in the First Nations Aboriginal population, the location of the hospital with respect to residence was also different. First Nations Aboriginals have many other important differences from non-First Nations Albertans. Alcohol/substance abuse, smoking, diabetes and end-stage chronic renal failure are more common in First Nations Aboriginals. In the present study, we controlled for other comorbidities. In addition, aspiration pneumonia (related to substance abuse) was not proportionally more prevalent and chronic obstructive pulmonary disease (as related to smoking) was not more frequent in First Nations Aboriginals. There are a number of limitations to the present study. First Nations Aboriginals registered with Indian and Northern Affairs Canada may not be reflective of Aboriginals without registration. The Aboriginal population is not considered a homogeneous group because it comprises First Nations people registered with Indian and Northern Affairs Canada as well as nonregistered Aboriginals, Metis and Inuit. The place of residence for First Nations Aboriginals may be more related to band location than actual domicile. Systematic underreporting of out-of-hospital death in First Nations Aboriginals may result in an overestimation of those at risk and an underestimation of pneumonia rates. These limitations underscore why secondary use of administrative data should be interpreted with caution. Population-based administrative database research is highly generalizable but limited in clinical detail.
We attempted to adjust for case severity (hypotension/shock, respiratory arrest/failure, ventilation, special care unit admission, export to another region, transfer to another hospital) and case mix (comorbidity, transfer from nursing home, age, sex) but may not have captured all variations. These variables are likely less reliable than a clinically derived pneumonia index. Individual data about the use of influenza and pneumococcal vaccines were unavailable. In particular, the influenza vaccine has been shown to reduce the hospitalization rate for pneumonia, and its use was lower in Aboriginals. CONCLUSIONS It is unlikely that the high rate of hospitalizations among First Nations Aboriginals is due to more severe pneumonia or more comorbidity. Due to the higher rates of hospitalization and subsequent rehospitalization, total hospital costs for community-acquired pneumonia in First Nations Aboriginals were greater than the costs for the non-First Nations group. Understanding the underlying reasons for the increased frequency of hospitalizations in First Nations Aboriginals and identifying possible remedies require further study. For example, it is important to obtain a better understanding of the reasons physicians choose to admit or not admit patients with pneumonia at a defined degree of severity and to determine whether these choices are systematically different for First Nations Aboriginals. Given the concentration of First Nations Aboriginal hospitalizations in a few hospitals, local efforts may be helpful. For example, a high rate of pneumococcal pneumonia in First Nations Aboriginals admitted to these hospitals may support a local public health program of universal pneumococcal vaccination. With a mean age of 53 years, most of the First Nations patients who require admission to hospital for the treatment of pneumonia would not qualify for pneumococcal vaccine under the program in Alberta, which recommends vaccination for all adults 65 years of age and older.
Studying the incidence of pneumococcal pneumonia in First Nations Aboriginals could substantiate the potential benefit of a lowered age for pneumococcal vaccination.
Liver segmentation using structured sparse representations Segmentation of the liver from volumetric images forms the basis for the surgical planning required for living donor transplantations and tumor resection surgeries. This paper introduces a novel idea of using sparse representations of liver shapes in a learned structured dictionary to produce an accurate preliminary segmentation, which is further evolved using a joint image- and shape-based level-set framework to obtain the final segmented volume. A structured dictionary for liver shapes can be learned from an available training dataset. The proposed approach requires only 3 orthogonal segmented masks as user input, which is less than half the number required by current state-of-the-art interaction-based methods. The increased accuracy of the preliminary segmentation translates into faster convergence of the evolution step and highly accurate final segmentations, with a mean average symmetric surface distance (ASSD) of (1.03±0.3) mm when tested on a challenging dataset containing 62 volumes. Our approach segments a volume in an average of 5 minutes and is approximately 25% faster than comparably performing techniques.
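Computing a sparse representation in a learned dictionary is commonly done with a greedy solver such as orthogonal matching pursuit (OMP). The paper does not specify its solver, so the following is a generic OMP sketch on a random toy dictionary, not the paper's exact algorithm:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse code of signal y
    in dictionary D (columns are unit-norm atoms)."""
    residual, support = y.astype(float), []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients on the whole support (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 40))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x_true = np.zeros(40); x_true[[3, 17]] = [1.5, -2.0]
y = D @ x_true                               # synthetic 2-sparse signal
x_hat = omp(D, y, k=2)
```

In the paper's setting the "signal" would be a vectorized liver shape and the atoms would come from the learned structured dictionary, with the sparse reconstruction serving as the preliminary segmentation.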
HSA Special Issue: Housing in Hard Times: Marginality, Inequality and Class Abstract This paper reasserts the relationship between class and housing through a sociological exploration of working-class place attachment, against the backdrop of a recession and government disinvestment in social housing. These are hard times for housing, and harder still if you are working class. Interest in working-class lives within sociological research has declined; meanwhile, place attachment is deemed a middle-class proclivity of elective belonging: a source of place-based identity in response to ontological insecurity. I draw on an ethnographic exploration of Partick, Glasgow to demonstrate how working-class residents express strong elective belonging in financially and ontologically insecure times yet, paradoxically, their ability to stay physically fixed to place is weakened. I argue that working-class place attachment is broadly characterized by strong elective belonging and poor elective fixity: choice and control over one's ability to stay fixed within one's neighbourhood.
Spin thermopower and thermoconductance in a ferromagnetic graphene nanoribbon The spin thermoelectric properties of a zigzag-edged ferromagnetic (FM) graphene nanoribbon are studied theoretically by using the non-equilibrium Green's function method combined with the Landauer–Büttiker formula. By applying a temperature gradient along the ribbon, under closed boundary conditions, a spin voltage V_s arises inside the terminal as the response to the temperature difference ΔT between the two terminals. Meanwhile, a heat current Q flows from the hot terminal to the cold terminal. The spin thermopower S = V_s/ΔT and thermoconductance κ = Q/ΔT are obtained. When there is no magnetic field, the S versus E_R curves show peaks and valleys as a result of band-selective transmission and Klein tunneling, with E_R being the on-site energy of the right terminal. The results are in agreement with the semi-classical Mott relation. When |E_R| < M (M is the FM exchange split energy), κ is infinitesimal because tunneling is prohibited by the band-selective rule. When |E_R| > M, the quantized value κ₀ = π²k_B²T/(3h) appears. In the quantum Hall regime, because Klein tunneling is suppressed, the S peaks are eliminated and the quantized value of κ is much clearer. We also investigate how the thermoelectric properties are affected by temperature, FM exchange split energy and Anderson disorder. The results indicate that S and κ are sensitive to disorder: S is suppressed even for small disorder strengths, while κ is enhanced for small disorder strengths and shows quantized values for moderate disorder strengths.
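The quantized thermal conductance κ₀ = π²k_B²T/(3h) mentioned above can be evaluated numerically; a small sketch using CODATA constants (the chosen temperature of 300 K is illustrative, not a value from the paper):

```python
import math

# CODATA 2018 exact values
k_B = 1.380649e-23     # Boltzmann constant, J/K
h = 6.62607015e-34     # Planck constant, J*s

def kappa_quantum(T):
    """Single-channel quantized thermal conductance, in W/K."""
    return math.pi ** 2 * k_B ** 2 * T / (3 * h)

kappa0 = kappa_quantum(300.0)   # on the order of 1e-10 W/K at room temperature
```

Because κ₀ is linear in T, the quantized plateau scales directly with temperature, which is why the abstract reports it with T appearing explicitly in the formula.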
Yeasts are able to inhibit growth of disease-associated fungi CURRENT STATUS: POSTED Background Fungal sepsis is often caused by non-albicans Candida or other species. These disease-associated species have strong virulence and often show resistance to the commonly used antifungal treatments. Therefore, finding new inhibitory agents is increasingly urgent. Results Our screening revealed that although the pathogenic fungi were much more tolerant to yeast-produced bioactive agents than the non-disease-associated yeasts, growth of Kodamaea ohmeri and Candida tropicalis could be inhibited by Metschnikowia andauensis, while Cryptococcus albidus could be controlled by Pichia anomala and Candida tropicalis. The size of the inhibitory zone formed by yeasts depended on media, pH and temperature. However, although extensive studies were carried out, we failed to find an inhibitory yeast against Pichia kudriavzevii, suggesting that it has high natural resistance. Conclusions Certain yeast species can contribute to future solutions of problems caused by fungal resistance and can be good candidates for finding new bioactive agents which inhibit growth of disease-associated fungi.
Background Fungaemia is associated with substantial morbidity and mortality in immuno-compromised persons. Studies have demonstrated that fungal sepsis can quite often be caused by non-albicans Candida species. Pichia kudriavzevii (the teleomorph of Candida krusei) was isolated from neonates and hospitalized patients. It is thought to be the fifth most common cause of candidemia. Kodamaea ohmeri (the teleomorph of Candida guilliermondii) cells were isolated in several cases from infants and neonates, or from wound lesions and blood. Candida tropicalis is one of the most common colonizers in tropical countries. Its infections involve gastrointestinal invasion or arthritis, while Cryptococcus albidus has been isolated from a transplant recipient and from lesions. Successful infection by the species mentioned above may be connected with their dimorphism (ability to switch morphology), polymorphisms of their virulence-related genes and possibly with their resistance to commonly used antifungal agents. Because of these problems, we wanted to investigate whether cell division of the disease-associated species mentioned above can be inhibited by bioactive agents produced by yeasts. Well-known antagonistic species and species not previously studied for biological control were equally tested. Our screening revealed the species that were able to inhibit cell division of infectious fungi and shed light on the fact that the size of the inhibitory zones produced by the yeasts strongly depended on media, pH and temperature.
Our data suggested that Pichia kudriavzevii must have strong inherited resistance to yeast-produced antifungal agents. As Table 1 shows, growth of Kodamaea ohmeri (Fig. 1a) and Candida tropicalis could be inhibited by M. andauensis cells, while Cryptococcus albidus was controlled by P. anomala and C. tropicalis. The other test species were not able to form inhibitory zones on the lawns of the disease-associated species; in turn, they were effective on the non-disease-associated yeast lawns, which were used as controls (Table 1). Among the non-disease-related species, the Saccharomycopsis crataegensis and Wickerhamomyces orientalis cells were especially sensitive, because almost all test strains were able to inhibit their growth (Table 1). Interestingly, in some cases growth stimulation around the lawn (indicated with S in Table 1, Fig. 1b) or co-occurrence of inhibitory and stimulation zones could also be observed (indicated with I-S in Table 1, Fig. 1c). Pichia kudriavzevii was highly resistant Our screening suggested that Pichia kudriavzevii has strong resistance against yeasts (Table 1). To test this, further strains belonging to different species and originating from different regions of the world were investigated on Pichia kudriavzevii lawns. Our data confirmed the strong resistance of Pichia kudriavzevii (Table 2): altogether, 50 strains belonging to 35 species were not able to inhibit its growth on complete and minimal media (Table 2). In contrast, Saccharomycopsis crataegensis cells (used as a control) could be inhibited by several yeast species (Table 2). Size of inhibitory zone can strongly depend on media, pH and temperature Our earlier data suggested that medium and culture conditions can have a strong impact on biocontrol activity (see Saccharomycopsis crataegensis, Table 2).
Thus, we repeated our experiments with one of the disease-associated species (Cryptococcus albidus), applying minimal (EMMA) and complete (YPA) media, different pH values and temperatures, and further test strains. Our data confirmed that culture conditions can strongly influence the antagonistic effect of the test strains (Table 3). Consequently, modifying the culture conditions could lead to finding further antagonistic species, such as Candida insectorum against Cryptococcus albidus (Table 3). Discussion Non-albicans Candida and other species, including Pichia kudriavzevii, Kodamaea ohmeri, Candida tropicalis and Cryptococcus albidus, have been isolated more and more frequently from hospitalized patients. These species seem to be very virulent and often show resistance to commonly used antifungal treatments. Thus, the consequences of these fungal infections can be very serious, especially in children, neonates and immunocompromised patients. Accordingly, finding new inhibitory agents is increasingly urgent. In order to identify yeast species with inhibitory effects against disease-associated fungi, screening of yeasts on Pichia kudriavzevii, Kodamaea ohmeri, Candida tropicalis and Cryptococcus albidus lawns was carried out. Our data showed that growth of Kodamaea ohmeri and Candida tropicalis could be inhibited by Metschnikowia andauensis, while Cryptococcus albidus could be controlled by Pichia anomala and Candida tropicalis (Table 1, Fig. 1a). This means that the bioactive agents of these inhibitory test strains are well worth examining, and yeasts can be attractive possibilities in the future solution of fungal resistance problems. Although certain enzymes and proteins produced by these yeasts are partly known, we do not know exactly which inhibitory agent was effective against the disease-associated strains mentioned above. To identify them precisely, further studies are required. 
Our tests also revealed that pathogenic fungi are much more tolerant to bioactive agents than non-disease-associated yeasts such as Saccharomycopsis crataegensis and Wickerhamomyces orientalis (Table 1). The antagonistic effects were often dependent on media, pH and temperature (Table 3). In contrast, application of minimal and complex media and 50 different test strains (belonging to 35 species) did not lead to success in the case of Pichia kudriavzevii, because we failed to find an inhibitory yeast against it (Table 2). The causes of its high resistance are not known and require further studies. We suppose that it can be an inherited species-specific feature of Pichia kudriavzevii, because our strains were isolated from nature and had not previously encountered antifungal medicaments. Its high tolerance is in good agreement with the multidrug resistance of clinical isolates. Our experiments also shed light on the complexity of the action of bioactive agents, since growth stimulation was noticed in certain lawns (Fig. 1b) (Tables 1, 3), similarly to previous experiences. The co-appearance of inhibitory and stimulation zones was more interesting and unexpected (Fig. 1c). The latter phenomenon suggests a sophisticated mechanism of action and can indicate that the effect of the bioactive agent produced by M. andauensis might be concentration dependent. Conclusion Taken together, this study demonstrates that yeasts can be good candidates for finding new bioactive agents which can inhibit the growth of disease-associated fungi. These bioactive agents can contribute to future solutions of fungal resistance problems. Taxonomic position PCR and sequencing methods were used for identification of the strains. The taxonomic position of the yeast species was identified by analysis of the D1/D2 domain of 26S rDNA (Table 2). 
Spot assay for growth inhibition Cells of an overnight culture (YPL, incubated at 28 °C) were harvested and a cell suspension was prepared in sterile water (final cell density OD595 = 1). EMMA minimal and YPA complete media were flooded with 1 mL of the cell suspension. After the cell suspension had dried in a sterile box (lawn), yeast strains to be tested for antagonistic capacity (test strains) were streaked or dropped (10 µL of cell suspension, OD595 = 1) onto the surface of the agar plates and incubated at the indicated temperature. The appearance of inhibitory zones was investigated after 3-10 days. The results come from three separate experiments. Ethics approval and consent to participate Not applicable. Consent for publication Not applicable. Availability of data and materials The data of this study are included in this published article. Competing interests The authors declare no conflict of interest.
COOJA testbed for assessment of broadcast mechanism efficiency in clustered Wireless Sensor Networks A Wireless Sensor Network (WSN), by definition, is a self-configuring network consisting of small wireless nodes. Sensor nodes are essentially small computers with extremely basic functionality, deployed to sense the physical world. The processing unit of each node has limited computational power, memory and battery capacity. WSNs are applied in a wide range of scenarios; some of the application areas are environmental monitoring, agriculture, health and security.
Ultrastructural changes in granulosa cells and plasma steroid levels after administration of luteinizing hormone-releasing hormone in the Western painted turtle, Chrysemys picta. In this study we investigated the effects of treatment with luteinizing hormone-releasing hormone (LHRH) on the morphology and steroid release of ovarian tissues in the Western painted turtle (Chrysemys picta). In Experiment I, four adult female turtles were injected with synthetic mammalian LHRH (i.p., 500 pg/g body weight) and four with saline 2-3 weeks prior to ovulation. Compared with controls, granulosa cells from LHRH-treated turtles, in both preovulatory follicles (16-20 mm in diameter) and small follicles (0.5-1.00 mm in diameter), showed increased RER, free ribosomes and mitochondria with swollen cristae. An increase in the amount of cytoskeletal material (microfilaments) was observed in granulosa cells of the experimental turtles compared to the controls. Cytoplasmic extensions of the oocyte and granulosa cells were longer in the small follicles of treated animals, accounting for the observed increase in the thickness of the zona pellucida (ZP) over the controls. In Experiment II, administration of LHRH (i.p.) to 10 turtles during the same period triggered a substantial increase in plasma progesterone and estradiol-17beta levels over the 10 saline-injected controls. This supports the idea that in this species, as in mammals, steroidogenic activity in the ovarian follicles is under the control of the hypothalamic-pituitary axis. The ultrastructure and hormonal levels of the experimental animals were typical of untreated turtles just prior to ovulation. In this species, the development of follicles and steroidogenesis can be stimulated prematurely by a releasing hormone of nonreptilian origin.
Pain, hand function, activity performance and apprehensiveness in patients with surgically treated distal radius fractures. Distal radius fracture (DRF) is a common injury, affecting both function and activity performance. Postoperative rehabilitation is an essential part of the treatment of a surgically treated DRF. The study aims were to assess pain, hand function, activity performance and apprehensiveness, and their associations, during the first three months after a surgically treated DRF. Eighty-eight patients with a DRF were assessed for pain, hand function, activity performance and apprehensiveness three days and two, six and 12 weeks after surgery. The results indicated that pain, range of motion (ROM), grip strength, apprehensiveness and activity performance (PRWE) improved significantly between follow-ups (p<.001-.01). Apprehensiveness correlated moderately with activity performance at all visits (0.40-0.47, p<.01), which implies a correlation between the variables, but the regression model showed that the differences in the PRWE at twelve weeks cannot be explained by the differences in apprehensiveness or range of motion at cast removal. At 12 weeks, the study participants had regained almost 70% of their grip strength and 74-96% of the ROM of the uninjured hand. The study shows that, during the study period, the participants improved in pain, hand function and activity performance, and indicates that a simple question on apprehensiveness in terms of using the injured hand in daily life could be an important factor in distal radius fracture rehabilitation.
Apple Peel Polyphenols and Their Beneficial Actions on Oxidative Stress and Inflammation Since gastrointestinal mucosa is constantly exposed to reactive oxygen species from various sources, the presence of antioxidants may contribute to the body's natural defenses against inflammatory diseases. Hypothesis To define the polyphenols extracted from dried apple peels (DAPP) and determine their antioxidant and anti-inflammatory potential in the intestine. Caco-2/15 cells were used to study the role of DAPP preventive actions against oxidative stress (OxS) and inflammation induced by iron-ascorbate (Fe/Asc) and lipopolysaccharide (LPS), respectively. Results The combination of HPLC with fluorescence detection, HPLC-ESI-MS TOF and UPLC-ESI-MS/MS QQQ allowed us to characterize the phenolic compounds present in the DAPP (phenolic acids, flavonol glycosides, flavan-3-ols, procyanidins). The addition of Fe/Asc to Caco-2/15 cells induced OxS as demonstrated by the rise in malondialdehyde, depletion of n-3 polyunsaturated fatty acids, and alterations in the activity of endogenous antioxidants (SOD, GPx, G-Red). However, preincubation with DAPP prevented Fe/Asc-mediated lipid peroxidation and counteracted LPS-mediated inflammation as evidenced by the down-regulation of cytokines (TNF-α and IL-6) and prostaglandin E2. The mechanisms of action triggered by DAPP also included a down-regulation of cyclooxygenase-2 and nuclear factor-κB, respectively. These actions were accompanied by the induction of Nrf2 (orchestrating cellular antioxidant defenses and maintaining redox homeostasis) and PGC-1α (the master controller of mitochondrial biogenesis). Conclusion Our findings provide evidence of the capacity of DAPP to reduce OxS and inflammation, two pivotal processes involved in inflammatory bowel diseases. 
Introduction Gastrointestinal mucosa is constantly exposed to luminal oxidants from ingested nutrients, such as alcohol and cholesterol oxides; key among these is the simultaneous consumption of iron salts and ascorbic acid, which can cause oxidative damage to biomolecules. Moreover, local microbes or infections, ischemia/reperfusion, gastric acid production and nonsteroidal anti-inflammatory drugs may promote the formation of reactive radicals. Additionally, the intestinal mucosa is subject to prolonged oxidative stress (OxS) from reactive oxygen species (ROS) generated during aerobic metabolism. The influx of neutrophils and monocytes associated with inflammation can further generate ROS via respiratory burst enzymes as well as those involved in prostaglandin and leukotriene metabolism. Even if the etiology of inflammatory bowel diseases (IBD) has yet to be fully elucidated, a close relationship has been noted between ROS and the mucosal inflammatory process. Although the specific events by which oxidants contribute to inflammation are not entirely elucidated, potential mechanisms include the activation of cyclooxygenase-2 (COX-2) and the transcription factor nuclear factor-kappa B (NF-kB) by pro-oxidants, thereby resulting in the initiation of the expression of genes controlling several aspects of the inflammatory, immune and acute-phase responses. Current epidemiological and experimental studies support a beneficial role of dietary polyphenols in several gastrointestinal diseases, including IBD. Polyphenols are the most abundant antioxidants in the diet (i.e. fruit, vegetables, beverages, herbs and spices). However, their poor intestinal absorption is responsible for luminal concentrations of phenolic compounds up to several hundred micromolar in the gastrointestinal tract. 
Most of these polyphenols exhibit powerful antioxidant activity by acting as free radical scavengers, hydrogen-donating compounds, singlet oxygen quenchers and metal ion chelators, while they are also able to induce cellular antioxidant defenses by modulating protein and gene expression. In the present investigation, we hypothesize that apple peel-derived polyphenols act in the gut as powerful antioxidant and anti-inflammatory agents capable of exerting protective effects against harmful intraluminal components, which may maintain the body's natural defenses against a variety of intestinal diseases, including IBD. Chemicals and Reagents HPLC-grade acetonitrile, methanol, acetone and Optima-grade water were from Fisher Scientific (New Jersey, USA). Formic acid was purchased from Fluka (Steinheim, Germany). MTT was from Sigma (MO, USA). Apple peel crude extract (AB powder) and a purified polyphenolic fraction (JC-047) derived from dried apple peel powder (DAPP) were supplied by Leahy Orchards Inc. and AppleBoost Products Inc. DAPP Extraction The phenolic compounds of apples (80% McIntosh and 20% a blend of Northern Spy, Cortland, Empire, Ida Red, Jonagold and Spartan) were extracted by a method similar to that reported previously by Liu's laboratory. Briefly, 25 g apple peels were blended with 200 g chilled 80% acetone solution in a Waring blender for 5 min. The sample was then homogenized for 3 min using a Virtis 45 homogenizer. The slurry was filtered through Whatman No. 1 filter paper in a Buchner funnel under vacuum. The solids were scraped into 150 g of 80% acetone and homogenized again for 3 min before refiltering. The filtrate was recovered and evaporated using a rotary evaporator at 45 °C. This residue represented the apple peel crude extract (AB powder), while the purified polyphenolic fraction (JC-047) was isolated by preparative HPLC. 
LC-MS Analysis of DAPP Crude Extract and Purified Fraction A reversed-phase LC-MS method was developed to separate and identify the mass and chemical structure of phenolic compounds derived from the crude extract and purified fraction. Separations were performed on an HPLC with fluorescence detection and HPLC-ESI-MS TOF (Agilent Technologies, Santa Clara, CA). The chromatographic column was a Halo C18, 3.0 × 100 mm, 2.7 µm particle size (Advanced Materials Technology Inc., Wilmington, DE), maintained at 50 °C and operated at 0.3 mL/min. A two-step linear acetonitrile gradient was used for elution: the acetonitrile concentration was increased from 2 to 40% over 20 min, then from 40 to 90% over the next 15 min, followed by an equilibration step with the initial mobile-phase composition for a total run time of 40 minutes. The mass spectrometer was operated in negative electrospray mode with a dual-spray configuration allowing for internal calibration and therefore for very good mass accuracy. This allowed us to extract narrow mass-range peaks for quantitation purposes and increase the selectivity of the method. Mass spectra were acquired from m/z 100 to 2000 with an acquisition cycle of 0.89 s and a resolution greater than 10,000. The electrospray voltage was set at 3.5 kV, the fragmentor at 200 V and the source temperature at 300 °C. Major phenolic compounds identified by HPLC-ESI-MS TOF were quantified by an ultra-performance liquid chromatography system (UPLC) coupled to a tandem quadrupole mass spectrometer (MS/MS QQQ) equipped with an ESI source (UPLC-ESI-MS/MS QQQ). The UPLC-ESI-MS/MS QQQ system consisted of a Waters ACQUITY UPLC with an ACQUITY TQD mass spectrometer (Waters, MA, USA). An Agilent Plus C18 column (2.1 × 100 mm, 1.8 µm particle size) (CA, USA) was used, and the column temperature was maintained at 30 °C. 
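A chromatographic gradient program like the two-step acetonitrile gradient above is just a piecewise-linear function of time. As a minimal illustrative sketch (not part of the original method), the %B composition at any time point can be interpolated from the breakpoints given in the text; the final re-equilibration step is omitted here for simplicity:

```python
def percent_B(t, program):
    """Return the mobile-phase %B at time t (min) by linear interpolation
    between consecutive (time, %B) breakpoints of a gradient program."""
    for (t0, b0), (t1, b1) in zip(program, program[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside the gradient program")

# Breakpoints taken from the text: 2 -> 40% B over 20 min,
# then 40 -> 90% B over the next 15 min.
program = [(0.0, 2.0), (20.0, 40.0), (35.0, 90.0)]
```

For example, `percent_B(10, program)` gives the composition halfway through the first ramp (21% B).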
The phenolic compounds were separated using a gradient mobile phase consisting of 0.1% formic acid in ultrapure water and acetonitrile (solvents A and B, respectively) at a flow rate of 0.4 mL/min. The following gradient was used: 0-8 min, 3-35% B; 8-9 min, 35-60% B; 9-10 min, 60-85% B; 10-11 min, 85% B; 11-11.10 min, 85-3% B; 11.10-14 min, 3% B. Data were acquired by MassLynx V4.1 software and processed for quantification with QuanLynx V4.1 (Waters, MA, USA). The UPLC-ESI-MS/MS QQQ system was operated with an ESI interface in negative ionization mode. Cone and collision gas flow rates, obtained from a nitrogen generator, were 80 L/h and 900 L/h, respectively. The mass spectrometer parameters were defined with Waters IntelliStart software (automatic tuning and calibration of the ACQUITY TQD) and manually optimized as follows: capillary voltage of 3 kV, source temperature at 130 °C and desolvation temperature at 400 °C. Cone voltage was 30 V, and collision energy was 18 eV for all phenolic compounds. Quantification was performed using multiple reaction monitoring mode for all transitions of phenolic acids, flavonols, flavan-3-ols, procyanidins and dihydrochalcones. Determination of Total Phenolic Content of DAPP Crude Extract and Purified Fraction The total phenolic content of the AB powder or JC-047 fraction was determined using the Folin-Ciocalteu method, with gallic acid as the main standard. Briefly, 100 µL Folin-Ciocalteu reagent (diluted 10-fold in ultrapure water) and 80 µL sodium carbonate solution (7.5% in ultrapure water) were added to 20 µL MeOH (50% solution of extracts) in a 96-well plate. A blank sample and five calibration solutions of gallic acid (12.5 to 200 µg/mL) were analyzed under the same conditions. After 1 h incubation at room temperature, the absorbance was measured at 765 nm using a Fisher Scientific Multiskan GO microplate reader (MA, USA). 
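The Folin-Ciocalteu assay above converts sample absorbances to gallic acid equivalents via a five-point external calibration curve. A minimal sketch of that calculation follows; the absorbance values for the standards are invented for illustration (the original measured values are not given in the text):

```python
from statistics import mean

# Five gallic acid standards (12.5-200 ug/mL) and hypothetical
# absorbances at 765 nm; these are illustrative, not measured data.
standards_ug_ml = [12.5, 25.0, 50.0, 100.0, 200.0]
standard_abs = [0.08, 0.15, 0.29, 0.57, 1.12]

def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = linear_fit(standards_ug_ml, standard_abs)

def gallic_acid_equivalents(absorbance, dilution_factor=1.0):
    """Convert a blank-corrected sample absorbance (765 nm) into
    ug gallic acid equivalents per mL, via the calibration line."""
    return (absorbance - intercept) / slope * dilution_factor
```

With these illustrative standards, a sample absorbance of 0.45 corresponds to roughly 79 µg GAE/mL; multiplying by the dilution factor and normalizing to extract mass then yields the mg GAE/100 g values reported for the AB powder and JC-047.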
All determinations were carried out in triplicate and results were expressed as percentage of extract weight ± SEM. Heterogeneity of Fractionated Oligomers and Polymers of DAPP on Normal-phase HPLC The procyanidin composition of the AB powder and purified JC-047 fraction was analyzed as previously described by normal-phase analytical HPLC using an Agilent 1260/1290 Infinity system. Samples (5 µL of 25 mg/mL solutions in acetone/ultrapure water/acetic acid, 70:29.5:0.5) were injected into the HPLC system, and the separation was performed at 35 °C with a flow rate of 0.8 mL/min using a Develosil Diol column (250 mm × 4.6 mm, 5 µm particle size), protected with a Cyano SecurityGuard column (Phenomenex, CA, USA). Table 2. Heterogeneity of fractionated procyanidin oligomers and polymers of DAPP on normal-phase HPLC. The procyanidin composition of DAPP from 25 mg/mL crude extract (AB powder) and purified fraction (JC-047) was analyzed by normal-phase analytical HPLC using an Agilent 1260/1290 Infinity system coupled to a fluorescence detector. Individual procyanidins with degrees of polymerization (DP) from DP1 to DP>10 were quantified using an external calibration curve of (−)-epicatechin, taking into account their relative response factors in fluorescence. The results were expressed as mg/100 g of extract weight ± SEM. *P<0.05, ***P<0.001 vs. AB powder. doi:10.1371/journal.pone.0053725.t002 The elution was performed using a solvent system comprising solvents A (acetonitrile/acetic acid, 98:2) and B (methanol/water/acetic acid, 95:3:2), mixed using a linear gradient from 0% to 40% B in 35 min, 40% to 100% B in 40 min, 100% isocratic B in 45 min and 100% to 0% B in 50 min. The column was re-equilibrated for 5 min between samples. Fluorescence of the procyanidins was monitored at excitation and emission wavelengths of 230 and 321 nm with the fluorescence detector, set to low sensitivity with a gain of 7X for the entire run. 
Individual procyanidins with DP from DP1 to DP>10 were quantified using an external calibration curve of (−)-epicatechin, taking into account their relative response factors in fluorescence. The results were expressed as percentage of extract weight ± SEM. Induction of Oxidative Stress and Inflammation Differentiated intestinal Caco-2/15 cells were used to study the effects of the aforementioned polyphenols on OxS (Fe, 200 µM/Asc, 2 mM) and inflammation (LPS, 200 µg/mL). The crude extract (AB powder, 250 µg/mL) and purified fraction (JC-047, 250 µg/mL) were added to the apical compartment of Caco-2/15 cells for 24 h before incubation with iron/ascorbate (Fe/Asc) and/or lipopolysaccharide (LPS) for 6 h at 37 °C. In order to distinguish between acute and chronic inflammation, Caco-2/15 cells were also incubated with LPS for a 24-h period. To highlight the mechanisms behind the beneficial actions of DAPP against OxS and inflammation, some experiments were carried out with 50 µM caffeic acid phenethyl ester (CAPE; Sigma, MO, USA) and 0.4 mM indomethacin heptyl ester (Cayman Chemical, Ann Arbor, MI) to inhibit NF-kB and COX-2, respectively. Lipid Peroxidation Lipid peroxidation was assessed by measuring the release of malondialdehyde (MDA) from Caco-2/15 cells exposed to Fe/Asc (200 µM/2 mM) by HPLC. Briefly, proteins were precipitated with 8% sodium tungstate (Na2WO4) (Aldrich, Milwaukee, WI). The protein-free supernatants were then reacted with an equivalent volume of 0.5% (wt/vol) thiobarbituric acid solution (TBA; Sigma, MO, USA) at 95 °C for 60 min. After cooling to room temperature, the pink chromogen was extracted with 1-butanol and dried under a stream of nitrogen at 50 °C for 3 hours. The dry extract was then resuspended in 100% MeOH before MDA determination by HPLC with fluorescence detection (Jasco Corporation, Tokyo, Japan) set at 515 nm excitation and 550 nm emission. 
Endogenous Antioxidant Enzyme Activities Differentiated Caco-2/15 cells were harvested in hypotonic lysis buffer (10 mM HEPES, 1.5 mM MgCl2, 10 mM KCl, 0.5 mM DTT, 0.2 mM PMSF). Total superoxide dismutase (SOD) activity was determined as described by McCord et al. Briefly, superoxide radicals (O2−) were generated by the addition of xanthine and xanthine oxidase, and the oxidation of the SOD assay cocktail was followed using a spectrophotometer at 550 nm for 5 min. The same reaction was then repeated with the addition of the sample, and the SOD assay cocktail was less oxidized because of the SOD activity in the sample. The total SOD activity was then calculated. For glutathione peroxidase (GPx) activity, aliquots of cell homogenates were added to a PBS buffer containing 10 mM GSH, 0.1 U G-Red and 2 mM NADPH, with 1.5% H2O2 to initiate the reaction. Absorbance was monitored every 30 s at 340 nm for 5 min. For G-Red activity, cell homogenates were added to a PBS buffer containing 2 mM NADPH and 10 mM GSSG to initiate the reaction. Absorbance was monitored every 30 s at 340 nm for 5 min. Immunoblot Analysis Following incubation with the various stimuli, differentiated Caco-2/15 cells were sonicated and the Bradford assay (Bio-Rad, Mississauga, Ontario) was used to determine the protein concentration of each sample. Proteins were denatured in sample buffer containing SDS and β-mercaptoethanol, separated on a 7.5% SDS-PAGE and electroblotted onto Hybond nitrocellulose membranes (Amersham, Baie d'Urfé, Quebec, Canada). Signals were detected with an enhanced chemiluminescence system for antigen-antibody complexes. 
Nonspecific binding sites of the membranes were blocked using defatted milk proteins, followed by the addition of one of the following primary antibodies: 1/1000 polyclonal anti-villin (94 kDa; BD Biosciences, Mississauga, Ontario); 1/1000 polyclonal anti-occludin (59 kDa; Abcam, Cambridge, MA); 1/1000 polyclonal anti-COX-2 (70 kDa; Novus, Oakville, ON). The relative amount of primary antibody was detected with a species-specific horseradish peroxidase-conjugated secondary antibody (Jackson Laboratory, Bar Harbor, Maine). β-Actin protein expression was determined to confirm equal loading. Molecular size markers (Fermentas, Glen Burnie, Maryland) were simultaneously loaded on gels. Blots were developed and the protein mass was quantitated by densitometry using an HP Scanjet scanner equipped with a transparency adapter and the UN-SCAN-IT gel 6.1 software. Prostaglandin E2 Determination Cellular prostaglandin E2 (PGE2) was measured by enzyme-linked immunosorbent assay (Arbor Assays, Michigan, USA). After a short incubation, the reaction was stopped and the intensity of the generated color was detected in a microtiter plate reader (EnVision Multilabel Plate Reader, PerkinElmer) capable of measuring 450 nm wavelengths. Nuclear Extraction for Immunoblot Analysis of NF-kB, Nrf2 and PGC-1α Differentiated Caco-2/15 cells were washed twice with PBS and left on ice for 4 min in a lysis buffer containing 10 mM HEPES, 10 mM KCl, 1.5 mM MgCl2, 2 mM DTT, 0.4% Nonidet and antiproteases. Cells were then scraped and centrifuged for 5 min at 1,500 g at 4 °C. Pellets were then washed with the same buffer, but without the Nonidet, and centrifuged again under the same conditions. The resulting pellets were then resuspended in 50 µL of final hypertonic lysis buffer (20 mM HEPES, 400 mM NaCl, 1.5 mM MgCl2, 0.2 mM EDTA, 2 mM 1,4-dithio-DL-threitol, 20% glycerol and antiproteases) and left on ice for 1 h with vortexing. 
They were then centrifuged for 10 min at 10,000 g at 4 °C, and the supernatants were collected for protein determination and Western blotting to analyze NF-kB, nuclear factor erythroid-2-related factor 2 (Nrf2) and peroxisome proliferator-activated receptor gamma coactivator-1 alpha (PGC-1α) protein expression. Statistical Analysis All values are expressed as mean ± SEM. Data were analyzed using one-way analysis of variance and the two-tailed Student's t-test in Prism 5.01 (GraphPad Software), and the differences between the means were assessed post hoc using Tukey's test. Statistical significance was defined as P<0.05. Profile of Phenolic Compounds of Crude and Purified DAPP A reversed-phase LC-MS method was developed in order to separate and identify the masses and chemical structures of polyphenolic compounds contained in the crude extract (AB powder) and purified polyphenol fraction (JC-047) derived from DAPP. Flavonoids figured among the major polyphenol classes: they were identified on the basis of their common structure consisting of two aromatic rings bound together by three carbon atoms that form an oxygenated heterocycle. Representative extracted ion chromatograms of identified polyphenolic compounds (using accurate mass measurement) are shown in Figures 1A and 2A (see also Figure 2B, left). The trimeric oligomers, such as proanthocyanidin trimers C1-C4 (Figure 2B, right), share the same m/z of 865.199 (Figure 2C). Colorimetric methods, including Folin-Ciocalteu, were used for quantifying total phenolic content. The purified fraction contains a higher proportion (26%, P<0.01) of total phenolic compounds (1900 ± 160 mg of gallic acid equivalents/100 g of extract weight) compared to the crude extract (1410 ± 120 mg of gallic acid equivalents/100 g of extract weight). The distribution of oligomers in the AB powder and JC-047 fraction ranged over degrees of polymerization from DP1 to DP10. 
However, the total procyanidin content was 3 times higher in the JC-047 than in the AB powder extract (Table 2). No procyanidin oligomers higher than decamers were detected in the polymeric procyanidin signal. Cell Integrity following Various Treatments The effects of Fe/Asc and LPS on Caco-2/15 cell integrity were examined by morphology assessment, protein content quantification and MTT assay after incubation periods of 6 and 24 h. The morphology and the protein content remained unchanged with the administration of Fe/Asc, LPS and their combination, as well as following treatment with the AB powder or JC-047 (data not shown). Similarly, Caco-2/15 cell viability was not affected by the addition of the various treatments (Figure 4). Interestingly, an enhancement of villin protein mass was observed when Caco-2/15 cells were cultured in the presence of the AB powder. Finally, there was no impact on Caco-2/15 cell monolayer transepithelial resistance (an indicator of cell confluence and monolayer integrity) (Figure 4) or on occludin protein mass (a biomarker for tight junction and mucosal barrier functions) (Figure 4). Therefore, it could be concluded that our experimental conditions, including the use of DAPP, did not exert any cytotoxic effects on Caco-2/15 cells. Effects of DAPP on Lipid Peroxidation The extent of lipid peroxidation following the treatment of Caco-2/15 cells with Fe/Asc for 6 h was assessed by determining cellular levels of MDA. HPLC analyses indicated a four-fold increase in MDA (P<0.001) following the administration of the oxygen free radical-generating system Fe/Asc compared to controls (Figure 5A). The presence of the AB powder or JC-047 fraction counteracted Fe/Asc-mediated lipid peroxidation, with a more favorable impact of the former. Since OxS markedly altered the composition and properties of the bilayer lipid environment, we determined the profile of fatty acids (FA). 
In fact, the addition of Fe/Asc resulted in substantial differences in FA following the 6-h period of cell incubation (Table 3). In particular, a significant decrease was noted in n-3 and n-6 polyunsaturated fatty acids (PUFA) (EPA, 20:5n-3; DHA, 22:6n-3; AA, 20:4n-6) as well as in monounsaturated FAs (18:1n-9) (Table 3). As a consequence, the calculated total n-3, n-6 and n-9 were reduced by 3-fold, 0.5-fold and 2-fold compared to controls (Table 3). As n-3 FAs were more affected by OxS than n-6 FAs, an increase was recorded in the n-6/n-3 ratio, which indicates an inflammatory state. Nevertheless, preincubation with the AB powder or JC-047 fraction restored the levels and composition of PUFAs. Mechanisms for the Action of DAPP on Oxidative Stress As failure of antioxidant defense may explain the induction of OxS, we examined various endogenous antioxidant enzymes in the Caco-2/15 cell line. Treatment with Fe/Asc alone or in combination with LPS caused a significant augmentation in SOD activity, but preincubation of Caco-2/15 cells with the AB powder or JC-047 fraction blunted the effects of OxS and inflammation (Figure 5B). Under these conditions, GPx activity was down-regulated by Fe/Asc and LPS, and restored by treatment with the AB powder or JC-047 (Figure 5C). On the other hand, G-Red (Figure 5D) showed a trend toward an increase with the polyphenol treatments. Effects of DAPP on Inflammatory Markers Cytokines and eicosanoids are pro-inflammatory compounds produced by cells in response to injury. We therefore assessed the production of TNF-α and IL-6, two powerful inflammatory biomarkers, in Caco-2/15 cells incubated with Fe/Asc, LPS or their combination for 6 h. Analysis by Western blot disclosed an elevation of the protein mass of TNF-α (1.5- to 2.0-fold) and IL-6 (1.5- to 1.8-fold) in the presence of Fe/Asc and LPS, respectively, compared to control cells (Figure 6). 
Pre-treatment with the AB powder or JC-047 fraction abolished the increase in TNF-α and IL-6 protein expression in the Caco-2/15 cell line. We next turned to the formation of inflammatory eicosanoids such as PGE2, which is synthesized from arachidonic acid by COX-2. Our experiments showed that Fe/Asc and LPS elicited exaggerated synthesis of PGE2, whereas preincubation with the AB powder displayed a high ability to prevent PGE2 accumulation in response to LPS but not Fe/Asc (Figure 6). Mechanisms for the Action of DAPP on Inflammation Since the COX-2 enzyme may be behind the elevation of Fe/Asc- and LPS-induced PGE2, we determined its protein expression. Both stimuli raised its protein mass as evidenced by Western blot (Figure 7). Pre-incubation of Caco-2/15 cells with the AB powder or JC-047 fraction averted the positive action of the oxidative and inflammatory stimuli on COX-2 protein expression. Importantly, the polyphenol antioxidants were as effective as indomethacin heptyl ester, a selective COX-2 inhibitor, in preventing the elevation of PGE2. In addition, their combination provided a more substantial synergistic effect, which is indicative of different mechanisms of action for LPS-induced inflammation (Figure 8). Mechanisms for the Action of DAPP on Transcription Factors The NF-kB signaling pathway plays a crucial role in the initiation and amplification of inflammation via the modulation of multiple inflammatory mediators. Figure 9 shows that Caco-2/15 cells exposed to Fe/Asc or LPS displayed a high NF-kB signal in the nucleus along with a low level of IkB protein expression in the cytoplasm, which suggests that the inhibitory protein is degraded by the proteasome, leaving NF-kB free to enter the nucleus and activate the transcription of its target genes. As a consequence, the NF-kB/IkB ratio was increased in the presence of Fe/Asc (Figure 9A) and LPS (Figures 9B and 9C). 
Importantly, the AB powder and JC-047 fraction displayed a great potential to neutralize IκB degradation and NF-κB mobilization to the nucleus, compared to CAPE, the NF-κB inhibitor, with LPS at 6 h (Figure 9B) and LPS at 24 h (Figure 9C) to mimic an acute and a prolonged inflammation, respectively. The combined administration of CAPE and the AB powder or JC-047 fraction did not produce significant changes, thereby indicating the same mechanisms of action. To decipher the mechanisms of action of the AB powder or JC-047 fraction, we examined the transcription factors that are involved in the regulation of antioxidant gene expression. The protein mass of Nrf2 in homogenates (Figure 10A) and nuclei (Figure 10B) was down-regulated by Fe/Asc- or LPS-induced OxS and inflammation, respectively. However, treatment with the AB powder or JC-047 fraction restored Nrf2 protein expression to the basal level. We also assessed the protein expression of PGC-1α, a powerful transcriptional co-activator that up-regulates Nrf2. PGC-1α protein mass was down-regulated in response to OxS and inflammation in homogenates (Figure 10C) and nuclei (Figure 10D) of Caco-2/15 cells. However, the effect was re-established when Caco-2/15 cells were pre-incubated with the AB powder or JC-047 fraction. Discussion Growing evidence suggests important roles of dietary factors in preserving health and even reversing the progression of chronic diseases, with anti-inflammatory effects as important underlying mechanisms. In the present study, we first characterized the polyphenol compounds of DAPP by HPLC-ESI-MS TOF and then tested their impact on cell integrity and viability. After we excluded any possible toxicity of this natural DAPP (crude extract) and its purified fraction, of the kind frequently detected with various chemical drugs, we could subsequently document their remarkable capacity in scavenging ROS and neutralizing inflammation in intestinal absorptive cells.
By dissecting the mechanisms of action, our in vitro experiments highlighted the ability of apple peel polyphenols to increase the antioxidant/anti-inflammatory defense by (i) preventing LPS-induced inflammation via limitation of the pro-inflammatory expression and activity of COX-2; (ii) ruling out LPS-mediated cytokine production through down-regulation of NF-κB, an essential transcription factor for numerous cytokines; and (iii) up-regulating the expression of Nrf2 and PGC-1α, key redox-sensitive transcription factors and crucial elements for mitochondrial biogenesis. The results of our comprehensive study provide fundamental information on apple peel polyphenols. The high-resolution HPLC-ESI-MS TOF delivers the composition of the different biomolecules in DAPP (AB powder or JC-047 fraction). In the former, flavonols (composed of aglycone and glycosylated quercetin and dihydrochalcone) are the major subclasses of flavonoids present, while in the purified fraction we mostly found the flavan-3-ols and their oligomers. Noteworthy, quercetin represents the preponderant flavonol in DAPP and, according to previous studies, it has exhibited anti-inflammatory and antioxidant activities, prevented platelet aggregation and promoted relaxation of cardiovascular smooth muscle. As a matter of fact, flavan-3-ols are a family of bioactive compounds and potent antioxidants, as has been described in in vitro and in vivo studies. Importantly, in the current work, we have evaluated the antioxidant and anti-inflammatory power of both the crude extract (AB powder) and the purified polyphenol fraction (JC-047) derived from DAPP, since there was a need to prove that the beneficial effects are derived from the polyphenols contained in apple peels.
In the present work, we used the Caco-2/15 cell line, which undergoes a process of spontaneous differentiation leading to the formation of a monolayer of cells expressing several morphological and functional characteristics of the mature enterocyte. This remarkable intestinal model is regarded as the most appropriate for the investigation of gut absorption and interactions, nutrition, toxicology, food microbiology, bioavailability tests, and screening of drug permeability in discovery programs. Multiple studies from our laboratory have shown that Caco-2/15 cell monolayers are fully appropriate for the study of OxS and inflammation. To produce OxS, we employed the Fe/Asc complex, a widely used oxygen-radical generating system, since our laboratory reported the ability of iron to initiate strong lipid peroxidation, whereas ascorbic acid can amplify the oxidative potential of iron by promoting metal ion-induced lipid peroxidation. The data of the present study clearly indicate that the Fe/Asc system functioned as a producer of lipid peroxidation, given the production of MDA, the degradation of PUFAs and the generation of pro-inflammatory eicosanoids. Additionally, with the Fe/Asc complex, the antioxidant/oxidative balance deteriorated, as reflected in the endogenous antioxidant enzymes. In this context, co-supplementation of iron and vitamin C worsens OxS in the gastrointestinal tract, thereby leading to ulceration in healthy individuals, and exacerbates chronic gastrointestinal inflammatory diseases, which may result in the development of cancers. Importantly, supplementation of DAPP as the crude extract or its purified fraction significantly prevented lipid peroxidation and restored the depletion of some n-3 PUFA, likely by strengthening the endogenous antioxidant defense as illustrated, in our results, by SOD down-regulation and GPx up-regulation. For the induction of inflammation, we used LPS, which has been extensively studied for the past two decades.
This is a ubiquitous endotoxin mediator of gram-negative bacteria, which facilitates microbial translocation by a mechanism implicating physical perturbation of the gut mucosal barrier. LPS is also a potent inducer of the host's immune response via its capacity to stimulate the pro-inflammatory cytokine cascade. In our studies, LPS led to amplification of the inflammatory response in Caco-2/15 cells, given the enhanced production of PGE2 and the raised protein expression of TNF-α and IL-6, probably due to elevated COX-2 and NF-κB, respectively. DAPP was effective in preventing the elevation of PGE2, TNF-α and IL-6 via the down-regulation of COX-2 and NF-κB, as evidenced by the co-administration of their specific inhibitors, indomethacin heptyl ester and CAPE, respectively. The combination of CAPE and DAPP (either as the crude extract or its purified JC-047 fraction) did not confer further anti-inflammatory benefits, which suggests a common mechanism of action. On the other hand, compounding indomethacin heptyl ester and DAPP resulted in amplified anti-inflammatory effects, which argues in favor of synergetic mechanisms. Since the Keap1-Nrf2-antioxidant response element (ARE) axis is an integrated redox-sensitive signaling system that regulates from 1% to 10% of our genes, we assessed the protein expression of Nrf2 and could document its significant increase. It is therefore possible that, upon exposure to the AB powder or JC-047 fraction, Nrf2 was able to escape Keap1-mediated ubiquitin-dependent proteasomal degradation, translocate to the nucleus, and activate ARE-dependent gene expression of a series of antioxidative and cytoprotective proteins that include SOD and GPx. Our study went even further, since it revealed the positive modulation of PGC-1α by DAPP. PGC-1α controls many aspects of oxidative metabolism, including mitochondrial biogenesis and respiration, through the coactivation of many nuclear receptors.
As an example, Nrf2 is a key target of PGC-1α in mitochondrial biogenesis and an important protective molecule against ROS generation and damage. It is therefore possible that PGC-1α activates NRF2 to induce the SOD and GPx that were altered by Fe/Asc-mediated lipid peroxidation. However, additional efforts are needed to understand the role of DAPP in the PGC-1α and Nrf2 cross-talk. Noteworthy, in some experiments, Caco-2/15 cells were serum-starved for 24 h prior to the addition of the OxS or inflammation inducers. The serum-depleted media were used to minimize the formation of adducts between DAPP and serum proteins, and to exclude the interferences originating from available factors present in fetal bovine serum, as described in previous studies with other types of antioxidants. The pre-incubation time of 24 h with DAPP was used to maximally strengthen the antioxidant and anti-inflammatory defense before the addition of the iron-ascorbate oxygen radical-generating system or of LPS, which triggers inflammation. By allocating this period of time, we allow Caco-2/15 cells to deploy various powerful protection mechanisms via transcription factors and signaling pathways. The transport and processing of DAPP also deserve consideration. Following their consumption, polyphenols are extensively metabolized by hydrolyzing and conjugating enzymes. They are first conjugated in the small intestine to form O-glucuronides, sulphate esters and O-methyl ethers before reaching the liver for further metabolism. The formation of anionic derivatives by conjugation with glucuronide and sulphate groups facilitates their urinary and biliary excretion and explains their rapid elimination. Non-absorbed polyphenols and the fraction re-excreted by the bile are extensively metabolized and transformed by the microbiota before absorption.
The transformation by commensal bacteria via esterases, glucosidases, demethylation, dehydroxylation, and decarboxylation is often essential for absorption and modulates the biological activity of these polyphenols. In our intestinal model, no flora is present, which suggests an absorption via the paracellular route of transport, as suggested previously. However, additional studies are still needed to highlight the contribution of trans-membrane vs. intercellular absorption, as well as the influence of polyphenols on enterocyte metabolism simply by adherence to the brush border membrane. Previous studies investigated the preventive effectiveness of the polyphenolic content of apple flesh in cultured gastric mucous cells under conditions independent of acid secretion or systemic factors. They identified the composition of phenolic compounds (chlorogenic acid, caffeic acid, catechin, epicatechin, rutin and phloridzin) in apple flesh extracts, which prevented OxS-induced injury to gastric epithelial cells by permeating cell membranes, increasing intracellular antioxidant activity, and inhibiting ROS-dependent lipid peroxidation. In further studies, the same apple flesh extracts demonstrated prevention of aspirin-induced damage to the rat gastric mucosa and an anti-inflammatory effect on colonic injury in rats with trinitrobenzenesulphonic acid-induced colitis. Even though these reports with apple flesh extracts, and ours with DAPP, show anti-inflammatory and antioxidant effects, it is not possible to compare their effectiveness given the differences in apple species, extraction methodology, experimental models and techniques. In conclusion, a plethora of studies demonstrates significant health benefits of nutrient-rich fruits.
While various studies have shown this relationship through indirect evidence, the present work demonstrated the presence of a nonpolar bioactivity in extracts of DAPP and their direct beneficial actions, which negated operational OxS and inflammation, both elicited by state-of-the-art techniques. Our results suggest that DAPP may represent a new strategy for the prevention of OxS and inflammation associated with IBD. Further studies are needed to investigate this hypothesis.
Clinico-Epidemiology Profile of Molar Pregnancy in Tertiary Care Centre: A Retrospective Review of Medical Records Background: Gestational trophoblastic disease (GTD) includes a series of disorders that are characterized by an abnormal proliferation of trophoblastic tissue with varying tendencies to spontaneous remission, local invasion and metastasis. The incidence of GTD varies greatly in different parts of the world. Hydatidiform mole presents with amenorrhea, painless vaginal bleeding, spontaneous passage of grape-like vesicles, and high serum and urinary human chorionic gonadotrophin (HCG) levels. Objective: To study the epidemiology and clinical profile of gestational trophoblastic disease and to evaluate its management and outcome. Material and Methods: A retrospective study was conducted over a period of five years in MDM Hospital, Jodhpur. A total of 39,301 pregnancies were recorded during this five-year period. The demographic profile, clinical presentation, management and complications were studied. Results: There were 60 patients with GTD, giving an incidence of 1.52 per 1,000 pregnancies. Among these 60 cases, 45 (75%) had a complete hydatidiform mole. Most of the patients (63.3%) were in the age group of 21-30 years, and the largest group were nulliparous women (38.3%). The majority of molar pregnancy cases (83.3%) were detected in the second trimester. The most common clinical presentation was bleeding per vaginum, constituting 58.3% of cases. The majority (85%) of the patients were treated by suction and evacuation. Conclusion: Gestational trophoblastic disease requires early diagnosis, treatment and strict monitoring to be 100% curable. Routine check-ups help in the timely management of GTD, thereby preventing progression to GTN.
Preparation of Polymer Powder Layer for Additive Manufacturing Applications Using Vibration Controlled Brass and Glass Nozzles Making scaffolds for bone repair is increasingly needed. The material used can be in the form of molten material or powder. For powder materials, Direct Laser Melting technology can be used, so the development of powder material deposition methods is needed; the existing deposition method assisted by gas pressure has known weaknesses. This study uses two types of nozzles: the first type is made of brass with diameters of 0.5, 0.8 and 1.0 mm, while the second type is made of glass with a nozzle mouth diameter of 1.0 mm. The powder material used is a polyester resin with a diameter of 5-15 microns in black, and a diameter of 7-75 microns in red. The nozzle containing the powder is vibrated so that a flow occurs. This flow characteristic affects the form of deposition that occurs. Powder flow in the nozzles made of brass and glass shows similar behavior at the 1.0 mm nozzle diameter. For nozzle diameters smaller than 1.0 mm, grain size affects flowability. The smoothness of the surface affects the nature of the powder flow: on a smooth glass surface, the friction force between the powder and the wall of the nozzle is small, so that for the small powder size the flow cannot be controlled. The best deposition form is obtained at a frequency of 950 Hz with a brass nozzle of 1.0 mm diameter.
Immunoelectron microscopic studies on the specific adhesion of Trypanosoma congolense to cultured vascular endothelial cells. Bloodstream forms of Trypanosoma congolense were cocultivated in vitro with vascular endothelial cells. The trypanosomes adhere specifically to the endothelial surfaces by the anterior part of their flagella, as shown by scanning and transmission electron microscopy. The interaction between parasite and host cell is very tight, and frequently an accumulation of endocytotic vesicles near the contact site is observed. Immunoelectron microscopy revealed a compound distributed over the entire surface of the trypanosomes that reacts with antibodies against the beta 1 integrin chain, but no reaction was found with anti-alpha 1 or anti-alpha 2 antibodies. Integrins are typical adhesion molecules and are now shown to be present at the surface of T. congolense by electron microscopy and by immunofluorescence. A direct participation of this substance in the specific adhesion to endothelium, however, could not be proven.
Taming the Penguin in the B⁰d(t) → π⁺π⁻ CP-Asymmetry: Observables and Minimal Theoretical Input Penguin contributions, being not negligible in general, can hide the information on the CKM angle α coming from the measurement of the time-dependent B⁰(t) → π⁺π⁻ CP-asymmetry. Nevertheless, we show that this information can be summarized in a set of simple equations, expressing α as a multi-valued function of a single theoretically unknown parameter, which conveniently can be chosen as a well-defined ratio of penguin to tree amplitudes. Using these exact analytic expressions, free of any assumption besides the Standard Model, and some reasonable hypotheses to constrain the modulus of the penguin amplitude, we derive several new upper bounds on the penguin-induced shift |2α − 2α_eff|, generalizing the recent result of Grossman and Quinn. These bounds depend on the averaged branching ratios of some decays (π⁰π⁰, K⁰K̄⁰, K±π∓) particularly sensitive to the penguin. On the other hand, with further and less conservative approximations, we show that the knowledge of the B± → Kπ± branching ratio alone gives sufficient information to extract the free parameter α without the need of other measurements, and without knowing |V_td| or |V_ub|. More generally, knowing the modulus of the penguin amplitude with an accuracy of ∼30% might result in an extraction of α competitive with the experimentally more difficult isospin analysis. We also show that our framework allows one to recover most of the previous approaches in a transparent and simple way, and in some cases to improve them. In addition we discuss in detail the problem of the various kinds of discrete ambiguities.
Zeroth Sound Modes of Dilute Fermi Gas with Arbitrary Spin Motivated by the recent success of optical trapping of alkali bosons, we have studied the zeroth sound modes of dilute Fermi gases with arbitrary spin f, which are spin-S excitations (0 ≤ S ≤ 2f). The dispersion of mode S depends on a single Landau parameter F^(S), which is related to the scattering lengths of the system through a simple formula. Measurement of (even a subset of) these modes in finite magnetic fields will enable one to determine all the interaction parameters of the system. We shall show that in addition to the ordinary density mode, the system has additional modes corresponding to coherent inter-conversions of different spin species. These modes are the generalizations of the spin waves of spin-1/2 Fermi liquids. As we shall see, the dispersions of the zeroth sound modes contain the information of all the interaction parameters of the system, i.e. the set of s-wave scattering lengths {a_J} of two spin-f atoms in the total spin-J channel. Thus, observation of these modes will not only provide evidence of the degenerate nature of the system, but also information about the scattering lengths a_J, and hence the existence of superfluid ground states as well as their transition temperatures. As in our previous study, we shall focus on the homogeneous case, i.e. without external potential. This is a necessary step before studying trapped fermions. Moreover, it is conceivable that optical traps of the form of cylindrical boxes (rather than harmonic wells) will be constructed in the future. In that case, the discussions here will be directly applicable. As in our previous work, our symmetry classification of the spin structure (which is a crucial step in our solution) also applies to arbitrary potentials. In addition to homogeneity, we shall also consider the weak magnetic field limit, i.e. when the Zeeman energy is much smaller than the kinetic energy of the system.
These are the regimes where the spinor nature of the Fermi gas is manifested most clearly. As demonstrated by the recent experiments at MIT, this limit can be easily achieved by specifying the total spin of the system. Since the low energy dynamics of the system is spin conserving, the specified spin cannot relax. The system therefore sees an effective magnetic field with which its spin would be in equilibrium, a field which can be much smaller than the external field B_ext. In the following, we shall refer to this effective field simply as the "magnetic field" B, with the understanding that it is a Lagrange multiplier that determines the total spin of the system. (A) Zero magnetic field: we begin with the linearized kinetic equation for the distribution function matrix n_p in the collisionless regime; our notations are the same as in our previous work. Here, n⁰_p is the T = 0 Fermi function, v_p = ∇_p ε_p, and n_p is a (2f+1) × (2f+1) matrix in spin space, n_p = ∫ dx e^(−ip·x) ⟨ψ⁺(r − x/2, t) ψ(r + x/2, t)⟩, where ψ is the field operator. The energy matrix describes the change in the Hamiltonian due to δn, involving integrals ∫ dp′/(2π)³ and the Landau parameters f(p, p′), which can be extracted from the Hamiltonian of the system derived by one of us. It is shown there that only the lowest hyperfine states (with spin f) will remain in the optical trap and that the interactions between these spin-f atoms are spin conserving, built from the pair operators O_JM(r) = ⟨JM|f f⟩ ψ(r)ψ(r) with couplings g_J = 4πℏ²a_J/M_F, where ⟨JM|f₁f₂⟩ are the Clebsch-Gordan coefficients for forming a spin-J object from a spin-f₁ and a spin-f₂ particle, and M_F is the mass of the atom. The Pauli principle implies that only even J's appear. Evaluating ⟨H_int⟩ in the Hartree-Fock approximation yields a quasiparticle energy shift which is momentum independent as a result of the s-wave interaction.
Note that if g_J < 0, the system will have a superfluid instability towards spin-J Cooper pairs at a sufficiently low temperature T_c^(J). Our discussion for negative g_J's therefore applies to temperatures above T_c^(J) but low enough that the Fermi gas is degenerate. Before proceeding, we simplify the kinetic equation by writing δn_p = −(∂n⁰_p/∂ε_p) ν_p; in the resulting equations, N_F = m k_F/2π²ℏ² is the density of states of a single spin component at the Fermi surface, k_F is the Fermi wavevector, and ⟨(..)⟩ ≡ ∫ (dΩ_p/4π)(..) denotes the angular average over the Fermi surface. Note that the quasiparticle energy ε_p is isotropic in k-space as a consequence of the s-wave interactions between the particles. Next, we note that a rotation in spin space causes a change ψ_a → D_ab ψ_b, and one can see that ν_p transforms the same way, ν_p → D^(f) ν_p D^(f)+. Since ν is made up of two spin-f objects, it can be decomposed into a sum of spin-S quantities, which yields the corresponding representation. Substituting this representation into the kinetic equation and using an identity involving the Racah coefficient W, the equation becomes diagonal in the (S, M) modes, where we have used the facts that N_F g_J = 2k_F a_J/π and (−1)^(2f−J) = −1. The result is precisely the equation for the ordinary zeroth sound mode with only the ℓ = 0 spin-symmetric Landau parameter F^s_(ℓ=0) non-zero, given by F^(S). The dispersion relations of the modes described by this equation are well known. The properties of the modes depend crucially on the sign of the parameter F^(S). When F^(S) > 0, one has a well-defined propagating mode. When −1 < F^(S) < 0, the zeroth sound mode is Landau damped. When F^(S) < −1, the system is unstable against spin-S distortions. Because of the dilute limit, k_F a ≪ 1 and hence |F^(S)| < 1, stability against spin-S distortions is guaranteed.
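Since each (S, M) mode obeys the ordinary zeroth sound equation with a single ℓ = 0 Landau parameter, its zero-field speed follows from the classic Landau condition. A minimal numerical sketch (assuming the standard textbook form (s/2) ln((s+1)/(s−1)) − 1 = 1/F with s = ω/qv_F; the function name is illustrative):

```python
import math

def zeroth_sound_speed(F, tol=1e-12):
    """Solve the Landau zeroth-sound condition
        (s/2) * ln((s+1)/(s-1)) - 1 = 1/F
    for s = omega/(q*v_F), assuming a repulsive F > 0
    (an undamped propagating mode with s > 1)."""
    if F <= 0:
        raise ValueError("propagating mode requires F > 0")
    lhs = lambda s: 0.5 * s * math.log((s + 1.0) / (s - 1.0)) - 1.0 - 1.0 / F
    # lhs -> +inf as s -> 1+ and lhs -> -1/F < 0 as s -> inf,
    # and lhs is monotone decreasing: bracket the root, then bisect.
    lo, hi = 1.0 + 1e-15, 2.0
    while lhs(hi) > 0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

For F^(S) ≪ 1 the solution stays exponentially close to s = 1, consistent with the observation that in zero field all zeroth sound frequencies are essentially qv_F.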
It is instructive to consider some special cases. (i) The density modes (S = 0) for fermions with arbitrary spin f: in particular, if there are no superfluid instabilities in any angular momentum J channel, then F^(S=0) > 0 and the density mode will not be Landau damped. In this limit, the dispersion equation can be integrated explicitly; since F^(S) ≪ 1, the exponential term in the result contributes little, and the frequencies of zeroth sound for all S are essentially given by qv_F. As a result, it will be difficult to obtain information on the interaction parameters from zeroth sound frequencies in zero field. On the other hand, we shall see that even a small magnetic field will cause significant changes in the zeroth sound dispersions, which lead to many observable features and enable one to determine the interaction parameters. In a finite field, the Zeeman term contributes a frequency Ω ≡ B_eff/ℏ. With this additional term on the left-hand side of the kinetic equation, and repeating the procedure as before, we find that the first equation remains unchanged whereas the second is modified. Thus, the zeroth sound modes can still be classified by the quantum numbers (S, M) in the weak field limit. The equation for the dispersion can again be integrated. Since the collective modes are excitations above the ground state, we only need to study the ω > 0 solutions. In the following, we shall discuss only the zeroth sound modes that are not Landau damped, which requires |ω + MΩ| > qv_F. While many features of these propagating modes can be obtained analytically, we first display the numerical solutions for f = 3/2 with F^(S) > 0 and F^(S) < 0 in Fig. 1 and Fig. 2, respectively. The notable features of these modes include: (iv) zeroth sound modes near q = 0: for qv_F/|MΩ| ≪ F^(S), it is easily seen that ω_(S,M)(q) = −MΩ(1 + F^(S))(1 + ...).
Synthesis and functional integration of a neurotransmitter receptor in isolated invertebrate axons. Neurotransmitter receptors are considered an important class of membrane proteins that are involved in plasticity-induced changes underlying learning and memory. Recent studies, which demonstrated that the mRNAs encoding for various receptor proteins are localized to specific dendritic domains, allude toward the possibility that these membrane bound molecules may be synthesized locally. However, direct evidence for the local axonal or dendritic synthesis and functional integration of receptor proteins in either vertebrates or invertebrates is still lacking. In this study, using an invertebrate model system we provide the first direct evidence that isolated axons (in the absence of the soma) can intrinsically synthesize and functionally integrate a membrane-bound receptor protein from an axonally injected mRNA. Surgically isolated axons from identified neurons were injected with mRNA encoding a G-protein-coupled conopressin receptor. Immunocytochemical and electrophysiological techniques were used to demonstrate functional integration of the receptor protein into the membrane of the isolated axon. Ultrastructural analysis of axonal compartments revealed polyribosomes, suggesting that some components of the protein synthesizing machinery are indeed present in these extrasomal compartments. Such axonal propensity to locally synthesize and functionally insert transmitter receptors may be instrumental in plasticity induced changes, for instance those that underlie learning and memory.
Working memory and inferences: Evidence from eye fixations during reading. Eye fixations during reading were monitored to examine the relationship between individual differences in working memory capacity-as assessed by the reading span task-and inferences about predictable events. Context sentences predicting likely events, or non-predicting control sentences, were presented. They were followed by continuation sentences in which a target word represented an event to be inferred (inferential word) or an unlikely event (non-predictable word). A main effect of reading span showed that high working memory capacity was related to shorter gaze durations across sentence regions. More specific findings involved an interaction between context, target, and reading span on late processing measures and regions. Thus, for high- but not for low-span readers, the predicting condition, relative to the control condition, facilitated reanalysis of the continuation sentence that represented the inference concept. This effect was revealed by a reduction in regression-path reading time in the last region of the sentence, involving less time reading that region and fewer regressions from it. These results indicate that working memory facilitates elaborative inferences during reading, but that this occurs at late text-integration processes, rather than at early lexical-access processes.
On reconciling quantum mechanics and local realism A necessary and natural change in our application of quantum mechanics to separated systems is shown to reconcile quantum mechanics and local realism. An analysis of separation and localization justifies the proposed change in application of quantum mechanics. An important EPRB experiment is reconsidered and it is seen that when it is correctly interpreted it supports local realism. This reconciliation of quantum mechanics with local realism allows the axiom sets of quantum mechanics, probability, and special relativity to be joined in a consistent global axiom set for physics. INTRODUCTION The apparent nonlocality of quantum correlation (entanglement) has confounded all attempts to reconcile it with other basic laws, such as Lorentz invariance. It is not hard to see why. The accepted quantum mechanics (QM) prediction for the probability of measuring both photons as 'up', for separated measurements of the correlated spin-1 singlet state at stations A and B, is believed to be given by a joint distribution, which represents statistics from the measurement of projected photons. The reductio is dissolved, but at a terrible price: we have to accept superluminal effects of a strange kind for which no physical mechanism is known. We also have to ask which photon projects the other. Is it the first one to be measured? That is impossible to decide without a preferred reference frame when events have different orders in different reference frames. Even from an engineering perspective, it is challenging to conceive a protocol ensuring that one and only one of the photons projects the other, and the resulting mechanism has to be regarded as fanciful.
We see then that the axioms of QM (when interpreted in the conventional way that applies the joint probability prediction to separated measurement situations) conflict with those of special relativity (SR), because QM nonlocal influences are not Lorentz invariant and they require a preferred reference frame, and/or with those of probability (P), because QM implies that a joint probability can be measured via the marginals. We cannot try to live with these conflicts because the resulting combined axiom set that we use to describe nature is not self-consistent, and when axioms are inconsistent contradictions are easily generated and we have no way to distinguish sense from nonsense. We assume the axioms of QM alone are consistent, or that a consistent set can be defined, although it has been sporadically argued that QM itself is inconsistent. Nonlocality appears to be the only aspect of QM preventing the consistent combining of the axiom sets of QM, SR, and P. Schrödinger¹ famously stated of entanglement: "I would not call [it] one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought." Feynman² discusses simulation of quantum mechanics by a classical computer but stumbles when addressing nonlocal correlations: "That's all. That's the difficulty. That's why quantum mechanics can't seem to be imitable by a local classical computer." Spreeuw³ demonstrated classical analogs of local entanglement but reported he was unable to demonstrate a classical analog of nonlocal entanglement. Finally, Orlov⁴ was able to demonstrate many aspects of quantum computation using classical building blocks but failed to imitate nonlocal quantum effects using his classical blocks. It has been argued that negative Wigner functions (or other metrics) associated with entanglement have no classical counterpart, but negativity is associated with contextuality rather than entanglement⁵.
Classically contextual systems are abundant, so there is no fundamental conflict due to negativity. Quantum nonlocalists may also argue that contextuality requires a new quantum probability theory, because a sample space is said not to exist for contextual systems, and that standard Kolmogorovian probability therefore cannot be applied. However, any contextual system will be represented by a set of sample spaces, one per context. That is not the same as not having a sample space! Contextuality is not a problem for standard probability theory, and contextuality can be treated identically in the classical and quantum representations. We seek, therefore, a way to eliminate nonlocality from QM, and we do that by simply accepting that a joint distribution cannot be sampled by means of separated (marginal) measurements. One must use the marginals for predicting the measured correlation. This prediction can be made via partial traces or reduced density matrices in a manner completely analogous to that of marginalization and conditional probabilities in standard probability theory. If this is accepted, then a very small reinterpretation of QM can reconcile QM and local realism: using the marginals versus the joint probability in separated measurement situations (exactly as in classical probability). Specification of what are separated measurements is a delicate matter but has a satisfactory answer developed in this paper. Of course the experiments are obstacles for advancing the reconciliation program. Modern consensus is that the results of EPRB and other experiments confirm the nonlocal predictions. While attempting to deconstruct all of the experiments can quickly turn into an exasperating game of whack-a-mole, we will see that the mechanism in effect in the Weihs experiment has broad applicability, so a plausible local realistic account of the Weihs experiment goes a long way toward clearing the path for reconciliation of QM and local reality.
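The contrast between the joint-state prediction and the marginal (reduced density matrix) predictions can be illustrated with a short numpy sketch (illustrative code, not from the paper): the joint prediction Tr[ρ(A⊗B)] yields the singlet correlation −cos(a−b), while each reduced density matrix is maximally mixed, so every local expectation vanishes.

```python
import numpy as np

def spin_along(theta):
    """Spin observable in the x-z plane at angle theta from the z axis."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]], dtype=complex)

# Two-qubit state (|10> - |01>)/sqrt(2): the singlet up to a global phase.
psi = (np.kron([0, 1], [1, 0]) - np.kron([1, 0], [0, 1])) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def joint_correlation(a, b):
    """E(a,b) = Tr[rho (A tensor B)] -- the joint-state prediction, -cos(a-b)."""
    return np.real(np.trace(rho @ np.kron(spin_along(a), spin_along(b))))

def reduced(rho, keep):
    """Partial trace of a two-qubit state; keep = 0 (side A) or 1 (side B)."""
    r = rho.reshape(2, 2, 2, 2)  # indices (a, b, a', b')
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

# Each marginal is maximally mixed (I/2), so each local expectation is zero
# and the product of separated marginal predictions carries no correlation.
rho_A, rho_B = reduced(rho, 0), reduced(rho, 1)
```

The partial trace here is the quantum analog of marginalizing a joint probability distribution, which is the analogy the text draws.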
In consequence of the foregoing, this paper's goal is twofold: first, to characterize separated measurements and show that the joint probability formula cannot be applied to separated measurements, and second, to show that the experiments have been misinterpreted and that nonlocal entanglement is an error. As a result, locality is restored and our global axiom set for physics becomes fully consistent.

SEPARATED MEASUREMENTS

It is easy to devise a local system that embodies the sampling of a joint PDF and generates quantum statistics. Consider first a simple physical system that embodies a preparation followed by two dichotomic measurement results A and B as shown in Figure 1. A disk of unit radius is prepared by partitioning it into four sectors according to a parameter θ. Each resulting sector is labeled with outcomes for A and B as shown in Figure 1. We may think of spinning an arrow attached to the center of the disk and noting the location on the circumference that the arrow points to when it comes to rest. Effectively we have generated a random number in the range 0-2π and used it to determine the location. The outcomes for A and B are then read off from the sector containing the arrow position as shown in the figure. When the measurements are repeated, a sequence of outcomes is obtained for A and B and we can correlate them in the usual way. For the preparation shown in Figure 1, we obtain statistics that exactly reproduce the quantum correlations of the anticorrelated singlet state. The physical system clearly embodies the joint PDF for the A and B outcomes, and the quantum statistics are obtained simply by sampling the joint PDF. This simple model should serve as a useful tonic for those who mistakenly believe that correlations of dichotomic functions cannot be harmonic, and thus cannot conform to the predictions of QM.

Figure 1. Measurement results for A and B are produced by the sampling of a joint PDF embodied as a physical disk.
The preparation can be considered to be the construction and setup of the disk. The physical system can be separated in space and time without destroying the ability to successfully sample the joint PDF. Consider a refinement of the system as shown in Figure 2. The original disk is split into two, retaining the respective outcome labels for A and B. As long as shared randomness is used for the sampling (think of each disk being indexed by a single spinning arrow), the correct joint PDF defined in Figure 1 is still successfully sampled. The disks can be separated in space and the measurements can be made at different times, using the shared random variable, without affecting the results. We conclude, counterintuitively, that the naïve view that simple physical separation in space or time defines a separated measurement is incorrect, and we must look further for the essential features of separated measurements.

Figure 2. The original disk has been split into two physically separated disks. Shared randomness is used for sampling the disks. The measurements at each side may also occur separated in time. The joint PDF of Figure 1 is still successfully sampled even though the measurements at A and B are physically separated in space and time.

If shared randomness is not available to the two sides of the separated system, then the joint PDF cannot be sampled. Consider a further refinement of the system as shown in Figure 3. The measurements are now generated without shared randomness, i.e., different random variables are used at A and B (think of each disk having its own spinning arrow). The joint PDF now cannot be successfully sampled and the result is a function of the marginal probabilities. It is tempting to assume that when systems are physically separated in space they no longer have access to shared randomness, but this is not necessarily so.
The source events themselves could be randomly distributed, and this randomness would be transmitted to both sides. Shared randomness from the source is not the only possibility; for example, suppose the systems are separated by several kilometers but they both observe a specific location on the sun, read off the intensity variation, and treat that as a shared random variable. So we need to have a good understanding of the system and its physics before we can know whether shared randomness is available and whether it is indeed relevant to the sampling. For example, we might suppose that a light packet ('photon') actually is represented by one of the disks in Figure 3 spinning at a fixed rate (this conception brings to mind Feynman's idea of little stopwatches in his description of quantum electrodynamics 7), and then the measurement is indexed by a fixed arrow at 0 degrees. Now, if, as is almost certainly the case, we take our measurements at independent times at A and B, i.e., the measurement times are not synchronized, then we cannot successfully sample the joint PDF. The idea of broken shared randomness cannot be definitively applied to the EPRB experiments, however, for at least two reasons. First, there is still uncertainty about the actual nature of light, so asserting such an interpretation of the experiments may rely upon assumptions not yet evident. Exhaustive identification of all the shared randomness in an experiment and its relevance to the detection processes is difficult. Second, it is doubtful that current light detectors are sensitive to the phase of the light, because the time scale of the energy integration period leading to a detection event is large compared to the wavelength of the light. One could try to build a case that the experiments involve separated measurements due to lack of shared randomness, but at this time it would be speculative and inconclusive.
Fortunately, we need not assert that this mechanism is in play, although we certainly don't exclude the possibility, because there is a second mechanism leading to separated measurements that we can unquestionably assert is in play. When parameters affecting the measurements are available to both sides, the joint PDF can be successfully sampled. Consider a further refinement of the system as shown in Figure 4. The use of shared randomness has been restored, because we cannot definitively show that shared randomness is not available to the two sides in the EPRB experiments. The system is now the same as the one in Figure 2, but with the single parameter θ replaced by the difference of two parameters α and β. The two disks are identical, so each disk must have been prepared with knowledge of both α and β. For this arrangement the joint PDF is still successfully sampled, and so, like the system shown in Figure 2, we cannot call this a separated measurement situation.

Figure 4. The parameter θ of Figure 1 is replaced by the difference between two parameters α and β. The joint PDF can still be successfully sampled.

Now the fun begins. When parameters affecting the measurements are not available to both sides, the joint PDF cannot be successfully sampled. We suppose that A does not know about β, and B does not know about α, as depicted in Figure 5. It is clear that the original joint PDF of Figure 1 cannot be sampled. Of course each side could be very lucky and guess the other side's parameter every time, but logically that is the same as each side knowing the other side's parameter. If we were tasked with predicting the correlations for such an experiment, we would have several possibilities for treating the unknown parameters. We could, for example, ignore the unknown parameters completely (equivalent to assuming they are 0), as shown in Figure 5. We could also assume any fixed values, we could assume random values, or we could integrate over the possible values.
But none of these options leads to a proper sampling of the original joint PDF. The parameters α and β of course correspond to the measurement angles at the two sides of the EPRB experiments, and so we must conclude that the EPRB experiments involve separated measurements. The quantum joint formula is customarily stated as cos²θ rather than cos²(α-β), which amounts to surreptitiously sharing angle parameters, and so one might understand how an uncritical perusal of the function cos²θ may not expose the definitive measurement separation inherent in this situation. There is a special case of interest that successfully samples the original joint PDF, but which appears not to share parameters between the sides. If side B chooses β to be 0, and side A assigns its sector areas according to cos²α, then the original joint PDF is successfully sampled for any value of the parameter α. However, this removes freedom of choice of the measurement angle from side B, or it involves a surreptitious communication of a redefinition of the origin of the angular frame of reference (to define B's chosen parameter β as 0), equivalent to sharing of parameters. The account of measurement separation here involves only considerations in probability, not any localization in time and space. Time and space enter only through their constraints on the availability of required shared randomness and parameters. For example, suppose side A does her measurement and shouts to side B revealing her measurement angle. As long as side B is within hearing distance (and they have some shared randomness; side A can shout that too if necessary), side B can easily generate results yielding quantum correlation. If side A and side B move too far apart, side B can no longer hear the shouts and so cannot generate quantum results. Space has constrained the possibilities for sharing. A similar effect can occur in time.
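To see concretely how withholding the other side's angle spoils the sampling, consider a sketch in which each side computes its outcome from the shared random orientation λ and only its own angle. The sign-of-cosine detector rule below is an illustrative choice, not the paper's model; with it, the achievable correlation is the classical "triangle" function of the angle difference rather than the quantum cosine:

```python
import math
import random

def local_outcome(lam, angle):
    """Each side sees only the shared random orientation lam and its OWN
    angle; sign-of-cosine is an illustrative local detector rule."""
    return 1 if math.cos(lam - angle) >= 0.0 else -1

def local_correlation(alpha, beta, n=200_000, seed=2):
    """Correlation when beta is hidden from side A and alpha from side B:
    only the shared random variable lam connects the two sides."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 2.0 * math.pi)  # shared randomness only
        total += local_outcome(lam, alpha) * local_outcome(lam, beta)
    return total / n
```

For this rule E(AB) = 1 - 2|α-β|/π (for |α-β| ≤ π), a triangle wave in the angle difference, whereas sampling the joint PDF would give cos(α-β); at α-β = π/3 the two predictions are 1/3 versus 1/2, so the gap is directly visible in simulation.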
With the addition of space and time in this manner, we arrive at a satisfying and useful way to define and distinguish separation and localization, and to see how they interact. The considerations thus far adduced allow us to characterize and identify separated measurement situations and to recognize that the EPRB experiments involve such separated measurements. That being so, the standard quantum joint prediction cannot be applied to them. Stated simply, the standard quantum prediction for correlation of separated measurements is incorrect, and it leads to all of the difficulties previously recounted. As noted earlier, however, the experiments appear to show that the joint prediction results are obtained. We turn therefore to an analysis of an important exemplar of the EPRB experiments 6, and show that the experiments have been misinterpreted, that in fact the joint prediction is not obtained, and that nonlocal entanglement is an error. Framing the debate as quantum mechanics versus local realism is a false, misleading, even inflammatory apposition; the true debate is over whether the joint prediction can be applied to separated measurements or not. As we have seen, basic classical probability theory tells us that it cannot. Making this small but needed adjustment for separated measurement leaves the essence of quantum mechanics intact.

Figure 5. When A doesn't know β and B doesn't know α, the original joint PDF cannot be successfully sampled.

THE EXPERIMENTS

In an earlier paper by this author 8 a local realistic account of the Weihs EPRB experiment 6 was described. To limit this conference paper to a reasonable length and to new material, readers are referred to the cited paper for a full description of the local realist account and its correspondence to the important and pioneering Weihs experiment. We focus here only on the important concepts and the results of simulations of the local realist account.
Readers should also refer to the references contained in the cited paper to properly appreciate the important pioneers in this field. The author here acknowledges particular indebtedness to (in addition, of course, to the classical giants Schrödinger and Einstein) Weihs et al, Caser, Marshall, Santos, Fine, Larsson, Hofer, Khrennikov, Adenier, and De Raedt et al. The latter two are presenting at this conference and we can be sure that they will advance our understanding in important ways, as they have done many times in the past. The primary reports on the results of EPRB experiments all neglect to consider, report, and account for the choices for calibration of important parameters of the light detection apparatus, specifically and importantly, the detection thresholds used during analysis and processing of the analog detection traces to produce the dichotomic measurement outcomes. In the cited local realist account 8, when the thresholds are properly and independently calibrated, only classical correlations are observed. When one of the thresholds is miscalibrated, a full range of results spanning classical to super-quantum correlations (with full rotational invariance) can be obtained, depending on the extent of the miscalibration. The mechanism for this effect is straightforward: unfair sampling of a deterministic device subject to Malus's Law, as shown in the cited paper. One of the earliest careful EPRB experiments, by Holt and Pipkin 9, failed to show quantum correlations and therefore weighed strongly in favor of the local realist account. Instead of some unidentified systematic error being responsible for their results, as is usually suggested, it is easier to believe that Holt and Pipkin properly calibrated their apparatus. Holt and Pipkin were subjected to strong scientific peer pressure, and they never formally published their results. According to this view, the subsequent experiments were inadvertently miscalibrated.
The experimenters sought to calibrate their experiments with the aim of verifying hypothesized nonlocality, and they mistakenly believed that the choice of threshold parameters is uncritical (as long as the chosen thresholds exclude the background noise). The claims here are bold, so readers are again referred to the cited paper for support. All the experiments that appeal to detection thresholds are potentially subject to the mechanism described. The recent Giustina et al experiment 10, which claims to fully exclude all unfair sampling, opens an important and interesting new line of defense against the detection loophole. But unfair sampling is not needed to violate Giustina et al's single-channel variant of the Eberhard inequality. The Eberhard derivation relies upon two crucial assumptions: the first obeys the law of large numbers and becomes statistically true for a large number of events, while the second is false (in real experiments) and is not subject to the law of large numbers. Indeed, a following paper will demonstrate violation of the Giustina et al inequality, consistent with the experiment, using a simple semiclassical model. Let us recall the results of the computer simulation of the local realist account 8. Before proceeding, let us note that although the account uses dual-channel detectors at each side, as does the Weihs experiment, the described effect also applies to single-channel detectors. For any given source event in the dual-channel case, we see at a given side either a miss by both detectors, a double hit by both detectors, or a single hit at one detector. We cannot have doubles in the single-channel case, but we can still have misses. Significantly, the account here of the Weihs data relies on misses. When both sides are correctly calibrated, classical results are obtained.
Figure 6 shows the match probability curves resulting from the local realist simulation when the detection thresholds at both sides are each correctly calibrated to one half the light pulse energy (0.5 on a normalized scale). To generate the curves, the measurement angle at side A is set to 0 and the measurement angle at side B is scanned over the range 0-π. The threshold does two things. It excludes low-level noise when it is set higher than the noise level. But just being above the noise is not sufficient. The threshold must ensure fair sampling. Only a threshold of 0.5 both excludes the noise and performs fair sampling by not discarding significant detection events in a pattern governed by Malus's Law. Readers have undoubtedly noticed that the results in Figure 6 are classical and various inequalities are not violated. When one side is grossly miscalibrated, super-quantum results are obtained. Figure 7 shows the match probability curves resulting from the local realist simulation when the detection threshold at side A is correctly calibrated to 0.5, while side B is miscalibrated to 0.92 (normalized to the signal energy). The system is still fully rotationally invariant. As long as one side is correctly calibrated, the system delivers full rotational invariance. If both are miscalibrated, then rotational invariance is destroyed and modulation of the number of total coincidences is seen as B is scanned over 0-π. One can thus go astray when trying to draw inferences from the presence or absence of rotational invariance if one lacks an understanding of the effects of the detection thresholds on the rotational symmetry. Between the two extremes of classical and super-quantum calibration, we can easily find a threshold value of 0.75 for side B (with side A left correctly calibrated) that produces the quantum results shown in Figure 8. The curves have high visibility and there is full rotational invariance.
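The threshold mechanism can be illustrated with a toy Monte Carlo. This is a sketch under simplifying assumptions, not the cited simulation 8: orthogonally polarized pairs with a random orientation, Malus's-Law pulse energies, and a dual-channel threshold detector at each side. With both thresholds at 0.5 every pulse registers on exactly one channel (fair sampling); raising one threshold discards events in a Malus's-Law pattern, and conditioning on coincidences then sharpens the correlation:

```python
import math
import random

def detect(lam, angle, threshold):
    """Toy dual-channel Malus's-Law detector. The '+' channel receives
    pulse energy cos^2(lam - angle), the '-' channel sin^2(lam - angle);
    a channel registers when its energy exceeds the threshold.
    None models a miss by both channels."""
    e_plus = math.cos(lam - angle) ** 2
    if e_plus > threshold:
        return +1
    if 1.0 - e_plus > threshold:
        return -1
    return None

def match_probability(alpha, beta, t_a, t_b, n=400_000, seed=3):
    """P(A == B) among coincidences for pairs with random orientation lam;
    the partner pulse is polarized orthogonally (lam + pi/2)."""
    rng = random.Random(seed)
    matches = coincidences = 0
    for _ in range(n):
        lam = rng.uniform(0.0, math.pi)
        a = detect(lam, alpha, t_a)
        b = detect(lam + math.pi / 2.0, beta, t_b)
        if a is None or b is None:
            continue  # a miss on either side: no coincidence recorded
        coincidences += 1
        matches += (a == b)
    return matches / coincidences

fair = match_probability(0.0, math.pi / 8, 0.5, 0.5)    # classical value 0.25
sharp = match_probability(0.0, math.pi / 8, 0.5, 0.75)  # post-selected, lower
```

In this toy model, the fair 0.5/0.5 calibration yields the classical triangle value 0.25 at an angle difference of π/8, while the 0.5/0.75 miscalibration pulls the conditional match probability down (to 0.125 here) and leaves side B registering fewer singles than side A. The specific numbers are properties of this sketch only, but the qualitative mechanism, threshold-driven unfair sampling reshaping the coincidence curve, is the one described in the text.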
The glaring give-away to miscalibration is the large and predictable difference between singles counts between sides A and B seen in Figure 8. Notice that in Figure 6, where both sides are correctly calibrated, the singles counts are approximately equal at the two sides. The point to notice is that the greater the extent of the miscalibration, the greater the difference between singles counts at the two sides. In the super-quantum calibration (Figure 7) there is a very large counts asymmetry. This artifact is clearly seen in the Weihs experimental data 6. Adenier and Khrennikov 11 present an analysis of the Weihs data in which the observed counts asymmetry is seen to be close to that observed in the local realist model when it is calibrated for 0.5/0.75 as described (Figure 8). Although the EPRB experiments are incomplete due to their failure to report on different calibration domains, they remain capable in principle of testing whether the joint probability formula applies or not. Here we make no reference to testing quantum mechanics versus local realism, a false apposition, nor to deciding whether quantum mechanics is complete, a red herring. As long as the detectors are all properly and independently calibrated, the experiment can be decisive. Even low detection efficiency (5% in the Weihs experiment) is not a problem because nothing in quantum mechanics allows one to say that randomly discarding events can change the qualitative results. It is as if one simply performs a shorter experiment. So if classical results are seen with symmetric, correctly calibrated sides, then the joint probability formula is not correct. If fully rotationally invariant quantum results are seen, then the joint probability formula is correct. But we have shown earlier that the joint probability formula cannot be correct. DISCUSSION Readers are directed to the extensive discussion contained in the previous paper 8. 
Some speculations germane to a further interesting line of thought are reported here. The crucial role of detector thresholds and proper apparatus calibration has been demonstrated, but the model presented here remains open to a charge that it is unphysical in an important respect. Local realist accounts of EPRB experiments, including this one, typically appeal to a random isotropic distribution of some source light property, such as phase of the electric field, polarization, etc. In the model here, the paired photon polarizations in EPRB are assumed to be emitted orthogonal to each other with the pair randomly oriented over 0-2π. In the EPRB experiments with parametric down conversion (PDC) light sources, however, the polarization of the source photon pairs is constrained to a single fixed H/V basis (in the classical interpretation and for a simplistic account of PDC with a linearly polarized pump laser), providing only 4-fold rotational symmetry, rather than the fully isotropic light source assumed in the local realist models. The local realist models therefore arguably fail for this PDC light, and indeed, simulations show pathology when side A's measurement angle is set to an odd multiple of π/4 (bisecting the H/V basis) while side B's angle is scanned. The Weihs experiment, however, displays clean 8-fold rotational symmetry. Ironically, quantum mechanics cannot even formulate this objection, because the emitted photons are described by the singlet state, which is fully rotationally invariant and represents all that can be known or said about the photons. That doesn't get local realists off the hook, however, because a plausible local realist account must address this important line of analysis. Some progress has been made. A paper in progress by this author shows that the local realist account described here functions for reduced rotational symmetries.
For example, the pump laser intensity can be calibrated such that the pair number statistics of the PDC light contain 1- and 2-pair events. 2-pair events contribute an additional component of rotational symmetry at π/4, and a computer simulation shows that 8-fold symmetry results. The Weihs scanblue data reports only results at 8-fold angles (for one side while the other is scanned). Importantly, the simulation with 8-fold symmetry shows the same dependence on the detector thresholds as the fully isotropic model, so our identification of the smoking gun in the Weihs experiment demonstrated for the isotropic model remains valid for models with reduced symmetries. The Weihs experiment is again incomplete due to its failure to report on the rotational spectrum of the source pairs and their light pulse (photon) number statistics. The study of rotational symmetries created by different photon pair creation statistics and other sources of rotational symmetry is an important further direction for study of local realist models like the one presented here, and recognition of these mechanisms obliges us to perform proper tomography of the light source with sufficient rotational granularity to properly interpret the experiments. Another potentially interesting area to explore is the role of parasitic optical effects that might affect observable symmetries, such as flare (Figure 9), glare, internal reflections, etc. We know essentially nothing about any of these things in the Weihs experiment, and so the local realist account presented here simply marginalizes them away by assuming a fully isotropic light source distribution. Finally, the argument of this paper is strengthened and complemented by recent important foundational work of De Raedt et al 12. Because QM and classical local realism are not in conflict, as shown in this paper, the applicability of QM as an optimal form of inference can apply equally to classical situations, because those situations also require robust logical inference under uncertainty, as described by De Raedt et al. A person modeling or explaining a physical effect has the discretion to apply QM and/or classical representations in combination as needed. Due to the axiomatic consistency we have shown, one can place the Heisenberg cuts wherever they are needed. There are no paradoxes, mysteries, or difficulties there.

CONCLUSION

Five important themes have been manifested:

1. A plausible local account of the Weihs experiment is presented and demonstrated with a computer simulation.

2. Failure of rotational invariance is shown not to be a necessary outcome of unfair sampling. The Weihs experiment shows clear rotational invariance, and no plausible model had previously duplicated that. The model described here does so and identifies a 'smoking gun' in the experiment.
Influence of the degree of ionization and molecular mass of weak polyelectrolytes on charging and stability behavior of oppositely charged colloidal particles. Positively charged amidine latex particles are studied in the presence of poly(acrylic acid) (PAA) with different molecular masses under neutral and acidic conditions by electrophoresis and time-resolved dynamic light scattering. Under neutral conditions, where PAA is highly charged, the system is governed by the charge reversal induced by the quantitatively adsorbing polyelectrolyte and attractive patch-charge interactions. Under acidic conditions, where PAA is more weakly charged, the following two effects come into play. First, the lateral structure of the adsorbed layers becomes more homogeneous, which weakens the attractive patch-charge interactions. Second, polyelectrolyte adsorption is no longer quantitative and partitioning into the solution phase is observed, especially for PAA of low molecular mass.
Executive Editor Introduction

This issue of JAMLS on cultural policy in Australia, under the supervision of guest editor Jo Caust of the University of South Australia, represents another distinguished contribution to new understanding of international perspectives on public support for the arts and culture. Caust has assembled articles by a distinguished array of Australian analysts who provide a diverse range of foci and views that illustrate the varied issues involved in cultural policymaking. Some of these concerns will appear familiar to an American reader, for example, the increasing diminution of public support; others, such as the concerns for a distinct cultural identity and protection of national media industries, lack a comparable salience in the American system of cultural patronage. Perhaps most interesting in several of these essays is the eagerness of the contributors to argue for the intrinsic value of culture as distinct from its utilitarian value. In their article "Nationalism and Art in Australia: Change in a Time of Conservatism, 1948-1968," Katya Johanson and Ruth Rentschler remind us of how recent and successful Australia's development of a distinct and distinctive national culture has been. Essentially, Australia transcended both cultural isolation, because of its geographic remoteness, and a cultural inferiority complex, because of the international prestige of British and American artists and arts institutions. By valorizing its settler arts as well as a long-suppressed aboriginal culture, Australia represents a successful example of postcolonialism with the creation of a unique identity. Jock Given, in his contribution, "From Wellington to Washington: Australia's Bilateral Trade Agreements and Cultural Policy," provides a regional example
Microbial acetone oxidation in coastal seawater

Acetone is an important oxygenated volatile organic compound (OVOC) in the troposphere, where it influences the oxidizing capacity of the atmosphere. However, the air-sea flux is not well quantified, in part due to a lack of knowledge regarding which processes control oceanic concentrations, and, specifically, whether microbial oxidation to CO2 represents a significant loss process. We demonstrate that 14C labeled acetone can be used to determine microbial oxidation to 14CO2. Linear microbial rates of acetone oxidation to CO2 were observed for between 0.75 and 3.5 h at a seasonally eutrophic coastal station located in the western English Channel (L4). A kinetic experiment in summer at station L4 gave a Vmax of 4.1 pmol L-1 h-1, with a Km constant of 54 pM. We then used this technique to obtain microbial acetone loss rates ranging between 1.2 and 42 pmol L-1 h-1 (monthly averages) over an annual cycle at L4, with maximum rates observed during winter months. The biological turnover time of acetone (in situ concentration divided by microbial oxidation rate) in surface waters varied from ~3 days in February 2011, when in situ concentrations were 3 ± 1 nM, to >240 days in June 2011, when concentrations were more than twofold higher at 7.5 ± 0.7 nM. These relatively low marine microbial acetone oxidation rates, when normalized to in situ concentrations, suggest that marine microbes preferentially utilize other OVOCs such as methanol and acetaldehyde.

INTRODUCTION

Acetone is a ubiquitous oxygenated volatile organic compound (OVOC) in the troposphere, and is thought to play an important role in the chemistry of the atmosphere by sequestering nitrogen oxides and by providing HOx radicals through photolysis, thus influencing the oxidizing capacity and ozone formation. The composition of OVOCs in the troposphere and lower stratosphere is dominated by acetone, acetaldehyde, and methanol, e.g., Read et al.
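The numbers quoted above can be cross-checked directly. Turnover time is defined as in situ concentration divided by oxidation rate, and, assuming the reported Vmax and Km follow the standard Michaelis-Menten form (an assumption; the text does not name the model explicitly), the rate at any substrate concentration follows too. A quick sketch:

```python
def turnover_days(conc_nM, rate_pmol_per_L_per_h):
    """Biological turnover time (days) = in situ concentration / oxidation
    rate. nM -> pM (x1000) so units cancel against pmol L-1 h-1, then
    hours -> days."""
    return (conc_nM * 1000.0) / rate_pmol_per_L_per_h / 24.0

def mm_rate(s_pM, vmax=4.1, km=54.0):
    """Assumed Michaelis-Menten rate (pmol L-1 h-1) from the reported
    summer kinetic parameters: Vmax = 4.1 pmol L-1 h-1, Km = 54 pM."""
    return vmax * s_pM / (km + s_pM)

# February 2011: ~3 nM in situ at the winter-maximum rate of 42 pmol L-1 h-1,
# pairing the extremes as the text implies
feb = turnover_days(3.0, 42.0)   # ~3 days
# June 2011: 7.5 nM at the annual-minimum rate of 1.2 pmol L-1 h-1
jun = turnover_days(7.5, 1.2)    # ~260 days, i.e. >240 days
half = mm_rate(54.0)             # Vmax/2 at S = Km, by construction
```

The February and June figures reproduce the ~3 day and >240 day turnover times quoted in the abstract, and the Michaelis-Menten form also shows the system near saturation at the June in situ concentration (7.5 nM = 7500 pM >> Km).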
Total global sources of acetone range between 37 and 95 million tons per year (Singh et al., 2001, 2004). Primary terrestrial emissions, e.g., from pasture and forest, and secondary anthropogenic sources (including biogenic propane oxidation) account for approximately half of known acetone sources. The oceans are thought to play a major role in controlling atmospheric acetone levels, although whether the oceans currently act as a net source or sink to the atmosphere is not clear. However, recent data suggest that the North and South oligotrophic gyres of the Atlantic Ocean are a source of acetone to the atmosphere, whilst near air-sea equilibrium conditions dominate over equatorial waters, and temperate open ocean regions (high northern and southern latitudes) show a flux from the atmosphere to the oceans. Acetone is thought to be produced photochemically in seawater from chromophoric dissolved organic matter (Mopper and Stahovec, 1986), with strong diurnal variability (Zhou and Mopper, 1997). Acetone production due to photochemical processes was recently estimated at 48-100% of gross production for remote Atlantic Ocean surface waters. Biological production of substantial amounts of acetone (up to 8.7 mM) by cultured marine Vibrio species during degradation of leucine has also been reported. Acetone is also an intermediate in the metabolism of propane, and is converted, via acetol, to either acetaldehyde (+formaldehyde), acetic acid (+formaldehyde), or ultimately to pyruvic acid by a number of bacteria such as Rhodococcus and Mycobacterium. As both of these genera are widespread in terrestrial and marine environments (Hartmans and de Bont, 1986), biological production of acetone is considered likely, in agreement with recent marine incubation experiments. Acetone losses in seawater are less well understood.
Previous bacterial culture experiments have shown microbial uptake of acetone (Sluis and Ensign, 1997), with insignificant losses due to direct photolysis in fresh and riverine waters. Loss of acetone in seawater samples from a coastal station in the Pacific Ocean (33.6°N, 118°W) has recently suggested a short half-life of 5.8 ± 2.4 h with significant diurnal and seasonal variability (higher loss rates observed during winter and earlier in the day). However, this contrasts with estimates from surface open ocean Atlantic waters, where a comparison of in situ acetone concentrations with microbial oxidation rates from incubation experiments suggests much longer biological lifetimes ranging between 3 and 82 days. Acetone oxidation rates have been shown to correlate positively and linearly with bacterial production, and an inverse linear relationship has also been observed between acetone seawater concentrations and bacterial production. Thus, despite relatively low microbial acetone oxidation rates compared to other OVOCs like methanol and acetaldehyde (Dixon et al., 2011a,b, 2013a; Dixon and Nightingale, 2012), these relationships suggest that as bacterial production increases, so does the rate of microbial acetone oxidation, leading to a reduction in the in situ concentration of acetone. The aim of this study was to make a comprehensive assessment of the range and significance of microbial acetone oxidation rates over an annual cycle at a coastal observatory situated in the western English Channel.

MATERIALS AND METHODS

We have used a radiochemical technique with pico-molar additions of 14C labeled acetone (14CH3CO14CH3) to seawater to determine the microbial transformation (oxidation) of acetone to carbon dioxide, in a similar approach to that of Dixon et al. (2011a) for 14C labeled methanol.
SAMPLE COLLECTION Surface water samples (≤10 m) were collected from a long-term monitoring station called L4, situated approximately 10 nautical miles south-west of Plymouth (50.3°N, 04.22°W, water depth ∼55 m). Samples were pumped directly into acid-washed quartz Duran bottles and stored in the dark for the 2-3 h transit back to the laboratory. Labeled ¹⁴C acetone was purchased from American Radiolabeled Chemicals, Inc. with a specific activity of 30 Ci mmol⁻¹ (ARC0469, neat liquid in a sealed ampoule). Primary stocks were made by diluting 1 mCi into 40 ml of 18 MΩ Milli-Q water (0.025 mCi mL⁻¹) and were stored in gas-tight amber vials in the dark at 4 °C. Stability and storage trials suggested a loss in activity of <5% over 12 months. Addition volumes of ¹⁴C acetone to seawater samples were always <1% of the sample volume, and typically ≤5% of the label was used during incubations ≤3.5 h. TIME COURSE EXPERIMENTS Time course experiments were initially carried out to determine the period of linear incorporation of the ¹⁴C label. Labeled (¹⁴C) acetone was added to seawater samples to yield final concentrations of 40-90 pM (2700-6100 disintegrations per minute mL⁻¹), depending on the experiment (Figure 1). Samples were incubated in acid-washed polycarbonate bottles in the dark for between <1 and 6.5 h at in situ sea surface temperature. At selected times, triplicate sub-samples were taken to assess microbial oxidation to ¹⁴CO₂. Oxidation of ¹⁴C-labeled acetone to ¹⁴CO₂ was determined by pipetting 1 ml samples into 2 ml microcentrifuge tubes and adding 0.5 ml of SrCl₂·6H₂O (1 M), to precipitate the ¹⁴CO₂ as Sr¹⁴CO₃, 20 µl of NaOH (1 M), to neutralize the HCl produced, and 100 µl of Na₂CO₃ (1 M), to ensure adequate pellet formation ().
After centrifugation the supernatant was aspirated, and the pellet was washed twice with ethanol (80%) and resuspended in 1 ml of concentrated NaOH solution (∼10 nM) that had been adjusted to a pH of 11.7, before addition of Optiphase HiSafe III to create a slurry. The samples were vortex-mixed and stored in the dark for >24 h before being analyzed on a scintillation counter (Tricarb 3100 or 2910, Perkin Elmer). This period ensures that any chemiluminescence arising from interactions between NaOH and Optiphase scintillant subsides (Kiene and Hoffmann Williams, 1998). KINETIC DETERMINATIONS The kinetics of microbial acetone oxidation were investigated at L4 during February and June 2011 using 1.0 ml surface seawater samples. Surface samples received an addition of ¹⁴C-labeled acetone, and a series of tubes for microbial oxidation were treated to yield a range of ¹⁴C concentrations between 2 and 47 nM (∼2.5% of added ¹⁴C acetone was oxidized) during February and between 6 and 1006 pM (1.4-5.5% of added ¹⁴C acetone was oxidized) during June 2011. Samples were incubated in screw-topped, O-ring-sealed micro tubes in the dark at in situ temperature. Three replicates from each acetone concentration were processed, as detailed above, after an approximately 1 h incubation period. ACETONE OXIDATION RATES Triplicate seawater samples (1 ml) were amended with ¹⁴C-labeled acetone as detailed previously. Microbial acetone oxidation rates (pmol L⁻¹ h⁻¹) were calculated by dividing the sample counts (nCi mL⁻¹ h⁻¹, where 1 Ci = 3.7 × 10¹⁰ Bq) by the specific activity of ¹⁴C acetone (30 Ci mmol⁻¹). All rates were corrected by subtracting killed-sample counts (trichloroacetic acid, TCA, 5% final concentration) to correct for non-biological processes. TCA is regularly used for killed controls, e.g., when measuring bacterial production indirectly via ³H-leucine incorporation (Smith and Azam, 1992), and does not lyse cells.
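As a check on the unit bookkeeping behind the rate calculation, note that dimensional consistency requires dividing the activity-based counts by the specific activity (nCi mL⁻¹ h⁻¹ divided by Ci mmol⁻¹ yields mmol mL⁻¹ h⁻¹). A minimal sketch, using the specific activity from the text and one of the time-course activity rates as an illustrative input:

```python
# Convert a 14C-acetone oxidation rate measured as activity into a molar rate.
# The specific activity (30 Ci/mmol) is from the text; the input value is
# illustrative, and no killed-control correction or dilution scaling is applied.

SPECIFIC_ACTIVITY = 30.0  # Ci per mmol of 14C acetone


def activity_to_molar_rate(pci_per_ml_per_h: float) -> float:
    """Convert pCi mL^-1 h^-1 to pmol (labeled acetone) L^-1 h^-1."""
    ci_per_ml_per_h = pci_per_ml_per_h * 1e-12            # pCi -> Ci
    mmol_per_ml_per_h = ci_per_ml_per_h / SPECIFIC_ACTIVITY
    mol_per_l_per_h = mmol_per_ml_per_h * 1e-3 * 1e3      # mmol -> mol, mL -> L
    return mol_per_l_per_h / 1e-12                        # mol -> pmol


# Highest time-course activity rate quoted in the text: 9.5 pCi mL^-1 h^-1
print(round(activity_to_molar_rate(9.5), 3))  # -> 0.317
```

The in situ rates reported later in the paper additionally account for the killed-control subtraction and the relationship between the added label and the ambient acetone pool, which this unit-conversion sketch deliberately omits.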
SEAWATER ACETONE CONCENTRATIONS Surface seawater was collected in Niskin bottles and transferred into brown glass sample bottles with gas-tight stoppers using Tygon™ tubing. Acetone concentrations were determined using a membrane inlet system coupled to a proton transfer reaction mass spectrometer (). BACTERIAL PRODUCTION, CHLOROPHYLL A CONCENTRATION, AND COMMUNITY COMPOSITION Rates of bacterial protein production (BP) and the numbers of heterotrophic bacteria, Synechococcus spp., and picoeukaryotes were also determined to investigate any trends. BP was determined by measuring the incorporation of ³H-leucine (20 nM final concentration) into bacterial protein in 1.7 ml seawater samples, following the method of Smith and Azam (1992). The numbers of bacterioplankton cells were determined by flow cytometry on SYBR Green I DNA-stained cells from 1.8 ml seawater samples fixed in paraformaldehyde (0.5-1%, final concentration), flash-frozen in liquid nitrogen immediately after fixation, and stored frozen at −80 °C (). Numbers of Synechococcus spp. and picoeukaryotes were analyzed on unstained samples by flow cytometry (). Chlorophyll a concentrations were determined by fluorometric analysis of acetone-extracted pigments (). LINEAR TIME COURSE EXPERIMENTS When pico-molar concentrations of ¹⁴C-labeled acetone were added to surface waters from station L4, radioactive carbon was expired as ¹⁴CO₂ (Figure 1), suggesting that acetone was used as a microbial energy source. At this coastal station, acetone oxidation was linear for up to ∼3.5 h, after which between 1 and 3.6% of the added label had been oxidized to ¹⁴CO₂. Microbial acetone oxidation rates were highest in December 2011 (9.5 pCi mL⁻¹ h⁻¹, R² = 0.997, n = 4) and lowest during July 2011 (2.5 pCi mL⁻¹ h⁻¹, R² = 0.999, n = 4).
UPTAKE KINETICS The microbial oxidation of ¹⁴C-labeled acetone displayed non-saturation-type kinetics for nano-molar additions of acetone between 2 and 47 nmol L⁻¹ during February 2011 (Figure 2A), which, when plotted as a modified Lineweaver-Burk plot (Figure 2C), showed that a constant fraction of added label (f = 0.025 ± 0.001) had been oxidized to CO₂, irrespective of the initial addition concentration. Pico-molar ¹⁴C-acetone additions (6-1006 pmol L⁻¹) were made in the following June, which resulted in saturation kinetics (Figure 2B), where the fraction of acetone oxidized decreased from 5.5 to 1.4% with increasing addition concentrations (Figure 2C). Saturation kinetics displayed during June 2011 allowed the first estimates of Vmax and Km to be determined from an Eadie-Hofstee plot (Figure 2D): 4.1 pmol L⁻¹ h⁻¹ and 54 pmol L⁻¹, respectively, for surface coastal waters of station L4. SURFACE SEASONAL TRENDS IN MICROBIAL ACETONE OXIDATION The average monthly rates of microbial oxidation of acetone in surface waters at station L4 varied between 1.2 and 42 pmol L⁻¹ h⁻¹ (Figure 3B) and showed significant changes with season. Oxidation rates were highest during winter (January and February 2011) at 36.2 ± 8.7 pmol L⁻¹ h⁻¹ and were 15-fold lower during the summer (June, July, and August 2011) at 2.4 ± 1.7 pmol L⁻¹ h⁻¹, with intermediate spring (March, April, May) and autumn (September, October, November) rates averaging 7.5 ± 4.0 and 4.5 ± 0.4 pmol L⁻¹ h⁻¹, respectively. When in situ seawater acetone concentrations are divided by microbial oxidation rates, biological turnover times are estimated, ranging between just over 3 days in February and ∼243 days in June during 2011 (Figure 3C). This suggests a clear seasonal trend of longer microbial turnover times in spring and summer months compared to autumn and winter.
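The turnover calculation described above (in situ concentration divided by oxidation rate) can be reproduced with simple arithmetic. A sketch using representative winter and summer values quoted in the text; the only addition here is the unit handling:

```python
# Biological turnover time = in situ acetone concentration / microbial oxidation rate.
# Values below are taken from the text: winter ~3.4 nM at 36.2 pmol L^-1 h^-1,
# summer ~7.5 nM at 1.2 pmol L^-1 h^-1.

def turnover_days(conc_nmol_per_l: float, rate_pmol_per_l_per_h: float) -> float:
    hours = (conc_nmol_per_l * 1e3) / rate_pmol_per_l_per_h  # nmol -> pmol
    return hours / 24.0


print(round(turnover_days(3.4, 36.2), 1))  # winter: ~3.9 days ("just over 3 days")
print(round(turnover_days(7.5, 1.2), 0))   # summer: ~260 days (cf. ~243 days in June)
```

The small differences from the reported 3.2- and 243-day figures simply reflect that the paper pairs monthly concentrations and rates rather than the seasonal averages used here.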
Corresponding monthly averaged changes in low nucleic acid (LNA) containing bacteria are also shown in Figure 3C, ranging between 0.44 and 3.9 × 10⁵ cells mL⁻¹, which show an opposite trend to microbial acetone turnover times (r = −0.589, n = 16, P < 0.02). Sea surface temperature at station L4 varied between 8.5 and 16.4 °C, with typical low chlorophyll a values of ∼0.4 µg L⁻¹ during winter months rising fourfold to 1.6 µg L⁻¹ in July 2011 (Figure 3A). Additionally, average monthly numbers of high nucleic acid containing bacteria (1.3-5.8 × 10⁵ cells mL⁻¹), Synechococcus sp. (0.7-36 × 10³ cells mL⁻¹), pico- (0.6-16 × 10³ cells mL⁻¹) and nano-phytoplankton (0.2-1.5 × 10³ cells mL⁻¹) cells, and bacterial leucine incorporation rates (8-96 pmol leucine L⁻¹ h⁻¹) are summarized in Table 1. DEPTH VARIABILITY IN MICROBIAL ACETONE OXIDATION The variability of microbial acetone oxidation rates with depth at the relatively shallow (∼55 m) coastal station L4 was investigated during June 2011, when surface rates were at their lowest but the water column was seasonally stratified (see Figure 4). Microbial acetone oxidation rates were lowest (0.78 ± 0.02 pmol L⁻¹ h⁻¹) in the shallow surface layer (<10 m), which showed enhanced surface warming and relatively lower salinity. Rates were, on average, more than 30% higher at greater depths (average of 1.07 ± 0.04 pmol L⁻¹ h⁻¹). DISCUSSION This study shows that ¹⁴C-labeled acetone can be used successfully to determine microbial oxidation rates (to ¹⁴CO₂) in seawater samples. We report the first estimates of Vmax (4.1 pmol L⁻¹ h⁻¹) and Km (54 pmol L⁻¹) for surface coastal waters during summer, when in situ surface oxidation rates were at their lowest (1.2 ± 0.39 pmol L⁻¹ h⁻¹, Figure 3B), despite relatively high average in situ acetone concentrations of 7.5 ± 0.7 nmol L⁻¹.
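The Eadie-Hofstee estimation used for the Vmax and Km values quoted above rests on the fact that Michaelis-Menten kinetics, v = Vmax·S/(Km + S), become linear when v is plotted against v/S, with intercept Vmax and slope −Km. A sketch of the method on synthetic data generated from the reported parameter values (not the study's actual measurements):

```python
# Eadie-Hofstee: for Michaelis-Menten kinetics v = Vmax*S/(Km + S),
# regressing v on v/S gives intercept = Vmax and slope = -Km.
# Synthetic rates are generated from the Vmax/Km values reported in the text.

VMAX, KM = 4.1, 54.0  # pmol L^-1 h^-1 and pmol L^-1 (reported estimates)

S = [6.0, 25.0, 100.0, 250.0, 500.0, 1006.0]   # pico-molar substrate additions
v = [VMAX * s / (KM + s) for s in S]           # Michaelis-Menten rates

x = [vi / si for vi, si in zip(v, S)]          # v/S values
# Ordinary least-squares fit of v = intercept + slope * (v/S)
n = len(S)
mx, my = sum(x) / n, sum(v) / n
slope = sum((xi - mx) * (vi - my) for xi, vi in zip(x, v)) / sum(
    (xi - mx) ** 2 for xi in x
)
intercept = my - slope * mx

print(round(intercept, 1), round(-slope, 1))  # recovers Vmax = 4.1, Km = 54.0
```

With noise-free synthetic data the fit recovers the parameters exactly; on real incubation data the same regression yields the experimental estimates.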
When nano-molar (2-47 nM) ¹⁴C acetone additions were made during winter months, first-order kinetics were observed, but Figure 2C shows that a constant fraction of added label was oxidized to CO₂, suggesting that any microbial enzyme systems involved in the conversion of acetone to CO₂ were saturated. Pico-molar additions made during the summer, when acetone concentrations had more than doubled, showed first-order reaction kinetics for approximately <100 pM acetone additions (Figure 2B). Both sets of data combined in a modified Lineweaver-Burk plot (Figure 2C, which assumes that if pico-molar additions had been made during winter, similar first-order kinetics to summer would be observed) suggest in situ enzyme system saturation at 1-2 nM for mixed natural communities. Although the microbial composition of surface waters at L4 is highly likely to be different between the two seasons (e.g., Gilbert et al., 2012), it is unknown which microbes actively respire acetone to CO₂. However, it is noteworthy that seasonal changes in bacterial community structure have been linked to changes in day length () and other environmental variables (e.g., temperature) rather than trophic interactions. The microbial acetone oxidation kinetics observed during February for nano-molar additions do not show rate limitation with increasing substrate concentration, and thus do not comply with Michaelis-Menten kinetics (Wright and Hobbie, 1966), which could indicate no active microbial enzyme transport systems for acetone oxidation. These authors also showed that the slope of such a linear relationship between uptake rates and added substrate concentration (as in Figure 2A) was identical to the kinetics of simple diffusion. In addition, when samples were killed with TCA (5% final concentration), acetone oxidation did not increase over time, suggesting that, despite a possible lack of active transport systems, the uptake was nevertheless due to microbial metabolic activity.
Wright and Hobbie suggested that at very low concentrations of added substrate, most glucose was incorporated using active bacterial transport systems, while at higher concentrations diffusion across algal cells dominated. Our results suggest that when pico-molar additions were made (June 2011), active transport systems dominated, with a resultant mixed-community Vmax of 4.1 pmol L⁻¹ h⁻¹ and a Km of 54 pmol L⁻¹. However, when nano-molar additions were made (February 2011), non-saturation kinetics were observed, with possible diffusion across cell walls dominating (cf. methanol, Dixon et al., 2011a). Acetone oxidation by natural marine microbial communities could also be due to mixotrophic and heterotrophic phytoplankton in addition to heterotrophic bacteria. For rates of microbial acetone oxidation during February, which increased linearly with substrate concentration (y = 0.031x − 0.003, n = 9, R² = 0.999 for a 1.7 h incubation period, Figure 2A), a diffusion constant (Kd) can be calculated from the slope of the linear relationship (Wright and Hobbie, 1965). This constant assumes that organisms oxidize the acetone as rapidly as it diffuses in (Wright and Hobbie, 1965). A Kd of 0.031 h⁻¹ is equivalent to a turnover time of ∼1.4 days (Wright and Hobbie, 1965), which is comparable to the average estimate of 3.2 days for February 2011 determined in Figure 3C. This also compares well with the turnover of other organic compounds like DMS (e.g., 0.3-2.1 days) and methanol (e.g., 7 days in productive shelf waters). Despite the faster (i.e., hours) estimated acetone turnover times of de Bruyn et al., they also reported higher loss rates during the winter compared to other times of the year. However, the acetone turnover times reported by de Bruyn et al.
originate from riverine and very near-shore coastal environments (average salinity of 25.8 ± 2.1) that experience much less seasonal variability (average surface temperature of 17.5 ± 1.2 °C) and higher average in situ acetone concentrations (59 ± 56 nM) compared to L4 waters (average salinity of 35.2 ± 0.1, average surface temperature of 12.5 ± 2.8 °C, average surface acetone concentrations of 5.6 ± 2.3 nM). FIGURE 3 | Monthly variability in surface waters at station L4 for (A) chlorophyll a (bars) and sea surface temperature, (B) acetone oxidation rates (bars) and in situ seawater acetone concentrations, and (C) resulting microbial turnover times (bars) with corresponding changes in the numbers of low nucleic acid (LNA) containing bacteria (there is a significant linear correlation between the microbial turnover time of acetone and the numbers of LNA bacteria, r = −0.589, n = 16, P < 0.02). The error bars represent ±1 standard deviation based on three replicates. Furthermore, de Bruyn et al. report higher acetone loss rates after rain events, which could suggest faster microbial removal associated with less saline waters, although this is not reflected in Figure 4. Acetone production in seawater is largely thought to be a photochemical process (Zhou and Mopper, 1997; de Bruyn et al.), possibly related to UV breakdown of chromophoric dissolved organic matter (CDOM) originating from eukaryotic cells (). Given the relatively high microbial acetone oxidation rates found during January/February 2011 (in this study and in de Bruyn et al.), with turnover times estimated at 1.4-3.2 days, it is not presently understood what process maintains acetone levels during winter months, when average acetone concentrations are 3.4 ± 1.1 nM. Typically, during winter at L4, UV levels and phytoplankton biomass are relatively low ().
However, the water column is fully mixed and more influenced by riverine waters, i.e., maximum river flows and re-suspension events of bottom sediments (). Thus, during these periods it is probable that the dissolved organic matter is dominated by terrestrial sources and re-suspended sediments rather than phytoplankton. Relationships between microbial oxidation and turnover of acetone and other biogeochemical variables (see Table 1) have been explored, and reveal statistically significant negative linear relationships between acetone oxidation rates and both sea surface temperature and the concentration of chlorophyll a (r = −0.604 and −0.543, respectively, for n = 21, P ≤ 0.02). This is largely because the highest acetone oxidation rates were found during winter, when sea surface temperatures and phytoplankton biomass were at their minima. A statistically significant inverse relationship was also found between biological acetone turnover times and the numbers of low nucleic acid bacteria (LNA, r = −0.589, n = 16, P < 0.02). As previously noted, we do not know which marine microbes are capable of utilizing acetone, or the enzyme system(s) involved in the conversion of acetone to CO₂, but this relationship indicates that low nucleic acid containing bacteria could be responsible for marine acetone consumption in surface coastal waters. SAR11 Alphaproteobacteria are often significant components of the LNA fraction () and are the most abundant heterotrophs in the oceans. SAR11 cells are believed to play a major role in mineralizing dissolved organic carbon (), being efficient competitors for resources (). In culture, Sun et al. found that Candidatus Pelagibacter ubique (a subgroup of SAR11) has genome-encoded pathways for the oxidation of a variety of one-carbon compounds, including the OVOC compound methanol.
We found that the SAR11 clade was the second most numerically dominant bacterial order in surface bacterial populations at station L4 during the annual sampling period 2011-2012, contributing between 16 and 46% during winter months. Alphaproteobacteria were also the most abundant bacterial class found at station L4 over a 6-year study (). This study further reported that members of the Rickettsiales (SAR11) and Rhodobacterales were the most frequently recorded operational taxonomic units, with the abundance of Rickettsiales reaching a maximum in winter (), coincident with the relatively fast acetone turnover times of ∼3 days found in this study. The acetone biological turnover times determined here should be considered conservative, because it is possible that some heterotrophic bacteria also assimilate acetone carbon into particulate carbon biomass (cf. methanol, Dixon et al., 2013b). Furthermore, microbial acetone uptake that gets transformed and excreted as more refractory DOC compounds (as in the microbial carbon pump, e.g., Jiao and Azam, 2011), possibly via some overflow metabolism strategies as previously suggested for methanol (), will also not be revealed via the experimental approach of this study. Coastal surface water microbial acetone oxidation rates have been normalized to in situ concentration as a function of season, and are compared to other biologically utilized OVOC compounds (acetaldehyde and methanol) in Table 2. Acetone is a less preferred organic compound for marine microbes compared to methanol and acetaldehyde, although acetone oxidation rates show a much more pronounced seasonality. In addition, the one depth profile undertaken during summer suggests a near-surface reduction in microbial acetone oxidation rates associated with a less saline, warmer tongue of water in the top 10 m.
The kinetic characteristics of microbial acetone oxidation can be compared to those of other substrates commonly used by bacteria, so that the ecological significance of acetone to marine microbial metabolism can be evaluated. Both Vmax and Km are more than 2 orders of magnitude smaller for acetone oxidation compared to methanol oxidation (Dixon et al., 2011a), which, if compared further with proteins and carbohydrates, gives the following order: proteins >> carbohydrates ≈ methanol >> acetone (refer to Dixon et al., 2011a for protein, carbohydrate, and methanol Vmax and Km data). This research offers the first comprehensive, seasonally resolved study combining microbial acetone oxidation rates with in situ concentrations in order to derive biological turnover times, which ranged between ∼3 days in winter and >240 days in summer. We have experimentally derived the first Vmax and Km estimates of microbial acetone oxidation. We have also highlighted that there must be an unrecognized production mechanism for acetone during winter in coastal regions, possibly relating in some way to enhanced dissolved organic matter from terrestrial sources. Further research should investigate possible winter acetone production mechanisms, identify which microbial species are utilizing acetone in marine environments, and characterize what enzyme systems are involved in the oxidation process. ACKNOWLEDGMENTS We wish to thank Denise Cummings for chlorophyll a analysis at L4, which is provided by the Plymouth Marine Laboratory Western Channel Observatory (www.westernchannelobservatory.org.uk) and is funded by NERC national capability. This work was funded by OCEANS 2025, the Plymouth Marine Laboratory NERC-funded core research programme.
Rapport Management during the Exploration Phase of the Salesperson-Customer Relationship Trust in the salesperson is one of the primary antecedents of customer satisfaction. However, trust is a function of time and is virtually nonexistent during the exploration phase of the buyer-seller relationship. The link between trust and conflict within the sales context has a long history. During the exploration phase of the relationship, buying objections are obvious sources of conflict between sales representatives and prospective customers. Success in managing rapport during such conflict means the sales representative moves the relationship forward; failure undermines the future relationship. Our goal in this paper is to focus specifically on the critical role of the sociolinguistic behaviors described by the theory of rapport management in allowing sales representatives to move beyond the exploration phase of relationships while overcoming customer objections. The result is a simple yet powerful basis for sales training and a theoretically motivated basis for future personal selling research.
Dynamic control of robot perception using multi-property inference grids An approach to dynamic planning and control of the perceptual activities of an autonomous mobile robot equipped with multiple sensor systems is considered. The robot is conceptually seen as an experimenter. The author discusses the explicit characterization of task-specific information requirements, the use of stochastic sensor models to determine the utility of sensory actions and perform sensor selection, and the application of information-theoretic models to measure the extent, accuracy, and complexity of the robot's world model. It is shown how the loci of interest of relevant information and the corresponding loci of observation can be computed, allowing the robot to servo on the information required to solve a given task. The use of these models is outlined in the development of strategies for perception control, and in the integration of perception and locomotion. Some illustrations of the methodology are provided.
The influence of acrylic cement on the femur of the dog. A histological study. Using dogs as experimental animals, polymerizing methylmethacrylate was inserted into the marrow cavity of the femur. The influence on bone over a period of 21 months was studied by means of histological techniques and microradiography. To distinguish the effect of the methacrylate proper from the circulatory disturbance resulting from the operation, control experiments were performed in which the marrow cavity was emptied but no acrylic cement was inserted. Polymerization of the methacrylate in vivo resulted in a local rise in temperature to about 58 °C. In the femurs containing the acrylic cement a consistent picture developed, consisting of: a) necrosis and removal of the central part of the cortex, b) apposition of a thick layer of bone on the outer surface of the cortex, and c) deposition of a cylindrical bone sleeve in contact with the methacrylate. In the control experiments only minimal resorption at the inside surface of the cortex and the deposition of a thin layer of bone at the outside of the cortex were observed. It is concluded, inter alia, that circulatory disturbance contributes only slightly to the total reaction of bone to the insertion of methacrylate.
Extension Delivery System in a Layer and Swine-Based Farming Community: The Case of San Jose, Batangas Public agricultural extension systems face increasing pressure resulting from diverse demands for services coupled with problems of declining funds. This study explores the role of a municipal agricultural extension system in the development of a robust agriculture-based municipality and its possible thrusts in helping achieve the country's bid for agricultural modernization. San Jose, Batangas is a layer- and swine-led economy. Sandwiched between Batangas and Lipa City, San Jose is easily accessible to investors and traders. The phenomenal growth of the layer and swine industry is largely attributed to increased local demand and the adoption of technological innovations in breeding and nutritional management practices encouraged by private suppliers of vaccines and feeds and by feed millers who provide extension and credit terms. With the active participation of private extension providers, the government extension system needs to improve its partnership with private and non-government extension providers and re-evaluate its financing scheme to address negative externalities and equity concerns. The lack of a common vision among the key players in the government and the absence of a knowledge management plan impair the impact of the government agricultural extension system. Partnerships and good management practices remain major areas for improvement.
Investigation of Hot-Carrier-Induced Degradation Mechanisms in p-Type High-Voltage Drain Extended Metal-Oxide-Semiconductor Transistors Hot-carrier-induced degradation in p-type drain extended metal-oxide-semiconductor (DEMOS) devices is investigated. The gate voltage biased at the second substrate current peak produces the most device degradation. The generation of interface states (Nit) in the channel region, Nit in the drift region under the poly-gate, and negative oxide-trapped charge (Not) in the drift region outside the poly-gate are responsible for device parameter degradation. Nit in the channel region causes threshold voltage and maximum transconductance degradation. Not in the drift region outside the poly-gate leads to an increase of the linear drain current (Idlin) at the beginning of stress. Nit in the drift region under the poly-gate results in the turnaround behavior of the |Idlin| shift as the stress time increases.
Emission and Sequestration of Carbon in Soil with Crop Residue Incorporation The decomposition rate of incorporated crop residues and its implications for CO2 emission, carbon mineralization, carbon density, and carbon sequestration in agricultural lands were studied during the period March to June in 2009 and 2010 under field conditions. The amount of CO2 evolved during decomposition of the various crop residues varied depending on the C:N ratio therein. The rate of CO2 emission was highest at 30 days after incorporation for horse gram residue and at 45 days for sesamum, niger, toria, and buckwheat residues, while rice and wheat residues showed maximum CO2 evolution at 60 days after incorporation; emission gradually declined subsequently up to 90 days for all crop residues. The percentage of carbon oxidized was maximum with rice residue (32.8%) followed by wheat (27.7%), sesamum (26.0%), niger (20.0%), toria (19.8%), buckwheat (17.9%), and horse gram (16.8%) residues. Horse gram residue increased carbon density by 31.2% over the control. Maximum carbon sequestered (1057 g m−2) was found with horse gram residue, followed by buckwheat, over the control. The maximum correlation coefficient between CO2 emission and days since incorporation was found with sesamum residue (r = 0.972), with rainfall for buckwheat (r = 0.826), with maximum atmospheric temperature for rice (r = 0.871), and with soil temperature for toria (r = 0.975) residue.
Sex Differences in the Survival of Patients Undergoing Maintenance Hemodialysis: A 10-year Outcome of the Q-Cohort Study Background: A survival advantage of women is observed in the general population. However, inconsistent findings have been reported regarding this advantage in patients undergoing maintenance hemodialysis. The aim of this study was to compare the risk of mortality, especially infection-related mortality, between male and female hemodialysis patients. Methods: A total of 3065 Japanese hemodialysis patients aged ≥18 years were followed up for 10 years. The primary outcomes were all-cause and infection-related mortality. The associations between sex and these outcomes were examined using Cox proportional hazards models. Results: During the median follow-up of 8.8 years, 1498 patients died of any cause, and 387 died of infection. Compared with men, the multivariable-adjusted HRs (95% CIs) for all-cause and infection-related mortality in women were 0.51 (0.45-0.58) and 0.36 (0.27-0.47), respectively. This association remained significant even when the propensity score-matching or inverse probability of treatment weighting adjustment methods were employed. Furthermore, even when non-infection-related mortality was considered a competing risk, the infection-related mortality rate in women was still significantly lower than that in men. Conclusions: A female survival advantage over men is observed in Japanese patients undergoing maintenance hemodialysis. The fully adjusted model included the following covariates: age, diabetic nephropathy, history of CVD, dialysis vintage, systolic blood pressure, body weight, nPCR, single-pool Kt/V for urea, blood hemoglobin, serum concentrations of urea, creatinine, total cholesterol, albumin, CRP, albumin-corrected serum Ca, phosphate, alkaline phosphatase, and PTH, dose of ESAs, and use of antihypertensive drugs, phosphate binders, and VDRAs.
The Fine & Gray model with non-infection-related deaths as a competing risk was used to account for competing risks. The PS-matching model was adjusted for body weight and serum creatinine. The IPTW model weighted patients by PS and adjusted for body weight and serum creatinine. Introduction Women have a longer life expectancy than men in the general population 1. The World Health Organization's analyses of global health statistics according to sex clearly show that women have better longevity prospects than men 2. The biological differences between men and women are, amongst others, related to genetic and physiological factors such as the progressive skewing of X chromosome inactivation 3, telomere attrition 4, mitochondrial inheritance 5, hormonal and cellular responses to stress 6, and immune function 7,8. These factors may partly explain the longer life expectancy of women. Regarding patients undergoing maintenance hemodialysis (HD), there have been conflicting data on the survival advantage of women over men. Some studies reported that men tend to be more susceptible than women to uremia and inflammation-induced anorexia 9. Furthermore, inflammatory and nutritional variables may deteriorate over time in men 10. Women with inflammation undergoing HD have lower mortality than men with inflammation undergoing HD 11. Conversely, other studies have indicated similar mortality between women and men undergoing HD 15,16. Additionally, several studies showed that infection-related mortality was higher in women than in men undergoing HD 12,13,14. Hecking et al. hypothesized that the general survival advantage of women over men may be nullified because of the high prevalence of HD catheter use and the resulting high infection-related mortality in women 15.
Considering that the prevalence of HD catheter use in Japan is relatively lower than that in other countries, it is reasonable to examine the female survival advantage in Japanese HD patients by focusing on infection-related mortality 17. The current study aimed to investigate whether there is a sex difference in the risk of mortality, especially infection-related mortality, among HD patients. For this, we analyzed the dataset of the Q-Cohort Study, a multicenter, observational cohort study of Japanese patients undergoing maintenance HD 18, using conventional Cox proportional hazards models and propensity score (PS)-based statistical analyses. Baseline characteristics of the patients stratified by sex The baseline characteristics of the patients stratified by sex are shown in Table 1. Women were significantly (P < 0.05) older and had a longer dialysis vintage, a higher frequency of diabetic nephropathy, and a lower frequency of CVD history. The cardiothoracic ratio, nPCR, single-pool Kt/V for urea, and serum concentrations of total cholesterol, albumin-corrected Ca, and alkaline phosphatase were significantly (P < 0.05) higher in women than in men. Conversely, body weight, blood hemoglobin level, serum concentrations of urea nitrogen, creatinine, and albumin, and the frequency of antihypertensive agent and VDRA use were lower in women than in men. The unadjusted incidence of all-cause mortality was significantly lower in women (P < 0.001) (Fig. 1A). Women had a lower risk of all-cause death than men after adjustment for the full set of variables: the HR (95% CI) was 0.51 (0.45-0.58), P < 0.001 (Table 2).
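As a quick reading aid for the hazard ratios quoted above (simple arithmetic, not a method from the study), an HR below 1 can be restated as a relative hazard reduction:

```python
# Express a hazard ratio (HR) and its 95% CI bounds as a percent hazard reduction.
# HR values below are those quoted in the text for all-cause mortality in women
# versus men (0.51, 95% CI 0.45-0.58).

def hazard_reduction_pct(hr: float) -> float:
    """Relative hazard reduction implied by a hazard ratio, in percent."""
    return round((1.0 - hr) * 100.0, 1)


print(hazard_reduction_pct(0.51))                       # point estimate: 49.0
print([hazard_reduction_pct(h) for h in (0.58, 0.45)])  # CI bounds: [42.0, 55.0]
```

So the adjusted HR of 0.51 corresponds to a roughly 49% lower hazard of all-cause death in women, with the confidence interval spanning roughly a 42% to 55% reduction.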
The fully adjusted model included the following covariates: age, presence of diabetic nephropathy, history of CVD, dialysis vintage, systolic blood pressure, body weight, cardiothoracic ratio, nPCR, single-pool Kt/V for urea, blood hemoglobin, serum concentrations of urea, creatinine, total cholesterol, albumin, CRP, albumin-corrected serum Ca, phosphate, alkaline phosphatase, and PTH, dose of ESAs, and use of antihypertensive drugs, phosphate binders, and VDRAs. The PS-matching model was adjusted for body weight and serum creatinine. The IPTW model weighted patients by PS and adjusted for body weight and serum creatinine. Next, we determined the association between sex and infection-related death. The unadjusted 10-year incidence rate of infection-related death was significantly lower in women than in men (P < 0.001) (Fig. 1B). Women had a lower risk of infection-related death than men after adjustment for the full set of variables: the HR (95% CI) was 0.36 (0.27-0.47) (Table 3). Furthermore, even when the competing events of non-infection-related deaths were considered, the infection-related mortality rate in women was significantly lower than that in men: the HR (95% CI) was 0.46 (0.35-0.60). The risk of all-cause and infection-related deaths analyzed by the PS-matching and IPTW adjustment methods The logistic regression model used in the PS analysis for all-cause and infection-related deaths showed high discriminatory power, with area under the receiver operating characteristic curve values of 0.86 and 0.84, respectively. The imbalances of baseline covariates in the pre-matching cohort were well balanced after adjustment with the PS-matching method (Supplementary data, Tables S1 and S2). Serum creatinine and body weight were not included in the creation of the PS; however, these two covariates are regarded as inherent characteristics of sex differences and were thus balanced across sex after applying the PS methodology.
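The discriminatory power reported above (0.86 and 0.84) is the area under the receiver operating characteristic curve, which for a scoring model equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A small rank-based sketch (function name ours, toy scores illustrative):

```python
def auc(scores_pos, scores_neg):
    """Rank-based AUC: fraction of (positive, negative) pairs in which
    the positive case scores higher; ties count 0.5. Equivalent to the
    normalized Mann-Whitney U statistic."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 means the PS model cannot distinguish the sexes at all, while values near 0.86 mean the covariates predict sex well, which is exactly the situation in which PS-based balancing is informative.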
Importantly, the survival advantage of women remained statistically significant even when the PS-matching and IPTW methods were employed (Tables 2 and 3). Subgroup IPTW analyses stratified by baseline clinical characteristics To assess whether the survival benefit of women is consistent across a variety of baseline clinical backgrounds, effect modification by subgroups stratified by potential confounders at baseline was examined using the IPTW method (Fig. 2 and Fig. 3). The association between female sex and a lower rate of all-cause death was enhanced in patients with diabetic nephropathy or higher serum creatinine or albumin concentrations. Also, the protective effect of being female against infection-related death tended to be attenuated in older patients, patients with a shorter dialysis vintage, patients with diabetic nephropathy or a history of CVD, patients with lower levels of blood hemoglobin, serum creatinine, or albumin, or patients with higher levels of serum total cholesterol. Discussion In the present study, by employing various statistical approaches, we clearly demonstrated a survival advantage of women over men for both all-cause and infection-related death in patients undergoing HD. Regarding all-cause mortality, the effect of being female was larger in patients with diabetic nephropathy or higher serum levels of creatinine or albumin. Moreover, in the subgroup analysis of infection-related mortality, the impact of being female was smaller in older patients and in patients with diabetic nephropathy, a history of CVD, lower blood hemoglobin, lower serum levels of creatinine or albumin, or higher serum total cholesterol. Taken together, our results suggest a potential survival benefit for female patients undergoing maintenance HD. The present study has provided evidence that women have a survival advantage during HD.
To the best of our knowledge, our study is the first to demonstrate that the female survival advantage is consistent for infection-related mortality in HD patients. This relationship remained statistically significant even after adjustment for potential confounding factors, PS-matching, or IPTW adjustment. Furthermore, when non-infection-related death was considered a competing risk, the infection-related mortality rate in women was significantly lower than that in men. As for all-cause death, a report from the Dialysis Outcomes and Practice Patterns Study (DOPPS) demonstrated that the HR (95% CI) of all-cause mortality in men (versus women) was 1.09 (1.04-1.14) after adjusting for age and time on dialysis 16, consistent with our current observations. Taken together, our data and previous reports strongly suggest that women have a survival advantage over men during maintenance HD. Several potential mechanisms might explain the survival advantage of women over men undergoing HD. Previous studies reported that, in comparison to female patients undergoing HD, men might be more susceptible to inflammation-induced anorexia and can exhibit more severe symptoms (e.g., handgrip strength decline 9) and deterioration over time, as evidenced by nutritional and inflammatory variables such as albumin, body weight, CRP, and interleukin-6 10. It has also been demonstrated, with regard to inflammation, that women have better outcomes than men 11. These results suggest that men are more vulnerable than women in the HD population. In the general population, mounting evidence has also shown a survival advantage of women that is related to genetic and physiological factors. Inactivation of the disadvantageous X chromosome 3, longer telomeres 4, a lower resting metabolic rate 20, estrogen 21, and mitogenome-nuclear genome interactions 6 might play a role in the greater longevity of women. These factors could partly explain the underlying mechanism of our observations.
Furthermore, the heightened immune response in women is generally considered to make them more resistant to infections 7,8,22. Our study confirmed a similar relationship in patients undergoing maintenance HD. The subgroup analysis of all-cause mortality revealed that the effect of being female was enhanced in patients with diabetic nephropathy or higher serum levels of creatinine or albumin. Additionally, the subgroup analysis of infection-related mortality revealed that the effect of being female was attenuated in older patients and in patients with diabetic nephropathy, a history of CVD, lower blood hemoglobin, lower serum levels of creatinine or albumin, or higher total cholesterol. In our analysis, the protective effect of being female in diabetic nephropathy differed between the two outcomes. Previous studies have shown that the age-related decline of immune cells and inflammatory mediators is slower in women than in men 7,8. Furthermore, sex hormones might reduce antioxidants 20, and women are more resistant to anorexia and malnutrition 9,10. However, recent observational studies demonstrated an inverse association between sex and a high death rate in younger patients undergoing HD 14,23. Hence, further studies are necessary to elucidate whether the effects of the baseline factors observed in the current study are present across a variety of HD populations and whether their underlying mechanisms are related to sex hormones. Despite the accumulation of these findings to date, the advantage of being female regarding the life expectancy of patients undergoing HD is still controversial. Sex-dependent differences in the proportion of types of vascular access might partially explain this inconsistency.
The results from the DOPPS revealed that the selection of vascular access showed sex-dependent differences, with less frequent catheter use in male HD patients (12.2%) than in female HD patients (18.4%); subgroup analyses indicated that HD catheter use was associated with a higher risk of all-cause mortality in female patients undergoing HD 16. As catheter users are likely to develop catheter-related infections and the resulting persistent inflammation followed by malnutrition, it is possible that they are at increased risk of infection-related and all-cause death. In this regard, sex differences in the proportion of types of vascular access may be important confounders that might have nullified the natural survival advantage of women. Importantly, a national survey conducted in 2008 in Japan reported that more than 90% of the patients undergoing maintenance HD used an arteriovenous fistula while only 0.5% used a catheter 17. Additionally, there was no sex discrepancy in the proportion of types of vascular access among Japanese patients in the DOPPS 16. This suggests that catheter users were presumably few in our study and that there was no sex discrepancy in the proportion of types of vascular access. In the present study, even when PS-matching or IPTW adjustment was employed, the survival advantage of women was statistically significant for both all-cause and infection-related death. This indicates that catheter use during HD might diminish the natural survival advantage of women. However, it was impossible to assess this in the present study, because we had no direct data regarding the type of vascular access. Therefore, further studies with data regarding the type of vascular access are necessary to determine whether women undergoing HD have a survival advantage regardless of the type of vascular access. A strength of our study was its large scale and wide-ranging inclusion criteria.
As such, our results are generalizable to real-world HD patients. However, some limitations of our study should be noted. First, the measurement of baseline parameters might have been insufficient. For instance, data regarding the use of steroids or immunosuppressive agents and the acceptance rate of renal replacement therapy were missing. Recent studies indicate that elderly women are more likely to choose conservative care over renal replacement therapy, and that the female survival advantage diminishes among HD patients 24. However, our results obtained with PS-based methodologies for adjusting this selection bias revealed a female survival advantage. Second, we had no data on the serum levels of sex hormones. A previous study showed that women undergoing HD had lower serum estradiol levels than women in the general population 25. Thus, the activity of sex hormones might hardly explain the discrepancy in mortality. The length of exposure to female hormones before HD initiation may determine the impact of the female advantage on survival. Third, the participating patients in this study were all Japanese, and thus our results might not be applicable to other ethnic groups. Despite these limitations, we believe that this study provides further evidence that women have a survival advantage over men during HD. In conclusion, our findings on patients undergoing maintenance HD suggest that women have a survival advantage over men. Further studies are required to confirm this female survival advantage and its underlying mechanisms during HD. Study design and population The details of the design of the Q-Cohort Study have been described previously 18. We recruited 3598 outpatients aged 18 years or older who were receiving maintenance HD at 39 HD facilities between 31 December 2006 and 31 December 2007. Participants were followed up until 31 December 2016. The participants' health status was checked annually by local physicians at each dialysis facility.
When patients moved to other HD facilities in which collaborators of this study were not present, we conducted follow-up surveys by mail or telephone. We excluded 533 participants with missing data on one or more baseline characteristics or whose outcome information could not be obtained. We enrolled the remaining 3065 patients as the final study population. Definition of outcomes The primary outcomes were all-cause and infection-related deaths. The events were determined based on the patients' medical records. Statistical analysis Group differences in continuous variables were determined using the t-test; categorical variables were compared using the chi-square test. The incidence rates and 95% confidence intervals (95% CIs) for all-cause and infection-related mortality were calculated using the person-year method. The unadjusted, age-adjusted, and fully adjusted hazard ratios (HRs) with 95% CIs of all-cause and infection-related mortality according to sex were calculated using a Cox proportional hazards model. The fully adjusted model for all-cause mortality was adjusted for the above-mentioned potential confounders. The fully adjusted model for infection-related mortality was adjusted for the same factors except the cardiothoracic ratio and use of antihypertensive agents. To adjust for selection bias by sex, we used the PS methodology 19. The PS was calculated for each patient using a multivariable-adjusted logistic regression model with sex as the dependent variable. To analyze all-cause and infection-related mortality and calculate the PS, the same covariates as the above-mentioned potential confounders were selected. The discriminatory power of the PS was evaluated by calculating the area under the receiver operating characteristic curve. A PS-matching model with adjustment for body weight and serum creatinine was employed to compare the impact of sex on mortality independently of potential confounders.
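The person-year method mentioned above divides the number of events by the total follow-up time; a common way to attach a 95% CI is a log-normal approximation with standard error 1/sqrt(events). A short sketch (the function name, the scaling to 1000 person-years, and the log-normal CI choice are our assumptions, since this excerpt does not state the exact CI formula used):

```python
import math

def incidence_rate(events, person_years, z=1.96):
    """Incidence rate per 1000 person-years with an approximate 95% CI.
    Uses the log-normal approximation: SE of log(rate) ~= 1/sqrt(events),
    so CI = rate * exp(+/- z / sqrt(events))."""
    rate = events / person_years
    se_log = 1.0 / math.sqrt(events)
    lo = rate * math.exp(-z * se_log)
    hi = rate * math.exp(z * se_log)
    # scale everything to events per 1000 person-years
    return tuple(x * 1000.0 for x in (rate, lo, hi))
```

Because each patient contributes exactly their observed follow-up time, this handles the staggered entry and censoring inherent in a dialysis cohort followed over a decade.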
The inverse probability of treatment weighting (IPTW) model was applied to weight patients by the PS and was adjusted for body weight and serum creatinine. Statistical analyses were performed using R version 3.6.1 (http://www.r-project.org). A two-tailed P-value of <0.05 was considered statistically significant.
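The IPTW step above converts each patient's propensity score into a weight so that the weighted women and men resemble each other on the PS covariates. A minimal sketch in Python (the study's analyses were run in R; the function name and the optional stabilization are our own illustrative choices):

```python
def iptw_weights(ps, is_female, stabilized=True, p_female=None):
    """Inverse probability of treatment weights from propensity scores.
    ps[i] = estimated P(female | covariates) for patient i.
    Weight is 1/ps for women and 1/(1-ps) for men; stabilized weights
    multiply by the marginal prevalence to tame extreme values."""
    if p_female is None:
        p_female = sum(is_female) / len(is_female)
    weights = []
    for p, f in zip(ps, is_female):
        w = 1.0 / p if f else 1.0 / (1.0 - p)
        if stabilized:
            w *= p_female if f else (1.0 - p_female)
        weights.append(w)
    return weights
```

The resulting weights are then passed to the outcome model (here, a weighted Cox model additionally adjusted for body weight and serum creatinine); patients whose sex is "unexpected" given their covariates receive larger weights, which is what balances the two groups.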
Neuropsychological Deficits in Adults HIV Infected Postnatally: a Pilot Study in Patients with Hemophilia Despite advances in the management of HIV infection with the introduction of combination antiretroviral therapy (cART), it is well known that HIV can directly infect the central nervous system (CNS) and, as a result, neuropsychological impairments can be manifested. However, in the literature there are contrasting results on which cognitive functions are mainly affected, especially when different HIV-seropositive populations are considered. In this study, we seek to determine whether seropositivity is associated with poor neuropsychological performance in patients infected postnatally, namely haemophilic patients. The results suggest that HIV infection is associated with deficits in attention, short-term spatial memory, phonemic fluency, abstraction, and visual recognition. Such results have important implications for day-to-day functioning, as the level of impairment detected may cause difficulties in completing common everyday tasks.
The DNA Ligase IV Syndrome R278H Mutation Impairs B Lymphopoiesis via Error-Prone Nonhomologous End-Joining Hypomorphic mutations in the nonhomologous end-joining (NHEJ) DNA repair protein DNA ligase IV (LIG4) lead to immunodeficiency with varying severity. In this study, using a murine knock-in model, we investigated the mechanisms underlying abnormalities in class switch recombination (CSR) associated with the human homozygous Lig4 R278H mutation. Previously, we found that despite the near absence of Lig4 end-ligation activity and severely reduced mature B cell numbers, Lig4R278H/R278H (Lig4R/R) mice exhibit only a partial CSR block, producing near normal IgG1 and IgE but substantially reduced IgG3, IgG2b, and IgA serum levels. In this study, to address the cause of these abnormalities, we assayed CSR in Lig4R/R B cells generated via preassembled IgH and IgK V region exons (HL). This revealed that Lig4R278H protein levels, while intact, exhibited a higher turnover rate during activation of switching to IgG3 and IgG2b, as well as delays in CSR kinetics associated with defective proliferation during activation of switching to IgG1 and IgE. Activated Lig4R/RHL B cells consistently accumulated high frequencies of activation-induced cytidine deaminase-dependent IgH locus chromosomal breaks and translocations and were more prone to apoptosis, effects that appeared to be p53-independent, as p53 deficiency did not markedly influence these events. Importantly, NHEJ instead of alternative end-joining (A-EJ) was revealed as the predominant mechanism catalyzing robust CSR. Defective CSR was linked to failed NHEJ and residual A-EJ access to unrepaired double-strand breaks. These data firmly demonstrate that Lig4R278H activity renders NHEJ more error-prone, and they predict increased error-prone NHEJ activity and A-EJ suppression as the cause of the defective B lymphopoiesis in Lig4 patients.
Going on a Turtle Egg Hunt and Other Adventures: Education for Sustainability in Early Childhood THIS PAPER REPORTS OUTCOMES for Early Childhood (EC) students after engagement in an Education for Sustainability (EfS) program. The research was conducted at an independent school located in the Perth metropolitan area of Western Australia. Three student-driven EfS projects, on issues of concern to young children, are examined. These projects are located at the school and in the nearby wetlands: biological survey, reed planting and turtle nest-watch. Findings indicated that participation in EfS projects was an effective, meaningful approach to achieving potent, enjoyable, hands-on action in real-life local contexts.