Heat Transfer Enhancement by Shot Peening of Stainless Steel. In heat exchange applications, the heat transfer efficiency could be improved by surface modifications. Shot peening is one of the cost-effective methods to produce different levels of surface roughness. The objectives of this study were to investigate the influence of surface roughness on heat transfer performance and to understand how the shot peening process parameters affect the surface roughness. The considered specimens were 316L stainless steel hollow tubes having smooth and rough surfaces. Computational fluid dynamics (CFD) simulation was used to observe the surface roughness effects. The CFD results showed that the convective heat transfer coefficients had linear relationships with the peak surface roughness (Rz). Finite element (FE) simulation was used to determine the effects of the shot peening process parameters. The FE results showed that the surface roughness increased at higher sandblasting speeds and larger sand diameters. Introduction Stainless steels have been used in countless applications ranging from construction and transportation to the medical, nuclear, and chemical industries because of their excellent properties. Mainly because of its ability to resist corrosion, this material has long been used in virtually all cooling waters and many chemical environments. One of the most common uses of stainless steel is in heat exchangers, because it works well in high-temperature conditions (resistance to corrosion, oxidation, and scaling). Stainless steel surfaces are generally also easy to clean. Most importantly, this material is economical in terms of cost and long-term maintenance. The heat transfer efficiency of exchangers can be improved by modifying the geometry of the heat exchange tube/plate and altering fluid flow patterns. Recently, surface modification or texturing has shown its potential in many technological developments, such as friction reduction, biofouling, artificial parts, and even stem cell research. As a result, the use of surface modification/texturing for heat transfer enhancement has been on the rise, and some of the reviewed literature is presented below. The influences of dimple/protrusion surfaces on heat transfer were investigated by Jing et al. Chen et al. found that an asymmetric dimple with skewness downstream was better than the symmetric shape for heat exchange. The asymmetric flow structures were numerically evaluated by Turnow et al., and the heat transfer was found to be improved by the asymmetric vortex structures. Du et al. discovered that the dimple location significantly affected the flow structure and heat transfer. Enhanced heat transfer by dimples was also found in the work of Zheng et al. Other studies observed that a bleed hole in a dimpled channel helped improve the heat transfer and numerically examined the 3D turbulent flow and convective heat transfer of a dimpled tube. Both dimples and protrusions were investigated by the same research group, which found that both dimpled and protruded surfaces promoted flow mixing and thereby provided a better heat transfer rate. The authors went on to study the effect of a teardrop surface and again found improved flow mixing. Other dimple and surface-texture shapes for heat transfer enhancement in various conditions can also be found in many research studies. Looking at the effects of surface roughness on heat transfer, many studies have shown similar findings.
Dierich and Nikrityuk observed that the roughness influenced the surface-averaged Nusselt number, and the authors also introduced a heat transfer efficiency factor. Pike-Wilson and Karayiannis did not find a clear relationship between the heat transfer coefficient and surface roughness. Ventola et al. proposed a heat transfer model taking into account the size of the surface roughness and turbulent fluid flow. Tikadar et al. looked at heat transfer characteristics of a roughened heater rod. They found an abrupt increase in the heat transfer coefficient at the transition region from the smooth to the roughened area. Several manufacturing processes can create surface textures on metals, for instance laser texturing, rolling, elliptical vibration texturing, and extrusion forging and extrusion rolling processes. Shot peening has been widely used to create surface irregularities (surface roughness) on metal parts. Most research on the shot peening of stainless steels has focused on residual stress, fatigue and corrosion, surface characteristics, and tribology. The main interest of this research is to determine the heat transfer efficiency of a shot peened surface on a stainless steel tube. Most of the literature mentioned above used the finite volume method (FVM) to determine the heat transfer characteristics of textured surfaces. In addition, many studies have utilized the finite element method (FEM) to understand the effects of shot peening process parameters. The FVM was used in this study to analyze the heat convection performance of a shot peened surface in comparison with a smooth one. Then, the FEM was used to predict the shot peening parameters that would provide the enhanced heat transfer surface. Heat Transfer of Pinned Fins Since the heat exchanger of interest in this study was a 316L stainless steel tube, the heat transfer testing apparatus shown in Figure 1 was used. Table 1 presents the material properties of the considered tube. The primary purpose of the heat transfer experiment was to determine the heat convection performance of the tube surfaces: a smooth surface and a rough surface. The fin specimens were hollow tubes 21.34 mm in diameter, 100.00 mm in length, and 2.87 mm in thickness. Note that both ends of each fin were covered with thin circular plates. The smooth surfaces were prepared by machining and polishing to obtain a peak surface roughness (Rz) of 0.015 µm. The rough surfaces were prepared by machining and shot peening to obtain a peak surface roughness (Rz) of 25 µm using a sand diameter of 350 µm. In each test, four fins were pinned to the heater. The fan was installed below the heater at the Air Inlet location. Three thermocouples were attached at the following locations: Inlet Temperature (Tin), Mid-Point Temperature (Tmid), and Outlet Temperature (Tout) through the Air Outlet. The thermocouples (N-type), with an accuracy of ±1% over the measured temperature span, were used to record the temperatures. The temperature of the tested fins could be varied by changing the Heat Input. Once the desired temperature of the fins was set, the fan was turned on to provide airflow past the heated fins. The carried heat would flow past both Tmid and Tout, and the measured temperatures would be used to calculate the heat convection performance of the tube surfaces.
The finite volume method (FVM) was applied in this research to evaluate and predict the heat convection performance of the various surfaces. The main components and dimensions of the FVM simulation are illustrated in Figure 2. The descriptions, symbols, and units of the parameters used in this research are presented in Table 2. An equation for the convective heat transfer coefficient (hconv) must be developed to evaluate the heat transfer performance of the different surfaces. The derivation follows the energy balance: the total air mass flow and the power of the heater were first expressed and, since Tin = Tair, the temperature of the heater was obtained, from which the heat transfer rate and thus the convective heat transfer coefficient were determined. By lumping the constant terms into a new constant C, the convective heat transfer coefficient could be rewritten in a compact form. In addition, Qin could be obtained from the heater power integrated over the heating time, ∫ Qheater dt ≈ Qavg · theat. As a result, the convective heat transfer coefficient of the pinned fins could be determined from the resulting expression, where the C value could be calculated using the variable values in Table 3. The variables Qavg, Vair, theat, and Tin were input variables. In addition, Tout = Tmid if the measured temperature point was at the Mid-Point Temperature (Tmid).
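The intermediate equations referenced above are not reproduced in this text, so the following is only a minimal Python sketch of a generic energy-balance estimate of hconv; the Newton's-law form, the heater power, and the surface and air temperatures are illustrative assumptions, with only the fin dimensions taken from the description above.

```python
import math

def h_conv_estimate(q_avg_w, area_m2, t_surface_c, t_air_c):
    """Average convective heat transfer coefficient in W/(m^2*K), assuming the heat
    carried away by the air balances the average heater power (Newton's law of cooling)."""
    dT = t_surface_c - t_air_c
    if dT <= 0:
        return 0.0                      # no driving temperature difference, no convection
    return q_avg_w / (area_m2 * dT)     # Q = h * A * dT  ->  h = Q / (A * dT)

# Fin dimensions from the experiment: four tubes, D = 21.34 mm, L = 100 mm.
area = 4 * math.pi * 0.02134 * 0.100
# Hypothetical heater power and temperatures, for illustration only.
print(h_conv_estimate(q_avg_w=50.0, area_m2=area, t_surface_c=80.0, t_air_c=25.0))
```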
Computational Fluid Dynamics (CFD) Modeling The common tool used with the FVM to analyze heat transfer performance is computational fluid dynamics (CFD). ANSYS CFX 2020 was the commercial software used to evaluate the heat transfer characteristics in this study, as shown in Figure 3. Although there were four fins in the experiments, only the two fins of the right half were modeled because of the symmetric fluid flow along the horizontal direction. The half-geometry was modeled in two dimensions (Half 2D), and the surface roughness along the circumference of each fin was modeled as shown. Figure 4 illustrates the enlarged cross-sectional view of the circumference representing the actual surface roughness values. The total number of mesh elements was in the 10-million range to accurately capture the surface roughness variations. The k-epsilon turbulence model was used to provide the airflow characteristics. The total energy assumption was applied, and transient analyses were performed. In the actual heat transfer experiment, the heater was turned on for 20 min (theat) to establish a uniform initial temperature of the fins. Afterward, the fan was turned on at the set Air Velocity (Vair), and the temperatures at Tin, Tmid, and Tout were recorded every minute over a 10-min period. The same process was set up in the CFD modeling, and the validation conditions of the CFD model are presented in Table 4. The average values of the recorded temperatures were calculated and compared to determine the validity of the CFD model. Convective Heat Transfer Coefficients of Different Surface Roughness The prediction of the convective heat transfer coefficients (hconv) of different surface roughness values was then carried out using the validated CFD model. The primary factors considered here were the surface roughness (Rz) and the airflow speed (Vair). Table 5 presents the conditions considered for the convective heat transfer coefficient predictions. The CFD results were then used to calculate the hconv value of each condition. Shot Peening Finite Element (FE) Simulation Once the relationships between the surface roughness and the heat transfer characteristics of 316L stainless steel were established, the tube surfaces had to be modified to provide precisely the desired surface roughness.
Nevertheless, controlling the shot peening process to achieve such precision is generally difficult, because the influences of the process parameters are not well understood and quantified, particularly on tube (curved) surfaces. As a result, an FE simulation of the shot peening process was carried out to observe the effects of the Sand Diameter (DS), Impact Angle, and Impact Velocity (VI), as displayed in Figure 5. The commercial code MSC.DYTRAN 2019 was used in the FE analysis. The tube surface was modeled with a 3D hexahedral mesh (587,664 nodes and 566,272 elements). The geometry of the tube was based on the pinned fin from the heat transfer experiment. Note that the tube surface was smooth. The tube model was deformable, and the properties of the tube are shown in Table 1. The material model of the tube was elastic-plastic. The shot peening sand was modeled as a rigid circular (ball) shape with a 2D quadrilateral mesh (4648 nodes and 4646 elements). Initially, a sand ball was located above the tube surface. The ball was then blasted at a set impact angle and impact velocity. The sand ball was then removed, revealing a deformed or dimpled surface on the tube. Note also that this study only considered a single-shot sandblasting impact to determine the influences of the shot peening parameters. Table 6 presents the range and values of the considered shot peening parameters. The deformation results obtained from the FE simulations were then used to obtain the surface roughness values (Rz) of each condition. Effects of Surface Roughness on Heat Convection The results of the CFD model validation against the heat transfer experiment are presented in Figure 6.
Note that the results of the CFD calculations were mesh-independent. In Figure 6a, the temperature-velocity plot at Tout for the experimental and simulated surfaces is illustrated. The errors between the experiments and simulations at Tout ranged from 1% to 7% (Figure 6b). The temperature-velocity plot at Tmid is displayed in Figure 6c, and the errors at Tmid are illustrated in Figure 6d. The highest error value at Tmid was approximately 3%. Considering the error values at both Tout and Tmid, the CFD model was considered valid in this study. In Figure 7, the temperature contour maps of the smooth and rough surfaces at varying flow speeds are shown. At V = 0 m/s, the condition was considered natural or free convection because the fan was not running, allowing the heated airflow to pass the measured temperature points naturally. If the air velocities were set to 1.21 or 2.42 m/s, the conditions were forced convection. In the free convection conditions, the rough surfaces provided higher temperatures at both Tmid and Tout than the smooth surfaces did. In the forced convection scenarios, the temperatures at Tmid and Tout of the smooth surfaces dropped significantly in comparison with those of the rough surfaces. The results implied that the rough surfaces provided better heat transfer performance. The velocity contour maps in Figure 8 can also be used to explain this behavior. In the natural convection conditions (no influence of air velocity), the heated airflows over the rough surfaces were already higher than those over the smooth surfaces, leading to higher temperatures at Tmid and Tout. The main reason was the increased surface area of the pinned fins, which allowed more air to exchange heat. With the influence of air velocity (forced convection), the increase in airflow speed generally led to reduced temperatures, which was clearly noticeable for the smooth surfaces. At higher airflow velocities, the small vortices (air swirling) around the dimpled areas accumulated into a large swirling region on the downstream side of the rough surfaces. This large air circulation area (vortex shedding) had a lower air velocity, allowing more air to exchange heat with the fins. As a result, more airflow could carry heat from the rough surfaces to the measured temperature points. On the contrary, the smooth surfaces did not develop a large vortex shedding area downstream. Thus, only a small fraction of low-velocity airflow was generated downstream of the smooth fins, and there was less air available to exchange heat at high airflow velocities. The predicted temperatures and the convective heat transfer coefficients (hconv) of varying surface roughness are presented in Figure 9. The CFD results of the predicted temperatures were used to calculate the hconv of each condition. A higher value of hconv represents a higher heat transfer efficiency. According to the figure, the hconv values of the smooth surfaces were equal to zero in the natural convection cases.
In the forced convection cases, the hconv values increased with airflow velocity and surface roughness. Comparing the hconv values between Tmid and Tout, the hconv values at Tout were lower, since the measured location was further away from the heat source. Increasing the airflow velocity from 1.21 to 2.42 m/s caused the hconv values to double. If the surface roughness values were increased up to 250 µm, hconv could be increased by up to 155% at V = 1.21 m/s and 192% at V = 2.42 m/s. Most importantly, if the rough surface conditions were considered (Rz = 25 to 250 µm), linear relationships between the hconv and Rz values could be observed. Although the hconv values increased linearly with Rz, the surface roughness could not be increased without limit to enhance the heat transfer efficiency further. The most critical issues were the manufacturability of such a deep surface texture and the heat-transfer limits. In this study, the ratio of surface roughness (Rz = 250 µm) to pipe diameter (21.34 mm), or RSD, was approximately 0.01. In the forced convection conditions, the ratio of the convective heat transfer coefficients between the rough and smooth surfaces, or hconv-RS, ranged from 1 to 2.9. Thus, the ratio between hconv-RS and RSD ranged from 100 to 290, which could be used for comparison with other tubes having different diameters and surfaces. The Reynolds (Re) numbers of the considered conditions, used to characterize the fluid behavior, were calculated as Re = dair · Vair · D / µair, where dair is the density of air, Vair is the velocity of air, D is the tube diameter, and µair is the dynamic viscosity of air. The Re numbers of the investigated surfaces at various airflow speeds can be seen in Figure 10. In the forced convection conditions, higher surface roughness led to higher friction and higher pressure loss (more turbulent flow), which generally reduced the Re numbers. In the free convection conditions, the increased surface roughness did not affect the Re numbers. Note that the Re numbers at 250 µm were not zero (approximately 32).
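As a rough illustration of the Reynolds number relation above, the following sketch evaluates Re for the two forced-convection air speeds; the air properties (density and dynamic viscosity near room temperature) are assumed values, not taken from the article.

```python
def reynolds_number(rho_air, v_air, diameter_m, mu_air):
    """Re = rho * V * D / mu for air flowing over the tube."""
    return rho_air * v_air * diameter_m / mu_air

D = 0.02134                      # tube diameter of 21.34 mm from the paper
for v in (1.21, 2.42):           # the two forced-convection air speeds studied
    # assumed air properties near room temperature: rho ~ 1.184 kg/m^3, mu ~ 1.85e-5 Pa.s
    print(v, round(reynolds_number(1.184, v, D, 1.85e-5)))
```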
Effects of Shot Peening Parameters on Surface Roughness The effects of the shot peening parameters on surface roughness are presented in Figure 11. Note that the results of the FE calculations were mesh-independent. The Rz values are plotted against the impact angles at various impact velocities in Figure 11a,b. The Ra values are plotted against the impact angles at different impact velocities in Figure 11c,d. The FE results demonstrated that increasing the sand diameters and impact velocities led to an increase in both the Ra and Rz values in all cases. Larger sand diameters and higher blasting speeds created higher momentum, leading to higher impact energy and larger deformed (dimpled) areas. However, increasing the impact angles led to reduced impact areas and thus decreasing Ra and Rz values. Based on the results, it could be noticed that using sand diameters from 100 µm up to 350 µm resulted in Ra and Rz values ranging from 1 µm up to 18 µm, which would provide increased hconv values of up to 50% depending on the airflow velocity. However, sand diameters larger than 200 µm would be recommended because of their more considerable impact on the dimpled areas. If only small sand diameters were available, increasing the sandblasting speeds would help increase the surface roughness values. Discussion The results of this work were divided into two parts: the effects of surface roughness on heat transfer efficiency (the convective heat transfer coefficient), and the effects of the shot peening parameters on the surface roughness. The results of these two parts connect the actual performance of the heat exchange surface to the processing parameters. This connection had not previously been well established for heat exchanger manufacturers.
The linear relationship found between the convective heat transfer coefficient and the peak surface roughness (Rz) values provides a general guideline for how a heat exchanger surface could be developed to meet higher performance requirements. The effects of the shot peening process parameters on the Rz values are helpful in determining manufacturability and production costs. In summary, this research work provides a clear link between the desired heat transfer performance and surface dimensional control and processing. Since the shot peening process considered in this study was single-shot sandblasting, it was intended only to observe the primary influences of the considered factors. A multiple-shot sandblasting process investigating mixed sand diameters at controlled speeds is currently ongoing. The results of this future work should be more applicable to actual shot peening processes for obtaining any desired surface roughness values, particularly on tube profiles. The ultimate impact of this continuing research would be greater benefits for energy-saving systems. Conclusions This project investigated the effects of surface roughness on the convective heat transfer coefficients of 316L stainless steel tubes and the effects of the shot peening parameters on the surface roughness. CFD simulation was carried out to study the influences of the surface roughness at varying airflow velocities. In the free convection conditions, the increased surface area of the pinned fins allowed more air to exchange heat. In the forced convection conditions, more airflow could carry heat from the rough surfaces to the measured temperature points. Linear relationships between the peak surface roughness (Rz) and the convective heat transfer coefficients were found. The influences of the shot peening process parameters (sand diameter, impact angle, and impact velocity) on the tube surfaces were investigated using FE simulation. The results showed that increases in sand diameter and impact velocity increased the surface roughness. Increases in the convective heat transfer coefficient values of up to 50% could be obtained by using sand diameters from 100 µm up to 350 µm, resulting in a shot peened surface with Ra and Rz values ranging from 1 µm up to 18 µm. The links established among the heat transfer efficiency, surface roughness, and shot peening parameters in this study could be used to enhance heat transfer efficiency. |
The oligosaccharides of the glycoprotein pheromone of Volvox carteri f. nagariensis Iyengar (Chlorophyceae). The sexuality-inducing glycoprotein of Volvox carteri f. nagariensis was purified from supernatants of disintegrated sperm packets of the male strain IPS-22 and separated by reverse-phase HPLC into several isoforms which differ in the degree of O-glycosylation. Total chemical deglycosylation with trifluoromethanesulphonic acid yields the biologically inactive core protein of 22.5 kDa. This core protein possesses three putative binding sites for N-glycans which are clustered in the middle of the polypeptide chain. The N-glycosidically bound oligosaccharides were obtained by glycopeptidase F digestion and were shown, by a combination of exoglycosidase digestion, gas-chromatographic sugar analysis and two-dimensional HPLC separation, to possess the following definite structures: (A) Man beta 1-4GlcNAc beta 1-4GlcNAc; (B) (Man alpha)3 Man beta 1-4GlcNAc beta 1-4GlcNAc with a Xyl beta branch; (C) (Man alpha)2 Man beta 1-4GlcNAc beta 1-4GlcNAc; (D) (Man)2 Xyl (GlcNAc)2. Two of the three N-glycosidic binding sites carry one B and one D glycan. The A and C glycans are shared by the third N-glycosylation site. The O-glycosidic sugars, which make up 50% of the total carbohydrate, are short chains (up to three sugar residues) composed of Ara, Gal and Xyl and are exclusively bound to Thr residues. |
Canine leptospirosis in Canada, test-positive proportion and risk factors (2009 to 2018): A cross-sectional study Over the past decade, there has been an apparent increase in frequency and a widened distribution of canine leptospirosis in Canada; however, this has been minimally investigated. Availability and clinical uptake of Leptospira polymerase chain reaction (PCR)-based testing of dogs in Canada may provide important insight into the epidemiology of this canine and zoonotic infectious disease. Study objectives were to evaluate clinical canine Leptospira PCR test results from a large commercial laboratory to determine temporal and spatial distribution in Canada and identify dog, geographic and temporal risk factors for test-positive dogs. This cross-sectional study analyzed data obtained from IDEXX Laboratories, Inc. on 10,437 canine Leptospira PCR tests (blood and/or urine) submitted by Canada-based veterinarians (July 2009 to May 2018). Multivariable logistic regression was used to identify risk factors for test-positive dogs. Test-positive proportion varied widely annually (4.8 to 14.0%) and by location. Provinces with the highest test-positive proportion over the study period were Nova Scotia (18.5%) and Ontario (9.6%), with the prairie provinces (Manitoba and Alberta combined) having the lowest proportion (1.0%); the northern territories could not be evaluated due to limited testing. In the final model, dog age, sex, breed, month and year the test was performed, and location (urban/rural, province) of the practice submitting the sample were significant predictors of a positive Leptospira PCR test. Dogs less than one year of age (OR = 2.1; 95% CI: 1.6-2.9), male sex (OR = 1.3; 1.1-1.5), toy breed (OR = 3.3; 2.5-4.4), and samples submitted from an urban practice (OR = 1.3; 1.0-1.8) had the greatest odds of a positive Leptospira PCR test as compared to referent groups. Significant two-way interactions between province-month and year-month highlight the complex spatial and temporal influences on leptospirosis occurrence in this region. Our work suggests a high incidence of canine leptospirosis regionally within Canada. Identifiable dog and location factors may assist in future targeted prevention efforts. Introduction Leptospirosis is a globally important zoonotic disease. Spread primarily by the urine of animal host species, leptospirosis was historically diagnosed predominantly in dogs with rural lifestyles (e.g., dogs living on livestock farms or taking part in rural outdoor activities such as field trials). The epidemiology of canine leptospirosis has evolved over recent years, with five serovars (each with varying reservoir host species) identified as most important for canine health in North America. Peridomestic wildlife species (e.g., rodents, raccoons), as well as dogs, are reservoirs of key Leptospira serovars, supporting the increased recognition of leptospirosis as an important disease of dogs residing in strictly urban environments. Different Leptospira serovars are present in different areas of North America, likely reflecting regional variations in the epidemiology of the disease. The incidence and seroprevalence of Leptospira spp. in dogs appear to be increasing, particularly in North America. Clinical disease in dogs may be severe, and therapy frequently entails costly treatment and long-term monitoring. Further, infected dogs may serve as a source of infection for other animals and people, and common environmental exposure may allow dogs to serve as a sentinel for human risk.
Despite the importance of leptospirosis for canine health, the epidemiology of this disease in dogs is poorly understood. This knowledge gap is particularly evident in Canada, where canine leptospirosis has been minimally studied, with existing studies limited by region and date. Further, recent anecdotal data suggest increased disease incidence in eastern and Atlantic Canada, with a large, suspected outbreak in Nova Scotia, Canada in 2017. Multiple diagnostic methods have been developed to identify Leptospira-infected dogs. Unfortunately, diagnosis can be challenging with some testing methodologies based on antibody response, making it difficult to differentiate clinical disease from prior exposure or vaccination. Most prior studies of leptospirosis in dogs have used such antibody-based tests (e.g., microscopic agglutination test; MAT). In recent years, polymerase chain reaction (PCR) leptospirosis testing has become increasingly used in clinical veterinary medicine. The PCR test may reduce the interpretation challenges commonly encountered with antibody-based tests, extending such benefits to population-based studies of the disease. At present, diagnosis is typically confirmed through consistent clinical signs, suggestive clinicopathologic changes (thrombocytopenia, renal and/or liver enzyme elevations, dilute urine), response to appropriate antimicrobials, and either PCR (urine, blood, or both) and/or serology testing (ideally, paired acute and convalescent microscopic agglutination test (MAT) serology) or in-clinic ELISA (IgG or IgM). Prevention of disease is most effectively accomplished through avoidance of contaminated environments. However, the ability to completely avoid contaminated areas is challenging and typically impractical. Leptospira vaccination is generally considered non-core, and administration relies on practitioners' level of awareness and ability to make an appropriate risk assessment of the dog based on location, lifestyle, and other factors. Given the scarcity of Canadian-specific publications on leptospirosis, anecdotal information regarding increased disease incidence in eastern and Atlantic Canada, and the importance of reliable data to inform dog owner and veterinarian risk assessment and targeted prevention, there is a clear need for further work in this area. Similar habitat risks and approaches to prevention may be applicable to dog and human disease, and dogs may serve as sentinels for human health risks. Thus, addressing research gaps for dogs may have applications to human leptospirosis prevention. The objectives of our study were to evaluate a clinical dataset of canine Leptospira PCR positive test results and determine temporal (month, annual) and spatial distribution in Canada, and to identify dog, geographic, and temporal risk factors for PCR-positive dogs. Material and methods This was a cross-sectional study that used 10,437 PCR test results for canine leptospirosis. Tests were submitted between July 1, 2009 and May 1, 2018 to IDEXX Laboratories, Inc. Data were obtained from the reported results of routine clinical tests (IDEXX RealPCR Test) performed on blood and/or urine samples from dogs submitted from veterinary clinics in Canada. Permission to access and use data was obtained from IDEXX Laboratories, Inc. The PCR test used has been validated in dogs, with reported high sensitivity and specificity (92% and 99%, respectively, using MAT as the gold standard).
In summary, the PCR test is based on IDEXX's proprietary real-time PCR oligonucleotides (IDEXX Laboratories, Westbrook, Maine). Hap-1 gene sequences were aligned, a region was selected for primer and hydrolysis probe design, and real-time PCR was run with standard primer and probe concentrations using the Roche LightCycler 480 Probes Master mastermix (Version 3.0, Applied Biosystems). The test detects Leptospira spp. DNA from only the recognized pathogenic strains due to the presence of the hap1 gene, including L. interrogans and L. kirschneri. The same test was used over the study period. All dogs were assumed to be client-owned, and diagnostics and treatments were at the discretion of the client. Specific clinical data were unavailable, but it was presumed that most dogs were tested due to the presence of clinical signs suggestive of leptospirosis. Data were available in electronic database format, with data on month and year the test was performed, dog signalment (age in days, sex/reproductive status, breed), Canada Post forward sortation area (FSA; first three postal code characters) for the submitting veterinary clinic, test result, and a dog unique identifier. Dog unique identifiers were created by IDEXX Laboratories, Inc. based on a combination of dog name, owner name and clinic ID, and were verified by a study author (JWS) for entries with the same unique identifier based on signalment and FSA information. The postal code of the dog's residence and vaccination status were unknown. Repeat entries were removed. An entry was considered a repeat if the same dog (based on unique identifier, signalment, and FSA) was tested more than once in a given calendar year or, for December and January entries, spanning two calendar years. If the test outcomes for a set of repeat entries were the same, the most recent entry was retained in the dataset and additional entries were removed. If the test outcomes differed for a set of repeat entries, a single positive (most recent) entry was retained. From these data, variables were derived for the dog's age in years at the time of testing (≤ 1.0, 1.1-4.0, 4.1-7.9, ≥ 8.0), AKC breed group (sporting, herding, hound, non-sporting, terrier, toy, working, mixed; based on the breed listed, or categorized as mixed if more than one breed was listed), month when testing was performed (Jan-Feb, ...), and province/region. Although geographically NS is included within Atlantic Canada, it was separated for analysis due to anticipated differences in the epidemiology of canine leptospirosis between these regions. Data maps and analysis Data mapping. To visualize the spatial distribution of testing and positive canine Leptospira test results, the frequency of tests performed, frequency of test-positive dogs, and test-positive proportion of dogs were separately mapped by FSA for all years combined using FSA boundary files from Statistics Canada and ArcGIS version 10.2.2 (Environmental Systems Research Institute). Calculating incidence-type measures is challenging with companion animals, as the owned canine population in Canada is unknown. In this circumstance, the human population was used as a proxy for the canine population; we calculated this measure by dividing the number of positive canine Leptospira PCR tests over the study period for a given FSA by the 2016 human census population for that FSA (reported as test-positive dogs per 100,000 people). Test-positive dogs per 100,000 people by FSA was mapped as described above.
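As an illustration of the repeat-entry rule described above, a minimal pandas sketch is given below; the column names (dog_id, test_date, result) are hypothetical, and the December/January rule spanning two calendar years is omitted for brevity, so this is not the authors' actual data-cleaning code.

```python
import pandas as pd

def remove_repeats(df: pd.DataFrame) -> pd.DataFrame:
    """Within each dog and calendar year, keep a single entry: the most recent one,
    except that a positive entry is preferred over negatives when results disagree.
    Columns assumed: dog_id, test_date (datetime64), result ('positive'/'negative')."""
    df = df.copy()
    df["year"] = df["test_date"].dt.year
    df["is_pos"] = (df["result"] == "positive").astype(int)
    # Sort so the preferred row (positive, then most recent) ends up last in each group.
    df = df.sort_values(["dog_id", "year", "is_pos", "test_date"])
    kept = df.groupby(["dog_id", "year"]).tail(1)
    return kept.drop(columns=["year", "is_pos"])
```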
Potential 'hot spots' of canine Leptospira in Canada were identified visually as areas (FSAs) with a relatively increased number of cases, increased test-positive proportion, and increased number of cases per human capita as compared to surrounding FSAs. Data analysis. Test-positive proportion of leptospirosis at the dog level was calculated overall and for subgroups (province/region, year, month) by dividing the number of positive canine Leptospira PCR tests by the total number of tests. Ninety-five percent Clopper-Pearson confidence intervals were calculated. Years for which only partial-year data were available (2009 and 2018) were excluded from annual descriptive statistics (test-positive proportion) but were included in all model building. The association between dog, temporal, and spatial variables and a positive Leptospira PCR test was explored using logistic regression models. The main outcome of interest was a positive canine Leptospira PCR test. Descriptive statistics, odds ratios (OR), and 95% confidence intervals (CI) for the ORs were calculated for all variables. Univariable logistic regression models were built, and variables with a likelihood ratio test P-value < 0.2 were eligible to be tested for inclusion in the final multivariable model. Spearman's rank correlation (Phi coefficient for two dichotomous variables) was performed between all predictors eligible for multivariable analysis. When predictors were highly correlated (correlation coefficient ≥ |0.80|), one variable was retained based on perceived importance/relevance for drawing conclusions from the analysis. A final multivariable logistic regression model was built using a backwards stepwise approach. Confounding was assessed when removing variables from the multivariable model. Variables were kept in the model as confounders if their removal changed the coefficients of one or more retained terms by ≥ 20%. Statistical significance was based on a likelihood ratio test P-value < 0.05. Biologically relevant two-way interactions between variables retained in the final multivariable model were assessed for significance using a likelihood ratio test. Predictive probabilities and associated 95% CIs for a positive test result were graphed to visualize interaction terms. Model fit was assessed with the Hosmer-Lemeshow goodness-of-fit test. Stata 16 (StataCorp, College Station TX) was used for analysis. Results A total of 19,066 PCR test results were available over the study timeframe. Removal of 8,629 repeat entries was performed, the vast majority of which (7,882; 91%) were exact repeats except for sample source (urine, blood), resulting in 10,437 Leptospira PCR test results used in the analyses. Most records (8,454; 81%) were complete, with AKC breed group being the most frequently missing data element (8,807; 84% present) (Table 1). The population of dogs tested was 52% male and had a mean age of 6.9 years (SD 3.9; range 0.1-20). The number of PCR tests submitted increased each year (full calendar years 2010: 223; 2017: 2,581), with the greatest annual increase between 2010 and 2011 (263% positive change). Of the total 1,620 FSAs in Canada, samples were reported from 788 FSAs (48.6%; from which there were a median of 6 samples/FSA, range 1-283/FSA; Fig 1).
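The univariable screening step of the model-building procedure described in the data analysis section above could be sketched roughly as follows; the statsmodels-based code and column names are illustrative assumptions, since the original analysis was performed in Stata.

```python
import statsmodels.formula.api as smf

def univariable_screen(df, outcome, predictors, p_enter=0.2):
    """Keep predictors whose univariable logistic regression has a likelihood-ratio
    test p-value below p_enter, as in the screening step described in the Methods."""
    kept = []
    for var in predictors:
        fit = smf.logit(f"{outcome} ~ {var}", data=df).fit(disp=0)
        if fit.llr_pvalue < p_enter:   # LR test against the intercept-only model
            kept.append(var)
    return kept

# The final multivariable model (built by backwards elimination, not shown) would
# resemble: positive ~ age_group + sex + breed_group + urban + C(province)*C(month)
#           + C(year)*C(month)
```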
In the univariable analysis, dog signalment (age, sex, AKC breed group), location (province, rural/urban status), and time of testing (month, year) were significant predictors of a positive Leptospira test result (all P < 0.02) and were retained in the final multivariable model (all P < 0.05; Table 1). In addition, the two-way interactions of province × month (the odds of a positive test result in each month depended on the province where the test was performed) and year × month (the odds of a positive test result in each month depended on the year the test was performed) were both significant predictors (each P < 0.04) when added to the main effects model and were thus retained in the final model. The final model fit the data (Hosmer-Lemeshow P = 0.88). In the multivariable model, younger dogs were at significantly increased odds of being Leptospira-positive as compared to elderly dogs (referent ≥ 8.0 yr), with dogs less than or equal to one year of age having the greatest odds of infection (OR = 2.1; 95% CI 1.6-2.9). Province, month, and year were included in interaction terms and therefore visualized with margins plots (Figs 6 and 7). From January through August, the predicted probabilities of dogs testing positive for Leptospira were relatively low (generally < 10%) with minimal annual deviations (exception: July-August 2017), while in the latter half of the calendar year (Sept-Dec), the predicted probabilities were generally higher (> 10%) with more pronounced annual deviations (Fig 6). Differing effects of time of year (month) were noted among the provinces. Ontario, British Columbia, Quebec, and Nova Scotia showed an increased predicted probability of dogs testing positive for Leptospira in the fall/winter (September-December; Fig 7). This was most pronounced in Ontario and Nova Scotia, which had the greatest peak predicted probabilities (~40%), while a mildly increased peak predicted probability was noted in British Columbia. Limited data made it difficult to accurately predict the probabilities of dogs testing positive for Leptospira in the prairie and Atlantic provinces. Discussion There have been few studies of canine leptospirosis in Canada. As such, the epidemiology of the disease in the country remains poorly defined and limited to a single geographical area (Ontario). In the United States, recent MAT-positive prevalence for canine leptospirosis was estimated to be 14% between 2000 and 2014. Another US study, evaluating canine Leptospira PCR tests submitted through a commercial diagnostic laboratory (2009 to 2016), found an overall test-positive proportion of 5.4%. Our PCR-based work identified an overall Canadian canine leptospirosis test-positive proportion of 8.4%. While it is important to acknowledge that clinical data were not available for our study, preventing us from confirming that test-positive dogs were clinically affected leptospirosis cases, we presume that the results of the PCR testing likely reflect clinical disease. This is because Leptospira testing would be predominantly performed in dogs with signs of disease. As the PCR test used only detects Leptospira spp. nucleic acid of pathogenic strains, a positive test result in a dog with clinical disease supports recent infection. The analysis of PCR-based data lessened challenges commonly observed with leptospirosis MAT interpretation (e.g., interference related to vaccination and exposure). Prior studies have consistently noted annual and geographic fluctuations in the occurrence of canine leptospirosis.
Similarly, we noted pronounced annual variation in the proportion of positive tests (4.8-14%). One of these variations was consistent with an anecdotally reported outbreak of canine leptospirosis in the Halifax region of Nova Scotia. Additionally, we noted marked variation in the canine leptospirosis test-positive proportion across the Canadian regions. These regional variations may reflect "hot spots" for canine leptospirosis with consistently elevated disease risk and locations likely to experience future elevated risks. However, regional variation in clinician awareness and testing patterns (e.g., only testing dogs with a high suspicion for leptospirosis versus testing dogs along the continuum of suspicion) may also be responsible for these variations. Multiple factors are considered to influence the test-positive prevalence of canine leptospirosis. These reported factors have included dog location (i.e., urban vs. rural), month of testing, monthly rainfall at the time of testing, and use of prevention strategies (e.g., increased vaccination efforts). The Canadian provinces with the highest test-positive proportion for canine leptospirosis in our study were Ontario and Nova Scotia. Similarly, the developed case maps visualized likely areas of leptospirosis 'hot spots' in these two regions (high number of cases and high number of cases per human capita in given FSAs). These findings align with previous studies from the United States that observed clusters of cases and increased seroprevalence/test-positive proportion in specific regions. These US-based studies have indicated that increased rainfall, flooding, and proximity to bodies of water in these regions, along with the presence of reservoir hosts, could explain the observed regional distribution of canine leptospirosis. It is likely that similar environmental factors are associated with (perhaps responsible for) the Canadian distribution we observed; however, further investigation is needed to confirm this. Similar to other studies, we observed that dogs from urban areas were at increased odds of testing positive for Leptospira as compared to those from rural regions. This could be due to encroaching wildlife populations, or other factors such as veterinary healthcare-seeking behaviors or socioeconomic status, leading to exposure to area wildlife or domestic cats, which may act as Leptospira-carrying reservoir hosts in these regions. These urban wildlife (e.g., rodents, raccoons) and feline reservoirs have been identified as purported risk factors for canine leptospirosis. Further work identifying the regional distribution of serovars and serovar-reservoir relationships, perhaps targeting wildlife and feline reservoirs, would be useful to further guide prevention efforts, potentially including vaccine development. Risk factor evaluation in our work shared similarities with the recent US-based study evaluating Leptospira PCR data. Significant predictors of a positive leptospirosis test were younger age and male sex. Male sex has been repeatedly identified as a risk factor for canine leptospirosis, as demonstrated in a recent systematic review/meta-analysis. Our current work adds toy and terrier breeds to this 'higher-risk canine profile', a finding suggested in a previous study (i.e., dogs weighing <15 pounds (6.8 kg) had the greatest odds of being diagnosed with leptospirosis).
Further work examining vaccine coverage in smaller dogs in Canada, especially from urban centers, would be useful to determine whether lower vaccination coverage may be playing a role in leptospirosis risk in these breeds. Historically, there have been concerns about increased adverse events in these breeds following leptospirosis vaccination, and while recent data suggest these fears are largely unfounded with the current canine leptospirosis vaccines, concerns persist anecdotally. Seasonal variation in canine leptospirosis has been observed in prior studies, with an increase in prevalence/proportion test-positive from late summer to early fall. Potential explanations for such seasonal variations include changes in precipitation or temperature that impact survival of Leptospira, or seasonal canine activities or Leptospira reservoir host behaviors/movements that increase exposure risk for dogs. This seasonal effect was observed in our work; we noted a trend that dogs were more likely to be test-positive September through December in zones with a greater seasonal temperature variation (e.g., Quebec and Ontario) as opposed to those without this variation (e.g., British Columbia). This finding is consistent with the recent US-based PCR study, but contrasts with previous MAT-based work. Similar to other observational studies of this type, there are limitations inherent to our work. Leptospirosis testing was performed based on clinician-owner decisions, which may have introduced various biases, including temporal, regional, and canine signalment-related testing approaches. Signalment data were provided by testing veterinarians or support staff, which may include data entry errors as well as potential documentation biases in the breed listed (e.g., listed as a single breed when in fact a mixed breed). Another limitation is that our data were acquired from a single commercial laboratory, possibly leading to regional under-representation, and thus they might not be representative of the population as a whole. This could lead to locations for which canine leptospirosis testing data were not available (e.g., few or no test results in certain regions of Canada), resulting in regions with unmeasured Leptospira occurrence and a potential lack of generalizability of our work. However, it is likely that these regions are of limited consequence to the overall conclusions of our work due to the historic and widespread sample submission coverage of the country (as observed by FSA test submissions, especially considering the human and dog distribution in the country). Further, maps were created to provide estimates of risk levels of canine leptospirosis across Canada. These estimates are subject to various errors and biases, including likely changes in test use/availability over the study period and regional and temporal differences in at-risk canine populations. Another limitation of this dataset, and of other studies of this type, is the lack of dog clinical data and recent travel history. We assumed that samples were received from dogs presenting to veterinary practices with clinical disease consistent with leptospirosis and from an exposure relatively close to the submitting veterinary practice. In conclusion, this work identified focal regions of canine leptospirosis in Canada, with the highest test-positive proportion (and related hot spots) in Ontario and Nova Scotia.
The case maps and identified risk factors will allow practitioners and dog owners to identify areas of high risk for leptospirosis exposure and occurrence where dogs live, visit, and perform, enabling targeted prevention efforts. |
Objective-Reinforced Generative Adversarial Networks (ORGAN) for Sequence Generation Models In unsupervised data generation tasks, besides generating a sample based on previous observations, one would often like to give hints to the model in order to bias the generation towards desirable metrics. We propose a method that combines Generative Adversarial Networks (GANs) and reinforcement learning (RL) in order to accomplish exactly that. While RL biases the data generation process towards arbitrary metrics, the GAN component of the reward function ensures that the model still remembers information learned from data. We build upon previous results that incorporated GANs and RL in order to generate sequence data and test this model in several settings for the generation of molecules encoded as text sequences (SMILES) and in the context of music generation, showing for each case that we can effectively bias the generation process towards desired metrics. Introduction The unsupervised generation of data is a dynamic area of machine learning and an active research frontier. Besides generating a desired distribution, one often wants to guide the generative model towards certain desired criteria. For example, when generating music one might wish to reward the model for choosing certain melodic patterns. In the case of molecular design, besides generating valid molecules, one may want to optimize molecular properties to screen for their potential in solar cells, batteries, or OLEDs. One way to impose arbitrary objectives on generative models is via naive reinforcement learning (RL), where we define hard-coded rewards and treat the model as a player taking actions in a game-like setting. Unfortunately, depending on the objective, this approach may lead to unphysical or uninteresting samples. Following the chemistry example, compounds can be represented as SMILES strings - text sequences that encode the connectivity graph of arbitrary molecules. SMILES has grammar rules based on chemical bonding, which can lead to valid expressions such as "N#Cc1ccn1" or invalid ones such as "[C[[[N", encoding a non-plausible molecule. In this setting, a simple objective function such as "molecules should be valid" might skew the model towards monotonous repetitions of valid characters, generating strings such as "CCCN", "CCCCN", "CCCCCN", which are valid but not very interesting molecules in terms of chemical diversity. Previous work has relied on specific modifications of the objective function to reach the desired properties. For example, in order to increase the number of generated valid molecules, Ranzato et al. added penalties for molecules with unrealistically large carbon rings (size larger than 6), molecules shorter than any training set sample, or molecules with fewer carbons than any molecule in the training set. Without penalty or reward terms, RL can easily get stuck around local maxima which can be very far from the global maximum reward. This type of reward optimization depends highly on experimentation as well as domain-specific knowledge. In the Objective-Reinforced Generative Adversarial Network (ORGAN) introduced in this work, we explore the addition of an adversarial approach to the reinforcement learning setting, using generative adversarial networks (GANs) to bias the generative process. GANs are a family of generative models proposed by Goodfellow et al. which are able to generate compelling results in a number of image-related tasks.
The proposed ORGAN model adds a GAN discriminator term to the reinforcement learning reward function. The generator is trained to maximize a weighted average of two rewards: the "objective," which is hard coded and unchanging, and the discriminator, which is dynamically trained along with the generator in an adversarial fashion. While the objective component of the reward function ensures that the model selects for traits that maximize the specified heuristic, the changing discriminator part does not let the model lock onto certain modes and thereby generate uninteresting or repetitive data. In order to implement the above idea, we build on SeqGAN, a recent work that successfully combines GANs and RL in order to apply the GAN framework to sequential data. While our implementation uses recurrent neural networks (RNNs) to generate sequential data, in theory our model can be adapted to generate any type of data, as long as the GAN is trained via RL. We implement our model in the context of molecule and music generation, optimizing several different metrics. Our results show that ORGAN achieves better objective scores than maximum likelihood estimation (MLE) and SeqGAN, without sacrificing the diversity of the generated data. Model As illustrated in Figure 1, the informal idea of ORGAN is that the generator is trained via policy gradient to maximize two rewards at the same time: one that improves the hard-coded objective and another that tries to fool the discriminator in a GAN setting. More formally, the discriminator D is a Convolutional Neural Network (CNN) parameterized by φ. We feed both real and generated data to it and update D like we would any classifier. The generator G is an RNN parameterized by θ using Long Short-Term Memory (LSTM) cells that generates length-T sequences Y_{1:T} = (y_1, ..., y_T). Let R(Y_{1:T}) be the reward function defined for full-length sequences. We will define it later in this section. We treat G as an agent in a reinforcement learning context. Its state s_t is the currently produced sequence of tokens Y_{1:t} and its action a is the next token y_{t+1} to select. The agent's stochastic policy is given by G_θ(y_t | Y_{1:t−1}) and we wish to maximize its expected long-term reward J(θ) = E[R(Y_{1:T}) | s_0, θ], where s_0 is a fixed initial state. Q(s, a) is our action-value function that represents the expected reward at state s of taking action a and following our current policy G_θ to complete the rest of the sequence. For any full sequence Y_{1:T}, we have Q(s = Y_{1:T−1}, a = y_T) = R(Y_{1:T}), but we also wish to calculate Q for partial sequences at intermediate timesteps, considering the expected future reward when the sequence is completed. In order to do so, we perform an N-time Monte Carlo search with the canonical rollout policy G_θ, producing rollouts {Y^1_{1:T}, ..., Y^N_{1:T}}, where Y^n_{1:t} = Y_{1:t} and Y^n_{t+1:T} is stochastically sampled via the policy G_θ. Now Q becomes the average rollout reward, Q(s = Y_{1:t−1}, a = y_t) = (1/N) Σ_{n=1}^{N} R(Y^n_{1:T}). Following the original SeqGAN work, in order to apply reinforcement learning to an RNN, an unbiased estimation of the gradient of J(θ) can be derived as ∇_θ J(θ) = Σ_{t=1}^{T} E_{Y_{1:t−1} ∼ G_θ}[ E_{y_t ∼ G_θ(y_t | Y_{1:t−1})}[ ∇_θ log G_θ(y_t | Y_{1:t−1}) Q(Y_{1:t−1}, y_t) ] ]. Finally, in ORGAN we simply define the reward function for a full sequence Y_{1:T} as R(Y_{1:T}) = λ D_φ(Y_{1:T}) + (1 − λ) O(Y_{1:T}), where D_φ is the discriminator and O is the objective representing any heuristic that we like. When λ = 0 the model ignores D and becomes naive RL, whereas when λ = 1 it is simply a SeqGAN model. Algorithm 1 shows pseudocode for the proposed model. Highlighted in blue are the specific differences between SeqGAN and our model. All the gradient descent steps are done using the Adam algorithm. 
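To make the reward construction above concrete, the following is a minimal Python sketch of the ORGAN reward for a finished sequence and the Monte Carlo rollout estimate of Q for a partial one. The function names (discriminator_prob, objective, rollout_policy), the token-list representation, and the toy policy are illustrative assumptions, not the authors' implementation.

```python
import random

def organ_reward(sequence, discriminator_prob, objective, lam):
    """Reward for a full sequence: R(Y) = lam * D(Y) + (1 - lam) * O(Y).

    lam = 0 reduces to naive RL on the objective alone;
    lam = 1 reduces to SeqGAN (discriminator reward only).
    """
    return lam * discriminator_prob(sequence) + (1.0 - lam) * objective(sequence)

def q_value(prefix, T, rollout_policy, discriminator_prob, objective, lam, n_rollouts=16):
    """Monte Carlo estimate of Q for a partial sequence (state = prefix).

    Each rollout completes the prefix to length T by sampling tokens from the
    current generator policy, then scores the full sequence with organ_reward.
    """
    total = 0.0
    for _ in range(n_rollouts):
        completion = list(prefix)
        while len(completion) < T:
            completion.append(rollout_policy(completion))  # sample next token
        total += organ_reward(completion, discriminator_prob, objective, lam)
    return total / n_rollouts

# Toy usage: a stand-in policy that samples characters uniformly, a constant
# stand-in discriminator, and a dummy objective rewarding 'C'-rich strings.
if __name__ == "__main__":
    vocab = list("CNO=#()")
    policy = lambda prefix: random.choice(vocab)
    d_prob = lambda seq: 0.5                      # placeholder discriminator score
    obj = lambda seq: seq.count("C") / len(seq)   # placeholder hard-coded objective
    print(q_value(["C", "C"], T=10, rollout_policy=policy,
                  discriminator_prob=d_prob, objective=obj, lam=0.5))
```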
Note that our implementation on top of SeqGAN is merely because we are working with sequential data. In theory, the ORGAN model can be used with most types of GAN. Algorithm 1 (sketch): pre-train G using MLE on the training set S; generate negative samples using G and pre-train D by minimizing the cross-entropy; then repeat, for g-steps, generating sequences, estimating rewards via Monte Carlo rollouts, and updating G by policy gradient, and, for d-steps, updating D on real and generated data, until convergence. Experiments We compare ORGAN with three other methods of training RNNs: SeqGAN, naive RL, and Maximum Likelihood Estimation (MLE). Each of the four methods is used to train the same architecture. All training methods involve a pre-training step of 240 epochs of MLE. The MLE baseline simply stops right after pre-training, while the other methods proceed to further train the model using the different approaches. For each dataset, we first build a dictionary mapping the vocabulary, the set of all characters present in the dataset, to integers. Then we preprocess the dataset by transforming each sequence into a fixed-size integer sequence of length N, where N is the maximum length of a string present in the dataset plus around 10% more characters to increase flexibility. Every string with length smaller than N is padded with "_" characters. Thus the input to our model becomes a list of fixed-size integer sequences. Molecules In this work, we used three different chemistry datasets consisting of SMILES strings representing molecules in pharmaceutical contexts: Drug-like - a random subset of 15k drug-like molecules from the ZINC database of 35 million commercially available compounds for virtual screening, typically used for drug discovery. The maximum sequence length is 121 and the alphabet size is 37. Small mols - a random subset of 15k molecules from the set of 134 thousand stable small molecules. This is a subset of all molecules with up to nine heavy atoms (C, O, N, F) out of the GDB-17 universe of 166 billion organic molecules. The maximum sequence length is 31 and the alphabet size is 25. Tiny mols - a smaller subset of the Small mols dataset containing all molecules with fewer than 12 atoms. The maximum sequence length is 29 while the alphabet size is 22. When choosing reward metrics we picked qualities that are normally desired for drug discovery: Novelty: a function that returns a value of 1 if a SMILES string encodes a valid molecule that is outside of the training set. It returns 0.3 if the molecule is only valid, and 0 if it is not valid. (Figure 2 caption: each plot is optimized for a particular objective, shown as a bold line; visibly, these rewards present the most growth in each case, signifying that the generated data is indeed receiving bias from the metric.) Diversity: a function from 0 to 1 that measures the average similarity of a molecule with respect to a random subset of molecules from the training set. The closer to 0, the less diverse the molecule is. Solubility (Log(P)): a function that measures the solubility of a molecule in water, normalized to the range 0 to 1 based on experimental data. The value is computed via RDKit's LogP function. Synthesizability: a normalized version of the synthetic accessibility score as implemented in RDKit, a measure based on molecular complexity and fragment contributions that estimates how hard or how easy it is to make a given molecule. Figure 3 shows some of the generated molecules. In Figure 2 we can observe that the reward is indeed inducing a bias in the generated data, since each particular reward grows the fastest before plateauing to a maximum. We also find that some metrics improve over time along with the reward. 
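As a rough illustration of how such a normalized chemical metric can be computed, the snippet below scores SMILES strings with RDKit's LogP and clips the result into [0, 1]. The clipping range and the direction of the mapping are placeholder assumptions for illustration, not the normalization used in the ORGAN experiments.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

def normalized_logp(smiles, low=-2.0, high=6.0):
    """Return a solubility-style score in [0, 1] derived from RDKit's LogP.

    Invalid SMILES get 0. The (low, high) clipping range is an illustrative
    assumption, not the cutoffs used by the paper's reward function.
    """
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return 0.0
    logp = Descriptors.MolLogP(mol)
    return min(1.0, max(0.0, (logp - low) / (high - low)))

print(normalized_logp("CCO"))      # ethanol, a valid molecule
print(normalized_logp("[C[[[N"))   # invalid SMILES -> 0.0
```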
This behavior is expected since many of the rewards are not independent. We can also observe that novelty presents the same pattern. This is again not surprising, since novelty is essentially counting valid sequences, which are necessary for all other rewards. (Table 2 caption: SMILES strings for the molecules illustrated in Figure 3, each column from a different training algorithm; at a glance, the naive RL molecules seem much less interesting than the SeqGAN and ORGAN ones because they are not as diverse and possess repetitive substructures.) Meanwhile, Figure 3 illustrates the role that λ plays in the generative process. While the maximum of the druglikeness reward lies in the naive RL setting, we can see that the generated molecules are quite simple. By increasing λ we end up with a lower (but still high) druglikeness while generating more acceptable molecules. This pattern is observed for almost all metrics; sometimes the maximum will also lie in the intermediate λ regime. In our experiments we also noted that naive RL has different failure scenarios. For instance, when trained on the Small mols dataset, it learned to generate longer sequences with monotonous patterns like "CCCCCCCCCCCCCCCCC" or "CCCCOCCCCCOCCCCCC". When trained on the drug-like dataset, however, the naive RL model learned to generate sequences significantly shorter than those in the training set, such as the single-atom molecule "N." The GAN component can easily prevent any of these failure scenarios, since the discriminator can learn to penalize string batches that do not look like the training set (in this case, they do not have the same average length), as seen in the average lengths of sequences generated by each of the models in Table 1. Music To further demonstrate the applicability of ORGAN, we extended its application to music. We used ABC notation, which allows music to be expressed in a text format and facilitates reading the dataset and analyzing its contents. In this work we use the Nottingham dataset, filtering out sequences longer than 80 notes. We generate songs optimizing three different metrics: Tonality: This measures how many perfect fifths are in the music that is generated. A perfect fifth is defined as a musical interval whose frequencies have a ratio of approximately 3:2. These provide what is generally considered pleasant note sequences due to their high consonance. Melodicity: In order of decreasing consonance, we have the following intervals: perfect fifth, perfect fourth, major sixth, major third, minor third, minor sixth, major second, minor seventh, and minor second. We decided that, for an interval to be considered melodic, it must be one of the top three in the above list. Note that tonality is a subset of melodicity, as maximizing tonality also helps maximize melodicity. Ratio of Steps: A step is an interval between two consecutive notes of a scale. An interval from C to D, for example, is a step. A skip, on the other hand, is a longer interval. An interval from C to G, for example, is a skip. By maximizing the ratio of steps in our music, we are adhering to conjunct melodic motion. Our rationale here is that by increasing the number of steps in our songs we make our melodic leaps rarer and more memorable. Moreover, we calculate diversity as the average pairwise edit distance of the generated data, following the approach of Habrard et al. 
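To illustrate how interval-based metrics like these can be computed, here is a small Python sketch that scores a melody (given as MIDI pitch numbers) by the fraction of consecutive intervals that are perfect fifths (7 semitones, roughly a 3:2 frequency ratio) and the fraction that are steps (1 or 2 semitones). The pitch-number representation and these exact interval definitions are simplifying assumptions rather than the paper's ABC-notation pipeline.

```python
def interval_metrics(pitches):
    """Compute tonality-like and ratio-of-steps-like scores for a melody.

    pitches: list of MIDI note numbers. Intervals are absolute semitone
    distances between consecutive notes. A perfect fifth is 7 semitones;
    a step is 1 or 2 semitones (within-octave melodies assumed).
    """
    if len(pitches) < 2:
        return {"tonality": 0.0, "ratio_of_steps": 0.0}
    intervals = [abs(b - a) for a, b in zip(pitches, pitches[1:])]
    fifths = sum(1 for i in intervals if i == 7)
    steps = sum(1 for i in intervals if i in (1, 2))
    n = len(intervals)
    return {"tonality": fifths / n, "ratio_of_steps": steps / n}

# Example: C4->G4 (a fifth), G4->A4 (a step), A4->B4 (a step)
print(interval_metrics([60, 67, 69, 71]))
```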
We do not attempt to maximize this metric explicitly, but we keep track of it to shed light on the trade-off between metric optimization and sample diversity in the ORGAN framework. Table 3 shows quantitative results comparing ORGAN to other baseline methods optimizing for three different metrics. ORGAN outperforms SeqGAN and MLE in all three metrics. Naive RL achieves a higher score than ORGAN for the Ratio of Steps metric, but it underperforms in terms of diversity, as naive RL would likely generate very simple rather than diverse songs. In this sense, similar to the molecule case, although the naive RL ratio of steps score is higher than ORGAN's, the actual generated songs can be deemed much less interesting. By tweaking λ, the ORGAN approach allows one to explore the trade-off between maximizing the desired objective and maintaining diversity or "interestingness." Besides the expected correlation between Tonality and Melodicity, we also noticed an inverse relationship between Ratio of Steps and either of the other two. This is because two consecutive notes, which is what qualifies as a step, do not have the frequency ratios of perfect fifths, perfect fourths, or major sixths, which are responsible for increasing Melodicity. Figure 4 shows the distribution of the tonality of data sampled from ORGAN and from the training set. (Figure 4 caption: tonality distributions of the data sampled from ORGAN with different values of λ, as well as from the training set; the x axis represents the tonality metric, while the y axis represents the frequency of that particular tonality in the sampled data.) As λ increases, the curves become smoother, since the discriminator forces the model to approach the structure of the training set. Without the discriminator component, naive RL (λ = 0) creates a distribution that does not seem very realistic because of its lack of diversity. In all cases, however, the RL component of ORGAN successfully skews the model towards data with higher values of tonality. Conclusion and Future Work In this work, we have presented a general framework which builds on the recent advances of Generative Adversarial Networks to optimize an arbitrary objective in a sequence generation task. We have shown that ORGAN improves desired metrics, achieving better results than RNNs trained via either MLE or SeqGAN. More importantly, we are able to tune the data generation towards a particular reward function while using the adversarial setting to keep the data non-repetitious. Moreover, ORGAN is much easier to use as a black box than similar objective optimization models, since one does not need to introduce multiple domain-specific penalties to the reward function: many times a simple objective "hint" will suffice. Future work should attempt to formalize ORGANs from a theoretical standpoint in order to understand when and how they converge. It is crucial to understand when GANs converge in general, which is still an open question. Future work should also do more to understand the influence of the choice of heuristic on the performance of the model. Finally, future work should extend ORGANs to work with data that is not sequential, such as images. This requires framing the GAN setup as a reinforcement learning problem in order to add an arbitrary (not necessarily differentiable) objective function. We believe this extension to be quite promising, since real-valued GANs are currently better understood than sequence-data GANs. |
Permutation Separations and Complete Bipartite Factorisations of K_{n,n} Suppose p < q are odd and relatively prime. In this paper we complete the proof that K_{n,n} has a factorisation into factors F whose components are copies of K_{p,q} if and only if n is a multiple of pq(p+q). The final step is to solve the "c-value problem" of Martin. This is accomplished by proving the following fact and some variants: For any 0 ≤ k ≤ n, there exists a sequence (π_1, π_2, ..., π_{2k+1}) of (not necessarily distinct) permutations of {1, 2, ..., n} such that each value in {−k, 1−k, ..., k} occurs exactly n times as π_j(i) − i for 1 ≤ j ≤ 2k+1 and 1 ≤ i ≤ n. |
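Since the statement above is purely combinatorial, a short check makes it concrete. The Python sketch below verifies, for a given family of permutations, that every displacement in {−k, ..., k} occurs exactly n times; the example family for n = 3, k = 1 is my own illustration, not one taken from the paper.

```python
from collections import Counter

def satisfies_separation(perms, n, k):
    """Check the property from the abstract: over all 2k+1 permutations pi_j of
    {1, ..., n}, each value in {-k, ..., k} occurs exactly n times as pi_j(i) - i."""
    assert len(perms) == 2 * k + 1
    diffs = Counter(p[i - 1] - i for p in perms for i in range(1, n + 1))
    return all(diffs.get(v, 0) == n for v in range(-k, k + 1))

# n = 3, k = 1: three permutations (repeats allowed) whose displacements
# cover -1, 0, and +1 exactly three times each.
family = [(2, 1, 3), (2, 1, 3), (1, 3, 2)]
print(satisfies_separation(family, n=3, k=1))  # True
```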
The Global Trade in Live Cetaceans: Implications for Conservation Cetaceans (small whales, dolphins, and porpoises) have long been popular performers in oceanaria. Captive cetaceans have also been used for research and employed in military operations. In some jurisdictions cetacean display facilities have been phased out or prohibited, and in the US and Hong Kong a high proportion of the whales and dolphins now in captivity have been captive-bred. A large, growing and increasingly opportunistic trade in dolphins and small toothed whales nevertheless exists, its centres of supply having shifted away from North America, Japan, and Iceland to the Russian Federation and developing nations in Latin America, the Caribbean, West Africa, and Southeast Asia. Demand for live captures is being driven by: a new wave of traditional-type oceanaria and dolphin display facilities, as well as travelling shows, in the Middle East, Asia, Latin America, and the Caribbean; increasingly popular programs that offer physical contact with cetaceans, including the opportunity to feed, pet, and swim with them; and the proliferation of facilities that offer dolphin-assisted therapy to treat human illness or disability. Rigorous assessment of source populations is often lacking, and in some instances live capture is adding to the pressure on stocks already at risk from hunting, fishery bycatch, habitat degradation, and other factors. All too often, entrepreneurs appear to be taking advantage of lax (or non-existent) regulations in small island states or less developed or politically unstable countries to supply the growing global demand for dolphins and small whales. The regulation of trade in live cetaceans under CITES is fraught with problems, not least the poor quality of reporting and the lack of a rigorous mechanism for preparation, review, and evaluation of non-detriment findings. Preparation of this article was supported by WDCS, the Whale and Dolphin Conservation Society. The authors are especially grateful to Cathy Williamson and Philippa Brakes for contributing data, ideas, and helpful critical comments. |
Imaging sparse scatterers through a multi-frequency CS approach In this paper, an inverse scattering technique based on multi-task Bayesian Compressive Sensing is presented within a multi-frequency framework. After recasting the problem in a probabilistic sense, the solution to the imaging problem is determined by means of an efficient Relevance Vector Machine coupled with a contrast source inversion procedure. Selected numerical results are discussed to assess and compare the efficiency and robustness of the proposed strategy with respect to state-of-the-art techniques. |
Hypoconnectivity networks in schizophrenia patients: a voxelwise meta-analysis of rs-fMRI In the last few years, the field of brain connectivity has focused on identifying biomarkers to describe different health states and to discriminate between patients and healthy controls through the characterization of brain networks. A particularly interesting case, because of the symptoms' severity, is the work done with samples of patients diagnosed with schizophrenia. This meta-analysis aims to identify connectivity networks with different activation patterns between people diagnosed with schizophrenia and healthy controls. Therefore, we collected primary studies exploring whole-brain connectivity by functional magnetic resonance imaging at rest in patients with schizophrenia compared to healthy people. Thus, we identified 25 high-quality studies that included a total of 1285 people with schizophrenia and 1279 healthy controls. The results indicate hypoactivation in the right precentral gyrus and in the left superior temporal gyrus of people with schizophrenia compared with the control group. These regions have been linked to deficits in gesticulation and the experience of auditory hallucinations in people with schizophrenia. A study of heterogeneity demonstrated that the effect size was influenced by the sample size and type of analysis. These results imply new contributions to the knowledge, diagnosis, and treatment of schizophrenia both clinically and in research. Introduction Schizophrenia is the most important severe mental health disorder and implies an extraordinary health problem. Risk factor identification and etiological studies remain unresolved in the scientific response. The genetic and neurobiological factors that have been associated with schizophrenia are quite heterogeneous. Several studies have focused on the dopaminergic dysfunction hypothesis concerning schizophrenia. For example, dopaminergic hypoactivity in the prefrontal cortex has been associated with negative symptomatology (Buckley and Castle, 2015). Currently, there is no biological marker for the diagnosis of schizophrenia. Thus, the early identification of people with a high risk of schizophrenia poses a major public health challenge before they begin to manifest the symptomatology of the disorder. Consequently, a better understanding of this disorder's neurological substrates can help to identify better strategies for early diagnosis and individualized psychological and pharmacological treatment (Nickl-Jockschat and Abel, 2016). Functional magnetic resonance imaging in the resting state (rs-fMRI) has proven to be a promising tool to contribute to the diagnosis of several disorders, such as autism (), attention and hyperactivity deficit disorder (), major depression disorder (Craddock, Holtzheimer, Hu and Mayberg, 2009) and schizophrenia (Chyzhyk, Savio and Graña, 2015; Qureshi, Oh and Lee, 2019). It is a noninvasive technique and does not require the active collaboration of the patient (Lee, Smyser, and Shimony, 2013), which is especially important for evaluating brain activity in populations whose cognitive performance is affected. In the study of resting-state connectivity in schizophrenia, evidence has shown significant differences in patients compared to healthy populations. More specifically, the disconnection hypothesis has been studied, a framework in which the diversity of symptoms typical of schizophrenia is conceptualized as a result of disconnections in neural networks (Friston and Frith, 1995). 
In line with this hypothesis, alterations in the default mode network (DMN), the most prominent resting network (Woodward, Rogers and Heckers, 2011), have been shown. In addition, a reduction in precuneus connectivity with other areas was noted in patients with schizophrenia compared to the control group. Connectivity intensity in this area is negatively correlated with the severity of negative symptoms, more specifically with the apathy domain (). Gangadin et al. also demonstrated a reduction in the connectivity of the hippocampal-mesencephalic-striatal network in patients with schizophrenia. In addition, patients showed increased long-range positive connectivity in the right middle frontal gyrus (MFG) and short-range positive connectivity in the right MFG and right superior medial prefrontal cortex, which are brain regions in the anterior DMN. Hua et al. noted decreased connectivity between the thalamus and prefrontal cortex and cerebellum but an increase in the connectivity of the thalamus and the motor cortex in patients with schizophrenia. In addition, studies of bilateral asymmetric connectivity have noted that patients with a predominance of positive symptomatology showed significantly more asymmetry to the left hemisphere. In contrast, the group with a predominance of negative symptoms showed more asymmetry to the right. These results suggest that predominantly positive and predominantly negative schizophrenia may have different neural bases and that certain regions in the frontal and temporal lobes, as well as the gyrus and precuneus, play an essential role in mediating the symptoms of this disorder (). Additionally, Chen et al. showed evidence that cerebellum disconnection is network-specific; that is, the group of patients with schizophrenia showed decreased cerebellum connectivity with the prefrontal lobe and more corticocerebellar connectivity with regions involved in sensory-motor processing, which may be indicative of the deficiencies in inhibition observed in people with schizophrenia. In addition, Li et al., also with schizophrenia patients, showed reduced insula connectivity with the sensory cortex and putamen compared to people with a high risk of psychotic disorder. Complementarily, schizophrenia patients have shown increased connectivity between the posterior cingulate cortex and the left inferior gyrus, left middle frontal gyrus, and left middle temporal gyrus. Conversely, schizophrenia patients have shown decreased connectivity in the executive control network and the dorsal attention network. These results show that resting-state network connectivity is altered in patients with schizophrenia, and the alterations are characterized by reduced segregation between the DMN and the executive control networks in the prefrontal cortex and temporal lobe (Woodward, Rogers and Heckers, 2011). This study found no statistically significant distinctions in the connectivity of the salience network; instead, Huang et al. showed evidence of hyperconnectivity of the salience network and the prefrontal cortex and cerebellum, as well as hypoconnectivity between the cortico-striatal-thalamic-cortical subcircuit and the salience network. In recent years, some meta-analyses have explored rs-fMRI in patients with schizophrenia compared to control groups. For example, Xiao et al. 
showed evidence that people with schizophrenia had increased connectivity, estimated with regional homogeneity (ReHo), in the right superior frontal and right superior temporal gyrus, as well as decreased ReHo connectivity in the right fusiform gyrus, left superior temporal gyrus, left postcentral gyrus, and right precentral gyrus (focused on ReHo studies). Dong et al. conducted a meta-analysis showing that patients with schizophrenia presented hypoconnectivity in the DMN, affective network (AN), ventral attentional network (VAN), thalamic network (TN), and somatosensory network. They also showed hypoconnectivity between VAN and TN, VAN and DMN, VAN and the frontoparietal network (FN), between FN and TN, and between FN and DMN. Hyperconnectivity was found only between the AN and VAN (focused on seed-based analysis studies). The abovementioned Li et al. showed evidence through a meta-analysis that supports hypoconnectivity in certain brain networks in schizophrenic patients, more specifically the self-referential network (superior temporal gyrus) and the DMN (right medial prefrontal cortex, left precuneus, and anterior cingulate), focused on independent component analysis (ICA) studies. Finally, Gong et al. showed that people with schizophrenia presented a decreased amplitude of low-frequency fluctuations (ALFF) in the bilateral postcentral gyrus, bilateral precuneus, left inferior parietal gyrus, and right occipital lobe. In addition, they found an increased ALFF in the right-handed, left inferior frontal gyrus, left inferior temporal gyrus, and right anterior cingulate cortex. To our knowledge, no meta-analysis includes rs-fMRI studies involving the whole brain as well as studies that use different analysis techniques (ICA, ReHo, ALFF, fALFF, etc.). Moreover, given the inconsistencies between the studies in this field, the aim of this meta-analysis is to identify functional connectivity networks of the whole brain using a paradigm of rs-fMRI in patients with schizophrenia compared to healthy people (without any neurological or psychiatric disorder). Thus, it is expected that patients diagnosed with schizophrenia will show statistically significant differences in functional connectivity compared to healthy people. In addition, a secondary objective is to analyze the relationship between the effect size and mediator variables, such as sample size, age, gender, etc. Methods Study selection. Two independent investigators performed a bibliographic search using the following databases: PubMed, Web of Science (WoS), PsycINFO, Google Scholar, and Scopus. Additionally, the Boolean algorithm with the keywords used is presented in Supplementary Appendix 1. We included studies published until February 28, 2021. This meta-analysis was conducted according to the "Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA)" guidelines. The inclusion criteria for the studies were as follows: 1) they were published in English or Spanish; 2) the full text was available; 3) they were a primary study in a human population; 4) they included a patient group diagnosed with schizophrenia following the DSM-IV criteria or the structured clinical interview for DSM-IV (SCID); 5) they compared brain activation between schizophrenia patients and healthy people; 6) they used rs-fMRI; and 7) the studies reported Montreal Neurological Institute (MNI) or Talairach coordinates of the whole-brain contrast comparing persons with schizophrenia and control subjects. 
The exclusion criteria were as follows: 1) systematic reviews or meta-analyses; 2) methodological studies; 3) patients with schizophrenia with other psychiatric or neurological disorders; or 4) studies focused on dynamic connectivity or using graph analysis or any other technique that does not identify coordinates. The studies were screened as shown in Figure 1. Our search yielded a total of 3563 studies. Subsequently, 2106 duplicate papers were removed through Mendeley, and 64 duplicate papers were removed via Rayyan. A total of 1234 studies were excluded after title/abstract screening because they did not meet the inclusion criteria. Later, 159 articles were retrieved for full-text reading, but 8 studies could not be found. Thus, after the full-text screening, 73 studies were excluded because they did not provide information about the peak activation coordinates, 16 because they did not report the statistics associated with the coordinates, and 37 because they did not analyze the whole brain. Finally, only 25 studies matching our inclusion criteria were included and are marked with an * in the reference list. In addition, we obtained a 100% rate of agreement between the two investigators for the study search and selection. Voxel-Wise Meta-analysis. We used seed-based d mapping (SDM) software (available at http://www.sdmproject.com) to analyze the differences between schizophrenia patients and healthy subjects. The approach details have been described in Radua and Mataix-Cols or Müller et al. First, the reported peak coordinates of all functional differences, which were statistically significant at the whole-brain level in these studies, were chosen. We ensured that all included studies used the same statistical threshold throughout the whole brain to avoid possible bias toward regions with liberal thresholds. Thus, we considered the minimum threshold to be defined by a .001 significance value and Student's t reference value with the degrees of freedom of each study estimated by the conventional expression (n1 + n2 − 2). Second, for each study we recreated a standard MNI map of the group-difference effect size from its peak coordinates and peak t values by applying an unnormalized Gaussian kernel to the voxels near each peak, which assigns higher values to voxels closer to the peaks. Third, the mean map was obtained by voxelwise calculation of the random-effects mean of the study maps, weighted by the sample size. Fourth, to correctly balance sensitivity and specificity, we used a p value of 0.05 as the main threshold with an additional peak height of z = 1. Jackknife sensitivity analysis was performed to test the replicability of the results. After the calculation of Cohen's d and the confidence interval (CI) analysis of the different papers was performed, a descriptive analysis of every paper's results was summarized in different images to clarify the results obtained in every included study. Quality Assessment. We assessed the quality of the included studies using a checklist consisting of 11 items that focused on the clinical characteristics of the participants, the neuroimaging and data analysis methodology, the results, and the conclusions of the studies. The quality assessment scale is shown in Supplementary Appendix 2. This checklist was based on previous meta-analyses and has been described elsewhere (Shepherd, Matheson, Laurens and Green, 2012;). One author reviewed the included studies and determined a complete rating. 
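As a rough illustration of the peak-to-map reconstruction and pooling steps described in the voxelwise meta-analysis paragraph above, the sketch below spreads each reported peak over a 1-D grid with an unnormalized Gaussian kernel and then takes a sample-size-weighted mean across studies. Real SDM operates on 3-D brain volumes with anisotropy correction, so this is only a schematic of the idea, not the software's algorithm.

```python
import numpy as np

def study_effect_map(peaks, grid, fwhm=20.0):
    """Recreate a schematic effect-size map for one study from its peaks.

    peaks: list of (coordinate, t_value); grid: 1-D array of voxel coordinates.
    Each peak spreads its t value to nearby voxels with an unnormalized
    Gaussian kernel, so voxels closer to a peak get values closer to it.
    """
    sigma = fwhm / 2.355
    effect = np.zeros_like(grid, dtype=float)
    for coord, t_value in peaks:
        kernel = np.exp(-((grid - coord) ** 2) / (2 * sigma ** 2))
        effect = np.maximum(effect, t_value * kernel)  # keep the strongest contribution
    return effect

def pooled_mean_map(study_maps, sample_sizes):
    """Voxelwise mean of the study maps weighted by sample size (schematic pooling)."""
    weights = np.asarray(sample_sizes, dtype=float)
    stacked = np.vstack(study_maps)
    return (weights[:, None] * stacked).sum(axis=0) / weights.sum()

grid = np.linspace(-60, 60, 121)
maps = [study_effect_map([(50, -3.2)], grid),
        study_effect_map([(48, -2.5), (-20, 2.1)], grid)]
pooled = pooled_mean_map(maps, sample_sizes=[40, 25])
print(pooled[np.argmin(np.abs(grid - 50))])  # pooled effect near coordinate 50
```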
The resulting scores were discussed between two investigators, and a consensus quality score was obtained. Results Studies included in the meta-analysis. Supplementary Appendix 3 shows the data obtained in each study to describe each mediator variable for each analyzed paper. Table 1 shows the basic descriptive statistics of the mediator variables. We want to highlight that the total number of patients analyzed represents a substantial sample (n = 1285), together with a control group (n = 1279). With regard to the techniques used to estimate connectivity networks, the most common are ALFF (20.68%) and ReHo (34.48%). Meta-analysis Result. In Figures 2 and 3, the forest plot shows the effect size of each study, as well as the total mean of the effect size and a confidence interval of 95%. If the effect size was negative, the patient group showed decreased activation compared to the healthy group. Therefore, we can see that the largest positive effect size found is 1.715, with the upper limit being 2.448. On the other hand, the largest negative effect size is -1.673, with the lower limit being -2.361. Notably, in the case of negative differences, the works of Turner et al. and Fryer et al. show a narrower confidence interval, and the same work by Turner et al. shows the most precise interval in the case of positive differences. In this meta-analysis, people with schizophrenia did not show any hyperactivation compared to controls. However, they show decreased activation in the right precentral gyrus, specifically in Brodmann area (BA) 4. In addition, they also show hypoactivation in the left superior temporal gyrus corresponding to BA 22. Thus, two clusters were found, one with 640 voxels and one with 150 voxels. The results are displayed in Table 2. In addition, Figure 4 shows the graphical representation of the areas that are hypoactivated (visualized with BrainNet Viewer; Xia, Wang and He, 2013; http://www.nitrc.org/projects/bnv/). Note that the size of the node is proportional to the voxels it represents. That is, the larger the node, the more voxels there are in that area and the larger the region it represents. It is also worth noting that the nodes represented in blue correspond to the right precentral gyrus (BA 4), and the yellow nodes refer to the left superior temporal gyrus (BA 22). Reliability analysis. A jackknife sensitivity analysis (Table 3) was carried out to check the replicability of the results. This analysis revealed that the right precentral gyrus (with coordinates of 50, -10, 40) was replicable in all 29/29 datasets (each dataset with one study left out) and that the left superior temporal gyrus (-58, -20, 4) was replicable in 26/29 datasets (the results were not confirmed in only 3 of the simulations performed, that is, 89.66% reliability). Therefore, we can establish a very high reliability of the results obtained. Publication bias analysis. In Figures 5 and 6, we can see funnel plots in which publication bias is graphically displayed. As we can see, the graphs suggest that there is no publication bias, since the points (which correspond to the effect sizes of each study) are distributed uniformly on either side of the value 0 on the abscissa (effect size) axis. Therefore, in our case, we can see how studies have been published with increasingly significant effects. In addition, Table 4 shows the values of the Z statistics, as well as their significance (p = .504 and p = .751), which indicates that there is no bias. 
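A minimal sketch of the leave-one-out (jackknife) reliability check described above: recompute the pooled effect with each study dropped in turn and see whether the finding holds in every reduced dataset. The inverse-variance weighting used here is a generic meta-analytic choice, not necessarily the exact weighting of the SDM software, and the per-study numbers are illustrative only.

```python
import numpy as np

def pooled_effect(d, var):
    """Pooled effect size with inverse-variance weights (fixed-effect form)."""
    w = 1.0 / np.asarray(var, dtype=float)
    return np.sum(w * np.asarray(d, dtype=float)) / np.sum(w)

def jackknife(d, var):
    """Leave-one-out pooled effects: one estimate per omitted study."""
    return [pooled_effect(np.delete(d, i), np.delete(var, i)) for i in range(len(d))]

# Toy data: per-study effect sizes (negative = hypoactivation) and variances.
d = np.array([-0.9, -1.1, -0.7, -1.3, -0.8])
var = np.array([0.10, 0.08, 0.15, 0.12, 0.09])
print(pooled_effect(d, var))
print(jackknife(d, var))  # the result is "replicable" if every estimate stays negative
```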
Heterogeneity analysis. To determine the possible heterogeneity between the studies included in the present meta-analysis, Q and I² statistics were estimated. The results obtained are in Table 5, where the values of tau (τ) representing the variance of the effect size distribution are displayed. In addition, we can see that the Q statistics, from both positive and negative peaks, are statistically significant (p < .001). In this way, there is heterogeneity between the different studies, so it is appropriate to explore the mediator variables that could explain this phenomenon. On the other hand, it must be pointed out that the degree of heterogeneity is quantified by the I² index, and as we can see, the values indicate a moderate degree of heterogeneity. As mentioned above, the fact that there is heterogeneity between the studies included in the meta-analysis leads us to perform an analysis of possible mediator variables that could explain the variability between the effect sizes (here considered in absolute value). Since more than one statistically significant effect is present in some works, the effect size of each article analyzed is estimated by the mean of the effect sizes included in that paper. The following variables were explored: type of data analysis used in the studies (ReHo, ALFF, fALFF, etc.), sample size, age and sex of the patient group and control group, total, general, positive, and negative PANSS scores of the group of patients with schizophrenia, illness duration, and quality of the studies analyzed. Categorical variables. First, Welch's t-test was used to analyze the relationship between the effect size and the type of analysis (ReHo or ALFF; the rest of the techniques were underrepresented and were eliminated from this analysis). The results indicate that there was a statistically significant relationship between the effect size and the type of analysis used (t = -2.381; df = 13.927; p_uni = .016; r = .538). In fact, the mean effect size in ReHo studies is d = 1.242, and in ALFF studies, d = .964, which suggests that studies using ReHo can obtain a greater effect size than those using ALFF, and this effect has high intensity according to Cohen's criteria. Quantitative variables: Meta-regression analysis. A meta-regression was performed to analyze whether the quantitative variables described above (Table 1) had a statistically significant impact. Thus, Table 6 presents the least squares estimates of each meta-regression to evaluate the effect of each mediator variable on the estimate of the effect size. From the above table, it follows that only sample size is a statistically significant predictor of the effect size heterogeneity. The negative value of the regression coefficients indicates that works with larger sample sizes obtain lower values of the effect. In summary, the analysis performed with mediator variables indicates that, in the studies analyzed, the effect size is higher in ReHo studies estimated with smaller samples. Discussion To our knowledge, this is the first meta-analysis to study rs-fMRI of the whole brain in patients with schizophrenia compared to healthy people, including studies that use different analysis techniques. The results indicate that there was hypoactivation of the right precentral gyrus and left superior temporal gyrus in patients with schizophrenia compared to the control groups. These results are congruent with other findings. Dong et al. 
also showed evidence in their meta-analysis (only ALFF studies) of a reduction in connectivity in the left superior temporal gyrus in patients with schizophrenia. Similarly, another meta-analysis also showed evidence of decreased ReHo in this area (). Additionally, in a systematic review, connectivity alterations were found in this area in studies performed with both rs-fMRI and task fMRI (). In addition, children of schizophrenic patients also show reduced activation of the left superior temporal gyrus during hearing comprehension (). It should be noted that dysfunction in this region has been related to the presence of auditory hallucinations in patients with schizophrenia (Hugdahl, Løberg and Nygård, 2009). More specifically, Plaze et al. showed evidence that the anterior area of the left superior temporal gyrus is part of the brain network associated with the perception of auditory hallucinations in patients with schizophrenia, indicating that activity in this cortical region may be related to the severity of hallucinations. Activation of this area has also been demonstrated during the experience of auditory verbal hallucinations (). Consistent with our results, Gong et al. showed evidence of ALFF alteration in the right precentral gyrus. In addition, Li et al. found hypoconnectivity between the right precentral gyrus, which is involved in motor function, and the postcentral and precentral gyri and cerebellum. Additionally, Xiao et al. showed evidence of decreased ReHo in the right precentral gyrus. In addition, hypoactivation in this area has been shown in relatives of people with schizophrenia compared to the control group (Scognamiglio and Houenou, 2014). It should be noted that dysfunctions in praxis networks in patients with schizophrenia, which include the right precentral gyrus, correlate with deficits in the gesticulation of these patients (). Our results are congruent with the meta-analysis of Gong et al., and we found that the duration of illness was not related to the effect size. Nevertheless, unlike the meta-analysis of Gong et al., we did not find that any of the PANSS scores were related to the effect size. There are some limitations in this meta-analysis. First, we did not include those studies that did not report the coordinates or those that did not report the associated statistics. In addition, we have not taken into account the different subtypes of schizophrenia, which would be interesting for future research. Additionally, some analysis techniques used by the studies that we included were underrepresented and could not be taken into account when evaluating whether they could predict the effect size. There were several missing values regarding PANSS scores. |
Throughput maximization of ad-hoc wireless networks using adaptive cooperative diversity and truncated ARQ We propose a cross-layer design which combines truncated ARQ at the link layer and cooperative diversity at the physical layer. In this scheme, both the source node and the relay nodes utilize an orthogonal space-time block code for packet retransmission. In contrast to previous cooperative diversity protocols, here cooperative diversity is invoked only if the destination node receives an erroneous packet from the source node. In addition, the relay nodes are not fixed and are selected according to the channel conditions using CRC. It will be shown that this combination of adaptive cooperative diversity and truncated ARQ can greatly improve the system throughput compared to the conventional truncated ARQ scheme and fixed cooperative diversity protocols. We further maximize the throughput by optimizing the packet length and modulation level and will show that substantial gains can be achieved by this joint optimization. Since both the packet length and modulation level are usually discrete in practice, a computationally efficient algorithm is further proposed to obtain the discrete optimal packet length and modulation level. |
Ulcerative colitis complicated by autoimmune hepatitis-primary biliary cholangitis-primary sclerosing cholangitis overlap syndrome. Autoimmune liver diseases mainly include autoimmune hepatitis (AIH), primary biliary cholangitis (PBC), primary sclerosing cholangitis (PSC) and overlap syndrome. Patients with IBD often have concurrent autoimmune liver diseases. We describe a rare case of UC complicated by AIH-PBC-PSC overlap syndrome. The male patient had a long history of UC. After admission to hospital, the patient was found to have abnormal liver function and was diagnosed with AIH-PBC-PSC overlap syndrome by liver puncture biopsy. This case will help us learn more about patients with confirmed IBD complicated by autoimmune liver diseases. |
Measurement of tt̄ and single top quark production cross sections in CMS With a delivered integrated luminosity of around 140 fb⁻¹ at a center-of-mass energy of 13 TeV in the CMS experiment during Run 2, almost 300 million top quarks and top antiquarks were produced. As top quarks can be produced through either the strong or the electroweak interaction, they are a suitable tool to probe the strong and electroweak sectors of the standard model. Precision measurements of top quark pair (tt̄) and of single top quark production cross sections deliver constraints on several standard model parameters, e.g., the top quark mass, the strong coupling α_S, and the parton distribution functions. In addition, a sufficient amount of data has been collected to search for rare tt̄ production modes. In this contribution, recent measurements of the differential tt̄ cross sections and the latest inclusive and differential cross section measurement for single top quark production in association with a W boson performed by the CMS Collaboration are presented, as well as the search for exclusive tt̄ production using the CMS-TOTEM Precision Proton Spectrometer. |
Physical forcing of nitrogen fixation and diazotroph community structure in the North Pacific subtropical gyre Dinitrogen (N2)-fixing microorganisms (termed diazotrophs) exert important control on the ocean carbon cycle. However, despite increased awareness of the roles of these microorganisms in ocean biogeochemistry and ecology, the processes controlling variability in diazotroph distributions, abundances, and activities remain largely unknown. In this study, we examine 3 years (2004–2007) of approximately monthly measurements of upper ocean diazotroph community structure and rates of N2 fixation at Station ALOHA (22°45′N, 158°W), the field site for the Hawaii Ocean Time-series program in the central North Pacific subtropical gyre (NPSG). The structure of the N2-fixing microorganism assemblage varied widely in time, with unicellular N2-fixing microorganisms frequently dominating diazotroph abundances in the late winter and early spring, while filamentous microorganisms (specifically various heterocyst-forming cyanobacteria and Trichodesmium spp.) fluctuated episodically during the summer. On average, a large fraction (∼80%) of the daily N2 fixation was partitioned into the biomass of <10 μm microorganisms. Rates of N2 fixation were variable in time, with peak N2 fixation frequently coinciding with periods when heterocystous N2-fixing cyanobacteria were abundant. During the summer months when sea surface temperatures exceeded 25.2°C and concentrations of nitrate plus nitrite were at their annual minimum, rates of N2 fixation often increased during periods of positive sea surface height anomalies, as reflected in satellite altimetry. Our results suggest mesoscale physical forcing may comprise an important control on variability in N2 fixation and diazotroph community structure in the NPSG. |
Perennial Language Learners or Competent Language Users: An Investigation of International Students' Attitudes towards Their Own and Native English Accents English is widely used as a global language. The traditional monolithic model of English has been challenged as the development of World Englishes (WE) and English as a lingua franca (ELF) paradigms challenge the ownership of English. With this newly emerging status quo, English language teaching (ELT) should also recognize the diversity and dynamism of English. This article discusses students' attitudes towards their own and native English accents, and describes the influence of English accents in ELT. Data were collected using semi-structured interviews with nine international students from Cambodia, China, Indonesia, Malaysia, and Sri Lanka who were studying at a university in Southern Thailand. The derived data were analysed using qualitative content analysis. The findings revealed that most students still perceived their accents as being deficient, and they believed that native speakers' English accents were the norm of English use and the ultimate learning goal. Thus, entrenched native ideology was still persistent among these students. The article also provides some implications for pronunciation teaching from a WE and ELF framework with the Teaching of Pronunciation for Intercultural Communication (ToPIC). It is hoped that an awareness of English as a global language could be recognized, and ToPIC could be applied to ELT in more contexts to reflect the global status of English. |
Traumatic work related mortality among seafarers employed in British merchant shipping, 1976–2002 Aims: To establish the causes and circumstances of all traumatic work related deaths among seafarers who were employed in British merchant shipping from 1976 to 2002, and to assess whether seafaring is still a hazardous occupation as well as a high risk occupation for suicide. Methods: A longitudinal study of occupational mortality, based on official mortality files, with a population of 1 136 427 seafarer-years at risk. Results: Of 835 traumatic work related deaths, 564 were caused by accidents, 55 by suicide, 17 by homicide, and 14 by drug or alcohol poisoning. The circumstances in which the other 185 deaths occurred, including 178 seafarers who disappeared at sea or were found drowned, were undetermined. The mortality rate for 530 fatal accidents that occurred at the workplace from 1976 to 2002, 46.6 per 100 000 seafarer-years, was 27.8 times higher than in the general workforce in Great Britain during the same time period. The fatal accident rate declined sharply since the 1970s, but the relative risk of a fatal accident was 16.0 in 1996–2002. There was no reduction in the suicide rate, which was comparable to that in most high risk occupations in Britain, from 1976 to 1995; but a decline since 1995. Conclusions: Although there was a large decline in the fatal accident rate in British shipping, compared to the general workforce, seafaring has remained a hazardous occupation. Further prevention should focus on improvements in safety awareness among seafarers and shipping companies, reductions in hazardous working practices, and improvements in care for seafarers at risk of suicide. |
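The headline accident rate can be reproduced directly from the figures quoted in the abstract above; a quick Python check under the stated person-time at risk:

```python
deaths = 530                      # fatal workplace accidents, 1976-2002
person_years = 1_136_427          # seafarer-years at risk
rate_per_100k = deaths / person_years * 100_000
print(round(rate_per_100k, 1))    # 46.6, matching the reported rate
# The reported relative risk of 27.8 then implies a general-workforce rate
# of roughly 46.6 / 27.8, i.e. about 1.7 fatal accidents per 100 000 worker-years.
```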
Study on Direct-Driven Wind Power System Control Strategy The development of direct-driven wind power systems using permanent magnet synchronous generators (PMSG) has been very fast, and the back-to-back converter has received much attention for its excellent performance. The working principle of the generator-side converter (GSC) and the control strategy of the PMSG are explained in detail, and the steady-state and dynamic performances are analyzed by simulation. An experimental prototype is built to achieve generation and motoring operation, and the vector control of the PMSG is realized by the generator-side converter. The simulation and experiment results show that using a PWM converter as the generator-side converter for a direct-driven wind power system with a PMSG can achieve good control performance for the PMSG and makes it possible to supply high-quality power to the grid. |
Getting the right right: redefining the centre-right in post-communist Europe Existing literature on the centre-right in Eastern and Central Europe is small and fragmentary, in contrast with the voluminous, detailed and often sophisticated comparative literatures on the Left and the Far Right in the region. A review and synthesis of the existing literature suggests the possibility of a definition of the Right and Centre-Right in the region, which can both accommodate its diversity and provide a shared framework for analysis. The Centre-Right should be understood as neither an atavistic throwback to a pre-communist past nor a product of the straightforward assimilation of Western ideologies. Rather, it is a product of the politics of late communism, domestic reform, European integration and post-Cold War geopolitical realignment, which has powerfully reshaped historical influences and foreign models. © 2004 Taylor & Francis Ltd. |
A Content Analysis of Military Psychology: 2002–2014 Content analysis of articles published in professional journals is a viable method to assess the trends and topics a profession deems to be important. Military psychology does not involve only 1 subspecialty of psychologists, so research from many different perspectives has contributed to the field. The purpose of this manuscript is to present a post-9/11 content analysis of articles published in Military Psychology to identify critical issues and trends in the research and practice of military psychology. A total of 379 articles were analyzed; the analysis revealed that the majority were empirical (n = 304, 80.2%) and employed quantitative methods (n = 283, 93.1%). The primary key topics were personnel (air force, army, military, and navy; n = 166), military (psychology, training, veterans, etc.; n = 104), and career issues (e.g., employee, interests, job, vocation, etc.; n = 57). Trends and directions for the future of military psychology are also considered. |
The effect of technology readiness in IT adoption on organizational context among SMEs in the suburbs of the capital The role of Information Technology (IT) in the digital era today is critical in the business environment. IT provides convenience in the process of managing Small and Medium Enterprises (SMEs) so that companies have a competitive advantage in the current economic development. Some research has discussed the issue of IT adoption's impacts and uses in SMEs, in particular in the suburbs bordering the national capital. The purpose of this study is to identify the impact of IT and the factors that influence technology readiness in IT adoption in the organizational context among SMEs located in the suburbs of Jakarta. This article follows a quantitative research approach based on case studies and structured questionnaires. The results of SEM analysis using the SmartPLS3 application show that awareness of technology, local government support, SME management support, and financial support are essential factors in IT adoption in SMEs. This article also looks at other phenomena regarding the limitations SMEs face in using IT and makes recommendations on how to overcome them. It is hoped that this research will contribute to the development of information systems models for both academics and practitioners. |
Chopping secondary mirror control systems for the W. M. Keck Telescopes The Keck 1 chopping secondary was built by the Palo Alto Research Laboratories of the Lockheed (now Lockheed Martin) Missiles and Space Company. The only software component of the delivered system is a proprietary error correction algorithm; Keck wrote software to generate acceleration-limited azimuth and elevation demands, to rotate these demands as a function of telescope position, to interact with the error correction system, and to manage hardware start-up and shutdown. The Keck 2 chopping secondary, also built by Lockheed, was originally conceived as an infrared fast steering mechanism (IFSM) and is simpler than the Keck 1 system, with lower power and acceleration limits and, therefore, lower chop amplitude and frequency specifications. As far as possible, it provides the same external interfaces as the Keck 1 system. A new EPICS-based telescope control system has been written for Keck 2 and was retrofitted on Keck 1 in March 1997. The Keck 1 chopper control software has been converted to the EPICS environment and, at the same time, altered so that the same software supports both choppers. This conversion has retained as much as possible of the complex real-time code of the old system while at the same time fully utilizing EPICS facilities. The paper presents more details of both the old and the new systems and illustrates how the new system is simpler than the old as well as being much better integrated into the overall telescope control system. Operational experience is presented. |
A primitive parathyroid adenoma has been studied by electron microscopy, analytical ion microscopy and electron probe X-ray analysis. A number of lysosomal structures have been observed in the cells. Observation of unstained ultrathin sections shows that these lysosomes contain two varieties of structures: dense homogeneous droplets and very dense and small granulations. Aluminium associated with phosphorus has been detected in high concentration in the small granulations. The relations between aluminium and parathyroid function and the possible role of aluminium in the pathology of the parathyroid gland remain to be clarified. |
Water Quality Assessment of Anchar Lake, Srinagar, India Abstract The aim of this study was to ascertain the current condition of the Anchar lake water body in the Indian state of J&K in terms of water quality using some main parameters such as pH, TDS, EC, DO, and nitrates content. For the years 2019 and 2020, samples were obtained for two seasons: summer and winter. The quantitative analysis of the experimental results indicates a general increasing trend and considerable variance in nitrates content, as well as a gradual decrease in pH, indicating that the lake's acidity is increasing, although values remain within the basic range and are approaching neutrality. The TDS and EC content suggest a very favorable situation, but when the overall parameters are examined together, they reveal a deficiency. Since the sampling sites were well aerated, the dissolved oxygen content showed an increasing pattern, and as a result, this metric proved to be of little use in assessing the overall scenario in the lake. In the winter, the longitudinal trend line indicates a 10% decrease in pH, while in the summer, it shows a 4.4 percent decrease in pH. In winters, the longitudinal trend line reveals a 6.7 percent growth in nitrate content, while summers see a marginal decline. In the winter, the longitudinal trend line shows a 7% rise in dissolved oxygen, while in the summer, it shows a uniform trend. |
Social Knowledge: The Study of Three Processes of Metamorphosis Social knowledge is more dynamic than natural science. A full recognition of this character is the precondition for upholding the validity of statements in social knowledge. In order to maintain the validity of such statements and to avoid the metamorphosis of social knowledge into other theoretical constructs, this paper, based on referring to the ideal type of social knowledge, aims to describe and explain three processes whereby social knowledge is metamorphosed into theoretical dogmatism, theoretical alienation, and theoretical slavery. Introduction According to the viewpoint of some outstanding philosophers of social sciences and some salient sociologists of knowledge, social knowledge is a form of knowledge directed toward a historical and social context. Also, such knowledge has a special function in society (see Soroush, 2005; Glover; Schutz, 1967 & 1980; Berger and Luckmann, 1966; Braybrooke, 1986; Hollis, 1994; Little, 1991; Rosenberg, 1995). Based on this point of view, the ideal type of social knowledge is required to have an updated historical, sociological, and functional validity. Given this characteristic of social knowledge, the main aim of this essay is to clarify and explain one of the most important challenges with regard to preserving the validity of social knowledge. This challenge includes three processes through which the above-mentioned types of validity are undermined. These three processes lead social knowledge to transform into other theoretical constructs: theoretical dogmatism, theoretical alienation, and theoretical slavery. Theoretical Dogmatism In general, it should be noted that social knowledge is closely situated in its historical time and therefore carries an aura of specific temporality. This means that present knowledge is hardly applicable to a past historical period. Similarly, past knowledge can hardly solve the problems and challenges of the present time. As Karl Mannheim contends in Ideology and Utopia, social knowledge is the product of a specific historical condition. Accordingly, any form of knowledge which is decontextualized from its historical condition and considered valid for other periods is most likely liable to turn into a type of theoretical construct known as "dogmatism". It seems that describing and explaining this process sheds more light on the historicity of social knowledge. Social knowledge becomes transformed into dogmatism whenever its cognitive and motivational basis changes in the course of history while the social knowledge itself remains unchanged and therefore becomes ossified. Substantiating this claim requires asking two important questions. Firstly, what exactly is meant by the cognitive and motivational basis of social knowledge? Secondly, what is the mechanism of the relationship between the changes in those basic assumptions and the transformation of social knowledge into dogmatism? With regard to the first question, it must be noted that different human communities face various forms of issues, problems, challenges, and chances in their historical evolution. These conditions can be categorized into theoretical and practical levels. Among the many reactions to these conditions, the reactions of the thinkers and scientists of a society are most significant. They try to reflect and react to the conditions in the most efficient manner. 
Therefore, it can be maintained that the theoretical and practical conditions constitute the motivational foundation for producing a set of ideas, more accurately, social knowledge. The reality and reliability of such forms of social knowledge are closely related to the existing general human understanding and knowledge. Accordingly, this variable can be considered the cognitive basis for ideas and knowledge. It is clear by now that there are two variables in the production and formation of social knowledge, namely, types of problems, and the level of human knowledge. It should be also mentioned that there are occasions when ideas produced by thinkers become guidelines for social understanding and praxis by some social groups. The acceptance of these ideas is dependent upon certain cognitive and motivational assumptions too. In fact, these assumptions held by the society's thinkers to the production of ideas and then the same assumptions held by social groups to select and consume the recently produced ideas. In general, when some social groups accept these ideas in terms of their explanatory and normative aspects and use them as the blueprint for social praxis, those ideas turn into ideology. To put it more accurately, ideology is a type of idea or knowledge which works as the basis for social understanding and praxis by some social groups. Accordingly, it can be said that given the variables influencing ideas, ideology is the product of problems and a specific level of knowledge. Then from a logical perspective, the validity of ideology depends on the credibility of the basic assumptions. 2 Let us now turn to the second question, namely, the relationship between the changes in the basic assumptions and the transformation of knowledge into dogmatism. In this regard, it should be noted that theoretical and practical problems, as well as the level of human knowledge under the influence of different factors, are changed. Such changes are stronger and more extensive in the contemporary world. These changes create conditions which are by nature different from previous conditions. Therefore, the existing ideas fall short of dealing with the challenges of the new conditions. In such cases, a group of the society's thinkers embarks on producing new knowledge or tries to adjust and modify the existing knowledge. These theoretical, practical, and cognitive changes logically necessitate changes in the ideology which is based on previous conditions so that it can adapt itself to the new changes and reconstruct itself again. In other words, the proponents of a certain ideology need to update their configuration of social understanding and praxis in the light of new changes. In fact, by accepting these changes, they need to translate appropriate ideas from the realm of theory into the realm of ideology. Having done so, social knowledge and consequently ideology would be able to maintain their historical validity and functionality in the face of a new condition. Otherwise, knowledge and ideology would lose their logical validity and become dogmatism. In fact, dogmatism is the result of a condition in which some people present solutions which are either applicable only for problems in a different historical past or are hardly the best possible solutions for the new problems from the perspective of the evolution of human cognition and knowledge. 
In these cases, social knowledge loses its organic relationship with the changes in realities and cognitive conditions, and although it may carry the name of social knowledge, it is nothing but theoretical dogmatism. In light of the above questions, the demarcation line between social knowledge and dogmatism is clear by now. Accordingly, the basis of social knowledge is the theoretical and practical problems and the general level of human cognition and knowledge. However, dogmatism is rooted in the other variables which are discussed in the following. In fact, a study of the history of social knowledge reveals their origin and helps us to distinguish between social knowledge and theoretical dogmatism. 3 In other words in the case of dogmatism, knowledge, and ideology have been deprived of their logic and instead of having theoretical and practical efficiency for social understanding and praxis become leant on the other variables.4 Accordingly, it is necessary to protect social knowledge against dogmatism. This is only possible through a constant evaluation of the validity of social knowledge for other historical periods. Based on the mentioned points, it is useful to explain the emergence and formation of dogmatism in more details. Clearly, this can help us in finding ways to deal with the problem of dogmatism. It seems that there are two types of factors in the emergence of dogmatism, namely, objective factors and subjective ones. The first objective cause is the influence of dogmatists among the circle of social scientists and therefore the reproduction of dogmatist procedures under the title of knowledge and ideology. It should be added that perhaps one of the most important challenges for the ideologies is the infiltration of dogmatist views among the circle of ideologues. The influence of such people can make the ideology seem irrational through the detachment of ideology from its motivational and cognitive basis and therefore hinder the process of reformation. The second objective factor is the impact of power relations. With regard to the role of power relations in the creation of dogmatism, it should be noted that one of the causes behind the emergence of dogmatism includes the strong ties between knowledge and ideology in one hand, and different forms of social and political power of bearers of such knowledge and ideology on the other. It is obvious that all dominant ideologies make power, wealth and social status for some special groups. For most people, the proponents and followers of a certain ideology have the necessary requirements to be deemed as competent enough to possess higher social positions and power. However, when the motivational and cognitive foundation of an ideology changes and ideological transformation becomes a public demand, that is, the ideology of period one requires modifications and readjustments, the proponents of that ideology who used to enjoy certain prerogatives find their positions unstable and at risk. In fact, the legitimacy of their power is questioned in these cases. Therefore, they have three options. The first option is to completely leave their previous positions and be replaced by the harbingers of new ideology (Ideology two). This rarely happens because the proponents of ideology one are hardly willing to let go of their positions, that is, due to many reasons they do not voluntarily step aside from their ideological stances. 
Second, these people may join the followers of ideology two by recognizing the existing changes and therefore gain legitimacy even in the context of ideology two. This rarely happens given the fact that those in the positions of power and wealth do not have the sufficient time to become aware of the changes happening outside of the circles of power and wealth. The third option assumes that those in positions of power (material, social, and symbolic) can fight the ideological changes in order to maintain the legitimacy of their social positions in the previous ideological context. This latter option is usually opted for. However, since changes in the motivational and cognitive foundations are inevitable, this latter option turns their ideology into nothing but dogmatism. The last objective cause is the separation of ideology from its environment. There are cases where the rise of dogmatism is the result of the wide gap between the thinkers and ideologists of a society and the changes in the surrounding environment. It is clear that people adjust their actions and behaviors in terms of their relation to the environment in which they live. Accordingly, if they do not interpret correctly the changes in the environment, their reactions will not be based on the realities, and therefore might seem irrational. From this perspective, dogmatism emerges as the result of the insufficient knowledge of people with regard to their environment, as it is called as with regard to the motivational and cognitive changes of the social knowledge and ideology of their time. However, it should be emphasized that given the growing communication media in today's world few people fall victim to this variable and therefore it may seem as an overstatement to say that dogmatism is the result of this factor in some wide scale. As for the subjective causes of dogmatism, the first factor is the lack of rational-scientific analysis of the accuracy and efficiency of social knowledge and ideology. As iterated earlier, transformation in the motivational and cognitive foundation of knowledge and ideology logically leads to the transformation of knowledge and ideology themselves. Accordingly, having a rational mindset to the recognition of this necessity is highly significant. However, there are people who contend that ideology can be considered rationally, scientifically, or logically. In this view, it is considered as an eternal truth. Such an understanding of ideology severs the relationship between ideology and its motivational and cognitive foundation. Ideology becomes sacred and instead of adapting the ideology to the needs and demands of the society, the latter is curtailed or even sacrificed in the name of ideology. This is exactly what is meant by the metamorphosis of social knowledge into dogmatism. The above problem indicates the existence of a set of wrong assumptions about ideology and the consequent transformation of ideology into dogmatism. This is even more evident in the case of religious ideologies because religious ideologies are naturally based on sacred texts. Accordingly, including the parameter of time in understanding and interpreting these texts may be labeled as distortion, misinterpretation, and deconstruction, evoking harsh reactions. However, different studies strongly support the idea that religious ideologies and on a broader level all religious texts are interpreted and understood in relation to our motivational and cognitive assumptions. 
Therefore, if these assumptions change, religious ideology will change too. Resisting openness to new interpretations is against the teachings of religious belief itself (Soroush, 1996b). The second subjective cause addresses the issue of the mental tendency for stability and resistance to change. The subjective willingness to maintain the stability of knowledge and ideology may lead to dogmatism. This is usually because of the fear of making mistakes. With regard to the way in which this variable may cause dogmatism, it should be noted that whenever an ideological agent reaches the conclusion that his/her ideological stance needs some adjustments due to changes in its motivational and cognitive principles, certain emotional and mental pressures will be imposed on the agent, including the fear of making a mistake. The agent might think that if he/she tries to change his/her ideological stance, it is possible to make a mistake, and that irreparable damage will follow. Therefore, subjective dilemmas and doubts will undermine the determination of the agent in changing the ideological stance he/she used to believe in. To achieve ideological change, such agents need to reconsider this subjective inclination for stability. It is axiomatic that remaining committed to the principle of stability will prevent ideological readjustment and therefore will lead to dogmatism. Readjusting and updating one's ideology requires not only a brave decision, tolerance of negative reactions, keeping one's mind open to true information, fighting one's subjective and personal tendencies to preserve ideological benefits, and questioning the legitimacy of an ideology; it also requires overcoming the fear of making mistakes. However, this should not mean changing an ideology impulsively, without considering the correctness of one's decision. Rather, it means that, after investigating the motivational and cognitive changes of an ideology, one should not hesitate to change one's previous stance because of conservatism or fear of making mistakes. The damage caused by dogmatism is much more serious than the possible damage caused by one's fear of making mistakes. To prevent dogmatism, one needs to overcome the fear and accept the responsibility of making a decision to initiate ideological reforms. So far, we have sufficiently discussed the process whereby social knowledge is turned into dogmatism. In the following section, the process in which social knowledge is transformed into theoretical alienation will be taken into consideration. Theoretical alienation One of the characteristics of social knowledge is its close relationship with its sociological condition. Taking into account the correspondence of social knowledge with a local situation and a specific group (that is, the small chance of a universal social knowledge) is important in analyzing and understanding this type of knowledge. Accordingly, sociologists of knowledge usually emphasize the links between social knowledge and a specific social condition. Emile Durkheim has defined knowledge as the reflection of social conditions (Hamilton, 1998; Kafi, 2004: 248-249). Sociologists like Max Scheler have made a distinction between the form and content of knowledge and have argued that the construction of form is influenced by social variables (Alizadeh, 2004: 187-200).
Karl Mannheim has argued that one of the conditions for the validity of social knowledge is its symmetry with the socio-cultural context in which it is produced (Azhdarizadeh, 2004: 211-229). As mentioned earlier, it seems that the relationship between social knowledge and its sociological context can be considered from two aspects. The first aspect concerns the correspondence of knowledge with the place in which it is produced. The second aspect concerns the relationship between social knowledge and the groups about which the knowledge is produced. According to this view, a specific type of social knowledge may be applicable to a certain place or group while it falls short in explaining the conditions of another place or group. Therefore, in addition to historical limitations, social knowledge has sociological limitations and only applies to a specific condition. This means that we need to evaluate the validity of social knowledge in terms of its sociological applicability. As the above discussion implies, the weakening of the sociological validity of social knowledge leads to the problem of what is known as theoretical alienation. To define theoretical alienation, it can be said that it happens when the motivational and cognitive basis of social knowledge in terms of sociological context changes while the knowledge itself does not readjust itself in relation to these changes and tries to remain attached to the previous social condition. Given the fact that we have already discussed the notion of motivational and cognitive basis of knowledge, a description of the process of the transformation of social knowledge into theoretical alienation would suffice in this case. It should be mentioned that when the theoretical, practical, and cognitive problems of human life change and the existing ideas fall short in coping with the new condition, some thinkers propose the adaptation of solutions from similar contexts as a way of overcoming the problems and issues of their own context. Influenced by such thinkers of the society, some people may select what they have adapted from another context, as an updated or even a new ideology. Consequently, there will be a favorable, low-cost, and short-term transformation in knowledge. However, since there are always major differences between societies and the social conditions of groups, such an adaptation is hardly successful. It may only be successful when the similarities between the local and communal contexts of the societies are examined in details. Therefore, if done without sufficient study, there will emerge as a consequence a type of knowledge which does not have any exact correspondence with the problems, issues, and cognitive difficulties of the new place or group. In short, it will not be able to solve the existing problems. This is the reason why instead of using the term "social knowledge" the concept of "theoretical alienation" can be used. In this view, alienation is the result of the decision of some people who think that appropriating a solution belonging to a different social place or group, could be able to solve the problems and issues of their own. It is clear that similar to the process whereby knowledge is transformed into dogmatism, also in this case knowledge is severed from the changes in problems and evolution of cognition. Although it may still be considered as "knowledge" it is in reality a form of theoretical alienation. Similar to dogmatism, the basis of alienation is different from the basis of social knowledge. 
Theoretical and practical problems and the level of human cognition are the origin of knowledge. However, alienation is rooted in something else which will be discussed in the following. With regard to the factors influencing the transformation of social knowledge into theoretical alienation, it should be mentioned that they are remarkably similar to those factors involved in the transformation of social knowledge into dogmatism. These factors can be divided into two groups, namely, objective, and subjective. Similar to the causes of dogmatism, the objective causes of theoretical alienation consist of three variables. First, the influence of alienated thinkers and intellectuals into the circle of social thinkers and ideologists and the consequent reproduction of alienated constructs in the form of knowledge and ideology. As explained in the discussion of dogmatism, welcoming those who accept a certain knowledge or ideology without a rational or logical reason can have dire consequences. It should be highly emphasized that the infiltration and growing influence of such people may make knowledge and ideology digressed from its rational and logical course. In fact, when the link between the motivational and cognitive basis and knowledge and ideology is severed, there will be little hope for reformation. The second factor concerns the influence of power relations. Regarding the role of power in the creation of knowledge or theoretical alienation, it should be noted that a certain group of intellectuals, social theorists, and ideologists who are usually not in positions of power may assume that by replacing their local and communal ideology with the popular ideologies from around the world, they can dispossess power from their opponents and possess it themselves. In fact, this group adopts the line of thought that considers dominant global discourses as a way of providing legitimacy and social power for themselves. Doing this for possessing power separates the relationship between ideology from its motivational and cognitive basis. The kind of knowledge which is produced as a consequence of this process is not able to bring about ideological change and instead turns knowledge into theoretical alienation. The third objective cause concerns the separation of ideologists from their social environment. Alienation happens when due to the lack of accurate understanding of the social groups the ideologists and intellectuals are not able to effectively connect themselves to the society. Therefore, they are not able to make an appropriate reaction to the incumbent changes and by suggesting solutions which are cut from the motivational and cognitive basis of their social condition, they produce ideological alienation. However, as mentioned earlier, given the fact that information as become easily accessible in today's world, such variable does not have far-reaching influence. With regard to the subjective causes of alienation, we could focus on two factors. The First one is the lack of rational-scientific analysis of the accuracy and efficiency of social knowledge and ideology. Whenever there is not sufficient critical examination of the applicability of some adapted ideas, or they are not evaluated with empirical, historical, and interpretive methods, one logically can expect alienation. To put it differently, such ideas should be evaluated in terms of their relation to a specific place and social groups so that their appropriateness and applicability becomes clear. 
However, the problem is that instead of choosing this critical approach, some people ignore the differences for the sake of similarities. This methodological error gradually leads to the production of theoretical alienation instead of effective knowledge and ideology. Similar to dogmatism, the second subjective cause is the tendency for stability and resistance to change. Occasionally, the tendency to import ideas belonging to another place and social groups is the result of the fear of making mistakes. How this variable creates alienation is almost clear. If the ideologists are aware of the differences between their own context and the context from which their ideas are adapted, but believe that modifying the adapted ideas may distort its totality, they would eschew from changing them and therefore alienation will most likely be produced. Accordingly, having the courage to think critically and to have the determination for bringing innovation, when the differences between two social contexts are clear, is an important factor for being able to effectively and appropriately deal with the problems of one's own social conditions. Theoretical slavery Social knowledge, like other forms of human knowledge, is ultimately at the service of specific goals and objectives. There are certain ideologies and social ideas which are primarily produced for achieving specific goals. However, it should be pointed out that the functionality and purposiveness are more significant in social knowledge because it has very strong influence on the social theoretical frames. When the goal and duty of knowledge are not authentic at the time of producing, the possibility of using this knowledge for achieving authentic goals and functions is almost ruled out.5 It should be noted that according to some point of views the authentic function and goal of social knowledge is to help the human being to reach the perfection via solving theoretical and practical problems. If social knowledge lacks this aspect, it does not have one of the important factors for the validity of social knowledge. According to Max Scheler knowledge has three parts. First, it is the knowledge of control and achievement of goals and objectives. Second, it is knowledge of essence and culture. Third, it is the knowledge of the reality of salvation. The first part of knowledge is found in science, the second in philosophy and metaphysics and the third in religion. Scheler believes that there is a hierarchy in the types of knowledge. The knowledge of salvation is at the top, and then comes knowledge of essence, and finally knowledge of control. In this view, each type of knowledge serves a higher order of knowledge (Alizadeh, 2004 a: 183-184). Based on Scheler's idea it can be argued that the ultimate purpose and authentic function of knowledge are nothing but human salvation. Accordingly, such purposiveness and functionality can be used as significant criteria for evaluating the accuracy of social knowledge. However, it is clear that in many instances the purpose of social knowledge is determined by power relations. Therefore, more often than not, social knowledge is governed by the interests of people in positions of power. The prevalence of this phenomenon has convinced thinkers like Michel Foucault to conclude that knowledge is basically at the service of power (Alizadeh, 2004 b: 322-329). 
Given this important phenomenon, it is necessary to discuss the ways in which the functions of knowledge change and how it is possible to prevent the equation of knowledge with power. As mentioned earlier, social knowledge is the product of certain motivational factors and cognitive assumptions. The content of these factors and assumptions has followed two different paths in history. On the first path, certain issues and problems have drawn the attention of social thinkers whose solutions would either satisfy their theoretical concerns or contribute to the improvement of public welfare. Contrary to this, on the second path, solving certain issues and problems would, in fact, serve the interests of people in positions of power. In fact, most social thinkers have often faced this dilemma. The irony is that concerning oneself with public issues and problems would hardly result in any real rewards for the social thinkers. On the contrary, serving the interests of power would always end up in considerable privileges and rewards. True social thinkers would choose the first path, that is, dealing with the problems of the people. For them, social thinking should always maintain its legitimacy. However, there are also social thinkers who have served power. To evaluate the above dilemma logically, it can be said that what the first group of thinkers does has a stronger basis in the logic of social thinking. For this group, the condition for the validity and truthfulness of knowledge is its legitimate function. In this view, the ultimate objective and function of knowledge, and therefore its validity, depend on serving human values (Habermas, 1987, quoted in Soroush, 2005). In contrast to this, the second group of thinkers, who serve the interests of power relations, distort the logic of knowledge in terms of its objective and function and therefore imprison thought in the cells of power. The above phenomenon can be referred to as "theoretical slavery". To illustrate the working of this process, it should be said that the motivational and cognitive basis of knowledge is, ideally, intended to bring about human salvation and perfection through the production of knowledge, including social knowledge. This knowledge is then transformed into ideology through social acceptance. However, in the course of time, in addition to the ideal motivational and cognitive basis of knowledge, other types of motivational and cognitive bases emerge which are at the service of power. With the appearance of this new phenomenon, social thinkers diverge into two groups. The first group remains committed to the true objective of knowledge, namely, human salvation. The second group chooses to serve the interests of people in power and helps produce what is known as "instrumental knowledge". This latter type of knowledge is used as an instrument by the people in power to reach their own goals. The social thinkers may gain certain privileges by doing so; however, they pay a high price by imprisoning their thoughts and minds in the web of power relations. In such conditions, theoretical slavery takes the place of an ideology based on true knowledge. While carrying the name of social knowledge, this type of knowledge is nothing but theoretical slavery in the interests of people in power. It may be useful to discuss the causes of the process whereby social knowledge is transformed into theoretical slavery. The factors causing this problem can be divided into two groups, namely, objective and subjective.
The objective causes of theoretical slavery consist of three important variables. Similar to the two other explained processes, the first variable concerns the influence of theoretical slaves into the circle of social thinkers and the consequent reproduction of theoretical slavery in the form of knowledge and ideology. Theoretical slaves accept a certain type of knowledge and ideology. However, their approach to knowledge and ideology is instrumental and irrational. For them, knowledge and ideology should serve power and can be used as an instrument to gain and exert dominance over others. A disproportionate level of such influence may marginalize true knowledge and ideology. In such a context, theoretical slavery is camouflaged as true knowledge and ideology. The second variable concerns the influence of the relationship between power and knowledge. Those in power have always tried to dominate the minds of social thinkers and use them for instrumental purposes. This is intensified by the material and economic needs of the thinkers. Those in power can enslave the social thinkers through economic means. There are also social thinkers who have ambitions to gain power themselves. However, it should be noted that in most cases these social thinkers are not only enslaved themselves by the instruments of power, but they also deprive the other people from the possibility of reaching true knowledge. Because under the impact of their cooperation with the people in power, such people get an opportunity to suppress true thinkers and then introduce the instrumental knowledge as a true knowledge. Therefore, in this condition, the only knowledge which will be produced is instrumental. The third objective cause refers to the contemporary socialization of the social thinkers in terms of the ultimate aim of the production knowledge. Nowadays, universities around the world hardly ever address the question of the ultimate aim of knowledge. Most people are educated with the notion that knowledge is only limited to the understanding of the phenomenon and the reality of the world. In this view, the goal, use and function of knowledge do not matter and is only a personal matter. Therefore, there is no systematic education about the use and function of scientific discoveries which lead to arbitrary appropriation of knowledge by everyone. In this condition, knowledge can be purchased by those in power and there seems to be no problem in the enslavement of knowledge to power. With regard to the subjective causes of the emergence of theoretical slavery, we can mention two things. The first cause concerns the prevalence of irrational evaluation of the ultimate goal of knowledge. This phenomenon is the result of illogical socialization of the modern thinkers which has been already discussed. However, it is not limited to socialization process. The point is that when we rationally accept that the ultimate goal of knowledge is human salvation and perfection, theoretical slavery becomes a kind of digression from the principles of rational thought. Accordingly, following rational thinking is the precondition for resisting slavery by power instruments. Therefore, one of the important factors in the emergence of theoretical slavery is the lack of rational evaluation of the ultimate goal of knowledge. The second subjective cause in the emergence of theoretical slavery is the fear of power. This is especially evident in totalitarian societies. 
In such conditions, social knowledge is not allowed to go beyond the limits imposed by the instruments of power with regard to official forms of knowledge. Therefore, autonomous and free thinkers may be repressed. Fear of repression and persecution demotivates most thinkers. It is clear that such conditions only lead to the production of instrumental knowledge and theoretical slavery. Conclusion The aim of the present essay was to briefly discuss and explain the processes whereby social knowledge is turned into negative forms. This does not mean that all aspects of the issue have been examined. Rather, an introductory remark was intended to initiate further researches. Since social knowledge is historical and situated in a specific socio-cultural context, the emphasis of the present essay was on the necessity of continuous evaluation of the historical, sociological and functional validity of social knowledge. It was argued that lack of attention to this issue may lead to processes whereby knowledge is transformed into theoretical dogmatism, theoretical alienation, and theoretical slavery. As explained, there are various subjective and objective causes which lead the social knowledge to transform into mentioned negative forms. So, these causes necessarily should be controlled in order to reach a valid social knowledge. |
AIM To know whether caregivers of Alzheimer's disease (AD) patients on donepezil treatment are more satisfied with the orally disintegrating tablet (ODT) formulation than with the film-coated tablets. PATIENTS AND METHODS Multicenter, cross-sectional study of patients with probable AD by DSM-IV or NINCDS-ADRDA criteria, on monotherapy with donepezil, either ODT or film-coated tablets. Satisfaction with treatment was assessed with the caregiver self-administered generic Treatment Satisfaction with Medicines Questionnaire (SATMED-Q) (range: 0, no satisfaction, to 100, maximal satisfaction), in total and in six dimensions: undesirable effects, efficacy, medical care, medication ease and convenience, medication impact on daily activities, and overall satisfaction. RESULTS 546 patients were enrolled (9.6% institutionalized); 64.8% were women; age was 78.2 +/- 6.5 years; disease evolution was 22.5 +/- 24.6 months; the Mini-Mental State Examination (MMSE) mean score was 18.5 +/- 5; 67.9% were on film-coated tablets and 32.1% on ODT. After adjusting for MMSE and time on treatment, caregivers of patients on ODT showed a significantly higher SATMED-Q total score (74.5 +/- 11.8 vs. 70.4 +/- 12.3; p < 0.0004) as well as higher scores for medication ease and convenience (84.9 +/- 16.4 vs. 79.8 +/- 17.6; p = 0.0059), impact of medication on daily activities (50.2 +/- 22.8 vs. 43.7 +/- 25.5; p = 0.0006) and satisfaction with medical care (79.4 +/- 19.5 vs. 75.6 +/- 21.8; p = 0.04894). 91.6% of caregivers of patients on ODT (versus 82.9% of those on film-coated tablets; p = 0.023) stated that taking the medication was easy for their relatives. CONCLUSIONS The results show that caregivers of AD patients on donepezil treatment are more satisfied with the ODT than with the film-coated tablets, especially because of its better ease of use.
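The comparison above is adjusted for MMSE and time on treatment. A minimal sketch of that kind of covariate-adjusted group comparison, on simulated numbers rather than the study data, is shown below.

```python
# A minimal sketch (simulated numbers, not the study data) of a covariate-
# adjusted group comparison of the kind reported above: SATMED-Q total score
# compared between formulations while adjusting for MMSE and time on
# treatment, assuming the statsmodels package is available.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 546
odt = (rng.random(n) < 0.321).astype(float)          # 1 = ODT, 0 = film-coated
mmse = rng.normal(18.5, 5.0, n)
months = rng.normal(22.5, 24.6, n).clip(0)           # time since diagnosis
score = 70 + 4 * odt + 0.2 * mmse + rng.normal(0, 12, n)  # simulated SATMED-Q

X = sm.add_constant(np.column_stack([odt, mmse, months]))
fit = sm.OLS(score, X).fit()
print("adjusted ODT effect:", fit.params[1], "p-value:", fit.pvalues[1])
```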
RISK DISCLOSURE ON ACCOUNTING AND FINANCIAL REPORTING STANDARDS: CONTENT ANALYSIS OF BORSA ISTANBUL (BIST) MANUFACTURING SECTOR The information and data produced by the accounting system reflect the changes in the economy and the sector, as well as the effects of the decisions taken within the company on the business structure and operating results. The risks that companies face in the market are important for users of financial information. The standards established for international accounting practice in this regard also state that the information disclosed to the public should include the risks to which companies are exposed. The aim of this study is to analyze the risk disclosures of companies that prepare their financial statements according to international accounting standards in Turkey and to shed light on the relationship between accounting data and risk. In this context, the risk disclosures of companies in the BIST manufacturing sector for 2020 were analyzed in terms of content. As a result of the analysis, data are presented on the use of derivatives and the status of hedge accounting in financial statements under IFRS 9, and on the risk types arising from financial instruments under IFRS 7. Accordingly, it was determined that 32% of the companies use derivative instruments and 20% apply hedge accounting. In addition, the qualitative and quantitative disclosure data of companies regarding credit, liquidity, market and other risk disclosures for 2020 were also analyzed. In the footnotes on risks arising from financial instruments, it was determined that most of the data related to foreign currency risk. It was also observed that 50% of the companies that make credit risk disclosures do not provide a maturity analysis. Finally, the explanations on interest rate risk, other price risk and capital risk were analyzed in terms of content.
Impact of an EMR-Based Daily Patient Update Letter on Communication and Parent Engagement in a Neonatal Intensive Care Unit. OBJECTIVE To evaluate the impact of using electronic medical record (EMR) data in the form of a daily patient update letter on communication and parent engagement in a level II neonatal intensive care unit (NICU). STUDY DESIGN Parents of babies in a level II NICU were surveyed before and after the introduction of an EMR-generated daily patient update letter, Your Baby's Daily Update (YBDU). RESULTS Following the introduction of the EMR-generated daily patient update letter, 89% of families reported using YBDU as an information source; 83% of these families found it "very useful", and 96% of them responded that they "always" liked receiving it. Rates of receiving information from the attending physician were not statistically significantly different pre- and post-implementation, 81% and 78%, respectively (p = 1). Though there was no statistically significant improvement in parents' knowledge of individual items regarding the care of their babies, a trend towards statistical significance existed for several items (p <.1), and parents reported feeling more competent to manage information related to the health status of their babies (p =.039). CONCLUSION Implementation of an EMR-generated daily patient update letter is feasible, resulted in a trend towards improved communication, and improved at least one aspect of parent engagement-perceived competence to manage information in the NICU. |
The Determinants of Knowability Many propositions are not known to be true or false, and many phenomena are not understood. What determines which propositions and phenomena are perceived as knowable or unknowable? We tested whether factors related to scientific methodology (a proposition's reducibility and falsifiability), its intrinsic metaphysics (the materiality of the phenomenon and its scope of applicability), and its relation to other knowledge (its centrality to one's other beliefs and values) influence knowability. Across a wide range of naturalistic scientific and pseudoscientific phenomena (Studies 1 and 2), as well as artificial stimuli (Study 3), we found that reducibility and falsifiability have strong direct effects on knowability, that materiality and scope have strong indirect effects (via reducibility and falsifiability), and that belief and value centrality have inconsistent and weak effects on knowability. We conclude that people evaluate the knowability of propositions consistently with principles proposed by epistemologists and practicing scientists.
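The contrast above between direct effects and indirect effects running through reducibility and falsifiability follows standard mediation logic. A minimal sketch of the product-of-coefficients calculation behind such claims, on simulated data with illustrative variable names, is shown below.

```python
# A minimal sketch (simulated data, illustrative variable names) of the
# product-of-coefficients logic behind the direct vs. indirect effect claims
# above, e.g. materiality -> reducibility -> knowability: path a is the
# effect of materiality on the mediator, path b is the mediator's effect on
# knowability controlling for materiality, and a*b is the indirect effect.
import numpy as np

rng = np.random.default_rng(0)
n = 500
materiality = rng.normal(size=n)
reducibility = 0.6 * materiality + rng.normal(scale=0.8, size=n)
knowability = 0.7 * reducibility + 0.05 * materiality + rng.normal(scale=0.8, size=n)

def ols(y, predictors):
    """Ordinary least squares with an intercept; returns the coefficients."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(reducibility, [materiality])[1]            # materiality -> mediator
coefs = ols(knowability, [reducibility, materiality])
b, direct = coefs[1], coefs[2]                     # mediator -> outcome, direct path
print(f"indirect effect a*b = {a * b:.2f}, direct effect = {direct:.2f}")
```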
Larynx, hypopharynx and mandible injury due to external penetrating neck injury. Esophageal and laryngeal injuries due to ballistic trauma are seldom encountered. Ballistic external neck traumas generally result in death. The incidence of external penetrating neck injuries may vary between 1 in 5,000 and 1 in 137,000 patients among emergency service referrals. Vascular injuries, esophagus-hypopharynx perforations, laryngotracheal injuries, bony fractures, and segmentations may be encountered in external neck traumas. Here we report a 27-year-old male patient who was referred to our emergency department and presented with a hyoid bone fracture, multiple mandibular fractures, and hypopharynx perforation due to a ballistic external neck injury.
Tough Choices: Exploring Decision-Making for Pregnancy Intentions and Prevention Among Girls in the Justice System Despite California's declining teen pregnancy rate, teens in the juvenile justice system have higher rates than their nonincarcerated counterparts. This study explored domains that may shape decision-making for pregnancy prevention in this group. Twenty purposively selected female teens with a recent incarceration participated in hour-long semistructured interviews about their future plans, social networks, access to reproductive health services, and sexual behavior. Transcripts revealed that, contrary to the literature, desire for unconditional love and lack of access to family planning services did not mediate decision-making. Lack of future planning, poor social support, and limited social mobility shaped youths' decisions to use contraceptives. Understanding this group's social location and the domains that inform decision-making for pregnancy intentions and prevention provides clues to help programs predict and serve this population's needs.
The HgFET: a new characterization tool for SOI silicon film properties Summary form only given. SOI starting wafer characterization relies heavily on non-destructive measurements such as thickness, uniformity, and lifetime. Leakage current through the BOX is sometimes measured and used to determine an electrical defect density. The electrical quality of the Si film is less well known. One device that can be used to assess Si film properties is the "pseudo-FET" in which point contacts made to the film act as the source and drain while the substrate and BOX act as the gate electrode and oxide. However, the point contacts act as Schottky barriers and the characteristics are pressure sensitive, somewhat limiting the properties that can be measured. A new version of the pseudo-FET called the HgFET is described here, in which a combination of broad area Hg electrodes coupled with special surface treatment are used to overcome the limitations of point contacts. The HgFET can be used for quality control of the starting Si film, yielding the electron and hole mobilities, the BOX charge, the interface state density, the doping level, the hole and electron transconductances, the flatband voltage, the linear and saturated threshold voltages, and the mobility versus field. |
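As a rough illustration of how film parameters such as the threshold voltage and carrier mobility can be obtained from an FET-style current-voltage sweep, the sketch below uses the textbook linear-regime approximation rather than the HgFET-specific procedure described above; the oxide capacitance, geometry factor and all numbers are illustrative assumptions.

```python
# A rough illustration (not the paper's specific procedure) of how threshold
# voltage and low-field mobility can be extracted from a linear-regime ID-VG
# sweep of a pseudo-FET/HgFET. It assumes the textbook approximation
# ID ~ mu * Cox * fg * (VG - VT) * VD, so gm = dID/dVG = mu * Cox * fg * VD;
# Cox, the geometry factor fg and all numbers below are assumed, not measured.
import numpy as np

def extract_vt_and_mobility(vg, id_lin, cox, fg, vd):
    """Return (VT, mobility) from a linear-regime ID-VG sweep."""
    gm = np.gradient(id_lin, vg)          # transconductance, in siemens
    k = int(np.argmax(gm))                # point deepest in the linear regime
    slope = gm[k]
    vt = vg[k] - id_lin[k] / slope        # extrapolated intercept with ID = 0
    mu = slope / (cox * fg * vd)          # m^2 / (V s)
    return vt, mu

if __name__ == "__main__":
    vg = np.linspace(0.0, 10.0, 101)
    cox, fg, vd = 8.6e-5, 0.75, 0.1       # F/m^2, dimensionless, V (assumed)
    mu_true, vt_true = 0.06, 2.0          # 600 cm^2/Vs and 2 V (assumed)
    id_lin = np.where(vg > vt_true, mu_true * cox * fg * (vg - vt_true) * vd, 0.0)
    print(extract_vt_and_mobility(vg, id_lin, cox, fg, vd))
```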
Heavy MSSM Higgs Interpretation of the LHC Run I Data We review that the heavy CP-even MSSM Higgs boson is still a viable candidate to explain the Higgs signal at 125 GeV. This is possible in a highly constrained parameter region that will be probed by LHC searches for the CP-odd Higgs boson and the charged Higgs boson in the near future. We briefly discuss the new benchmark scenarios that can be employed to maximize the sensitivity of the experimental analyses to this interpretation. Introduction The discovery of a SM-like Higgs boson in Run I of the Large Hadron Collider (LHC) marks a milestone in the exploration of electroweak symmetry breaking (EWSB). Within experimental and theoretical uncertainties, the properties of the new particle are compatible with the Higgs boson of the Standard Model (SM). Looking beyond the SM, the light CP-even Higgs boson of the Minimal Supersymmetric Standard Model (MSSM) is also a perfect candidate, as it possesses SM Higgs-like properties over a significant part of the model parameter space, with only small deviations from the SM in the Higgs production and decay rates. Here we will review that the heavy CP-even Higgs boson of the MSSM is also a viable candidate to explain the observed signal at 125 GeV (the "heavy Higgs case", which has been discussed in Refs.). At lowest order, the Higgs sector of the MSSM can be fully specified in terms of the W and Z boson masses, M_W and M_Z, the CP-odd Higgs boson mass, M_A, and tan β ≡ v_2/v_1, the ratio of the two neutral Higgs vacuum expectation values. However, higher-order corrections are crucial for a precise prediction of the MSSM Higgs boson properties and introduce dependences on other model parameters; see e.g. Refs. for reviews. In the heavy Higgs case all five MSSM Higgs bosons are relatively light, and in particular the lightest CP-even Higgs boson has a mass (substantially) smaller than 125 GeV with suppressed couplings to gauge bosons. We review whether the heavy Higgs case in the MSSM can still provide a good theoretical description of the current experimental data, and which parts of the parameter space of the MSSM are favored. We also discuss the newly defined benchmark scenarios in which this possibility is realized, in agreement with all current Higgs constraints. Theoretical basis In the supersymmetric extension of the SM, an even number of Higgs multiplets consisting of pairs of Higgs doublets with opposite hypercharge is required to avoid anomalies due to the supersymmetric Higgsino partners. Consequently the MSSM employs two Higgs doublets, denoted by H_1 and H_2, with hypercharges −1 and +1, respectively. After minimizing the scalar potential, the neutral components of H_1 and H_2 acquire vacuum expectation values (vevs), v_1 and v_2. Without loss of generality, one can assume that the vevs are real and non-negative, yielding tan β ≡ v_2/v_1 ≥ 0. The two-doublet Higgs sector gives rise to five physical Higgs states. Neglecting CP-violating phases, the mass eigenstates correspond to the neutral CP-even Higgs bosons h, H (with M_h < M_H), the CP-odd A, and the charged Higgs pair H±. At lowest order, the MSSM Higgs sector is fully described by M_Z and two MSSM parameters, conveniently chosen as M_A and tan β. Higher-order corrections to the Higgs masses are known to be sizable and must be included in order to be consistent with the observed Higgs signal at 125 GeV.
In order to shift the mass of h up to 125 GeV, large radiative corrections are necessary, which require a large splitting in the stop sector and/or heavy stops. The stop (sbottom) sector is governed by the soft SUSY-breaking mass parameters M_t̃L and M_t̃R (M_b̃L and M_b̃R), where SU(2) gauge invariance requires M_t̃L = M_b̃L, by the trilinear coupling A_t (A_b), and by the Higgsino mass parameter μ. The "heavy Higgs case", i.e. the case in which the heavy CP-even Higgs boson gives rise to the signal observed at 125 GeV, can only be realized in the alignment-without-decoupling limit. In the so-called Higgs basis (see Ref. for details and citations), the scalar Higgs potential is expressed in terms of the Higgs basis fields H_1 and H_2. The most important terms of the scalar potential for the present discussion are the quartic couplings Z_1, Z_5 and Z_6, which are linear combinations of the quartic couplings that appear in the MSSM Higgs potential expressed in terms of H_1 and H_2. The mass matrix of the neutral CP-even Higgs bosons is then given by Eq. (2.3) in terms of M_A, Z_1, Z_5 and Z_6; a standard reconstruction of this matrix is sketched at the end of this passage. The alignment-without-decoupling limit is reached for the "heavy Higgs case". The possibility of alignment without decoupling has been analyzed in detail in Refs. (see also the "τ-phobic" benchmark scenario in Ref.). It was pointed out that exact alignment via |Z_6| ≪ 1 can only happen through an accidental cancellation of the tree-level terms with contributions arising at the one-loop level (or higher). Parameter scan and observables The results shown below have been obtained by scanning the MSSM parameter space. To achieve a good sampling of the full MSSM parameter space with O(10^7) points, we restrict ourselves to the eight MSSM parameters, called the pMSSM8, most relevant for the phenomenology of the Higgs sector. Here μ denotes the Higgs mixing parameter, M_l̃3 (M_l̃1,2) denotes the diagonal soft SUSY-breaking parameter for the scalar leptons of the third (second and first) generation, and M_2 denotes the SU(2) gaugino soft SUSY-breaking parameter. The scan assumes furthermore that the third-generation squark and slepton parameters are universal. The remaining MSSM parameters are fixed; the high values chosen for the squark and gluino mass parameters, which have a minor impact on the Higgs sector, are in agreement with the limits from direct SUSY searches. The parameter space is scanned with uniformly distributed random values of the eight input parameters over the parameter ranges given in Tab. 1. We calculate the SUSY particle spectrum and the MSSM Higgs masses using FeynHiggs (version 2.11.2), and estimate the remaining theoretical uncertainty (e.g. from unknown higher-order corrections) in the Higgs mass calculation to be 3 GeV. Following Refs., we demand that all points fulfill a Z-matrix criterion, (|Z^2L_21| − |Z^1L_21|)/|Z^1L_21| < 0.25, in order to ensure a reliable and stable perturbative behavior in the calculation of propagator-type contributions in the MSSM Higgs sector. The Z-matrix definition and details can be found in Ref. The observables included in the fit are the Higgs-boson mass and the Higgs signal rates (evaluated with HiggsSignals). The total χ² is evaluated from these observables (see Ref. for more details), where experimental measurements are denoted with a hat. Results for the "heavy Higgs case" Based on the χ² evaluation described above, the best-fit point, shown as a star below, and the preferred parameter regions are derived. Points with Δχ²_H < 2.30 (5.99) are highlighted in red (yellow), corresponding to points in a two-dimensional 68% (95%) C.L. region in the Gaussian limit.
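In the standard Higgs-basis conventions consistent with the Z_1, Z_5 and Z_6 notation used here, the neutral CP-even mass matrix referred to as Eq. (2.3) takes the well-known form below; this is a standard reconstruction rather than a verbatim copy of the original equation.

```latex
% Standard Higgs-basis form consistent with the Z_1, Z_5, Z_6 notation above
% (a reconstruction of the matrix cited as Eq. (2.3), not a verbatim copy):
\mathcal{M}^2_{H} =
\begin{pmatrix}
  Z_1 v^2 & Z_6 v^2 \\
  Z_6 v^2 & M_A^2 + Z_5 v^2
\end{pmatrix},
\qquad v^2 = v_1^2 + v_2^2 .
```

Exact alignment, i.e. a SM-like eigenstate independently of decoupling, then corresponds to |Z_6| → 0, which matches the |Z_6| ≪ 1 condition quoted above.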
The best-fit point has a χ²/dof of 73.7/85, corresponding to a p-value of 0.87, i.e. the heavy Higgs case presents an excellent fit to the experimental data. In Fig. 1 we review the correlations for the heavy Higgs signal rates R^{P(H)}_{XX} (Eq. (4.1)), i.e. the production times decay rates normalized to their SM values. Here XX = VV, γγ, bb, ττ (with V = W±, Z) denotes the final state from the Higgs decay and P(H) denotes the Higgs production mode. It can be seen that the heavy Higgs case can reproduce the SM case (R^{P(H)}_{XX} = 1), but also allows for some spread, in particular in some of the R^{P(H)}_{XX}. Figure 1: Correlations between signal rates for the heavy Higgs case. The best-fit point is shown as a black star, together with points at Δχ²_H < 2.3 (shown in red) and Δχ²_H < 5.99 (shown in yellow). The MSSM parameter space for the heavy Higgs scenario is shown in Fig. 2. The left plot indicates the preferred regions in the M_A-tan β plane, where one can see that 140 GeV ≲ M_A ≲ 185 GeV must be fulfilled, while tan β ranges between ∼6 and ∼11. The right plot shows the preferred regions in the X_t/M_S-m_t̃1 plane. Here the heavy Higgs case makes a clear prediction with 300 GeV ≲ m_t̃1 ≲ 650 GeV and X_t/M_S ∼ −1.5. Some properties of the light CP-even Higgs boson are shown in Fig. 3. The left plot shows the light Higgs boson coupling to massive gauge bosons relative to the SM value. One can see that the coupling squared is suppressed by a factor of 1000 or more, rendering its discovery via e+e− → Z* → Zh at LEP impossible. The right plot gives BR(H → hh) for M_h ≲ M_H/2. Here it is shown that the BR does not exceed 20%, and thus does not distort the coupling measurements of the heavy Higgs at ∼125 GeV too much. Updated benchmark scenarios In Ref. an updated set of benchmarks for the heavy Higgs case was presented, superseding the experimentally excluded low-M_H scenario. The parameters of the three new benchmark scenarios are given in Tab. 2. The low-M_H^alt− (low-M_H^alt+) scenario is defined in the μ-tan β plane with M_H± < (>) m_t, while the low-M_H^alt v scenario has a fixed μ in the M_H±-tan β plane. The experimentally allowed parameter space in the three benchmark scenarios is shown in Fig. 4. The red, orange and blue regions are disfavoured at the 95% C.L. by LEP light Higgs h searches, LHC H/A → τ+τ− searches and LHC t → H+b → (τν)b searches, respectively. The green area indicates parameter regions that are compatible with the Higgs signal (at ∼95% C.L., see Ref.). While being "squeezed" by the different searches, Fig. 4 shows that the heavy Higgs case remains a valid option, with the interesting feature of a light CP-even Higgs below 125 GeV. We hope that the new benchmark scenarios facilitate the search for these light Higgs bosons as well as for the heavier, not yet discovered Higgs bosons in Run II. Conclusions We have briefly reviewed the case that the Higgs boson observed at ∼125 GeV is the heavy CP-even Higgs boson of the MSSM, as recently analyzed in Ref. The analysis uses an eight-dimensional MSSM parameter scan to find the regions in the parameter space that best fit the experimental data. It was found that the rates of the heavy CP-even Higgs boson are close to the SM rates, but can still differ by 20% or more while yielding a good fit. Parameters such as M_A, tan β or m_t̃1 are confined to relatively small intervals, making clear predictions for Higgs and SUSY searches.
The light CP-even Higgs boson escaped the LEP searches via its tiny coupling to SM gauge bosons, and the decay H → hh is sufficiently suppressed not to impact the heavy Higgs boson rates too strongly. Three new benchmark scenarios, defined to facilitate the experimental searches at LHC Run II, have been reviewed.
Survey of part-of-speech tagger for mixed-code Indian and foreign language used in social media Received Apr 29, 2019 Revised Aug 28, 2019 Accepted Oct 6, 2019 A Part-Of-Speech tagger (POS tagger) is a tool that scans text in a specific language and assigns parts of speech to each word (and other tokens), such as verb, adjective, noun, etc.; more fine-grained POS tags such as 'noun-plural' are used in computational applications. Basically, the goal of a POS tagger is to assign linguistic (mostly grammatical) information to sub-sentential units, called tokens, as well as to words and symbols (e.g. punctuation). This paper presents a survey of POS taggers used for code-mixed Indian and foreign languages. Various methods, procedures, and features required to devise POS taggers for code-mixed languages, especially Indian ones, are studied and the related observations are reported. INTRODUCTION The language of communication in social media is often mixed in nature, where individuals blend their regional language with English, and this practice is found to be extremely popular. Natural language processing (NLP) aims to extract information from these texts, and Part-of-Speech (POS) tagging plays a key role in capturing the structure of the written text. One purpose of POS tagging is to disambiguate homonyms. Taggers use several kinds of information, including dictionaries, lexicons, and rules. A word may be a member of more than one category, and lexicons record the possible categories of a specific word. For example, the word "address" is both a verb and a noun. Taggers use probabilistic evidence to resolve this ambiguity for the actual word in context. A POS tagger can be used as a preprocessor in text processing. Text retrieval and indexing require POS information. Speech processing needs POS tags to choose the correct pronunciation. POS taggers are also used for building tagged corpora. Language-processing methods for code-switched text were first attempted in the early 1980s, whereas code-switching in social media text began to be considered in the late 1990s. Still, code-switching in conventional texts was too rare to attract much interest from computational linguistics researchers, and it is only recently that it has emerged as a research topic in its own right, with a code-switching workshop at EMNLP 2014. Solorio and Liu proposed a simple but well-designed solution of tagging mixed-code English-Spanish text twice, once with a tagger for each language, and then combining the output of the language-specific taggers to obtain the optimal word-level tags (a sketch of this combination strategy is given at the end of this introduction). A POS tagging system for English-Hindi mixed-code social media content has also been presented in the literature. Efforts have likewise been made on English-Bengali and English-Hindi data. Nelakuditi et al. performed two different kinds of experiments: first, POS taggers based on machine learning, and second, combining the POS taggers of the individual languages. POS tagger tools have been designed for various languages, but for code-mixed Indian and foreign languages very little work has been done so far, and with unsatisfactory accuracy. This paper presents a review of such work, organized into the next four sections. Sections 2 and 5 describe the techniques used and the approaches involved in implementing POS taggers for code-mixed Indian and foreign languages. Section 3 summarizes efforts made to implement CM POS taggers for Indian languages.
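A minimal sketch of the word-level combination strategy mentioned above (tag the utterance once with each monolingual tagger, then keep, for every token, the tag from the tagger matching that token's language label) is given below; the tagger callables and language labels are assumed to come from an upstream step, and the stand-in taggers in the usage example are placeholders, not the systems from the surveyed papers.

```python
# A minimal sketch of combining two monolingual POS taggers at the word level
# for code-mixed text. `english_tagger` and `hindi_tagger` are assumed to be
# callables that map a token list to (token, tag) pairs; the language labels
# are assumed to come from a language-identification step.

def combine_word_level(tokens, lang_labels, english_tagger, hindi_tagger):
    """Merge two monolingual tag sequences according to per-token language."""
    en_tags = [tag for _, tag in english_tagger(tokens)]
    hi_tags = [tag for _, tag in hindi_tagger(tokens)]
    merged = []
    for tok, lang, en_t, hi_t in zip(tokens, lang_labels, en_tags, hi_tags):
        merged.append((tok, en_t if lang == "en" else hi_t))
    return merged

if __name__ == "__main__":
    tokens = ["yaar", "this", "movie", "was", "bahut", "achhi"]
    langs = ["hi", "en", "en", "en", "hi", "hi"]
    en_stub = lambda toks: [(t, "EN_TAG") for t in toks]   # placeholder tagger
    hi_stub = lambda toks: [(t, "HI_TAG") for t in toks]   # placeholder tagger
    print(combine_word_level(tokens, langs, en_stub, hi_stub))
```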
Challenges in implementing code-mixed POS taggers are presented in Section 4. VARIOUS APPROACHES AND TECHNIQUES USED TO IMPLEMENT CODE-MIXED POS TAGGER FOR INDIAN AND FOREIGN DIALECTS India is home to a large number of languages. Language change and dialectal variety prompt frequent code mixing in India. Hence, Indians are polyglot by habit and by necessity, and frequently switch and mix languages in social media settings, which poses additional problems for the automatic processing of Indian social media text. Code-Mixed Part-of-Speech (CM-POS) tagging is an essential requirement for any kind of NLP application in this context. In this regard, this paper reports on the various approaches and techniques used to implement code-mixed POS taggers for Indian and foreign languages. Jamatia and Das experimented with classification algorithms based on four machine learning techniques for the task: Conditional Random Fields (CRF), Sequential Minimal Optimization (SMO), Naive Bayes (NB), and Random Forests (RF). For the Conditional Random Fields they tried the MIRALIUM implementation, whereas the other three were the implementations in WEKA; they reported performance on the complete dataset (2,583 utterances) after 5-fold cross-validation of all the ML methods using both fine-grained (FG) and coarse-grained (CG) tag sets, and noticed that all the ML methods have particular problems with HI-EN alternation. In the machine-learning-based POS tagger experiments, Nelakuditi et al. used three types of machine learning techniques for designing the POS tagger, viz. Support Vector Machines (SVM), Bayes classification and Conditional Random Fields (CRF), with different groupings and variations. In the second experiment, on combining the POS taggers of the individual languages, CMU's Twitter POS tagger for English and the POS tagger developed at LTRC (part of the shallow parser tool for Telugu) were used, and the resulting accuracies were reported. Kamal Sarkar developed an HMM-based POS tagging system founded on a trigram Hidden Markov Model that uses information from the vocabulary and some other word-level features to improve the emission probabilities of known as well as unknown tokens. He submitted scores for the Hindi-English, Bengali-English and Tamil-English language pairs. His system was trained and tested on the datasets provided for the ICON 2015 shared task. In the constrained mode, his technique achieved an average overall accuracy (averaged over all three language pairs) of 75.60%, which is very close to the two other participating systems (76.79% for IIITH and 75.79% for AMRITA_CEN) that ranked above his system. In the unconstrained mode, his system achieved an average overall accuracy of 70.65%, which is also close to the system that obtained the highest average overall accuracy (72.85% for AMRITA_CEN). Vyas et al. conducted three different experiments. In the first experiment, POS tagging was performed assuming that the language identities and normalized/transliterated forms of the words were known; this gives an idea of the accuracy of the POS tagging task if normalization, transliteration and language identification could be done perfectly. Experiments were conducted with two different POS taggers for English: the Stanford POS tagger and the Twitter POS tagger. In the next experiment, assuming that only the language identity of the words is known, their own model was applied to generate the back-transliterations for Hindi.
For English, Twitter POS tagger is applied directly to handle social media text. In the third experiment by assuming nothing is known, language identifier process is first applied, and based on the language detected, Hi transliteration module, and Hi POS tagger, or the English tagger is applied and also stated that though the matrix information is not used in any of their experiments, it could be potentially useful for POS tagging which could be explored in future. For constrained and unconstrained training and result submission, Pimpale and Patel, used Stanford POS tagger and machine learning algorithm viz., Decision Tree J48, Decision Tree Random Forest, Naive Bayes and Multilayer Perceptron resp. By concluding, the method used is reporting well for constrained submission, but deficiency of the superiority working information doesn't allow doing ample with it, if they, use the distributed vector illustration of words in feature engineering, that allow them to use non-labeled data for working out. As stated by Sequiera et. al, explored machine learning approaches for Hindi (Hi)-English (En) CM typescript from social media POS tagging starting with repetition of the trials specified in along with, and reconfirming results on dataset. Extending the attributes set applied by Solorio and Liu and doing numerous feature selection experiments, they proposed and conducted a POS-tagging and joint Kamal Sarkar, also proposed a POS tagging system for social media texts. It is developed based on Conditional Random Fields (CRF) trained using a rich feature set that includes contextual features, orthographic features, punctuation features and word length features. He concluded that his system performs well across all three languages Bengali-English-Hindi pairs. He hoped that the proper choice of features along with the suitable grouping of machine learning algorithms would improve the performance of his system. According to Sharma and Motlani, experimented code-mixed POS tagging of Indian social media text using machine learning techniques. Building a POS tagger using constrained system, give them an accuracy of 75.04%, after being estimated on the new test dataset. While by using other resources, namely an unconstrained system, POS tagger did better than the constrained system and gives 80.68% of accuracy. For training and testing of both type of systems they used ten-fold cross-validation method and computed the best model attribute values by undertaking a grid search over all the parameters of the attributes. Finally, for the other two pairs, namely BN-EN (Bengali-English) and TA-EN (Tamil-English), accuracy measured was 79.84% and 75.48% respectively using developed and submitted constrained systems. Pipeline approach, for language identification, Back-transliteration and POS tagging Sisodiya respectively used, logistic based classifier and CRF, Google API, and CRF++ based Hindi POS tagger developed by IIT Kharagpur. Singh and Kanskar employed, controlled word-level classification with and without contextual signs, and sequence labeling using Conditional Random Fields, for implementation of a simple unconfirmed dictionary-based method. A modest dialectal discovery-based investigative used in which first, the text can be separated into portions of tokens belonging to a language, and then each portion be categorized according to its language and further labeled by the POS tagger for that dialectal. 
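A minimal sketch of this detect-then-route heuristic is given below. It assumes two hypothetical monolingual taggers (tag_english, tag_hindi) and a word-level language identifier (identify_language); all three are placeholders for whatever components a real system would plug in.

```python
# Hypothetical sketch of the segment-and-route heuristic described above.
# identify_language, tag_english and tag_hindi are stand-ins, not real library calls.
from itertools import groupby
from typing import Callable, Dict, List, Tuple

def identify_language(token: str) -> str:
    """Placeholder word-level language ID (a trained classifier in practice)."""
    return "hi" if any("\u0900" <= ch <= "\u097f" for ch in token) else "en"

def tag_english(tokens: List[str]) -> List[str]:
    return ["EN-TAG"] * len(tokens)   # stand-in for an English monolingual tagger

def tag_hindi(tokens: List[str]) -> List[str]:
    return ["HI-TAG"] * len(tokens)   # stand-in for a Hindi monolingual tagger

TAGGERS: Dict[str, Callable[[List[str]], List[str]]] = {"en": tag_english, "hi": tag_hindi}

def tag_code_mixed(tokens: List[str]) -> List[Tuple[str, str, str]]:
    labeled = [(tok, identify_language(tok)) for tok in tokens]
    out = []
    # Group contiguous tokens of the same language and tag each chunk separately.
    for lang, chunk in groupby(labeled, key=lambda pair: pair[1]):
        chunk_tokens = [tok for tok, _ in chunk]
        for tok, tag in zip(chunk_tokens, TAGGERS[lang](chunk_tokens)):
            out.append((tok, lang, tag))
    return out

print(tag_code_mixed(["this", "movie", "बहुत", "अच्छी", "hai"]))
```

The design choice to tag each monolingual chunk separately is exactly what makes this heuristic fragile at frequent switch points, which is why the sequence-labeling approaches above usually outperform it.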
Linguistic finding and transliteration text is labeled through an English monolingual tagger and then selecting one out of two labels for a conversation based on some heuristics that was detected by several language detection techniques. As stated by Ghosh et. al, they listed various steps involved in POS labeling task using CRF++ toolkit and Stanford POS Tagger, including chunking, lexicons for dominant languages. They also concluded that Bengali-English and Hindi-English results are more than that of Tamil-English because of difference in labels used in Tamil-English gold standard files. Barman, divided the experiment into four parts viz., implementing, baselines for POS tagging, pipeline systems, their stacking systems and joint model. By performing with the data, five-fold crossvalidation and reported normal cross-validation exactness with investigating the use of hand-crafted features and attributes that can be gained from monolingual POS taggers (stacking), performed researches with different groupings of these attribute sets. They described a trilingual code-mixed corpus with POS comment. Using state-of-the-art methods performing POS tagging and investigating the usage of factorial CRF (FCRF)based joint model found that the best stacking method (S2) that practices the joint features, achieves better than the combine version (FCRF) and the systems with pipeline. They observed that combined modeling outperforms the systems with pipeline in their experimentations. FCRF fall late the best POS labeling system S2. Possibly, to achieve better performance than S2 more training data would help FCRF. According to Gupta et. al, they proposed a system that practices a comprehensive set of features for POS labeling. The feature set was used to design a POS model. Conditional random field (CRF) is applied as the underlying classifier. CRF++, an employment of CRF is used to accomplish the experiment. As CRF++ uses a stated feature template, therefore to discover the optimal feature template a series of experiments were made on the training data set in a cross-validated way. However, they tune the feature pattern on English-Hindi data set only and used the optimal model for all these CM languages (English-Hindi, English-Bengali, and English-Telugu) pairs. Bhargava et. al, experimented similar kinds of approaches to implement POS tagger for English-Telugu, English-Hindi, English-Bengali language pairs with a slight variation to achieve accuracies. VARIOUS APPROACHES AND TECHNIQUES USED TO IMPLEMENT CODE-MIXED POS TAGGER FOR FOREIGN LANGUAGES Efforts are not much more still be seen to implement code-mixed POS tagger for foreign languages. Solorio and Liu just predicted potential code alternation points, in the growth of extra accurate systems for processing code-mixed English-Spanish language. Such mixing of languages is rarely found all over the world, other than in India. CHALLENGES TO IMPLEMENT CODE-MIXED POS TAGGER Building Code-Mixed POS (CM-Part of Speech) taggers for Indian dialects is a particularly interesting problem in computational linguistics due to a lack of accurately glossed training corpora. More cultured language processing techniques are required for POS tagging that is proficient of drawing interpretations from more delicate dialectal information. From a dialectal outlook, meaning arises from the distinctness between dialectal units, including words, phrases, and so on. 
These distinctions are of two types: paradigmatic (concerning substitution) and syntagmatic (concerning positioning). To implement a code-mixed POS tagger, all these distinctions also need to be considered. CONCLUSION The survey shows that, in general, researchers use various machine learning techniques together with existing POS taggers to implement CM POS taggers for Indian and foreign languages. Much more work has begun for code-mixed Indian languages, but a practical, ready-to-use tool for code-mixed POS tagging is not yet available on the internet. |
Different Methods of Forming Cold Fronts in Non-Merging Clusters Sharp edges in X-ray surface brightness with continuous gas pressure called cold fronts have been often found in relaxed galaxy clusters such as Abell 496. Models that explain cold fronts as surviving cores of head-on subcluster mergers do not work well for these clusters and competing models involving gas sloshing have been recently proposed. Here, we test some concrete predictions of these models in a combined analysis of density, temperature, metal abundances and abundance ratios in a deep Chandra exposure of Abell 496. We confirm that the chemical discontinuities found in this cluster are not consistent with a core merger remnant scenario. However, we find chemical gradients across a spiral"arm"discovered at 73 kpc north of the cluster center and coincident with the sharp edge of the main cold front in the cluster. Despite the overall SN Ia iron mass fraction dominance found within the cooling radius of this cluster, the metal enrichment along the arm, determined from silicon and iron abundances, is consistent with a lower SN Ia iron mass fraction (51% +- 14%) than that measured in the surrounding regions (85% +- 14%). The"arm"is also significantly colder than the surroundings by 0.5-1.6 keV. The arm extends from a boxy colder region surrounding the center of the cluster, where two other cold fronts are found. This cold arm is a prediction of current high resolution numerical simulations as a result of an off-center encounter with a less massive pure dark matter halo and we suggest that the cold fronts in A496 provide the first clear corroboration of such model, where the closest encounter happened ~ 0.5 Gyr ago. We also argue for a possible candidate dark matter halo responsible for the cold fronts in the outskirts of A496. INTRODUCTION One of the most interesting features discovered by Chandra satellite observations of galaxy clusters are the sharp X-ray surface brightness discontinuities, accompanied by jumps in gas temperature named "cold fronts" (e.g. ;;). The temperature and density jumps happen in such a way as to maintain the gas pressure continuously across the front and, therefore, they are not created by shocks. They were originally interpreted as being the result of subsonic (transonic) motions of head-on merging substructures with suppressed thermal conduction (). The above mentioned merger core remnant model is theoretically justified (e.g. Bialek, Evrard & Mohr 2002;Nagai & Kravtsov 2003;;Mathis, et al. 2005;) and holds relatively well for clusters that have clear signs of merging, such as 1E0657-56 () and A3667 ). However, these models do not work well for the increasing number of cold fronts (sometimes multiple cold fronts in the same cluster) found in apparently non-merging clusters such as A496 (Dupke & White 2003, hereafter DW03), A1795 ), RXJ1720.1+2638 ). This prompted the development of other models for cold front genera-tion, such as oscillation of the cD and the low entropy gas around the bottom of the potential well (;;DW03), hydrodynamic gas sloshing (Ascasibar & Markevitch 2006, hereafter AM06), or dark matter peak oscillation due to scattering of a smaller dark matter system (Tittley & Henriksen 2005). For very recent review see Markevitch & Vikhlinin. Cold fronts are found with relatively high frequency. 
A review of Chandra archival images finds that more than 40% of the observed clusters have cold front-like features and their presence may have significant physical impact in the physics of their host cluster cores, such as gas heating, generation of bulk and turbulent velocities, constraining conduction, etc. The significance of cold fronts influence on cluster physics depends on how they are being generated. Therefore, it is important to determine which mechanisms actually produce cold fronts. Abell 496 provides an excellent opportunity to test different scenarios for cold front generation given its physical and observational characteristics. A496 is a typical, bright, nearby (z≈0.032), apparently relaxed cold core cluster. The Xray peak coincides very well with the cD optical centroid. The gas temperature varies from 5-6 keV in the outer regions to 2-3 keV in the central arcmin (e.g., DW03). The presence of a central abundance enhancement has been established with previous instruments including Ginga and Einstein (), 1 ASCA (e.g. Dupke & White 2000a), BeppoSAX (Irwin & Bregman 2001 and XMM (Tamura et al 2001), showing an overall radial enhancement from ∼0.2-0.25 solar in the outer regions to ∼ 0.4-0.7 solar in the central arcmin. Furthermore, Dupke & White (2000a) also discovered radial gradients, for the first time, in various elemental abundance ratios, which indicates that the gas in the central 2-3 has a higher proportion of SN Ia ejecta (∼70%) than the outer parts of the cluster. This was confirmed by more sensitive spectrometers on-board XMM (Tamura et al 2001). As pointed out by DW03, different models for cold front formation can be discriminated through the analysis of chemical gradients across the front. If the cold front is a due to a head-on merger core remnant, we should expect the front to be accompanied by a specific discontinuity of elemental abundance ratios (e.g. ;Dupke & White 2000a,b). The expected discontinuity in this case would be symmetric with respect to the merger axis and asymmetric with respect to the perpendicular direction to the merger axis. This kind of analysis can be performed best with Chandra, given its high angular resolution. DW03 performed a chemical analysis of the cold front in Abell 496. With an effective exposure of ∼9 ksec, they were able to determine abundance ratio profiles only on large semi-annuli, covering a region larger than that of the cold front itself. The distribution of iron, silicon and oxygen abundances showed radial gradients but there were no clear discontinuities uniquely related to the cold front itself, pointing out the weaknesses of the remnant merging core model when applied to A496. Here we report the results of a deeper observation of that cluster that allowed us to produce high quality maps of the gas parameters and to compare more closely the observations with the predictions given by different models for cold front formation. All distances shown in this Letter are calculated assuming a H 0 = 70 km s −1 Mpc −1 and 0 = 1 unless stated otherwise. At the distance of this cluster 1 ≈ 0.66 kpc. DATA REDUCTION Abell 496 was observed by Chandra ACIS-S3 in July 2004 for 76 ksec. The cluster was centered on the S3 chip. We used Ciao 3.2.0 with CALDB 3.0 to screen the data. After correcting for a short flare-like period the resulting exposure time in our analysis was 59.6 ksec. A gain map correction was applied together with PHA and pixel randomization. ACIS particle background was cleaned as prescribed for VFAINT mode. 
Point sources were extracted and the background used in spectral fits was generated from blank-sky observations using the acis_bkgrnd_lookup script. Here we show the results of spectral fittings with XSPEC V11.3.1 (Arnaud 1996) using the apec and vapec thermal emission models. Metal abundances are measured relative to the solar photospheric values of Anders & Grevesse. Galactic photoelectric absorption was incorporated using the wabs model (Morrison & McCammon 1983). Spectral channels were grouped to have at least 20 counts/channel. Energy ranges were restricted to 0.5-9.5 keV. The spectral fitting errors are 1-σ confidence unless stated otherwise. In order to obtain an overall distribution of the spectral parameters we used an adaptive smoothing code that selects extraction regions based on a fixed minimum number of counts per cell (here we used 3000 counts for temperatures and global abundances and 7000 for individual abundances) to keep the range of statistical fitting errors more or less constant throughout. The intercell spacing is fixed at a fraction of the radius of the surrounding cells, and in general there is significant cell-to-cell overlap except for the cells with the smallest size. The overlap of extraction regions is therefore stronger in low surface brightness regions, away from the core of the cluster. We plot the distribution of region sizes in Figures 3e and 4e to give an estimate of the local smoothing kernel size. The code produces a matrix with best-fit values and different cell sizes. The best-fit values used here are defined as the midpoint of the 68% confidence errors. In order to make the contour plots, this matrix is mapped onto a square matrix with equal cell sizes using an interpolation routine. This is done by computing a new value for each cell in the regular matrix, weighting it by the values of the adjoining cells in the matrix included within some defined search radius (a minimum of 3 cells in 4 adjacent quadrants). The closest measured values usually have the most influence on the calculated value of a cell. The computation is based on the Kriging method (for a description see, e.g., Davis 1986, p. 383), which calculates the weights from a semivariogram, γ(h) = [1/(2n)] Σ_{i=1}^{n} [X(i) − X(i+h)]², developed from the spatial structure of the data, where h is the number of intervals between the values of the regionalized variable X taken at locations i and i + h, and n is the total number of points. The number of cells of the mapped matrix was artificially increased to three times the maximum length of the original matrix for purposes of improving image quality for analysis. This is responsible for the small "square domains" that appear in Figures 3a,c & 4a,c. The values outside the CCD border contours are also an effect of the smoothing algorithm and should be ignored. Figure 1a shows the exposure-corrected, smoothed X-ray image of A496. One can clearly see the sharp surface brightness edge towards the north, described in DW03. One can also see two other brightness edges (to the SW and SE) that meet at nearly right angles. This suggests the presence of multiple cold fronts in this cluster. To analyze the nature of these edges we used the set of extraction regions shown in Figure 1b. The results are shown in Figures 2a,b using a wabs apec spectral model. Figure 2a shows the distribution of surface brightness (top) and projected gas temperatures using the bins shown in Figure 1b (bottom). The color association between Figure 2 and Figure 1b is: North-black, East-red, South-blue and West-green.
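The gridding step described above -- mapping the irregularly sized extraction cells onto a regular matrix by distance-weighted averaging of the neighbouring measurements -- can be illustrated with a short sketch. This is only a simplified inverse-distance-weighted stand-in for the full Kriging procedure of Davis (1986); the function name, grid size and search radius are invented for illustration.

```python
# Simplified stand-in for the gridding/interpolation step: inverse-distance weighting
# of irregularly placed best-fit values onto a regular map. Not the actual Kriging code.
import numpy as np

def grid_map(x, y, values, nx=64, ny=64, search_radius=30.0):
    """x, y: extraction-cell centres (pixels); values: best-fit quantity (e.g. kT)."""
    xi = np.linspace(x.min(), x.max(), nx)
    yi = np.linspace(y.min(), y.max(), ny)
    out = np.full((ny, nx), np.nan)
    for j, yc in enumerate(yi):
        for i, xc in enumerate(xi):
            d = np.hypot(x - xc, y - yc)
            near = d < search_radius
            if near.sum() < 3:                         # require a minimum number of neighbours
                continue
            w = 1.0 / np.maximum(d[near], 1e-6) ** 2   # closer cells carry more weight
            out[j, i] = np.sum(w * values[near]) / np.sum(w)
    return out

# Toy usage with random "cells"; real input would be the adaptive-binning output.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 256, 200), rng.uniform(0, 256, 200)
kt = 3.0 + 0.01 * x + rng.normal(0, 0.2, 200)
temperature_map = grid_map(x, y, kt)
```

A semivariogram-based Kriging weighting, as used in the paper, would replace the simple 1/d² weights with weights derived from γ(h), but the overall mapping onto a regular grid is the same.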
The locations of the cold fronts are marked by vertical dashed lines and follow the same color code. There are at least three (up to five) surface brightness edges accompanied by sudden temperature jumps, consistent with cold fronts; The northern one is at ∼ 73 kpc and is the strongest. The western cold front is nearly at the same radial distance (r ∼ 64 kpc) as the northern one and is, apparently, an extension of the northern front. We can also see that the two edges near the core, labeled East (r ∼ 16 kpc) and South (r ∼ 22 kpc) have the temperature jumps characteristic of cold fronts. There is also a marginally significant cold front to the east at r ∼ 106 kpc. Cold Fronts and Temperature Distribution Following DW03, we measured the radial distribution of metal abundance ratios towards the directions of the main fronts (edges). We used "PIE" extraction regions that were chosen in such a way as to have the same opening angle as the cold front of interest. In the radial distributions there are no clear significant systematic relations between the changes in Fe abundance or abundance ratios and cold fronts. The changes seen can be mostly associated with overall (global) radial trends. Globally, the Fe abundance shows a radial decline from supersolar near the cluster's center to subsolar in the outer core regions. At the very center, r ∼ <10 (∼ 7 kpc), there is a significant abundance dip described in the next section. The radially average values in the central 23 kpc is 0.93±0.04 solar (with asymmetric variations from 0.8 solar to 1.2 solar) and in the outer (130±50) kpc is 0.75±0.04 (with asymmetric variations from 0.47 solar to 0.86 solar). The results for the ratios involving Si, S, and Fe are shown in Figure 2b, where the color code is the same as that used in Figure 2a. In the abundance ratio plots we added "2" to the values of the Northern and Western directions, for illustration purposes. Despite the strong anisotropies, particularly in the bins within ∼70-110 kpc, there is also a tendency for the -element ratios to Fe grow radially. The Si/Fe abundance ratio is consistent with a flat (or mildly increasing) profiles going from 1.39±0.10 in the central ∼11 kpc to 1.52±0.23 in the very outer core regions ( ∼ > 110 kpc). In the same regions, the S/Fe ratio exhibits a more significant gradient, growing from ∼ 1.46±0.14 solar to 2.34±0.38 solar. The error weighted average of the SN Type dominance of the Fe mass from the two ratios above corresponds to 65%±4% SN Ia Fe mass fraction for the central region and 57%±7% in the outer regions. This is consistent with the general trend found with ASCA by Dupke & White (2001a) for larger spatial scales (up to ∼ < 500 kpc, although the absolute values of the sulfur abundance are higher than those determined with ASCA (Dupke & White 2000a) and XMM () The variations in the radial distributions of abundance ratios and temperatures suggest the presence of significant asymmetries. To explore the nature of these asymmetries we produced 2-dimensional adaptively smoothed maps of projected gas temperatures, abundances and abundance ratios. We discuss them in the next section. 2-D Maps Given the level of asymmetry of the distributions of gas temperature, metal abundances and abundance ratios, it is helpful to analyze the 2-D distributions of these parameters. The temperature and Fe abundance maps are shown in Figures 3a,c, with X-ray surface brightness contours used in Figure 1b overlaid. The steepest temperature gradient is seen to the North. 
One striking feature that can be seen in the temperature map is a "cold spiral arm" that departs from the core to the N-NW up to the cold front position and runs along the cold front to the E-NE becoming more diffuse as it turns towards the S. The smoothing kernel radius map (Figure 3e) shows a value of ∼20 pix or 10 (15 ) in the inner (outer) arm regions, which is nearly half of the arm thickness and indicates that the arm is well spatially resolved. Guided by the temperature map, we defined regions that characterize the inner and outer parts of the arm for spectral extraction and they are shown in Figures 3a,c & 4a,c, and the relevant best fit parameters are shown in Table 1. The temperature of the cold arm is ∼3.08±0.07 keV. The temperatures on the surrounding regions of the cold arm are 3.5±0.11 keV and 4.7±0.22 keV towards the inner and outer cluster regions, respectively. The cold arm is definitely associated with the northern cold front and to a lesser extent to the western cold front. It departs from a boxy low temperature region, the edges of which appear coincide with the southeastern and southern cold fronts near the cluster's core, although the temperature edges in these weaker cold fronts are less well-defined than that of the main cold front. From Figure 3b it can be seen that the overall temperature error in the cold arm region is around 0.1-0.2 keV. The higher temperatures near the southern CCD border are not well constrained (with errors ∼ > 1 keV). There are significant indications of a "cold tail" (T ∼ 4 keV ) starting 2.3 southwest of the cluster's center extending to 4.2 to the south of the cluster that is associated with a low Fe abundance region (Figure 3c). The abundance along the cold tail is approximately half of the surrounding regions values of ∼1.2 solar. This "cold tail" seems to extend to the south for more 5 (). A similar cold tail was found on the opposite side of the cold front in the cluster 2A0335+096 (). The Fe abundance map is also inhomogeneous (Figures 3c, d). There is an overall abundance gradient, which is steeper towards the northern regions. In particular the transition from sub to super solar abundances happens at a radius of 100 -140 from the center in all directions but the South. In general, the Fe abundance within the main cold front spatial scales (r<60 kpc) is supersolar, with the exception of the very central 8 kpc, where an abundance "dip" is found. The Fe abundance in the central dip reaches a minimum of 0.55±0.3 solar (an average 0.8±0.03 solar in a circular region 10.5 kpc in radius) and in the immediately surrounding regions achieves a maximum of ∼1.7±0.4 solar (an average of 1.1±0.04 solar within an annulus with radius between 11 kpc and 22 kpc). There is a secondary, marginally significant, abundance dip with similar spatial scales 35 to the N-NW, where the abundance decreases from ∼1.3 to ∼0.7 solar with a characteristic error of 0.3 solar. Central metal abundance dips have been found in other clusters (e.g., A2199 (), Centaurus and Perseus ()), and the mechanisms that gener-ate them are a matter of current debate. Suggested scenarios include resonant scattering (cf. Sanders & Fabian 2006), extremely inhomogeneous metal abundances (Morris & Fabian 2003), artifacts appearing from fitting single temperature models to multi temperature gas (Buote 2000) and buoyant transport to higher radii (Brighenti & Mathews 2005). 
None of these mechanisms are adequate to explain off-center abundance dips, which are probably related to previous AGN activity. A extended analysis of off-center abundance dips in clusters is provided elsewhere (Dupke, Nyland & Bregman 2007, in preparation). Metal abundances are in general high towards the southern regions, with the exception of the regions coincident with the southern cold tail. We performed an analysis of the 2-D distribution of the elemental abundance ratios in this cluster. Different metal enrichment mechanisms act with different efficiencies at different cluster locations and produce different SN type ejecta signatures. Therefore, elemental abundance ratios can be used as "fingerprints" used to trace the gas history, better than metal abundances alone. The abundance ratio maps involving the best determined abundances (Si, S, and Fe) are shown in Figures 4a, c. The 1- errors of the quantities are shown in Figures 4b, d, and give an idea of the significance level of the measured quantity in the region of interest. Since our best-fit values are defined as the mid-point of the 1- error bars, we use only values with fractional errors smaller than 100% were used to create the 2-D square images. This is done to avoid biases in the interpolation to produce the smoothed color contours that would be caused by upper/lower limits, where the error bars can be highly asymmetrical. It can be seen that, in general, the cold arm is accompanied by enhanced abundance ratio values (lower SN Ia Fe mass fraction than the surroundings), which is visible in the Si/Fe, which shows an average variation from ∼1 to 2, or equivalently, from 85% to 51% SN Ia Fe Mass fraction, respectively in the regions surrounding the cold arm and the regions along the cold arm. The characteristic error is ∼0.4 (∼14% in SN Ia Fe mass fraction) and the characteristic smoothing kernel size is ∼30 pixels, or 15 (25 ). Sulfur abundances are higher than expected and abundance ratios are off-scale when compared to the theoretical predictions of a,b for SN Ia and II yields. However, the trend of S/Fe is similar to that of Si/Fe and to place the limits within theoretical bounds, we need to apply constant positive correction of ∼0.4 to S/Fe, placing the and the corresponding S negative correction ∼ 0.4-0.8 within the errors (see footnote 2). DISCUSSION: THE NATURE OF COLD FRONTS IN ABELL 496 The analysis of the core of A496 presented in this Letter reveals several new features that were not observed previously. A large multiplicity of cold-front features (at least three cold fronts); a spiral cold arm seen in the temperature map, which is clearly associated with the main (northern) cold front; strong indication of spiral (or circular) chemical arms associated with the main cold front; a cold, metal poor tail extending towards the direction opposite to the main cold front; an overall central abundance enhancement with a small-scale "dip" at the core, and marginal evidence for other off-center abundance dips. The multiplicity of cold fronts together with the spiral pattern of the chemical gradients seem to rule out the scenario, where the cold front(s) in this cluster are created by a head-on merging remnant core. Although gas sloshing has been invoked to explain cold fronts in apparently relaxed clusters, there have been very few observable predictions that can be used to discriminate the details of different sloshing mechanisms proposed in the literature. 
Very recently, AM06 performed high resolution numerical + hydrodynamical simulations specifically designed to investigate the effects of scattering of lower mass dark matter haloes (with and without gas) by clusters of galaxies. One of the results from their work was that the sub-halo flyby induces a variable gas velocity field in the ICM of the main cluster that generates rampressure near the cluster gas core and produces cold fronts, accompanied by significant amount of substructures seen in the gas 2-D temperature distribution. A common feature in most cases analyzed by AM06 was the presence of cold spiral arms coinciding with the cold fronts close to the main cluster's core, which were long lasting. In particular, their case for a dark matter perturber produces properties very similar to those observed in A496. In AM06 a pure dark matter halo with 1 5 of the mass of the main cluster flies by with an impact parameter of 500 kpc and with closest approach at t∼1.37 Gyr. We show part of Figure 7 of AM06, for the epoch corresponding to 1.9 Gyr (Figure 5a). The image is inverted vertically to be compared directly to the temperature map of A496 in Figure 2a. The size of the box is 250 kpc, similar to the size of ACIS-S3 CCD borders at the redshift of the cluster (∼320 kpc). The cold front(s) can be seen when comparing the temperature map with the surface brightness map (Figure 21 of AM06). The main cold front coincides with the large spiral cold arm extending horizontally. The spatial scale is very similar to that of the cold arm in A496. Their simulations also seem to indicate the presence of milder cold fronts in the opposite side closer to cluster's core. These are clear predictions that are corroborated well by A496 and suggest strongly that a flyby dark matter halo created the cold fronts in this cluster. Furthermore, there is a larger-scale more diffuse cold extension of the main arm also towards the South of the main cold front, which is a consequence of the ram-pressure caused by the gas velocity field induced by the DM halo flyby. This suggests that the same process that creates the main cold front may also be associated with the formation of the southern cold tail seen in A496. The existence of such pure DM sub-halos is not completely unexpected since the intergalactic gas originally belonging to the sub-halo could have been stripped in a previous encounter with the main cluster. AM06 cases for gaseous DM sub-clump passages produces a variety of substructures visible in temperatures and surface brightness maps, which are not seen in A496, and are not favored within the limited cases simulated. Future addition of metallicity distributions to cluster merger simulations should constrain further the characteristics of the perturber. A prediction of this scenario is the presence of a DM halo in the outskirts of the cluster without significant Xray emitting gas. From the simulations, the position of that clump at epoch (t=2 Gyr, i.e., now) would be towards its apocenter at North, the same general direction of the main cold front. It is reasonable to assume that galaxies would tend to trace their host DM sub-halo. Recent wavelet analysis of the member galaxies of A496 within a 1.5 h −1 75 Mpc radius (Flin & Krywult 2006), finds a secondary galaxy clump, in most wavelet scales analyzed, to the NW of the core of A496, roughly consistent with the position where the DM perturber was likely to be found in the AM 06 simulation (towards the North). 
We illustrate this in Figure 5b, where we show the positions of the dark matter clump at 1.34, 1.43, 1.51 and 4.2 Gyr, taken by merging 4 of the 9 images of Figure 3 of AM06. We overlay part of Figure 5 of Flin & Krywult, which illustrates the position of the galaxy sub-clump for a wavelet scale of 129 h_75^-1 kpc. If we scale the ratio of the mass of the main cluster to that of the DM perturber from the AM06 simulation parameters (a 1/5 mass ratio) and, conservatively, use for A496 a mass of 4.2 × 10^14 M_⊙, the perturber should be very massive (0.84 × 10^14 M_⊙). This is almost three times more massive than HCG 62, the brightest HCG in the Ponman et al. survey. Such a group, if not unusually depleted of gas, would easily be detected by current X-ray instruments at the redshift of A496. ROSAT All-Sky Survey exposures of that region (R < 50 from A496) fail to detect a significant X-ray excess from any extended source, as expected in the gasless dark matter perturber scenario described here. The excess count in a square region 18 on a side centered on Flin & Krywult's sub-clump is 15±18 background-subtracted counts. However, RASS exposures are too short (∼250 sec) to place any significant constraints on the amount of X-ray emitting gas, and a future combination of weak lensing and deeper X-ray observations of that substructure with current satellites should be able to test this prediction. We acknowledge support from NASA through Chandra award numbers GO 4-5145X, NNG04GH85G and GO5-6139X. RAD was also partially supported by NASA grant NAG 5-3247. RAD also thanks Yago Ascasibar, Jimmy Irwin, Tatiana F. Lagana, Narciso Benitez & Tracy Clarke for helpful discussions. Figure caption: Results from an adaptive smoothing algorithm with a minimum of 3000 counts per extraction region (circular), fitted with an absorbed VAPEC spectral model. The gridding method used is a correlation method that calculates a new value for each cell in the regular matrix from the values of the points in the adjoining cells that are included within the search radius, using the Kriging method (e.g. Davis 1986); see Section 2 for details. We also overlay the X-ray contours shown in Figure 1b. |
Localization and chiral properties near the ordering transition of an Anderson-like toy model for QCD The Dirac operator in finite temperature QCD is equivalent to the Hamiltonian of an unconventional Anderson model, with on-site noise provided by the fluctuations of the Polyakov lines. The main features of its spectrum and eigenvectors, concerning the density of low modes and their localization properties, are qualitatively reproduced by a toy-model random Hamiltonian, based on an Ising-type spin model mimicking the dynamics of the Polyakov lines. Here we study the low modes of this toy model in the vicinity of the ordering transition of the spin model, and show that at the critical point the spectral density at the origin has a singularity, and the localization properties of the lowest modes change. This provides further evidence of the close relation between deconfinement, chiral transition and localization of the low modes. I. INTRODUCTION As is well known, the phase diagram of QCD at zero chemical potential consists of a low-temperature confining and chirally broken phase, and a high-temperature deconfined and (approximately) chirally restored phase. Interestingly enough, the two transitions take place at nearly the same temperature, or more precisely in the same small temperature range, as both the deconfining and the chirally restoring transition are actually steep but nevertheless analytic crossovers. The close connection between the two transitions is even more striking in certain QCD-like models where they are genuine phase transitions, like for example SU and SU puregauge theories. In this case lattice calculations show that the deconfinement and the chiral transitions take place at the very same temperature (of course, within the inherent numerical uncertainties of lattice calculations). The same coincidence of the transition temperatures has been observed in a model with SU gauge fields and unimproved staggered fermions on coarse lattices. Another interesting case is that of SU gauge fields with adjoint fermions: this model is known to possess different deconfinement (T d ) and chiral-restoration temperatures (T ), with T d < T, but the chiral condensate has a * [email protected] [email protected] [email protected] jump exactly at T d, signaling a first-order chiral phase transition there. So far, no generally accepted explanation has been provided for the coincidence of chiral and deconfinement transitions in these models, and their approximate coincidence in QCD. In recent years there has been growing evidence that the QCD finite-temperature transition is accompanied by a change in the localization properties of the Dirac eigenmodes: while in the low-temperature phase all the Dirac eigenmodes are delocalized in the whole volume, at high temperature the lowest modes are spatially localized. This behavior of the lowest modes is not unique to QCD, and has been found also in the above-mentioned QCD-like models (i.e., SU and SU pure-gauge theory, and unimproved staggered fermions). There are indications that the onset of localization takes place around the same temperature at which QCD becomes deconfined and chirally restored: this issue was first studied by Garca-Garca and Osborn in Ref.. To avoid the complications related to the crossover nature of the QCD transition, it is convenient to consider models where the transition is a genuine phase transition. 
This was done in Ref., which employed the abovementioned model with unimproved staggered fermions, investigating the confining, chiral and localization properties of the system. In that case, it was found that deconfinement, (approximate) chiral restoration, and onset of localization take place at the same value of the gauge coupling, where the system undergoes a first-order phase transition. These results obviously suggest that localization is closely related to deconfinement and to the chiral transition. Understanding why the lowest Dirac eigenmodes become localized at the transition, and how localization affects the corresponding eigenvalues, might help in shedding some light on the relation between the deconfining and the chiral transition. As it was suggested in Ref., and later elaborated on in more detail in Refs., localization of the lowest modes is very likely to be a consequence of deconfinement. More precisely, the ordering of the Polyakov-line configurations, and the presence therein of "islands" of fluctuations away from the ordered value, leads to the lowest Dirac modes localizing on the "islands". In Ref. it was suggested that the ordering of the Polyakov lines might also be responsible for the depletion of the spectral region near the origin, which in turn leads to a smaller condensate via the Banks-Casher relation, and so to approximate chiral restoration. The argument is most clearly formulated in the Dirac-Anderson approach of Ref.. This consists in recasting the Dirac operator into the Hamiltonian of a three-dimensional system with internal degrees of freedom, corresponding to color and temporal momentum. This Hamiltonian contains a diagonal part, related to the phases of the Polyakov lines, representing a random on-site potential for the quarks, and an off-diagonal part responsible for their hopping from site to site, built out of the spatial links on the different time slices. In this framework, the accumulation of eigenmodes near the origin requires two conditions: sufficiently many sites where the on-site potential is small, and a sufficiently strong mixing (via the hopping terms) of the different temporalmomentum components of the quark wave function. The ordering of the Polyakov lines acts against both these requirements, by reducing the number of sites where the potential is small, and localizing them on "islands" in a "sea" of sites where the potential is large; and by inducing correlations among spatial links on different time slices, which in turn makes the mixing of different temporalmomentum components less effective. This leads to the depletion of the spectral region near the origin. The argument above is based on the results of a detailed numerical study of a QCD-inspired toy model, constructed in such a way as to reproduce qualitatively all the important features of the QCD Dirac spectrum and of the corresponding eigenmodes. In this toy model the role of the Polyakov lines is played by complex spin variables, with dynamics determined by an Ising-like model. This spin model possesses a disordered and an ordered phase, analogous to the confined and deconfined phases of gauge theories. As was shown in Ref., the properties of the Dirac spectrum in the ordered and disordered phases indeed qualitatively match those found in the deconfined and confined phases of QCD, respectively. 
More precisely, deep in the ordered phase the lowest eigenmodes are localized and the spectral density vanishes near the origin, while in the disordered phase the lowest eigenmodes are delocalized and the spectral density is finite near the origin. This makes us confident in the validity of the mechanism for chiral symmetry restoration discussed above also in the physically relevant case of QCD. The magnetization transition of the spin model is expected to be in the same universality class as that of the 3D Ising model, so one expects it to be a genuine secondorder phase transition. It is thus worth studying the localization properties of the lowest Dirac eigenmodes, and the corresponding spectral density near the origin, close to the magnetization transition. This is the subject of the present paper. The purpose is twofold: on the one hand, this model provides another testing ground for the idea that deconfinement, chiral transition and localization of the lowest modes are closely connected. On the other hand, the different order of the transition with respect to that taking place in the model with unimproved staggered fermions allows us to study the possible dependence of this connection on the nature of the transition. The paper is organized as follows. In Section II we review the approach to the QCD Dirac spectrum as the spectrum of a Hamiltonian with noise ("Dirac-Anderson" approach), considerably simplifying the formalism of Ref.. We then briefly recall the main aspects of the toy model of Ref., which we reformulate equivalently in the new formalism. In Section III we show our numerical results. We first identify precisely the critical point of the spin model, and then discuss the localization and chiral properties of our toy model in its vicinity. Finally, in Section IV we report our conclusions and show our prospects for the future. II. THE DIRAC OPERATOR AS AN ANDERSON-LIKE HAMILTONIAN In this section we briefly review the derivation of the Dirac-Anderson form of the staggered Dirac operator, introduced in Ref.. We also proceed to simplify the formalism with respect to the original formulation. The Dirac-Anderson Hamiltonian is nothing but a suggestive name for (minus i times) the staggered Dirac operator in the basis of the eigenvectors of the temporal hopping term. More precisely, denoting it by H = −iD stag, it reads in compact notation The Dirac-Anderson Hamiltonian, H xak, ybl, carries space, color and temporal-momentum indices, for x, y ∈ Z 3 L, a, b = 1,..., N c, and k, l = 0,..., N T − 1. Here, and L and N T are the spatial and temporal extension of the lattice, which have to be even integer numbers. Periodic boundary conditions in the spatial directions are understood. 1 In Eq., D is the diagonal matrix consisting of the "unperturbed" eigenvalues of the temporal hopping term, V j come from the spatial hoppings, and T j is the translation operator in direction j, and moreover = (−1) < x are the usual staggered phases. Let us explain the notation in detail. The effective Matsubara frequencies ak ( x) are given by with a ( x) being the phases of the Polyakov line P ( The following convention is chosen for the Polyakov-line phases: a ( x) ∈ [−, ) for a = 1,..., N c − 1, and a a ( x) = 0. 2 The spatial hoppings read (N T − 1, x) = diag(e ia( x) ). One can show that V +j ( x) is a unitary matrix in color and temporal-momentum space. The expression Eq. is obviously fully equivalent to the staggered Dirac operator. 
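For readers less familiar with the condensed-matter side of the analogy, it is useful to keep in mind the textbook form of the conventional three-dimensional Anderson Hamiltonian, reproduced below for comparison; this expression is standard material and is not taken from the present paper.

```latex
% Textbook 3D Anderson Hamiltonian (tight-binding model with on-site disorder),
% shown only for comparison with the Dirac-Anderson operator discussed in the text:
% the \epsilon_x are independent random on-site energies and the second term is a
% nearest-neighbour hopping of unit strength.
H_{\mathrm{And}} \;=\; \sum_{x} \epsilon_x \, c^{\dagger}_{x} c_{x}
\;-\; \sum_{x} \sum_{j=1}^{3} \Bigl( c^{\dagger}_{x+\hat{\jmath}}\, c_{x}
      + c^{\dagger}_{x}\, c_{x+\hat{\jmath}} \Bigr)
```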
Moreover, its structure is reminiscent of a 3D Anderson Hamiltonian with internal degrees of freedom corresponding to color and temporal momentum, and with antisymmetric rather than symmetric hopping term. The diagonal noise is provided by the phases of the Polyakov lines. The off-diagonal noise present in the hopping terms comes both from the spatial links and from the Polyakov-line phases. The amount of disorder is controlled by the size of the fluctuations of the Polyakov lines and of the spatial links, and therefore by the temperature of the system (as well as the lattice spacing). Differently from the usual Anderson models, the strength of the disorder is fixed, since the absolute value of course understood in the original four-dimensional staggered operator, and they reflect in the form of the effective Matsubara frequencies given below in Eq.. However, since the Dirac-Anderson Hamiltonian is a three-dimensional Hamiltonian, there are no temporal boundary conditions to be imposed on the fermions. 2 A redefinition modulo 2 corresponds simply to a unitary transformation of the Hamiltonian. of the diagonal terms is bounded by 1, and since the hopping terms are unitary matrices. What is different on the two sides of the deconfinement transition is the distribution of the diagonal terms, and the matrix structure of the hoppings. Indeed, at high temperature the ordering of the Polyakov line leads to the enhancement of diagonal terms corresponding to the trivial phase a ( x) = 0, which form a "sea" of large (i.e., close to 1) unperturbed eigenvalues. Fluctuations away from the trivial phase form localized "islands" of smaller unperturbed eigenvalues. Moreover, the ordering of the Polyakov lines leads to strong correlations among spatial links on different time slices. These correlations tend to reduce the off-diagonal entries of the hopping term in temporal-momentum space in the "sea" region, thus approximately decoupling the different temporal-momentum components of the quark wave function. At low temperatures, on the other hand, correlations across time slices are weaker, and the different temporal-momentum components of the quark wave function mix effectively. A. Simplifications of the Dirac-Anderson Hamiltonian We now discuss a few convenient simplifications of the Dirac-Anderson Hamiltonian, Eq.. First of all, by making a suitable gauge transformation we will disentangle the two sources of noise, i.e., we will make the hopping terms independent of the Polyakov-line phases. Let us define which satisfies W ( x) NT = P ( x), and moreover is easily seen to be unitary and unimodular, thanks to our choice of convention for the phases of the Polyakov lines. Eq. can then be recast as where Since W ( x) ∈ SU(N c ), Eq. is just a gauge transformation, that leads to the "uniform diagonal" gauge:, one has that the temporal links are constant and diagonal. For future reference, we notice that in this gauge the contribution of time-space plaquettes to the Wilson action, which in the temporal diagonal gauge is proportional to becomes The form of the space-space plaquettes is unaffected by the gauge transformation, and so The second simplification is obtained by using the following property of the diagonal entries, where (a + b) NT ≡ a + b mod N T, and the cyclicity of V +j, in particular the property This allows us to organize the matrices D and V j in blocks of size NT 2 NT 2. 
Explicitly, we can write where with k, l = 0,..., NT 2 − 1, and where i = i ⊗ 1 N T 2, i.e., For future utility we also define (T j ) xak, ybl = x+, y ab kl, k, l = 0,..., NT 2 − 1. We now make use of the block structure of the Dirac-Anderson Hamiltonian , and of the fact that it anticommutes with the unitary matrix Q = 4 1, 3 to simplify the study of the eigenvalue problem. The eigenvectors of Q are of the form where are NT 2 -dimensional. One can easily show that Making use of this we find where the matrices are unitary, as a consequence of the unitarity of V j. One can also prove that det From the orthogonality of + and − it follows that In order to determine the spectrum of H, it is convenient to first diagonalize H 2, In this paper we are interested in the localization properties of the eigenmodes. As discussed in Ref., a convenient measure of localization is provided by the participation ratio PR = IPR −1 /V, where V = L 3 is the lattice volume and IPR is the inverse participation ratio, defined as With this definition, the knowledge of 2 is sufficient to determine the IPR: indeed, For our purposes the problem is thus reduced to a V NT 2 N c -dimensional one. This reduction is the analogue, in the present basis, of the well-known reduction of D 2 stag to the sum of two operators, each of which connects only even or odd sites, in the usual (coordinate) basis. B. Dirac-Anderson Hamiltonian for NT = Nc = 2 In the case N T = N c = 2 the problem simplifies considerably. In this case NT 2 = 1, so a single temporalmomentum component has to be considered, and U ± j have the same dimensionality as j. We have where cos 2 = diag(cos x 2 ) is a diagonal matrix in position space, and 1 c is the identity in color space. Moreover, and so where ± are the projectors on the even and the odd sublattices, Inverting these relations we find Notice that changing integration variables to U ± j leaves the link integration measure unchanged. Let us work out in detail the contribution ∆S ts to the action. Since after simple algebra one finds The toy model of Ref. consists simply in replacing the Polyakov-line phases and spatial links in the various terms appearing in Eq. with suitable toymodel variables, and in choosing appropriate dynamics for these variables, intended to mimic that of the corresponding variables in QCD. In particular, the (diagonal) Polyakov lines e ia( x) are replaced by complex spin variables s a x = e i a x, with dynamics governed by a suitable spin model. The only thing changing for the spatial links is the dynamics, which is still determined by a Wilson-like action (in the temporal diagonal gauge), obtained by dropping the contributions from spatial plaquettes, replacing the Polyakov lines with the diagonal matrices diag(s a x ), and omitting the backreaction of the gauge links on the spins, i.e., treating the spins as external fields for the gauge links. The backreaction of fermions in the partition function is also omitted, i.e., the fermion determinant is dropped. The simplifications of the Dirac-Anderson Hamiltonian discussed previously translate directly into simplifications for the toy model. Indeed, such simplifications are obtained by means of a gauge transformation for the link variables and of a change of basis for the Hamiltonian. In both cases, they amount to a unitary transformation of the Hamiltonian, which therefore leaves the spectrum unchanged. Moreover, since these transformations are local in space, they do not alter the localization properties of the eigenmodes. 
The toy model obtained by making the substitutions discussed in the previous paragraph in the Hamiltonian , Eq., is thus unitarily equivalent to the one obtained by making the same substitutions in Eq.. In the case N c = N T = 2, which is the one studied numerically in Ref., one can also make a change of variables for the links, as described in Eq., leading to further simplifications. All in all, the toy model for N c = N T = 2 of Ref. can be equivalently formulated as follows. The toy model Hamiltonian reads where it is understood that all variables are now the toymodel variables, e.g., cos 2 = diag( cos x 2 ). The dynamics of the spin phases x ∈ [−, ) is governed by the spinmodel Hamiltonian as in Ref.. Here is the inverse temperature of the spin model, and h is a coupling which breaks the U symmetry of the first term down to Z 2. The dynamics of the toy-model link variables U ± j ( x) ∈ SU is governed by the action where plays the role of gauge coupling. Expectation values are defined as follows: where we have denoted D = x + − d x and DU = x,j dU + j ( x)dU − j ( x), with dU ± j ( x) the Haar measure. Notice the absence of backreaction of the gauge links on the spins. In practice, configurations are obtained by first sampling the spin configurations { x } according to their Boltzmann weight e −Hnoise , and then, for a given { x }, by sampling the spatial link configurations {U ± j ( x)} according to their Boltzmann weight e −Su. The features that have been stripped from QCD in order to build the toy model are those deemed irrelevant for the qualitative behavior of eigenvalues and eigenvectors of the Dirac operator. What has been kept is the presence of order in the configuration of the variables governing the diagonal noise of the Hamiltonian, and the correlations that such order induces on the spatial links. Due to our drastic simplifications we do not expect any quantitative correspondence between our model and lattice QCD, but just a qualitative one. More precisely, there is no simple way to set the parameters of the toy model to get quantitative agreement with lattice QCD. In particular, intuition from QCD about scales (lattice spacing, localization lengths... ) cannot be used in the toy model, as this has its own dynamics that set these scales. One might also be worried by our choice N T = 2, which is known to be problematic in QCD, and not likely to lead to good quantitative results there. Nevertheless, this is a legitimate (and indeed the simplest) choice one can make to build a toy model which qualitatively resembles QCD with staggered fermions (see Ref. for a more detailed discussion). In particular, one need not be worried about the fact that a very coarse lattice is needed in lattice QCD with N T = 2 to reach the transition temperature: having decoupled the spin dynamics from the rest, whether or not the spin system undergoes a transition is entirely independent of N T. The results of Ref. show that the toy model described above is indeed capable of reproducing the important features of the spectrum and of the eigenmodes, both in the ordered and in the disordered phase. III. NUMERICAL RESULTS In this section we report the results of a numerical study of the toy model defined by Eqs. - in the vicinity of the phase transition in the underlying spin model. Numerical simulations near a critical point are hampered by critical slowing down, but this problem can be overcome using a suitable cluster algorithm. 
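As a reminder of how such cluster updates work in the simplest setting, here is a generic Wolff single-cluster update for a plain 3D Ising model. It is not the algorithm used for the spin model of this paper (that algorithm, which acts on the continuous spin phases, is described in the next subsection); the lattice size, coupling and update count below are purely illustrative.

```python
# Generic Wolff single-cluster update for a plain 3D Ising model (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def wolff_update(spins, beta):
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)            # bond-activation probability
    seed = tuple(rng.integers(0, L, size=3))
    cluster_spin = spins[seed]
    stack = [seed]
    in_cluster = np.zeros_like(spins, dtype=bool)
    in_cluster[seed] = True
    while stack:
        x = stack.pop()
        for axis in range(3):
            for step in (-1, 1):
                y = list(x)
                y[axis] = (y[axis] + step) % L    # periodic boundaries
                y = tuple(y)
                if (not in_cluster[y] and spins[y] == cluster_spin
                        and rng.random() < p_add):
                    in_cluster[y] = True
                    stack.append(y)
    spins[in_cluster] *= -1                       # flip the whole cluster at once
    return in_cluster.sum()

L, beta = 8, 0.25
spins = rng.choice([-1, 1], size=(L, L, L))
for _ in range(100):
    wolff_update(spins, beta)
print("magnetization per site:", spins.mean())
```

Because whole correlated clusters are flipped in a single step, autocorrelation times near the critical point grow much more slowly than with local Metropolis updates, which is the property exploited in the finite-size-scaling study below.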
This is discussed in subsection III A, where we report the results of a detailed finite-size-scaling study of the spin model Eq., aimed at determining the critical coupling and the universality class of the transition. We then proceed to study in our toy model the issues of localization and chiral transition, the latter understood here as a singularity in the spectral density at the origin. The most effective observables in pinning down the coupling(s) at which localization appears and/or where a chiral transition takes place, are respectively the participation ratio of the lowest eigenmode and the correspond- ing level spacing. This is discussed in subsection III B, where we also report the results of our numerical study. A. Finite-size-scaling study of the spin model We begin by studying the spin model on its own near the critical point. The formulation of a cluster algorithm to this end is made easier by noticing that Eq. can be recast as wheren Near the transitionn x tends to be aligned to ±n *, forming large clusters of like-oriented spins, and this leads to long autocorrelation times in the simulation history. To overcome critical slowing down we thus employed a Wolff-type cluster algorithm consisting of the following steps. 1. Given a spin configuration, we pick a site at random and build a cluster, adjoining nearby sites x and x ± with probability 2. Once the cluster is built, we flipn x → −n x, i.e., we send x → sgn ( x ) − x, for all sites x in the cluster. This algorithm is easily shown to respect detailed balance, but it obviously fails at being ergodic. For this reason, we paired it with a standard Metropolis algorithm, which restores ergodicity. We studied the model as a function of keeping the symmetry-breaking term fixed at h = 1.0. Defining the magnetization of the system as we measured the susceptibility and the fourth-order Binder cumulant: Our definition of B is such that B → 1 in the disordered phase and B → 0 in the ordered phase. Near the critical point, c, the expected behavior of B and is We thus fitted the numerical data in the range ∈ and for the available volumes with the functional forms of Eq., using polynomial approximations of f and g of increasing order, and assessing the error by means of constrained fit techniques. Our results for the critical point, the critical exponents and, and the critical Binder cumulant B * are reported in Tab. I. These values give an excellent "collapse" of the data points on a single, volume-independent curve, as shown in Figs. 1 and 2. For comparison, in Tab. I we report also the results of Blte et al. for the 3D Ising model. The tension in the results for B * and is probably due to the fact that we are not including the effect of irrelevant couplings in our analysis. Nevertheless, our results strongly support the fact that the transition observed in our model belongs to the 3D Ising universality class. B. Onset of localization and chiral transition in the toy model Let us discuss first the issue of localization. The simplest way to check for localization is to compute the socalled "participation ratio", PR n, of the nth eigenmode, n, defined as where IPR stands for "inverse participation ratio", and n n = a,k ( n ) * ak ( n ) ak stands for summation over the color and temporal-momentum degrees of freedom. Here V = L 3 is the spatial volume. If the nth mode is localized, then the average of PR n over configurations, which we denote by PR n, is expected to vanish in the large-volume limit. 
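In practice the participation ratio is straightforward to evaluate from the eigenvectors: one sums |ψ|² over the internal (color and temporal-momentum) components at each site, squares, sums over sites, and inverts. The sketch below shows this computation for a generic Hermitian matrix; the random matrix used here is only a stand-in for the actual Dirac-Anderson Hamiltonian, and the sizes are illustrative.

```python
# Participation ratio PR_n = 1 / (V * IPR_n), with the site weight obtained by summing
# |psi|^2 over the internal (color, temporal-momentum) components. Illustrative stand-in.
import numpy as np

rng = np.random.default_rng(2)

def participation_ratios(eigvecs, volume, n_internal):
    # eigvecs: columns are normalized eigenvectors of length volume * n_internal,
    # ordered so that the internal index runs fastest.
    prob = np.abs(eigvecs) ** 2
    prob = prob.reshape(volume, n_internal, -1).sum(axis=1)   # psi^dagger psi per site
    ipr = (prob ** 2).sum(axis=0)                             # sum_x (psi^dagger psi)^2
    return 1.0 / (volume * ipr)

V, n_int = 6 ** 3, 4              # e.g. N_c = 2 with two temporal-momentum components
dim = V * n_int
H = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (H + H.conj().T) / 2          # random Hermitian stand-in for the Hamiltonian
_, vecs = np.linalg.eigh(H)
pr = participation_ratios(vecs, V, n_int)
print("PR of the lowest mode:", pr[0])
```

For a mode spread uniformly over the lattice this construction gives PR close to 1, while a mode supported on a few sites gives PR of order 1/V, which is the volume scaling used as the localization diagnostic in the text.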
On the other hand, for delocalized modes this quantity becomes constant at large volume. We already know from Ref. that localized modes appear first near the origin, so in order to check whether there are localized modes or not, it is sufficient to compute the participation ratio of the first eigenmode and check how it changes with the volume. In Fig. 3 we show the average participation ratio of the first eigenmode, ⟨PR_1⟩, as a function of β for different system sizes, namely L = 24, 32, 40 and, in the ordered phase only, also L = 48. The localization properties of the lowest mode are clear below β_c and well above it. In the disordered phase the lowest mode is delocalized, while it is localized deep in the ordered phase. Starting from large β and going down towards β_c the scaling with V becomes slower, and very close to β_c the participation ratio actually grows up to L = 40. Nevertheless, ⟨PR_1⟩ displays a jump at β_c, and the largest volume always gives the smallest participation ratio. We take these findings as an indication that also right above β_c the lowest eigenmode has the tendency to localize. This tendency is, however, hampered by the fact that the typical localization length is bigger than or comparable to the system sizes under consideration. As a consequence, the would-be (lowest) localized eigenmode is effectively delocalized on the whole lattice, thus having a strong overlap with the extended modes, and therefore mixing easily with them under fluctuations of the spins and of the gauge fields. Moreover, we expect its participation ratio to grow until the system is big enough to accommodate a localized mode, whereas it will start to decrease for even larger sizes. In conclusion, we expect that for sufficiently large systems the lowest eigenmode is localized as soon as β > β_c. The closer one is to β_c, the larger the system has to be for localization to be fully visible. Let us consider next the issue of the chiral transition. In principle (and by definition), this issue should be studied by analyzing the spectral density near the origin. In practice, however, this is very hard in the vicinity of the critical point, and could be done reliably only using high statistics and large volumes, in order to sample properly the near-zero spectral region. Rather than attempting a (difficult) direct measurement, we relied upon the relation between the spectral density at the origin and the level spacing of the lowest modes, which is based on the following argument. In the large-volume limit, the spectral density at the origin is equal to the inverse of the average level spacing in the near-zero spectral region. In the same limit, and for fixed j, one has for the eigenvalues of the Dirac operator (and thus for those of our toy-model Hamiltonian) that λ_j → 0. Eq. then follows. This applies to any fixed j, but of course one expects that for too large j the finite-size effects would completely obscure the limit (however, see below for some numerical results for j = 2, 3). In Fig. 4 we show the resulting spacing-based estimate ρ_0 of the spectral density at the origin as a function of β for the available system sizes. It is clear that below β_c this quantity tends to a finite constant as the volume is increased. For our largest values of β above β_c, on the contrary, there is a clear tendency for ρ_0 to vanish as V → ∞. The region which is most difficult to understand is right above β_c. There ρ_0 apparently tends to a finite constant, different from the one right below β_c. Although it is possible that there are two jumps in ρ_0, one at β_c and another at some higher value of β where ρ_0 jumps to zero, we think that there is a more plausible explanation for this behavior.
In fact, as we have already mentioned above, the relative smallness of the system, which causes the lowest mode to be effectively delocalized, is also responsible for its mixing with nearby modes under fluctuations of spins and link variables. The behavior of the lowest mode is thus expected to be similar in all respects to what is found in the disordered phase, and more generally the low end of the spectrum is expected to look the same as it does in the disordered phase. This includes a nonzero spectral density near the origin. It is likely that for large enough systems ρ_0 will start to show a nontrivial scaling with V, indicating the vanishing of the spectral density at the origin in the thermodynamic limit. In any case, whether ρ(0) vanishes right above β_c or not, it is clear that at β_c it displays a singularity. This indicates that the system has a chiral transition at β_c. An alternative way of determining ρ(0) is based on its relation with the expectation value of the lowest eigenvalue, ⟨λ_1⟩. In the disordered phase, where the spectral density at the origin is nonzero, the probability distribution of the lowest eigenvalue is expected to be described by the appropriate ensemble of chRMT. In the case at hand, this should be the symplectic ensemble for the quenched theory in the trivial topological sector, for which the distribution of the suitably rescaled lowest eigenvalue is known explicitly. From this one obtains the appropriate proportionality factor between ⟨λ_1⟩ and ρ(0). For localized modes one expects instead that the corresponding eigenvalues obey Poisson statistics. In this case, assuming a power-law behavior ρ(λ) = C V λ^α for the spectral density near the origin, one finds that ⟨λ_1⟩ ∼ V^{−1/(1+α)}, and in particular ⟨λ_1⟩ ∼ 1/V for α = 0. Our results for the ⟨λ_1⟩-based estimate of ρ(0) are shown in Fig. 5. Comparing this with Fig. 4 we see that the chRMT result works well below β_c, while it works less and less well as β increases above β_c. In particular, for large β this estimate also tends to vanish as the volume is increased, signaling a vanishing spectral density at the origin. As before, the region right above β_c is the one where things are less clear. A nonvanishing ρ(0) accompanied by localization of the lowest modes right above β_c should yield a ⟨λ_1⟩-based estimate appreciably smaller than the spacing-based one; instead, the two quantities compare well. This is most likely another consequence of the smallness of the system size compared to what would be required to properly investigate the region near the critical point. In fact, the effective delocalization and easy mixing of the lowest mode mentioned above lead to correlations building up among eigenvalues, thus producing a chRMT-like statistical behavior, which should go over to Poisson behavior as the system size increases. For completeness, we conclude this section by showing our numerical results concerning the second and third lowest eigenmodes. In Fig. 6 we show the average participation ratios ⟨PR_2⟩ and ⟨PR_3⟩. The situation is entirely analogous to that encountered when studying the lowest mode, with similar finite-size effects near the transition which slow down the localization of these modes. In Fig. 7 we show the analogous spacing-based quantities built from the higher level spacings, which in the large-volume limit should also approach the spectral density at the origin. In this case the volume scaling is somewhat clearer, with the tendency to go to zero as V is increased showing up at lower values of β. These results clearly do not change the conclusions discussed above.

IV. CONCLUSIONS AND OUTLOOK

There are by now several hints at a close connection between the deconfining and chiral transitions and the localization of the lowest eigenmodes of the Dirac operator.
In this paper we have further studied the toy model of Ref., which mimics the effects of the ordering of Polyakov loops in QCD, i.e., deconfinement, on the spectral density of the low Dirac eigenmodes and the corresponding localization properties. In particular, we have focused on the region near the magnetic transition of the underlying spin model, which corresponds to deconfinement in a gauge theory. We have then studied numerically the localization properties of the lowest eigenmode, and the spectral density at the origin. Our findings are consistent with a chiral transition taking place in correspondence with the magnetic transition, accompanied by the appearance of localized modes. This further supports our expectation that deconfinement plays a major role in the chiral transition and in the localization of the low Dirac modes observed in QCD. There are, however, several aspects that deserve further study. The presence of a chiral transition in our toy model when the spins get ordered is quite clear, since the spectral density at the origin shows a jump there. However, it is not clear yet if such a jump is from the finite value of in the disordered phase to zero in the ordered phase, or to a different finite value. Although the latter possibility seems unlikely, nevertheless the presence of strong finite-size effects makes it difficult to extrapolate to the infinite-volume limit. The origin of such effects lies in the fact that although the lowest modes would like to be localized, their typical localization length is bigger than the system sizes at our disposal. This makes those modes effectively delocalized on our finite lattices, and so easily mixed by fluctuations with other nearby modes. In turn, this is probably responsible for a smaller typical level spacing between the first two eigenvalues, from which the spectral density was extracted. Consequently, we are probably overestimating. Moreover, the lowest eigenmode correlates with the nearby modes, which results in statistical properties closer to those predicted by chRMT than to those expected for localized modes, which should obey Poisson statistics. In order to overcome these problems, and unveil the true nature of the lowest modes, bigger lattices should be employed. This situation should be contrasted to that found with unimproved staggered fermions on coarse lattices. In that case the coincidence of deconfinement, chiral transition and appearance of localized modes is more clean cut. A possible explanation of the difference lies in the different nature of the deconfining transition in that system, which is a first-order transition, and the magnetic transition in our toy model, which is a second-order phase transition of the 3D Ising universality class. In the case at hand, the presence of a huge correlation length near the critical point, and at the same time the fact that the magnetization is very small there, makes it more difficult for the low modes to properly localize. As we said above, this is expected to be the source of the large finite-size effects observed in our determination of. Despite these difficulties, we think that our results confirm those of previous studies in other models, in showing that deconfinement, chiral transition and localization are closely tied to each other. There are several possible extensions of the present study. One obvious possibility is to consider our toy model for gauge group SU, thus making it closer to QCD. 
This involves a different spin model to mimic the behavior of the Polyakov lines than the one employed here (see Ref. for details). A more interesting possibility is to extend the toy model to the case of adjoint fermions: this could help in understanding why for adjoint fermions deconfinement and chiral restoration take place at different temperatures. |
Production of l-Methionine from 3-Methylthiopropionaldehyde and O-Acetylhomoserine by Catalysis of the Yeast O-Acetylhomoserine Sulfhydrylase. l-Methionine is an essential bioactive amino acid with high commercial value for diverse applications, and sustained attention has been paid to its efficient and economical preparation. In this work, a novel method for l-methionine production was established using O-acetylhomoserine (OAH) and 3-methylthiopropionaldehyde (MMP) as substrates, catalyzed by the yeast OAH sulfhydrylase MET17. The OAH sulfhydrylase gene Met17 was cloned from Saccharomyces cerevisiae S288c and overexpressed in Escherichia coli BL21. A 49 kDa MET17 protein was detected in the supernatant of the lysate of the recombinant E. coli strain BL21-Met17 after IPTG induction, and it exhibited biological activity for l-methionine biosynthesis from OAH and MMP. The recombinant MET17 was then purified from E. coli BL21-Met17 and used for in vitro biosynthesis of l-methionine. The maximal conversion rate (86%) of OAH to l-methionine catalyzed by purified MET17 was achieved by optimizing the molar ratio of OAH to MMP. The method proposed in this study provides a possible novel route for the industrial production of l-methionine.
In the Era of Cybersecurity: Cryptographic Hardware and Embedded Systems Cybersecurity is a comparatively new concept that has attracted major interest from both academia and industry. New technologies, such as the Internet of Things, place special demands on cybersecurity systems in order to provide confidentiality, authorization, and integrity for sensitive and personal data. In the cybersecurity era, critical hardware-based technologies are increasingly being applied to support crucial security applications and embedded devices efficiently. Filling the present and future security gaps of technologies such as IoT and 5G is considered a target of major importance. Security tokens, privacy services, and approaches such as smart cards and trusted platform modules are also in focus. System vulnerabilities, as well as security analysis and possible attacks, are of major importance in the cybersecurity era. Cryptographic hardware and embedded systems have proven to be powerful and trustworthy solutions in terms of implementation efficiency (timing, throughput, allocated resources, power, and energy), always balanced against the required security level. Alternative hardware devices and frameworks can be used in order to achieve the best implementation parameters in each case.
The Relationship between Perception of Dengue Hemorrhagic Fever and Prevention Behaviour in Sorogenen 2 Purwomartani Kalasan Sleman Yogyakarta Dengue Hemorrhagic Fever (DHF) is a disease found in most tropical and subtropical regions. The natural hosts of DHF are humans. The agent is a dengue virus belonging to the family Flaviviridae and the genus Flavivirus. Around 2.5 billion people in the world are at risk of contact with dengue. The World Health Organization (WHO) estimates that around 50 million people worldwide are infected with dengue virus each year, with around 400,000 cases of DHF. People tend to regard DHF as a trivial rather than a serious problem, which leads to poor DHF control behaviour. Prevention of DHF consists of preventing bites by Aedes mosquitoes carrying dengue virus, by maintaining environmental hygiene so that the surroundings do not become a breeding medium for the Aedes aegypti mosquito. This study aims to determine the relationship between perceptions of DHF and DHF prevention behaviour in Sorogenen 2 Purwomartani Kalasan Sleman Yogyakarta. We conducted quantitative research with an observational analytic design using a cross-sectional approach. This research was conducted at Sorogenen 2 Purwomartani Kalasan Sleman, Yogyakarta with a sample of 87 respondents chosen by simple random sampling. As many as 71 (81.6%) respondents had a good perception of DHF and 16 (18.4%) respondents had a poor perception of DHF. A total of 46 (52.9%) respondents had good prevention behaviour, and 41 (47.1%) respondents had poor prevention behaviour. There was a relationship between perceptions of DHF and prevention behaviour (PR = 1.84; CI = 1.23-2.73; p = 0.028). People who had poor perceptions of DHF were more likely to have poor DHF prevention behaviour. Poor perception of DHF increases the risk of poor DHF prevention behaviour in Sorogenen 2 Purwomartani Kalasan Sleman Yogyakarta. Keywords: DHF, perception, behavior
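For readers unfamiliar with the prevalence ratio (PR) quoted above, the following sketch shows how a PR and an approximate 95% confidence interval can be computed from a 2x2 table. The cell counts below are hypothetical: they are chosen only to be consistent with the marginal totals and the PR and CI reported in this abstract, and they are not the study's actual cross-tabulation.

```python
import math

# Hypothetical 2x2 table (exposure = poor perception of DHF, outcome = poor prevention behaviour)
a, b = 12, 4    # poor perception: poor / good behaviour
c, d = 29, 42   # good perception: poor / good behaviour

p_exposed = a / (a + b)
p_unexposed = c / (c + d)
pr = p_exposed / p_unexposed

# Approximate 95% CI on the log scale (standard error of log PR)
se_log = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
lo = math.exp(math.log(pr) - 1.96 * se_log)
hi = math.exp(math.log(pr) + 1.96 * se_log)
print(f"PR = {pr:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```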
Effect of two volume responsiveness evaluation methods on fluid resuscitation and prognosis in septic shock patients Background Few studies have reported the effect of different volume responsiveness evaluation methods on volume therapy results and prognosis. This study was carried out to investigate the effect of two volume responsiveness evaluation methods, stroke volume variation (SVV) and stroke volume changes before and after passive leg raising (PLR-ΔSV), on fluid resuscitation and prognosis in septic shock patients. Methods Septic shock patients admitted to the Department of Critical Care Medicine of Zhejiang Hospital, China, from March 2011 to March 2013, who were under controlled ventilation and without arrhythmia, were studied. Patients were randomly assigned to the SVV group or the PLR-ΔSV group. The SVV group used Pulse Indicator Continuous Cardiac Output monitoring of SVV, and responsiveness was defined as SVV ≥ 12%. The PLR-ΔSV group used ΔSV before and after PLR as the indicator, and responsiveness was defined as ΔSV ≥ 15%. Six hours after fluid resuscitation, changes in tissue perfusion indicators (lactate, lactate clearance rate, central venous oxygen saturation (ScvO2), base excess (BE)), organ function indicators (white blood cell count, neutrophil percentage, platelet count, total protein, albumin, alanine aminotransferase, total and direct bilirubin, blood urea nitrogen, serum creatinine, serum creatine kinase, oxygenation index), fluid balance (6- and 24-hour fluid input) and the use of cardiotonic drugs (dobutamine), as well as prognostic indicators (the time to and rate of achieving early goal-directed therapy (EGDT) standards, duration of mechanical ventilation and intensive care unit stay, and 28-day mortality), were observed. Results Six hours after fluid resuscitation, there were no significant differences in temperature, heart rate, blood pressure, SpO2, organ function indicators, or tissue perfusion indicators between the two groups (P > 0.05). The 6- and 24-hour fluid input was slightly less in the SVV group than in the PLR-ΔSV group, but the difference was not statistically significant (P > 0.05). The SVV group used significantly more dobutamine than the PLR-ΔSV group (33.3% vs. 10.7%, P = 0.039). There were no significant differences in the time ((4.8±1.4) h vs. (4.3±1.3) h, P = 0.142) and rate of achieving EGDT standards (90.0% vs. 92.9%, P = 0.698), or in the length of mechanical ventilation and ICU stay. The 28-day mortality in the SVV group (16.7% (5/30)) was slightly higher than in the PLR-ΔSV group (14.3% (4/28)), but the difference was not statistically significant (P = 0.788). Conclusions In septic shock patients under controlled ventilation and without arrhythmia, using the SVV or PLR-ΔSV method to evaluate volume responsiveness has a similar effect on volume therapy results and prognosis. The evaluation and dynamic monitoring of volume responsiveness is more important for fluid resuscitation than the choice of evaluation method itself. Choosing different methods to evaluate volume responsiveness has no significant influence on the effect of volume therapy and prognosis.
The Ruamano Project: Raising Expectations, Realising Community Aspirations and Recognising Gifted Potential in Māori Boys When gifted Māori students feel they belong and find their realities reflected in the curriculum, conversations and interactions of schooling, they are more likely to engage in programmes of learning and experience greater school success. This article reports on a teacher-led project called the Ruamano Project, which investigated whether Maker and Zimmerman's Real Engagement in Active Problem Solving (REAPS) model could be adapted successfully to identify talents and benefit the student achievement and engagement of Māori boys in two rural Northland, New Zealand secondary school contexts. The project aimed to implement Treaty of Waitangi-responsive and place-based science practices by improving home-school-community relationships through the authentic engagement of whānau and iwi in the schools' planning, implementation and evaluation of a REAPS unit. As a result of this innovation, teachers' perceptions of Māori boys shifted, their teaching practices changed, more junior secondary Māori boys were identified as gifted by way of improved academic performance, and iwi and community members were engaged in co-designing the inquiry projects. Our research indicated that the local adaptation of the REAPS model was effective in engaging and promoting the success of gifted and talented Māori boys.
Logistics innovations in the road transport sector in Kenya This study examines logistics innovations in the road transport sector in Kenya. The researcher sought to establish which logistics innovations have been adopted and the benefits that accrue from implementing them. In the 21st century, businesses compete intensely to impress and attract customers, and the growth of Kenya's economy hinges to a large extent on the road transport sector operating more efficiently and effectively in moving freight and goods. This study focused on the benefits achieved by road transport companies in Kenya that adopt logistics innovation technologies. The benefits included in the study are operational efficiency, cost reduction, improved customer service, and competitive advantage. The logistics innovations are classified into data acquisition, information, and transportation technologies. The study employed questionnaires as the primary data collection instrument. The questionnaire consisted of both open- and close-ended questions aimed at obtaining information on the benefits of adopting logistics innovations in the road transport sector in Kenya. Content analysis and descriptive analysis were employed to analyse the collected data; content analysis was used to analyse the respondents' views. Tables and other graphical presentations have been used, as appropriate, to present the collected data for ease of understanding and analysis. The study found that logistics innovations, when implemented by road transport firms, indeed increased operational efficiency, reduced operating costs, improved customer satisfaction, and yielded competitive advantage. The government should encourage the adoption of innovations in the administration of regulatory mechanisms, and can also provide financial incentives, pilot projects, and tax breaks to stimulate logistics innovations in the road transport sector. Transporters, too, can take advantage of logistics innovations to enhance their service provision.
THE EFFECT OF HIPPOCAMPAL SPARING ON VERBAL MEMORY OUTCOME FOLLOWING DOMINANT TEMPORAL LOBECTOMY REVISED ABSTRACT RATIONALE: In recent years, risk of material specific verbal memory decline following dominant temporal lobectomy (TL) has led to selective cases of surgical sparing of the hippocampus in our program. This study examines the memory outcome of these patients. METHODS: The subjects were 38 dominant TL patients. Twenty-six underwent TL with resection of the hippocampus, while the hippocampus was spared in 12 cases based on a combination of risk factors including absence of MTS, intact dominant hemisphere memory performance in the intracarotid amobarbital procedure (IAP), later age of seizure onset and intact baseline verbal memory test performance. The two groups were compared on measures of the verbal Selective Reminding Test (vSRT) using pre-to postoperative difference scores with a univariate ANOVA which adjusted for baseline performance on each variable. The two groups did not differ in age, IQ or gender. RESULTS: Preoperative performance on the vSRT was equivalent for the two groups with mean scores ranging from intact to mildly impaired. Postoperatively, patients who underwent hippocampal resection demonstrated significantly greater decline in consistent long term retrieval (p<.05) and delayed recall (p<.05) vSRT measures, compared to patients whose surgeries spared the hippocampus. CONCLUSIONS: These data suggest that sparing of the hippocampus in dominant TL results in preservation of verbal memory. This more conservative surgical procedure in cases at high risk for postoperative memory decline is neuropsychologically justified. These data will be discussed in relation to surgical selection criteria and postoperative seizure outcome. |
Re-defining the question Writing for an LGBT website homiki.pl, the philosopher Tomek Kitlinski and the art historian and curator Pawel Leszkowicz scrutinize the past twenty years of LGBT activism in Poland, reversing the usual narrative of progress. They revisit some political campaigns and artistic projects that focused on gender and on queer sexuality, such as the 2003 poster campaign "Let them see us," which featured photos by Karolina Bregula of same-sex couples holding hands, Tomek and Pawel among them. Writing today, they dismiss the enhanced visibility of LGBT issues because it has not led to institutional change. Mirroring in this respect the failure of the women's movement in Poland to reverse the 1993 ban on abortion, the LGBT movement has failed to implement registered unions or other similar measures. Hence, the representations of gender and sexual difference by Polish activists and artists must be read as dramatizing sexist and heteronormative oppression, as well as the movement's failure, rather than as a mark of progress. |
The Effect of Loading Configuration and Footprint Geometry on Flexible Pavement Response Based on Linear Elastic Theory ABSTRACT The analysis of flexible pavements using circular footprint geometry with uniform contact pressure has been used for decades due to the lack of a powerful computational tool that is both reliable and simple to use by pavement engineers. In this paper, the effect of different footprint geometries with different loading configurations including nonuniform, uniform, and average pressures on the response of flexible pavement is investigated by varying the thickness of the AC layer. Our results indicate that the use of circular footprint areas with uniform contact pressure equal to the tire inflation pressure can produce erroneous results that tend to overestimate the predicted fatigue life and rutting life of flexible pavements. Therefore, the pavement response analysis should be carried out using the measured pressure and footprint area when possible. In addition, the results showed that elastic analysis of flexible pavement can be carried out reliably and accurately using advanced computational tools such as the multilayered elastic pavement program MultiSmart3D. |
A new multidimensional measure of personal resilience and its use: Chinese nurse resilience, organizational socialization and career success. This study refined the concept of resilience and developed four valid and reliable subscales to measure resilience, namely, Determination, Endurance, Adaptability and Recuperability. The study also assessed their hypothesized relationships with six antecedent variables (worry, physiological needs satisfaction, organizational socialization, conscientiousness, future orientation and Chinese values) and with one outcome variable (nurses' career success). The four new 10-item subscale measures of personal resilience were constructed based on their operational definitions and tested for their validity and reliability. All items were included in a questionnaire completed by 244 full-time nurses at two hospitals in China. All four measures demonstrated concurrent validity and had high reliabilities (from 0.74 to 0.78). The hypothesized correlations with the personality and organizational variables were statistically significant and in the predicted directions. Regression analyses confirmed these relationships, which explained 25-32% of the variance for the four resilience facets and 27% of the variance for the nurses' career success. The results provided strong evidence that organizational socialization facilitates resilience, that resilience engenders career success and that identifying the four resilience facets permits a more complete understanding of personal resilience, which could benefit nurses, help nurse administrators with their work and also help in treating patients. |
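As an illustration of the internal-consistency reliabilities quoted above (0.74 to 0.78), the sketch below computes Cronbach's alpha for a hypothetical 10-item subscale answered by 244 respondents; the simulated responses are placeholders and not the study data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items).
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Illustrative data: 244 respondents, 10 items sharing one latent factor plus noise
rng = np.random.default_rng(1)
latent = rng.normal(size=(244, 1))
items = latent + rng.normal(scale=1.8, size=(244, 10))
print(f"Cronbach's alpha ~ {cronbach_alpha(items):.2f}")   # lands near the reported range
```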
No effect of water deprivation for 48 hours on the pharmacokinetics of intravenous tacrolimus in rats. Because the physiological changes that occur in patients with water deprivation could alter the pharmacokinetics of drugs, the pharmacokinetics of tacrolimus were investigated after 1-min intravenous administration of the drug (1 mg/kg) to control rats and to rats deprived of water for 48 h. In rats with dehydration, kidney function appeared to be slightly impaired: kidney weight (0.800 versus 0.676% of body weight) increased significantly, and on kidney microscopy the renal tissue showed only mild tubular dilatation and flattening of the tubular epithelial cells. Hepatic function, however, did not appear to be impaired in rats with dehydration. After intravenous administration of tacrolimus, the pharmacokinetic parameters were not significantly different between the two groups of rats. These results were expected, since tacrolimus is almost completely metabolized in rats (so mildly impaired kidney function would not considerably affect its pharmacokinetics) and hepatic function was not impaired in the water-deprived rats.
Stability and convergence analysis for different harmonic control algorithm implementations In many engineering systems there is a common requirement to isolate the supporting foundation from low frequency periodic machinery vibration sources. In such cases the vibration is mainly transmitted at the fundamental excitation frequency and its multiple harmonics. It is well known that passive approaches have poor performance at low frequencies and for this reason a number of active control technologies have been developed. For discrete frequencies disturbance rejection Harmonic Control (HC) techniques provide excellent performance. In the general case of variable speed engines or motors, the disturbance frequency changes with time, following the rotational speed of the engine or motor. For such applications, an important requirement for the control system is to converge to the optimal solution as rapidly as possible for all variations without altering the system's stability. For a variety of applications this may be difficult to achieve, especially when the disturbance frequency is close to a resonance peak and a small value of convergence gain is usually preferred to ensure closed-loop stability. This can lead to poor vibration isolation performance and long convergence times. In this paper, the performance of two recently developed HC algorithms are compared (in terms of both closed-loop stability and speed of convergence) in a vibration control application and for the case when the disturbance frequency is close to a resonant frequency. In earlier work it has been shown that both frequency domain HC algorithms can be represented by Linear Time Invariant (LTI) feedback compensators each designed to operate at the disturbance frequency. As a result, the convergence and stability analysis can be performed using the LTI representations with any suitable method from the LTI framework. For the example mentioned above, the speed of convergence provided by each algorithm is compared by determining the locations of the dominant closed-loop poles and stability analysis is performed using the open-loop frequency responses and the Nyquist criterion. The theoretical findings are validated through simulations and experimental analysis. |
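As a rough sketch of the LTI-based analysis described above, the snippet below forms the closed-loop characteristic polynomial from a hypothetical open-loop transfer function (standing in for the plant combined with the equivalent HC compensator at the disturbance frequency), checks stability from the pole locations, and reads off the dominant pole that governs the convergence speed. The numerical coefficients are invented purely for illustration.

```python
import numpy as np

# Hypothetical open-loop L(s) = num(s)/den(s) in a unity-feedback configuration
num = np.array([0.0, 0.0, 50.0])    # constant gain 50
den = np.array([1.0, 2.0, 100.0])   # s^2 + 2 s + 100 (lightly damped resonance)

# Closed-loop characteristic polynomial: den(s) + num(s) = 0
char_poly = den + num
poles = np.roots(char_poly)

stable = bool(np.all(poles.real < 0))
dominant = poles[np.argmax(poles.real)]   # pole closest to the imaginary axis
print("closed-loop poles:", poles)
print("stable:", stable, "| dominant pole real part:", dominant.real)
# The dominant pole's real part sets the slowest decay rate (~ exp(Re(p) * t)),
# i.e. how quickly the harmonic controller converges to the optimal solution.
```

A Nyquist check of the open-loop frequency response would give the same stability verdict; the pole-location view is used here because it also exposes the convergence speed directly.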
Nonlinear functional analysis This manuscript provides a brief introduction to nonlinear functional analysis. We start out with calculus in Banach spaces, review differentiation and integration, derive the implicit function theorem (using the uniform contraction principle) and apply the result to prove existence and uniqueness of solutions for ordinary differential equations in Banach spaces. Next we introduce the mapping degree in both finite (Brouwer degree) and infinite dimensional (Leray-Schauder degree) Banach spaces. Several applications to game theory, integral equations, and ordinary differential equations are discussed. As an application we consider partial differential equations and prove existence and uniqueness for solutions of the stationary Navier-Stokes equation. Finally, we give a brief discussion of monotone operators. |
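To make the contraction-principle machinery mentioned above concrete, here is a minimal numerical sketch of the Banach fixed-point iteration that underlies the uniform contraction principle (and, through it, the implicit function theorem and Picard-Lindelöf); the example map cos(x) on [0, 1] is chosen purely for illustration.

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Banach fixed-point iteration x_{n+1} = f(x_n).
    Converges geometrically whenever f is a contraction (|f'| <= q < 1)."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter")

# cos maps [0, 1] into itself and |cos'(x)| = |sin x| <= sin(1) < 1 there,
# so the iteration converges to the unique fixed point of cos.
x_star = fixed_point(math.cos, 0.5)
print(x_star, math.cos(x_star) - x_star)   # fixed point, residual ~ 0
```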
Charge storage at Pt/YSZ interface as the origin of 'permanent' electrochemical promotion of catalysis Tuning of the catalytic reaction rate by electric polarization of the interface between an electron conducting catalyst and an ion conducting support, called electrochemical promotion of catalysis (EPOC), is most often fully reversible. Its state-of-the-art model regards the gas-exposed catalyst surface as the unique location of charge storage via backspillover of electrochemically generated species, responsible for promotion. After long-lasting anodic polarization, a permanent effect (P-EPOC) was observed in ethylene combustion with oxygen over Pt/YSZ catalyst. Double step chronoamperometric and linear sweep voltammetric analysis revealed delayed oxygen storage, located presumably at the vicinity of the catalyst/electrolyte interface. It is proposed that oxygen stored at this location, hence hidden for the reactants and then released during relaxation, was at the origin of the observed P-EPOC. The effect of this hidden promoter on the catalytic reaction rate was found to be highly non Faradaic. Introduction Controlled tuning of catalytic activity has been a longsought goal in heterogeneous catalysis. In their pioneering work in the early 1980s Vayenas et al. reported the control of catalytic reactions via electrochemical polarization. They found that the catalytic activity of thin porous metal catalyst films could be tuned in a controlled manner by polarization of the catalyst/solid electrolyte interface in an electrochemical cell of the type: catalyst solid electrolyte catalytically inert metal where the catalyst film is the working electrode and the catalytically inert metal (typically gold) is the counter electrode. Take the example of ethylene combustion as catalytic reaction: C 2 H 4 (g) + 3O 2 (g) → 2CO 2 (g) + 2H 2 O(g) occurring at an open-circuit reaction rate of r o (mol O s −1 ). Using an oxide ion (O 2− ) conducting material (e.g. yttriastabilized-zirconia, YSZ) as solid electrolyte, application of an anodic current between the counter and the working electrode (now the solid electrolyte is the source of O 2− ions and the working electrode is the collector of electrons) may result in the electrochemical oxidation of ethylene at the working electrode: Supposing a current efficiency of 100%, the maximum possible electrochemical reaction rate, r el (mol O s −1 ), is calculated with Faraday's law: where I is the electric current, z is the charge number of the transported ions (for O 2−, z = 2), and F is the Faraday constant. If open-circuit and Faradaic reactions would be simply added, Eq. would give the maximum expected increase in reaction rate due to polarization. Fig. 1 shows, in a schematic way, the evolution of the experimentally observed reaction rate, r, in a stepwise anodic polarization cycle, i.e. before, during, and after galvanostatic polarization of the catalyst/YSZ interface. It is seen, that the experimental rate increase, rr o, is by orders of magnitude higher T = 380C, p O2 = 17 kPa, p C2H 4 = 140 Pa. than the maximum possible rate increase (1.6 nmol O s −1 ) calculated from Faraday's law. Obviously, polarization of the catalyst/electrolyte interface causes a dramatic alteration in catalytic activity rather than simply contribute to the reaction rate by adding the electrochemical (Faradaic) reaction. 
The highly non-Faradaic character of electrochemical promotion is the origin of its currently used synonym: non-Faradaic electrochemical modification of catalytic activity (NEMCA effect). EPOC is usually quantified by two parameters, ρ and Λ. The rate enhancement factor, ρ, is defined as the ratio of the steady-state promoted catalytic rate, r, to the initial open-circuit reaction rate, r_o, and is a measure of the level of promotion: ρ = r / r_o. The Faradaic efficiency, Λ, is defined as the ratio of the observed rate increase to the maximum possible electrochemical rate: Λ = (r − r_o) / (I / zF), so |Λ| > 1 is the criterion of non-Faradaic behavior. For the example seen in Fig. 1, the approximate values are ρ ≈ 7 and Λ ≈ 85. Since its discovery, the non-Faradaic character of EPOC promotion has been demonstrated for more than 70 catalytic reactions, and it is now well established that EPOC is not limited to any particular class of catalysts, electrolytes or reactions. The current understanding of the physicochemical origin of the phenomenon, based on numerous spectroscopic and electrochemical techniques and reviewed thoroughly elsewhere, attributes the effect of electrochemical promotion to transport of ionic species through the solid electrolyte support, their discharge at the triple phase boundary (tpb) and subsequent migration of the discharged species to the catalytically active catalyst/gas interface. The discharged species act as promoters but are also consumed by the catalytic reaction and/or desorption. The resulting steady-state population of promoters at the gas-exposed catalyst surface causes a potential-controlled change in the work function of the latter. According to this concept, EPOC is reversible and the catalyst restores its initial activity, typically within a few tens of minutes, after potential or current interruption. In our laboratory, several cases of irreversible EPOC have been reported. Such an effect, termed 'permanent' electrochemical promotion (P-EPOC), was first observed with an IrO2 catalyst for ethylene combustion, and later also with RuO2 and Rh catalysts, all interfaced with YSZ. Recently, it was found that the reversibility of EPOC may depend strongly on the duration of polarization. In fact, as illustrated in Fig. 1, in the same catalytic system a short (15 min) polarization causes reversible promotion, see curve (a), while after prolonged (50 min) polarization the open-circuit catalytic reaction rate after current interruption remains significantly higher than its initial value before current application, see curve (b). Such behavior cannot be interpreted with the current model of EPOC. In this paper, recent studies on the phenomenon of P-EPOC with a Pt/YSZ catalyst are reviewed, combining reaction rate measurements in ethylene combustion with electrochemical analysis, with the aim of gaining deeper insight into the origin of P-EPOC.

Experimental

In the single-pellet type three-electrode Pt/YSZ electrochemical cell used for catalytic measurements, the working electrode was a platinum film deposited onto a 1 mm thick YSZ (8 mol%) pellet by non-reactive magnetron sputtering of platinum at ambient temperature in argon atmosphere, followed by heat treatment at 700°C in air, while the counter and reference electrodes were pasted gold films (Gwent C70219R4) fired at 550°C in air. The size of the electrodes was 7 × 5 mm, giving a geometric surface of 0.35 cm². More details on cell preparation are given elsewhere.
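A small helper, sketched directly from the definitions of the rate enhancement ratio and the Faradaic efficiency given above, computes both quantities from a measured open-circuit rate, promoted rate and applied current; the numerical values in the usage line are placeholders of a plausible order of magnitude, not measurements from this work.

```python
F = 96485.0  # C/mol, Faraday constant

def epoc_parameters(r_promoted, r_open_circuit, current, z=2):
    """Rate enhancement ratio rho = r/r_o and Faradaic efficiency
    Lambda = (r - r_o) / (I / (z F)), with rates in mol O/s and I in A.
    z = 2 for the O2- ions transported through YSZ."""
    r_el_max = current / (z * F)          # Faraday's law: maximum electrochemical rate
    rho = r_promoted / r_open_circuit
    Lam = (r_promoted - r_open_circuit) / r_el_max
    return rho, Lam

# Placeholder numbers, chosen only to illustrate a strongly non-Faradaic case:
rho, Lam = epoc_parameters(r_promoted=1.05e-6, r_open_circuit=1.5e-7, current=4e-4)
print(f"rho ~ {rho:.1f}, Lambda ~ {Lam:.0f}")   # |Lambda| >> 1 signals non-Faradaic promotion
```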
In the cell used for electrochemical characterization, all three electrodes were platinum films deposited onto a 1.3 mm thick YSZ (8 mol%) pellet by screen-printing of a paste, composed of 65% w of 1 m particle size platinum powder (Fluka), 11% w of 1 m particle size YSZ (8 mol% Y 2 O 3 in ZrO 2, Tosoh) and 24% w of a polyvinyl pyrrolidone solution (2% in isopropanol, Fluka), followed by sintering at 1400C in air to give a film thickness of 15 m. The resulting deposits of 0.08 cm 2 geometric surface area each are composed of 62% vol of platinum and 38% vol of YSZ and they are highly porous. No morphological change has been observed due to prolonged use and/or polarization during working months. The reactor for both catalytic and electrochemical measurements was of single-chamber type where all electrodes were exposed to the same atmosphere. It consisted of a quartz tube of 90 ml volume closed with a stainless steel cap, and it worked un-der atmospheric pressure. The electrochemical cells were suspended in the reactor with three gold wires serving as electrical contacts to the electrodes. The reactor was put into a furnace (XVA271, Horst) equipped with a heat control system (HT30, Horst), and the temperature was measured with a K-type (NiCr-Ni) thermocouple placed in proximity of the surface of the working electrode. Constant gas flow of 200 ml min −1 STP was fed by mass flow controllers (E-5514-FA, Bronkhorst). Catalytic measurements were conducted in a slightly oxidizing reactive gas mixture containing 0.25 kPa C 2 H 4 and 1 kPa O 2, while electrochemical characterization was made at 20 kPa O 2 partial pressure. The gas sources were Carbagas certified standards of O 2 (99.95%) and C 2 H 4 (99.95%) supplied, respectively, as 20% and 1% mixture in He (99.996%). Balance was helium of 99.996% purity. Electrochemical promotion experiments were realized in potentiostatic mode of operation using a potentiostat (EG&G PAR, Model 362), and CO 2 production of the catalytic oxidation of ethylene was monitored on-line using an IR analyzer (Horiba PIR 2000). Chronoamperometric and voltammetric measurements were made with a scanning potentiostat (Autolab, Model PGSTAT30, EcoChemie). Results Used as catalyst and working electrode at the same time, the catalytic activity of Pt deposited onto YSZ was measured under both open-circuit and anodically promoted conditions, the model catalytic reaction being the combustion of ethylene with oxygen. Electrochemical characterization of the Pt/YSZ electrode was made using chronoamperometry and linear sweep voltammetry - in O 2 -containing atmosphere. The reaction rate of the catalytic combustion of ethylene with oxygen over the Pt/YSZ catalyst was measured at two different temperatures in an ethylene/oxygen mixture of slightly oxidizing composition. Potentiostatic anodic polarization at E W R = 400mV was applied for varying holding times, t h, and polarization and relaxation transients of the catalytic reaction rate were recorded, shown in Fig. 2. Initially the catalyst is under open-circuit conditions and the non-promoted rate is about r o = 150 nmol O s −1 at 525C, while it equals only about 15 nmol O s −1 at 600C due to known desactivation of Pt catalyst above 550C. Once a positive catalyst potential is applied, the rate immediately starts increasing. The time needed for the catalytic rate to reach its new electropromoted steady-state value is approximately 1 hour. 
At 525 and 600C, respectively, the rate enhancement ratio is = 3.5 and 24, and the Faradaic efficiency is = 160 and 40, showing strong non-Faradaic effect even at 600C which is quite exceptional at such high temperature. It is seen, that once the steady-state rate is reached, it remains constant whatever is the duration of prolonged anodic polarization. The relaxation transients, however, are strongly dependent of the polarization time, t h. After short polarization (t h = 1 h) the open-circuit catalytic activity drops abruptly and reaches quickly its initial values like in any typical (reversible) EPOC experiment. In contrast, by increasing the polarization time, longer and longer relaxation is required to attain the initial catalytic activity. At the higher temperature the relaxation, just like any kinetics, is faster but also passing through a maximum. As seen, the effect of polarization time, t h, is only observed once the circuit is opened, but it has apparently no effect on the electropromoted steady-state reaction rate during the polarization step. Obviously, long-lasting polarization must alter a part of the system which is not exposed to the catalytic reaction. After current interruption this hidden influence becomes visible, so the species implicated must reach somehow the active catalytic surface of the system. Double-step chronoamperometric measurements were made at 450C using a potential program composed of a pretreatment step and two measurement steps (see the inset in Fig. 3). The cathodic pretreatment step (A), polarization at a constant potential of E pr e = -400 mV for t pr e = 60 s, aims to reduce any residual oxidized species and so providing a welldefined initial state. In the first measurement step (B) a constant anodic holding potential, E h, was applied for varying holding times, t h. Then, in the second measuring step (C), the cell was discharged by setting the potential to a constant cathodic potential, E dis = -300 mV, held for t dis = 150 s. This step aims to reduce the species formed during the preceding anodic potential holding step. During the two measuring steps, the current passing through the cell was recorded as a function of time. The observed chronoamperometric transients of the anodic polarization step (B) were composed of two parts. In the first transient part, the current decreased with a time constant of a few tens of seconds to reach a non-zero steady-state value characteristic to the second part. The time constant and the current, both transient and steady-state, depended on the anodic holding potential. The non-zero steady-state current indicates clearly the existence of an anodic Faradaic process under positive polarization, which persists at infinite time. This process is attributed mainly to the oxygen evolution reaction (Eq. 6), which is the discharge of O 2− ion from the solid electrolyte: However, one can not exclude that the observed finite steadystate current is contributed also from other Faradaic reactions related to charge storage, as it will be shown below. On the other hand, there is a clear evidence of a Faradaic contribution also to the transient current. 
In fact, the experimentally found time constant is by several orders of magnitude higher than that of double layer charging of a blocking Pt/YSZ interface in the given cell, estimated with the ohmic resistance of the electrolyte (R el = 980 measured by impedance spectroscopy), with the capacitance of the double layer (C dl = 50 F cm −2 ) and with the geometric surface area of the electrode (S = 0.08 cm 2 ) to give = R el C dl S = 4 ms. Even with a roughness factor (meaning here the ratio of the Pt/YSZ interface area to the geometric area) estimated to be as high as 10, the time constant of double layer charging would remain by at least two orders of magnitude below the experimentally obtained value. This indicates a contribution from another phenomenon, which is not only much slower than the electrostatic double layer charging but it is also at the origin of the pseudocapacity of the electrode which is by orders of magnitude higher than the electrostatic double layer capacity. Fig. 3 shows the current transients during cathodic discharge, step (C), subsequent to anodic polarization at E h = +100 mV for varying holding time, t h. For short holding time (t h = 5 min), a relatively fast decay of the cathodic current is observed, approaching a steady-state value (-30 A) after about 10 s. The other curves also show a rapid decay in the initial stage of the cathodic current (t < 10 s), but as t h increases, more and more time is needed to reach the steady-state current. In particular, after a long polarization time of t h = 80 min, 80 seconds are necessary to reach the steady-state. This suggests that, during anodic polarization, charge is stored via a Faradaic process, and this is not limited to the period of anodic current decay but is extended to the region of steady-state, in parallel to the main reaction of O 2 evolution. The charges involved in the storage process were determined by integration of the chronoamperometric transients using the final steady-state currents as baseline to give values of Q ch and Q dis in the anodic (charging) and the cathodic (discharging) step, respectively. The value of Q dis is regarded as the charge effectively stored during the anodic step. Fig. 4 shows the values of Q ch and Q dis obtained by integration of the It curves in Fig. 3 as a function of the anodic holding time in the range of 5 to 80 minutes. Contrary to Q ch, which is independent of t h, Q dis increases with increasing t h. At short holding time (t h = 5 min) the two charges are comparable, meaning that the charge apparently stored during the anodic charging step represents the totality of the stored charge measured by its release during the discharging step. However, as t h increases, Q dis exceeds Q ch, meaning that the electrode has stored more charge than expected from the anodic It curve. The difference between these two charges, a sort of excess charge, Q ex (= Q dis -Q ch ) was found to be a linear function of t 1/2 h (see the inset of Fig. 4), suggesting a diffusion-controlled process. The linear extrapolation, however, does not pass through the origin of the plot, having an intersection of t 1/2 h = 10 s 1/2 at Q ex = 0. This means that there is a delay of roughly 100 s after the onset of the anodic polarization before excess charge is stored with a t 1/2 h kinetic rule. Then this storage process goes on during the totality of the anodic polarization step, including the period of steady-state current. 
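Returning briefly to the double-layer charging estimate quoted earlier in this section, the short check below reproduces the roughly 4 ms time constant from the numbers given in the text (R_el = 980 Ω, C_dl = 50 µF cm-2, S = 0.08 cm2) and shows that even a roughness factor of 10 leaves it far below the observed tens of seconds; it is a plain arithmetic sketch, nothing more.

```python
R_el = 980.0   # ohm, electrolyte resistance from impedance spectroscopy
C_dl = 50e-6   # F/cm^2, double-layer capacitance
S = 0.08       # cm^2, geometric electrode area

tau = R_el * C_dl * S
print(f"tau = {tau * 1e3:.1f} ms")   # ~3.9 ms, i.e. the ~4 ms quoted in the text

roughness = 10.0   # upper estimate of Pt/YSZ interface area / geometric area
print(f"tau (rough interface) = {tau * roughness * 1e3:.0f} ms")   # still << tens of seconds
```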
Hence, a certain fraction of the apparent steady-state current is in fact stored as excess charge. One can define a charge storage yield as the ratio between Q ex and the total charge passing as steadystate current. In the present case, the obtained yield lies between 1.5 and 6% and tends to decrease with increasing t h. These observations are in good agreement with linear sweep voltammetric measurements, made first with moderate and later with very long anodic potential holding. The measurements consisted of a cathodic pretreatment step, identical to that of the above chronoamperometric experiments, and an anodic potential holding step at E h = +100 mV for different holding times between 1 and 2000 minutes, followed by a linear potential sweep down to a cathodic potential of -800 mV with a scan rate of 10 mV s −1 (first cathodic scan). Fig. 5 shows a typical voltammetric response of the Pt/YSZ electrode. At a very short holding time (1 min), two distinct reduction peaks appear with comparable sizes at about -150 and -250 mV, respectively. By increasing t h, the second peak increases more rapidly than the first peak. By further increasing the holding time, a third peak Per. Pol. Chem. Eng. -not observed at very low t h -appears progressively. One can discern the third peak from about t h = 10 min. At higher holding times (t h > 80 min), the first and second peaks have stopped growing, but the third peak shows no sign of saturation. This appears clearly in Fig. 6 which reports the charge for each peak as a function of the holding time. The charges were obtained by peak integration and given in terms of equivalent amount of oxygen atoms (atom O cm −2 ) for the three peaks (N 1, N 2 and N 3, respectively), calculated with the exchange of two electrons and referred to unit geometrical surface area of the deposit. The amount involved in the first peak increases from the beginning and reaches saturation in about 10 minutes. Similarly, the second peak starts to grow from the beginning suggesting two parallel processes. Also the area of the second peak tends to saturation, at a value of about seven times higher than that of the first peak, in about 80 minutes of holding time. As seen in the inset of Fig. 6, the third peak starts growing when the first peak has reached its saturation. Therefore this process seems to be consecutive to that of the first peak. The third peak then grows continuously and, during holding times as long as 2000 min, there is no clear sign of any tendency to peak area saturation. Similarly to the accumulation of the excess charge in the double-step chronoamperometric experiments (see the inset of Fig. 4), also the increase in the area of the third voltammetric peak (N 3 ) follows t 1/2 h kinetics, suggesting again a diffusion mechanism. Supposing that N 1 corresponds to the formation of an oxide monolayer at the Pt/YSZ interface and N 3 to the formation of multilayer, one can estimate the diffusion length L t at a given time from is the amount of oxygen atoms in the multilayer at time t h, N 1 is the amount of oxygen atoms in the monolayer at the Pt/YSZ interface (6.610 14 atom) and d is the average thickness of an oxide layer (2.710 −10 m, estimated with the Pt-Pt atomic distance ). Knowing the diffusion length as a function of time, a diffusion coefficient of D = 310 −22 m 2 s −1 is calculated. 
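The diffusion-coefficient estimate just quoted can be sketched as below, assuming the equivalent oxide thickness L_t = (N3(t_h)/N1)·d and a simple L_t ≈ sqrt(D·t_h) law (the text does not spell out the exact relation it uses); only N1, d and the resulting order of magnitude come from the text, while the N3 value and holding time in the example are placeholders.

```python
N1 = 6.6e14    # atoms O in the monolayer at the Pt/YSZ interface (from the text)
d = 2.7e-10    # m, average thickness of one oxide layer (Pt-Pt atomic distance)

def diffusion_coefficient(N3, t_h):
    """Estimate D from the multilayer growth N3 after holding time t_h (s),
    using L_t = (N3 / N1) * d and the assumption L_t ~ sqrt(D * t_h)."""
    L_t = (N3 / N1) * d
    return L_t ** 2 / t_h

# Placeholder: suppose ~20 equivalent oxide layers after 2000 min of anodic holding
t_h = 2000 * 60.0
N3 = 20 * N1
print(f"D ~ {diffusion_coefficient(N3, t_h):.1e} m^2/s")   # order 1e-22, as quoted
```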
This value is typical for a diffusion process in a solid phase and is in good agreement with prediction for the diffusion of oxygen inside platinum at the experimental temperature of 450C. Discussion The pseudocapacitive behavior of the O 2(g),Pt/YSZ system reveals that Faradaic processes contribute to both the time dependent and the steady-state current observed in chronoamperometry. A possible reaction scheme involving two anodic Faradaic processes is proposed. One of them is oxygen evolution via electrochemical oxidation of O 2− ions (Eq. 6), responsible for the main part of the steady-state current, while the other is electrochemical oxidation of platinum to form Pt-O type species (Eq. 7), responsible for charge storage: where use of the symbol Pt-O is due to the unknown stoichiometry of the electrochemically formed oxide. The two reaction paths share the same reactant O 2−, the charge carrier in the solid electrolyte YSZ. Charge storage may take place at different locations in the O 2 (g),Pt/YSZ system. First, O 2− originating from the YSZ lattice gets in contact with the Pt electrode to form a platinumoxygen compound by releasing two electrons. The formation of this first oxide layer at the Pt/YSZ binary interface is believed to be at the origin of the first peak observed by linear sweep voltammetry. The process is fairly reversible, and the completion of this oxide layer is rapid requiring about 10 minutes of holding time at the anodic potential of +100 mV (see the inset of Fig. 6). The saturation amount of the oxide species is 810 15 atom O cm −2 of geometrical surface area. Comparison with the surface density of Pt (∼110 15 atom O cm −2 ) gives a roughness factor of about 8. Due to the formation of a compact and poorly conducting oxide layer, the Pt/YSZ binary interface gets a blocking character, which renders any further charge transfer through this interface difficult. However, due to build-up of a strong concentration gradient, the accumulating oxygen species may diffuse slowly away from the electron exchange site, following t 1/2 kinetics. The third peak observed by linear sweep voltammetry may be correlated with this slow process, apparently consecutive to the first process, without showing any tendency to saturate even after anodic polarization as long as 2000 minutes. It is believed that this process consists of progressive growth of the platinum oxide layer formed at the metal/electrolyte interface during the first process. However, there is no direct experimental evidence about the location of oxygen stored in this solid diffusion-controlled step. One can not exclude the possibility that oxygen is stored in the YSZ at the vicinity of the anodically polarized Pt electrode, and not in the Pt electrode itself. Charge may be stored in form of oxygen species also at the Pt/gas interface via spillover mechanism. Atomic oxygen released at the tpb does not desorb necessarily to the gas phase as molecular oxygen but may be stuck on the metal, the resulting oxygen species spreading out over the gas-exposed surface. This process is well known in heterogeneous catalysis and considered as the origin of EPOC. The second peak observed in linear sweep voltammetry may well correspond to the reduction of oxygen species populating the gas exposed surface via the inverse reaction of (Eq. 6). In fact, the second process of charge storage has a time constant of a few tens of minutes, in good agreement with that commonly observed in EPOC experiments. 
The area of the second peak tends to saturation at about 610 16 atom O cm −2, which corresponds to partial steady-state coverage of the highly porous Pt/gas interface. This picture is in good agreement with the sacrificial promoter mechanism of electrochemical promotion presented in the Introduction. As postulated, the electrochemically produced species populate progressively the catalyst/gas interface where they are consumed both by reaction with the reactant (ethylene) and by desorption. When balance between electrochemical production and consumption is reached, the electropromoted rate of the catalytic reaction (ethylene oxidation in the present case) reaches a steady-state and it remains constant during the whole polarization period, meaning that no more alteration of the catalyst/gas interface occurs. However, as revealed by electrochemical techniques, during this apparent steady-state period the polarization still alters the system, without concerning directly the gas-exposed catalyst surface. The alteration must then occur at another location of the system, 'hidden' from the gas phase. It only affects the catalytic activity once the circuit is opened. The very long characteristic times of this effect indicate that the hidden alteration is linked to very slow processes, even at high temperatures. Now a mechanism is proposed for the persistent enhancement of catalytic activity after current interruption, involving oxygen storage under polarization (Eq. 7) and consumption of stored oxygen by the hydrocarbon during relaxation at open-circuit (Eq. 8). A hypothetical maximum amount of Pt-O, N F, stored via Eq. can be calculated with Faraday's law from the total electric charge passed through the cell during the whole polarization period, t h. Obviously, the effective oxygen storage represents only a minor fraction of N F because the majority of the electrogenerated oxygen either desorbs into the gas phase as O 2 (Eq. or, in the presence of a reactant like ethylene, is consumed in the catalytic reaction. On the other hand, the amount of oxygen consumed in the catalytic reaction in excess to the non-promoted rate during relaxation, N r, can be calculated by integrating the area between the relaxation transient curve and the base line given by the non-promoted catalytic rate. One can then define an oxygen storage efficiency, O S = N r / N F, as the ratio between the amount of oxygen consumed by reaction with ethylene, N r, and the maximum amount of electrochemically stored oxygen. Values of O S higher than unity would mean that the effect of stored oxygen on the catalytic reaction rate is non-Faradaic. Obviously, when calculated with N F, O S is highly underestimated. Even so, for the example of experiments at 525C in Fig. 2, values of O S between 40 and 70 are found, exceeding significantly the critical value of one. This reveals that the rate enhancement after current interruption is not simply due to the consumption of the stored oxygen species by ethylene (Eq. 8), but the effect of stored oxygen on the catalytic reaction rate is highly non-Faradaic. Conclusions In reactive atmosphere, the catalytic activity of Pt/YSZ may be enhanced strongly by application of anodic potential (EPOC). After prolonged anodic polarization, an unusual long-lasting relaxation of the reaction rate of ethylene oxidation was observed Per. Pol. Chem. Eng. (P-EPOC). For the interpretation of this phenomenon, a model is attempted relating EPOC with oxygen storage at various locations in the O 2(g),Pt/YSZ system. 
In fact, in an O2-containing atmosphere, prolonged anodic polarization of Pt/YSZ causes, in addition to the main reaction of oxygen evolution, storage of Pt-O species at various locations of the electrode. This takes place not only at the gas-exposed platinum surface but also at other hidden phases and/or interfaces. These charging/discharging processes are responsible for the pseudocapacitive behavior of the electrode. Linear sweep voltammetric measurements indicated that, upon anodic polarization, at least three types of Pt-O species were stored, following distinct kinetics. Based on the effect of the polarization time on the amount of the stored Pt-O species, they were attributed to three different locations on the electrode: i) at the Pt/YSZ interface, ii) diffusing from the tpb toward the Pt/gas interface, and iii) diffusing from the Pt/YSZ interface toward the bulk of the platinum electrode. According to the proposed model of P-EPOC, anodic polarization produces Pt-O species. The majority of these species is released at the tpb, spills over the catalyst/gas interface and promotes the catalytic activity to reach a promoted steady state, as in any reversible EPOC experiment. In parallel, Pt-O species are continuously formed at the Pt/YSZ interface and stored at two distinct locations, both hidden from the reactive gas phase. One of these hidden locations is the Pt/YSZ interface itself, where Pt-O storage is quickly saturated due to the limited number of available storage sites. The other hidden location is the neighboring Pt phase, reached by solid-state diffusion consecutive to saturation of the Pt/YSZ interface; this storage has a very large capacity and obeys a t^1/2 kinetic law. When the polarization is switched off, these hidden oxygen species reappear at the tpb, spread out over the gas-exposed surface and cause non-Faradaic promotion, as any electrochemically formed backspillover oxygen does. The large amount of stored charge and its slow, diffusion-controlled emergence cause the rate enhancement to last for hours. |
Anticonvulsant effect of a flavonoid-rich fraction of Ficus platyphylla stem bark on pentylenetetrazole-induced seizure in mice Context: Epilepsy is characterized by recurrent spontaneous seizures. Several antiepileptic drugs have been used over the years, and these drugs have shown serious side effects, thereby prompting the use of medicinal plants to avert the resultant side effects of antiepileptic drugs. Aim: To evaluate the anticonvulsant effect of the flavonoid-rich fraction (FRF) of Ficus platyphylla stem bark (FPSB) on pentylenetetrazole (PTZ) induced seizures in mice. Study Design: Experimental cohort study. Subjects and Methods: We evaluated the anticonvulsant effect of the flavonoid-rich fraction (FRF) of Ficus platyphylla stem bark (FPSB) on pentylenetetrazole (PTZ) induced seizures in mice by measuring its antioxidant activity in vivo and in vitro and identified possible flavonoids present via liquid chromatography-mass spectrometry (LC-MS) and Fourier transform infrared spectroscopy (FTIR). Statistical Analysis: One-way analysis of variance (ANOVA) was used to determine the level of significance at a 95% confidence interval, followed by Tukey's multiple comparison test, using SPSS software. Result: The FRF of FPSB exhibited weak anticonvulsant activity against PTZ-induced seizure in mice. Maximum anticonvulsant activity (25% protection) was observed at doses of 100 mg/kg and 200 mg/kg, with a delay in the mean time of onset of myoclonic jerks and latency to tonic seizure. The effect of the fraction was found to be dose-independent. The FRF contains the flavanone astilbin (a flavonoid 3-O-glycoside), which may have effectuated the high antioxidant activity against 2,2-diphenyl-1-picrylhydrazyl (DPPH) and nitric oxide (NO), together with the increase in brain glutathione content and the decrease in malondialdehyde content. Conclusion: Although the anticonvulsant capacity of the FRF in PTZ-treated mice was minimal, exploration of other seizure models is required to ascertain its mechanism of action. |
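For readers who wish to reproduce the statistical workflow described above (one-way ANOVA followed by Tukey's multiple comparison test at the 95% confidence level), the hedged Python sketch below shows an equivalent analysis. The latency values are invented placeholders; the original analysis was performed in SPSS.

```python
# Hedged sketch of the described statistical workflow: one-way ANOVA followed
# by Tukey's HSD at alpha = 0.05, on made-up latency data (not the study's).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {                      # hypothetical seizure-latency values (s) per group
    "control":      [62, 58, 65, 60],
    "FRF_100mg_kg": [95, 102, 88, 110],
    "FRF_200mg_kg": [90, 99, 105, 93],
}

f_stat, p_val = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05).summary())
```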
Effect of Cobalt Oxides on the Catalytic Combustion of Odor. The catalytic combustion of acetaldehyde was studied using various types of Co oxides and Co-PC. The Co oxides and Co-PC were characterized using an X-ray diffractometer (XRD), X-ray photoelectron spectroscopy (XPS), and a particle sizing analyzer. The Co-PC and CoO were converted into Co₃O₄ under an air atmosphere at 450 °C, and the results were confirmed using XRD and XPS. After pretreatment of the Co-PC and Co oxides, the conversion of acetaldehyde increased. The order of particle size for both fresh and pretreated samples is summarized as follows: CoO < Co-PC < Co₃O₄ powder < Co₃O₄ (99.995%). For all samples, acetaldehyde was not observed at temperatures above 320 °C owing to complete combustion. The conversion of acetaldehyde over the samples was affected by the fresh state of the Co oxides and by the space velocity. The catalytic activity depended on the chemical state of the Co oxides and the surface concentrations of Co, O, and N. |
Remote sensing of wetlands in South America: status and challenges ABSTRACT South America has a large proportion of wetlands compared with other continents. While most of these wetlands were conserved in a relatively good condition until a few decades ago, pressures brought about by land use and climate change have threatened their integrity in recent years. The aim of this article is to provide a bibliometric analysis of the available scientific literature relating to the remote sensing of wetlands in South America. From 1960 to 2015, 153 articles were published in 63 different journals, with the number of articles published per year increasing progressively since 1990. This rise is also paralleled by an increase in the contribution of local authors. The most intensively studied regions are the wetland macrosystems of South American mega-rivers: the Amazon and Paraná Rivers, along with the Pantanal at the headwaters of the Paraguay River. Few studies spanned more than two countries. The most frequent objectives were mapping, covering all types of wetlands with optical data, and hydrology, focusing on floodplain wetlands with microwave data as the preferred data source. The substantial growth of the last decade reflects an increase in technological and scientific capacities. Nevertheless, the state of the art regarding the remote sensing of wetlands in South America remains difficult to delineate: fundamental questions and guidelines which may contribute to the understanding of the functioning of these ecosystems are yet to be fully defined, and there is considerable dispersion in the use of data and remote-sensing approaches. |
Closed-Loop Quality Management Systems: Are Czech Companies Ready? Purpose: The paper presents a set of original information related to the further development of quality management systems with regard to digitalisation and other features of the new era. It proposes a basic structure of closed-loop quality management systems (CLQMS) as a mixture of internal, external, horizontal and vertical loops. Methodology/Approach: Comparative literature analysis, standards analysis, brainstorming, field research, interviews and design review were used. Findings: Information flows are considered a vital part of all advanced closed-loop quality management systems. The authors established a definition of CLQMS. A total of 209 requirements related to information exchange were identified through a study of ISO 9001:2015, IATF 16949:2016 and the EFQM Model, version 2020. These requirements should create a basic platform for establishing and developing a CLQMS. The authors performed empirical field research which confirmed that the current readiness of Czech production companies for CLQMS implementation is insufficient, although the automotive sector reaches a higher level of such readiness. Research Limitation/Implication: The field research was performed in a time span accompanied by strict measures caused by COVID-19. Only English-language literature resources were considered for the literature review. Originality/Value of paper: The paper brings an original set of information regarding the definition of the CLQMS and findings from the dedicated field research. |
Automatic fetal weight estimation using 3D ultrasonography This paper proposes a novel and fast approach for automatic estimation of fetal weight from 3D ultrasound data. Conventional manual approaches are time-consuming and suffer from inconsistency between different sonographers because of the difficulty of tracing limb boundaries in complicated ultrasound limb volumes. It takes up to 10 minutes to manually trace the surface borders of a 20 cm long limb. Using our automatic approach, the time is significantly reduced to 2.1 seconds for measuring the weight based on the entire limb. Experiments with the automatic approach also show standard deviation and limits of agreement comparable to the manual approaches. |
Population Genetic Structure and Biodiversity Conservation of a Relict and Medicinal Subshrub Capparis spinosa in Arid Central Asia : As a Tertiary Tethyan relict, Capparis spinosa is a typical wind-preventing and sand-fixing deciduous subshrub in arid central Asia. Due to its medicinal and energy value, this species is at risk of potential threats from human overexploitation, habitat destruction and resource depletion. In this study, our purpose was to evaluate conservation strategies for C. spinosa according to its genetic structure characteristics and genetic diversity pattern among 37 natural distributional populations. Based on genomic SNP data generated from dd-RAD sequencing, genetic diversity analysis, principal component analysis, maximum likelihood phylogenetic trees and ADMIXTURE clustering, the significant population structure and differentiation were explored. The results showed the following: six distinct lineages were identified corresponding to geographic locations, and various levels of genetic diversity existed among the lineages owing to natural habitat heterogeneity or human interference; the lineage divergences were influenced by isolation by distance, vicariance and restricted gene flow under complex topographic and climatic conditions. Finally, for the preservation of the genetic integrity of C. spinosa, we suggest that conservation units should be established corresponding to different geographic groups and that attention should be paid to isolated and peripheral populations that are experiencing biodiversity loss. Simultaneously, monitoring and reducing anthropogenic disturbances, in addition to rationally and sustainably utilizing wild resources, would be beneficial to guarantee the population resilience and evolutionary potential of this xerophyte in response to future environmental changes. Introduction Arid central Asia is deemed to be the largest arid region in the temperate zones of the northern hemisphere and even the world. The floristic compositions here are mostly descended from the Tethyan xerophytic vegetation flora and comprise plant taxa whose phenoecology has been shaped by the long-term continental arid desert climate. Because of the specific geomorphologic landscapes comprising the vertical and horizontal convergence of several major Asian tectonic mountains (Himalayas, Karakoram, Kunlun and Tianshan Mountains) and plateaus (Pamirs and Tibet Plateau), as well as the mosaic distribution of many large basins (Tarim, Turpan-Hami and Junggar basins) and deserts (Taklimakan, Kumtag and Gurbantunggut deserts), the monsoon current and moisture from the Indian and Pacific Oceans have been blocked and intercepted, and the amount of precipitation is low and unevenly distributed, which has constituted the complexity of the extreme geographic and climatic conditions, consequently forming a very fragile ecological environment. The types of vegetation are relatively sparse and simple in this region. To adapt to environmental variability, desert species from oligotypic genera and a monotypic genus with drought resistance and barren tolerance have mainly occurred and been discontinuously distributed here. A number of previous studies have hitherto been carried out on the genetic variation and diversity of C. spinosa. Earlier on, scholars mainly devoted their efforts to morphological taxonomy and subspecies or variety identification.
In recent years, with the development of molecular marker technology, researchers have achieved certain advancements in intraspecific diversity and infraspecific differentiation by virtue of methodologies such as RAPD, AFLP, ISSR, IRAP and EST-SSR. Nonetheless, most of these studies have focused on Mediterranean coastal regions with a subtropical dry-summer climate. However, the relict wild populations remaining in arid central Asia, dominated by a temperate continental desert climate, have received little attention. Valuable information on the genetic structure and diversity conservation of C. spinosa in this region is relatively scarce and is of significant biogeographical and conservation interest to phytologists in the fields of plant genetic diversity and biodiversity conservation. The conservation of genetic diversity is of great importance to the maintenance of biological species and ecosystem diversification, which provides a crucial foundation for the survival and development of species and enhances their long-term adaptive evolution to environmental changes, reducing the risk of extinction. The evaluation of levels of genetic diversity based on integrated population structures is a critical prerequisite for species protection. For source-sink metapopulations of terrestrial plants, the genetic structures and divergences are not only dominated by their own species and population characteristics, such as growth rates, effective population sizes and dispersal capacities, but are also closely related to their demographic responses to the external spatial and temporal variability of the environment. Drivers of isolation by distance (IBD), vicariance and landscape heterogeneity influence the species' evolutionary processes (genetic drift effects, gene exchanges, natural selection and local adaptation), thereby determining the genetic diversity patterns of metapopulations. Human disturbance is also a contributory external factor influencing biodiversity. Since the beginning of the Anthropocene in the latter part of the 18th century, hazards of wild habitat destruction and loss caused by expansions in the range of anthropogenic activities have become increasingly serious. Against the background of global environmental and climatic changes, mankind's excessive pursuit of economic benefits is causing a growing reduction in biological resources, the extinction of species and populations, as well as the modification of natural landscapes. Notably, small-scale isolated or peripheral populations, owing to a lack of connectivity with source populations, are particularly prone to genetic depletion through genetic drift and inbreeding, whereas anthropogenic interference undoubtedly further weakens their fitness, decreasing survivorship and reproduction and increasing extinction risks. Therefore, research on and the protection of these sink populations are considered essential for maintaining the genetic integrity of large-scale metapopulations. Hitherto, the conservation of genetic resources has mainly focused on native or endemic rare and endangered species in global biodiversity hotspots, such as the Andes Mountains, Mediterranean Basin, eastern Himalayas and Hengduan Mountains in the tropics and subtropics. However, some widespread natural resources, typically with medicinal or economic value, experiencing diversity reduction in temperate arid regions have not been sufficiently recognized and effectively protected.
If this continues, these species would not escape the hidden dangers of resource exhaustion or even extinction. Compared with traditional molecular genetic markers, genome-wide single-nucleotide polymorphisms (SNPs) offer higher genotyping efficiency and data quality, analytical simplicity, broader genome coverage and sheer abundance, as well as advantages in further analyzing complex population structures and distinguishing the evolutionary relationships of genetic polymorphisms. In recent years, they have been increasingly widely applied in the research fields of phylogenetics, molecular ecology, conservation genetics and biogeography. As a high-efficiency next-generation high-throughput sequencing technology (NGS) that has been developed and commonly used, restriction-site-associated DNA sequencing (RAD-Seq) can rapidly identify and score high-resolution SNP data from thousands of orthologous sites while reducing the complexity of genomic sequences, even in the absence of reference genomes, and has become one of the main effective methods to address critical issues in the fields of evolution and genetics at the genomic level, especially for studies of species groups with ancient origins and complex evolutionary and genetic structures. In the present study, for the first time, we used SNP markers from dd-RAD sequencing, ADMIXTURE clustering, principal component analysis (PCoA) and maximum likelihood (ML) phylogenetic analysis methods to fully explore the integrated genetic diversity mechanisms and conservation significance of C. spinosa across large-scale environmental gradients in arid central Asia, based on the spatial source-sink theory of metapopulation dynamics. Thus, we aimed to: elucidate the population structure and intraspecific divergence of the lineages of this species; adequately reveal the spatial pattern of genetic diversity among identified geographical groups and, accordingly, formulate feasible conservation strategies for maintaining the genetic integrity and population resilience of this xerophytic subshrub; and provide a referential understanding for future research on the diversity conservation of relict plants in arid regions. Sampling, DNA Extraction and dd-RAD Library Construction We located and collected a total of 37 natural populations (5-12 individuals per population) of C. spinosa in arid central Asia during 2019-2020 (Figure 2, Table S1), according to the distributions described in Flora of China, Flora Xinjiangensis and Flora of Tibet as references. The sampling spacing was at least 100 m between neighboring individuals within each population. We also collected Capparis erythrocarpos Isert (northeastern Kenya, Africa) and Capparis bodinieri Lvl. (Xishuangbanna, China) as outgroups for the subsequent phylogenetic analysis. Total genomic DNA was extracted from silica-dried leaf tissues using a DNeasy Plant Mini Kit (Qiagen, Valencia, CA, USA), following the manufacturer's protocol. The cleaned DNA was first subjected to simulated double digestion with 15 combinations of seven common or rare restriction enzymes to predict enzyme cutting sites (Table S2). Considering the yielded tag numbers and average sequencing depths, we finally selected the optimal combination of BfaI (5'-C↓TAG-3') and DpnII (5'-↓GATC-3') (New England Biolabs, Ipswich, MA, USA) to construct the dd-RAD library, following a modified approach of Peterson et al. The library with inserted fragments between 200 and 600 bp was recovered for sequencing.
All the qualified samples were sequenced on the Illumina HiSeq X Ten platform (Illumina, San Diego, CA, USA) with paired-end reads of 2 × 150 bp in length. Library preparation and sequencing were performed by Shanghai Personalbio Biotechnology Co., Ltd. Data Processing and SNP Calling Raw data produced by Illumina were processed to yield RAD tags and to obtain SNPs using the Stacks v1.4.8 program. Initially, paired-end reads were cleaned and filtered for quality control and then trimmed to 2 × 140 bp in length using the process_radtags module. Individuals with sufficient RAD tags and coverage depth were used for SNP calling. Since there were no available reference genome data for C. spinosa, we next chose the ustacks module to assemble reads per individual into RAD tags following the de novo workflow protocol. To remove highly repetitive stacks via a deleveraging algorithm, we set the minimum coverage depth to m = 3, the maximum distance allowed between stacks to M = 2 and the maximum number of mismatches allowed between loci to n = 1. Then, we combined the consensus loci among all 263 samples to build a catalog file in the cstacks module and aligned each locus sequence with the catalog to generate alleles in the sstacks module. Finally, filtered SNP matrices were identified and exported using the populations module, where the operation parameters were set as follows: completeness > 0.5 (individuals in which the same RAD locus was detected accounted for at least 50% of each population), minor allele frequency (MAF) > 0.05 and minimum stack depth m = 4, p = 100 (one locus appeared in at least 100 populations). Based on the above calculations, the resulting high-quality SNP dataset was used for further analysis. Genetic Structure Analysis Population genetic structure, phylogenetic analysis and principal component analysis (PCoA) were comprehensively considered to evaluate the spatial distribution pattern of genetic variation. We performed a maximum likelihood estimation to analyze the genetic structure of C. spinosa using the ADMIXTURE program, version 1.3.0, in order to identify the group clustering of each individual based on unlinked SNP datasets via a block relaxation algorithm. With the number of groupings (K value) preset as 1 ≤ K ≤ 10, ten replicate runs were implemented in ADMIXTURE to compute the mean cross-validation (CV) error for each K. The optimal K value was determined by the lowest CV error. For the analysis of phylogenetic relationships, we then reconstructed a maximum likelihood (ML) tree based on the unlinked genomic SNPs among these 263 individuals using the PhyML program, version 3.0. Capparis erythrocarpos Isert and Capparis bodinieri Lvl. were chosen as the outgroups to root the tree. After the tree topology with maximum likelihood was established, the reliability of the branches was verified through a bootstrap (BP) analysis with 1000 replications. Lastly, principal component analysis (PCoA) was performed to exhibit the first two axes of population variation in the species using GCTA software, version 1.93.2. After removing SNPs with MAF values of less than 0.05, the genomic SNP data of the different individuals were assembled into a genetic matrix from which the feature vectors making a major contribution to the variance were extracted, and a scatter plot was then drawn using the first two eigenvectors. According to the distribution characteristics of the principal components, the clustering relationship of individuals could be inferred.
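The K-selection rule described above (ten ADMIXTURE replicates per K, keeping the K with the lowest mean cross-validation error) can be summarized by a short script such as the Python sketch below; the CV values shown are placeholders, not the study's output.

```python
# Minimal sketch of the ADMIXTURE K-selection step: choose the K with the
# lowest mean cross-validation (CV) error. Values below are hypothetical.
cv_error = {            # mean CV error over ten replicate runs per K (placeholder)
    1: 0.612, 2: 0.574, 3: 0.548, 4: 0.531, 5: 0.522,
    6: 0.515, 7: 0.519, 8: 0.526, 9: 0.534, 10: 0.541,
}

best_k = min(cv_error, key=cv_error.get)
print(f"optimal K = {best_k} (CV error = {cv_error[best_k]:.3f})")
# With these placeholder values K = 6 would be selected, mirroring the six
# geographic groups reported for C. spinosa.
```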
Genetic Diversity and IBD Analysis Based on the results of genetic grouping, population genetic statistics, including observed heterozygosity (H o ), expected heterozygosity (H e ), nucleotide diversity (P i ) and inbreeding coefficient (F IS ), were computed using the populations command in the Stacks v1.4.8 package at the group and population levels of the species. Degrees of deviation from the Hardy-Weinberg equilibrium were measured for all 37 populations. Hierarchical AMOVA (analysis of molecular variance) in the ARLEQUIN version 3.5.2.2 program was used to partition the total genetic variation within and across groups by calculating the evolutionary distance between alleles or genotypes. Pairwise genetic differentiation indices (F ST ) among populations were also calculated in the populations module in Stacks. Additionally, the significance of isolation by distance (IBD) was tested by computing Pearson's correlation between the genetic distance (F ST ) matrix and the geographic distance matrix among these 37 populations using the cor.test function in the R v3.3.3 package. SNPs from dd-RAD Analysis of C. spinosa A total of 263 C. spinosa individuals from 37 populations were sequenced using the dd-RAD library construction methodology. Both the base content and the quality distribution across all the sequences were uniform and optimal, considering that the average proportion of bases whose recognition accuracy exceeded 99.9% (Q30) was greater than 90% and that the peak value was around Q36. The numbers of raw reads per sample ranged from 6,455,458 to 26,065,222. After filtering out low-quality reads, the numbers of high-quality reads ranged from 5,995,376 to 24,640,428, and the mapping rate ranged from 90.24% to 97.98%. According to the clustering algorithm of the Stacks procedure, the obtained tag number per sample ranged from 247,350 to 823,581, and the average sequencing depth ranged from 10 to 31. Mutation spectrum analysis showed that C:G > T:A and T:A > C:G accounted for the overwhelming majority of SNP mutation types (Figure S1). The final dataset, which contained 559,586 high-integrity SNPs, was used for the subsequent population genetics analyses. Population Genetic Structure of C. spinosa The ADMIXTURE analysis showed that, at K = 6, the minimal value of the cross-validation (CV) error was detected (Figure S2), indicating that the best-fit number of genetic groupings of C. spinosa was six (Table S1). This pattern of population genetic structure was also supported by the topological relationships of the ML phylogenetic tree, with the main nodal bootstrap values exceeding 89.3% at the intraspecific level (Figure 4). Populations from group WH (red) always diverged first from all other populations, whether in the ADMIXTURE clustering or in the phylogenetic tree (Figures 3 and 4). Then, group EP (blue) branched off, followed by group WT (yellow) and group ST (green). Group NT (azure) was nested within group ST, and populations from group ET (purple) formed one clade (Figure 4). The first two coordinate axes of the PCoA identified three major clusters, and the corresponding principal components explained 16.55% (PC1) and 6.99% (PC2) of the total genetic variation, respectively (Figure 5a).
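The IBD test described above (Pearson's correlation between pairwise F ST and pairwise geographic distance, run with cor.test in R) can be mirrored in Python as in the sketch below, which operates on the upper triangles of the two 37 × 37 matrices. Random matrices stand in for the real data; note that a Mantel test with permutations is the more common choice for distance matrices, so this is only an illustration of the correlation step itself.

```python
# Sketch of the isolation-by-distance (IBD) correlation on placeholder matrices.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_pop = 37
geo = rng.uniform(10, 2000, size=(n_pop, n_pop))     # km, placeholder distances
fst = 0.0002 * geo + rng.normal(0, 0.05, geo.shape)  # placeholder F_ST values

iu = np.triu_indices(n_pop, k=1)                     # use each population pair once
r, p = pearsonr(fst[iu], geo[iu])
print(f"IBD: r = {r:.3f}, p = {p:.3g}")
```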
Considering the preliminary results, group WH formed the first cluster, which was completely separated from the rest of the groups; group EP and group WT were generally merged into the second cluster; and group ST, group NT and group ET were broadly integrated into the third cluster. We then conducted further PCoA analyses on the latter two clusters separately. The substructure within the second cluster showed that group EP was gradually split from group WT, with 16.15% (PC1) and 6.99% (PC2) of the total variation (Figure 5b). The substructure within the third cluster showed that populations in groups ST, NT and ET were mostly divided from each other (PC1, 16.15% vs. PC2, 6.99%), whereas admixed populations crossed among them (Figure 5c), which was also evident in the ADMIXTURE diagram (Figure 3). Genetic Diversity Pattern and IBD The results of the genetic diversity analysis showed that the overall genetic diversity among geographical groups was ranked as follows: EP > ST > NT > WT > ET > WH, with distinct differences of various degrees (Table 1). At the population level, the maximum observed heterozygosity (H o ), expected heterozygosity (H e ) and nucleotide diversity indices (P i ) were in Manas (C26) and Jiashi (C10), while the minimum H o, H e and P i values were in Zabrang (C01) (Table 1). Inbreeding coefficients (F IS ), which measured degrees of deviation from the Hardy-Weinberg equilibrium, were positive for all 37 populations. The results of the AMOVA showed that the percentage of variation among the six geographical groups was relatively low; the variation was mainly from individuals within the populations, and the variation rate across populations within groups was the lowest (Table 2). Significant genetic differentiation existed among geographic populations (Figure S3, Table S3), with F ST values ranging from 0.076 to 0.707 (Table S3). The maximum genetic differentiation was detected between C02 (Sarang population) and K01 (Bishkek population) (F ST = 0.707), while the minimum genetic differentiation existed between C27 (Shihezi population) and C29 (Wusu population) (F ST = 0.076) (Table S3). Nei's genetic distance was roughly synchronous with pairwise F ST between populations (Figure S3). Furthermore, the result of the isolation by distance (IBD) analysis displayed a considerable correlated pattern between genetic divergence and geographical distance among these 37 populations, given that the two-tailed test of the Pearson correlation coefficient was significant (r = 0.499, p < 0.001) (Figure 6). Relationships between Genetic Structure, Lineage Differentiation and Geographic Distribution In this study, we obtained a clear genetic structure of C. spinosa through RAD-seq and identified six clustering results corresponding to geographic locations (Figures 2 and 3, Table S1). For this Tertiary relict plant, the effects of isolation by distance and vicariance have caused long-term shaping of local ecological environments, which had a profound influence on population structure and lineage differentiation among different geographic units. It is speculated that the formation of this disjunction is the result of intense and frequent orogeny during the geological period, coupled with extensive Quaternary glaciation, as well as the long-term process of aridification in central Asia.
The allopatric divergence and local adaptive evolution among distant geographical groups, especially for the isolated populations, are probably attributed to the incapability of long-distance genetic exchange and being hindered by mountains and immense deserts. Similar disjunctive or fragmented population structure patterns have also been reported in other Tethyan relicts that occur in this area, such as Gymnocarpos przewalskii, Amygdalus mongolica and Ammopiptanthus, owing to the influence of long-term aridification. The constantly extended deserts and climatic changes have led to the fragmentation and separation of suitable habitats for those xerophytic taxa, thereby restricting the migration among distant geographic populations. The relatively isolated populations distributed in the northern piedmont of the western Himalayas (Group WH, C01-C03) have the maximum genetic distance from other populations. Plant taxon located in this desert plateau region is related to the sub-frigid arid climate, precipitous terrain and topography, as well as abnormally barren soil conditions (Figure 1a). Due to the complex and discontinuous natural environment, the sink populations here, to a certain extent, are differentiated by hereditary and habitat variability. Compared to populations distributed at high altitudes, populations in medium-altitudinal valleys and low-altitudinal desert basins have different habitat requirements (Figure 1). Among the eastern Pamir region and the southern piedmonts of the Tianshan Mountains, we found that population structures were associated with spatial geographic distribution, as well as heterogeneous environments. For instance, the Jiashi population (C10) is geographically nearby the Kalpin (C12) and Tumxuk (C11) populations, but due to the hereditary differences and landscape heterogeneity (gravel Gobi vs. dried clay desert) they belong, respectively, to group EP and group ST, although some of the mixed genotypes were found in these divergent populations (Figure 3). On the contrary, according to research results from Mims et al., among populations within the same geographic unit, the habitat preferences are of continuity and similarity, where high degree of gene flow could exist among them in a relatively flatter and more expansive range. For example, source populations that inhabit piedmont proluvial fans on the southern side of the Tianshan Mountains (Group ST, C11-C17) are more closely distributed, and the connectivity of high-quality breeding habitats may play a role in facilitating closer reduced genetic distance between these populations. Genetic Diversity Pattern of Species Metapopulations The genetic diversity of C. spinosa (Pi = 0.2644) ( Table 1) through RAD-seq is higher than that of other relic species (such as Euptelea pleiosperma and Gymnocarpos przewalskii) because widespread species have a higher genetic diversity than species with endemic distribution. The total diversity index in arid central Asia is also different from that obtained by previous research using AFLP, RAPD and EST-SSR for C. spinosa in the Mediterranean coast (southern Europe, north Africa and west Asia). The primary reasons may be the different systematic molecular markers and the varying regional environments and weather conditions. In the arid region of central Asia, the distribution of C. spinosa covers most of the mountain piedmont, warm dry valley and Gobi desert terrains. 
Based on the results of SNPs, the range of this species is divided into six geographic groups with various levels of genetic diversity, indicating the levels of geographic isolation and environment gradients. According to the observed heterozygosity, the genetic diversity of source populations in groups ST and NT remained at relatively high and stable levels ( Table 1), indicating that populations within these two groups are both likely to have experienced long-term migration of gene flow. In these geographic units, the high-quality piedmont alluvial or diluvial fan habitats are usually continuous, which is likely conducive to maintaining a high level of genetic diversity throughout these regions. In contrast, sink populations in group WH (western Himalayas) inhabit low-quality desert plateaus with high altitudes and low temperatures, especially the isolated Zabrang population (C01) with lowest genetic diversity (Table 1) adjacent to the middle reaches of the Sutlej River Basin, which grows on steep rocky slopes and seems a bit far away from the water source ( Figure 1a); even worse, natural disasters, such as landslides and mudslides, occur frequently during the annual flowering and fruiting phases. In addition, because of insufficient visits by pollinators, inbreeding or self-breeding dominate the reproduction of C. spinosa under this situation, resulting in significant population depression (current effective size of <50 individuals). Many previous studies have demonstrated that, in nature, isolated populations with smaller effective sizes often have more genetic depletion and extinction risks due to genetic drift and inbreeding, which are, to a large extent, disadvantageous to the persistence of population diversity. In the process of evolutionary history, the Zabrang population survives in a separate habitat but has not adapted to the extreme environment, which may be one of the reasons why its population scale is declining and even endangered. Nevertheless, it is noteworthy that the Taxkorgan (C04, 2855 m alt) and Akto (C05, 2845 m alt) populations, which are also located at high altitudes, could maintain certain high levels of genetic diversity because they are not completely isolated in the eastern Pamirs region (group EP). These populations have continuous extension and buffer along the descending elevation gradients, and they also have a certain degree of gene exchange with the Yegisar population (C06) in the piedmont belt. Researchers have proved that, compared with diffusion within short distances, the long-distance dispersal of gene flow plays a more pivotal role in the persistence of the entire connectivity and genetic diversity of plant metapopulations. Simultaneously, the contributions of wind, pollinators and river water help the pollen flow to spread farther. However, within some geographic groups (group ET, group WT), moderate degrees of genetic differentiation (F ST > 0.1) (Table S3) have appeared in neighboring populations, indicating that anthropogenic activities may have encroached into these natural territories, leading to the discontinuous gene flow. Driven by economic interests and commercial purposes, human over-exploitation has weakened the reproductive capacity of C. 
spinosa populations and has arbitrarily cut off the long-distance dispersal chains of gene flow, making short-distance spreading predominant, increasing the rates of inbreeding depression and resulting in population decline, thereby making it difficult for them to maintain their genetic diversity and evolutionary potential. Some populations near human activity areas, such as Toksun (C18), Hami (C21), Dunhuang (C22), Yining (C31) and Gongliu (C32), are frequently threatened by natural resource plundering and other disturbances, such as riverway-or highway-widening, agricultural reclamation and industrial expansion, etc. The relatively low genetic heterozygosity and diversity levels ( Table 1) found in these populations may be primarily caused by the results of the above depredations. Implications for Conservation Evaluating the spatial genetic structure and survival situation of metapopulations across regional environment variability could efficiently and effectively support and improve the management measures of biodiversity conservation, thereby conducing to maintain the evolutionary potential of widespread species in response to changing environments. In this study, most of the central source populations with large effective sizes are located in the expansive piedmonts of north and south Tianshan Mountains, where multiple hotspots of biodiversity have been shaped in their long evolution history in the arid region. In contrast, isolated populations on the desert plateau of western Himalayas, as well as peripheral populations in the desert basin adjacent to eastern Tianshan, are experiencing a decrease in or even disappearance of biodiversity. As a result of the adverse effects of hereditary factors or human disturbance, these sink populations are extremely vulnerable to habitat fragmentation and resource exhaustion, so it is particularly important to monitor population trajectories, minimize human disturbances and restore natural landscapes. Given the limited manpower and material resources, in addition to relevant policies for protecting xeromorphic vegetations in arid regions, we recommend that conservation management units should be established that are consistent with the six effective evolutionary groups and that the emphasis of preservation work should be laid on the genetic rescue of isolated populations and habitat restoration of peripheral populations in order to persist the genetic integrity of this plant species. Conservation of sink populations has been described as an important part of biodiversity persistence of integrated metapopulations because they often contain certain rare alleles or genotypes in the process of historic immigration. The Zabrang population (C01) is distributed in more sparse and localized low-quality habitats, resulting in its greater genetic differentiation with other populations within group WH (F ST = 0.1357-0.1878) (Table S3). However, due to a lack of corresponding attention and conservation, the genetic diversity and uniqueness of such isolated population are being threatened and almost disappearing. Empirical studies have illustrated that inputting a certain level of stable gene flow from large source populations to small sinks may counter the negative effects of inbreeding depression and genetic drift. Currently, we suggest that genetic rescue such as immigration, exchange of pollen flow and hybridization, might reduce the extinction risk of isolated populations of C. spinosa. 
If conditions permit, ex situ preservation and reintroduction instead of natural dispersal may be more effective protection strategies to increase local effective population sizes. The disturbance of human activities, such as the expansion of farmland, makes this subshrub lose its competitive advantage, which could consequently lead to its habitat shrinking. Additionally, the industrial waste generated by newly built factories in the suburbs has caused serious pollution to the soil environment and increased the salinealkali stresses on its habitat, whereupon the mortality rates of plant individuals have also risen. If these trends persist, the relic species would scarcely escape from the hidden danger of resource exhaustion or even the verge of extinction. Populations in the Tuha Basin-Hexi Corridor (Group EP) are located on the edge of the Kumtag Desert, where water shortages and soil salinization are becoming increasingly serious. These findings are the consequences of continuous increasing human pressure on the barren habitats of xerophytes. According to our field investigation conducted from 2011 to 2020, the population sizes within group EP in the desert basin were originally large (>300 individuals per population in 2011). Nevertheless, over the past decade, due to humans' immoderate harvesting and unceasing expansion of agriculture and industry, the living space of C. spinosa in this region has decreased sharply. In particular, the genetic diversity of the peripheral Hami and Dunhuang populations (C21 and C22) has been gradually decreasing, and the plant morphology and population sizes (<100 individuals per population in 2020) here have also been shrinking under these dual pressures, which weaken the plant's ability to prevent wind and sand and preserve water and soil (Figure 1c). This region has been trapped in a vicious circle of natural desert vegetation destruction, water loss and soil erosion, as well as aggravated saline-alkali desertification. In order to realize the sustainable development of the ecological economy in arid regions, we should rationally conduct agricultural production on the basis of protecting natural desert landscapes, reduce the pollution of industrial waste to primitive habitats and limit the predatory exploitation of germplasm resources, therefore protecting the biodiversity of wild drought-resistant species to ultimately slow down the processes of desertification and salinization. Conclusions In this study, we analyzed the population genetic structure and biodiversity conservation management of C. spinosa, a Tethyan relic medicinal subshrub in arid central Asia. Six geographical units with differences in genetic diversity and lineage differentiation were distinguished. This excellent wind-preventing, sand-fixing and soil-and water-conserving xerophyte is at potential risk of natural resource depletion. Populations at high altitudes of the western Himalayas have been experiencing a gradual decline in genetic diversity and effective scales as a result of inbreeding depression by IBD and habitat isolation. On the other hand, peripheral populations at low altitudes in eastern desert basins with original rich genetic diversity are suffering from the plundering of germplasm resources and the destruction of survival environment. 
Therefore, to restore the genetic integrity of the metapopulations, we suggest that central source populations should be preserved in situ corresponding to different geographical units and that conservation priority should be focused on the genetic rescue of isolated populations and the habitat restoration of peripheral populations. Meanwhile, minimizing human activities, in addition to the rational and sustainable utilization of natural resources, will be of significance to the population resilience and evolutionary potential of C. spinosa in response to the long-term aridification and the future changing environment, and it will ultimately maintain the ecosystem balance, as well as slow the desertification process in arid regions. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/d14020146/s1, Table S1: Details of geographical locations, altitude and effective sizes of natural populations of C. spinosa. Table S2: Digestion sites of several common or rare restriction enzymes, as well as their combinations for screening schemes in this study. Table S3: Pairwise genetic differentiation indices (F ST ) among the 37 populations. Figure S1: Mutation spectrum of the genome-wide single-nucleotide polymorphism (SNP) dataset of C. spinosa. Mutation types of SNPs are distinguished by different colors. T:A > C:G and C:G > T:A account for the overwhelming majority. Figure S2: Distribution of cross-validation (CV) error in the ADMIXTURE analysis. K = 6 is the optimal K value in condition of the lowest CV error. Institutional Review Board Statement: Not applicable. Data Availability Statement: All RAD-seq data of C. spinosa were submitted to the NCBI (SRA accession: SAMN24698411-SAMN24698673). |
Providing care for the 99.9% during the COVID-19 pandemic: How ethics, equity, epidemiology, and cost per QALY inform healthcare policy Managing healthcare in the Coronavirus Disease 2019 (COVID-19) era should be guided by ethics, epidemiology, equity, and economics, not emotion. Ethical healthcare policies ensure equitable access to care for patients regardless of whether they have COVID-19 or another disease. Because healthcare resources are limited, a cost per Quality-Adjusted Life Year (QALY) approach to COVID-19 policy should also be considered. Policies that focus solely on mitigating COVID-19 are likely to be ethically or financially unsustainable. A cost/QALY approach could target resources to optimally improve QALYs. For example, most COVID-19 deaths occur in long-term care facilities, and this problem is likely better addressed by focused long-term care reform than by a society-wide non-pharmacological intervention. Likewise, ramping up elective, non-COVID-19 care in low-prevalence regions while expanding testing and case tracking in hot spots could reduce excess mortality from non-COVID-19 diseases and decrease adverse financial impacts while controlling the epidemic. Globally, only ∼0.1% of people have had a COVID-19 infection. Thus, ethical healthcare policy must address the needs of the 99.9%. |
Hopf Algebra Primitives and Renormalization The analysis of the combinatorics resulting from the perturbative expansion of the transition amplitude in quantum field theories, and the relation of this expansion to the Hausdorff series, leads naturally to the consideration of an infinite-dimensional Lie subalgebra and the corresponding enveloping Hopf algebra, to which the elements of this series are associated. We show that in the context of these structures the power sum symmetric functionals of the perturbative expansion are Hopf primitives and that they are given by linear combinations of Hall polynomials, or diagrammatically by Hall trees. We show that each Hall tree corresponds to sums of Feynman diagrams, each with the same number of vertices, external legs and loops. In addition, since the Lie subalgebra admits a derivation endomorphism, we also show that with respect to it these primitives are cyclic vectors generated by the free propagator, and thus provide a recursion relation by means of which the (n+1)-vertex connected Green functions can be derived systematically from the n-vertex ones. By the application of an algebra homomorphism to these primitives and the use of the Connes-Kreimer twisted antipode axiom together with the Birkhoff algebraic decomposition, we investigate their relevance to the renormalization process and arrive in a rather straightforward and heuristic manner at the basic equation of renormalization theory, from which the explicit relations between the bare and physical parameters of the theory may be derived, and from which the corresponding renormalized Green functions, as well as the Renormalization Group equations in the Mass Independent Renormalization Scheme, result. MSC: 16W30, 57T05, 81T15, 81T75. |
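For orientation only, the Birkhoff algebraic decomposition invoked in the abstract is usually stated in the standard Connes-Kreimer form below; the notation is the textbook one and is not necessarily identical to the paper's own.

```latex
% Standard Connes-Kreimer Birkhoff decomposition (textbook form, for orientation):
% the Feynman-rules character \varphi factorizes as
\varphi \;=\; \varphi_-^{\star -1} \star \varphi_+ ,
\qquad
\varphi_-(x) \;=\; -\,T\!\Big[\varphi(x) + \textstyle\sum_{(x)} \varphi_-(x')\,\varphi(x'')\Big],
\qquad
\varphi_+(x) \;=\; (\mathrm{id}-T)\!\Big[\varphi(x) + \textstyle\sum_{(x)} \varphi_-(x')\,\varphi(x'')\Big],
% where T projects onto the pole part (minimal subtraction) and the sum runs over
% the reduced coproduct \Delta'(x) = \sum_{(x)} x' \otimes x''; \varphi_- collects the
% counterterms and \varphi_+ yields the renormalized values.
```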
Psychological evaluation of seafarers. The CIA Factor Assessment comprises psychological testing and related research activities carried out under the auspices of the Wartsila Land and Sea Academy at the WLSA Regional Training Centre in the Philippines in June 2003. The purpose was to evaluate seafarers with respect to their mental and physical health and to obtain their characteristics as individuals and as a group. The CIA (Consciousness, Intuition, Anticipation; professional psychometric testing) is a set of tests covering three essential values to determine one's suitability and fitness for work at sea, developed in order to recognize and improve the potential of professional seafarers. The results of the evaluation were good. The examined persons had good or very good scores; only a few of them were below average, requiring professional individual psychological attention/assistance, extended specialized training, or intensive pertinent testing. |
Motion Synchronization Control of Distributed Multisubsystems With Invariant Local Natural Dynamics This paper addresses a new control strategy for synchronizing two or more distributed and interconnected dynamic systems having communication time delays. The proposed strategy, which uses the Smith predictor principle and delay information, not only achieves synchronization but also preserves the natural local dynamics of each subsystem without being affected by the feedback nature of control. The proposed synchronization scheme is generalized to cases that deal with an arbitrary number of heterogeneous interconnected systems through dynamic scaling of input under a ring-type network configuration. In addition, the possibility of applying the proposed scheme to nonlinear systems is discussed. Simulation and experimental tests are conducted to validate the theoretical results. |
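To illustrate the Smith predictor principle referred to in the abstract, the classical single-loop form is recalled below; this is the standard textbook structure only, whereas the paper's multi-subsystem, networked scheme is more general.

```latex
% Classical single-loop Smith predictor (standard form, shown only to illustrate
% the principle). With nominal delay-free model \hat{G}(s) and delay estimate
% \hat{L}, the equivalent controller is
C_{\mathrm{eq}}(s) \;=\; \frac{C(s)}{1 + C(s)\,\hat{G}(s)\bigl(1 - e^{-s\hat{L}}\bigr)} ,
% and under perfect modelling (\hat{G} = G, \hat{L} = L) the closed loop becomes
T(s) \;=\; \frac{C(s)\,G(s)}{1 + C(s)\,G(s)}\, e^{-sL},
% i.e. the delay is moved outside the feedback loop, so the delay-free local
% dynamics are preserved; this is the property exploited for synchronization.
```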
Message from the Program Chair ACCT 2015, the Fifth International Conference on Advanced Computing & Communication Technologies, will continue to feature comprehensive coverage of traditional and emerging disciplines of Computing, Communication & Control Systems. Researchers and educators will have a unique opportunity to learn about recent advances, get updated on cutting edge applications and techniques, and network with leading researchers. |
Lessons Learned From Seven Space Shuttle Missions Since its founding, NASA has been dedicated to the advancement of aeronautics and space science. The NASA Scientific and Technical Information (STI) Program Office plays a key part in helping NASA maintain this important role. The NASA STI Program Office is operated by Langley Research Center, the lead center for NASA's scientific and technical information. The NASA STI Program Office provides access to the NASA STI Database, the largest collection of aeronautical and space science STI in the world. The Program Office is also NASA's institutional mechanism for disseminating the results of its research and development activities. These results are published by NASA in the NASA STI Report Series, which includes the following report types: TECHNICAL PUBLICATION. Reports of completed research or a major significant phase of research that present the results of NASA programs and include extensive data or theoretical analysis. Includes compilations of significant scientific and technical data and information deemed to be of continuing reference value. NASA's counterpart of peer-reviewed formal professional papers but has less stringent limitations on manuscript length and extent of graphic presentations. TECHNICAL MEMORANDUM. Scientific and technical findings that are preliminary or of specialized interest, e.g., quick release reports, working papers, and bibliographies that contain minimal annotation. Does not contain extensive analysis. CONTRACTOR REPORT. Scientific and technical findings by NASA-sponsored contractors and grantees. TECHNICAL TRANSLATION. English-language translations of foreign scientific and technical material pertinent to NASA's mission. Specialized services that complement the STI Program Office's diverse offerings include creating custom thesauri, building customized databases, organizing and publishing research results... even providing videos. |
Isolation, screening and identification of moisture stress tolerant rhizobacteria from the xerophyte Prosopis juliflora (Sw) Plant growth and productivity are adversely affected by various abiotic (high temperature, moisture stress and salinity) and biotic stresses (pests and diseases). To overcome this problem, various strategies are followed, viz., modified cultivation practices, improved breeding methods and the application of stress-tolerant/resistant microorganisms. The present investigation was designed with the view to address moisture stress by exploring the autochthonous microflora of the xerophyte Prosopis juliflora (Sw). Ten different isolates were obtained from the southern agro-climatic zone of Tamil Nadu and screened in vitro by artificial induction of drought in solution using PEG 6000. Of these, two potential isolates (MLSB 2 & MLSB 6) obtained from the rhizosphere of Prosopis juliflora were selected based on the significant amounts of IAA and proline produced under moisture stress conditions and their ability to withstand an osmotic potential of up to -1.03 MPa. Introduction Plant growth and productivity are adversely affected by various abiotic and biotic stress factors. Amongst all the stresses, drought is a major hindrance to crop production. Growing agricultural crops under dry conditions can be achieved through the utilization of xerotolerant microorganisms associated with xerophytic plants. Xerophytic microflora can be found in environments where they are constantly exposed to water stress over a long span of time. Extremophilic plants and their associated microbiota are involved in beneficial microbe-plant interactions, boosting plant growth and productivity under harsh conditions such as soil salinity and water shortage. Prosopis (Prosopis juliflora) is a xerophyte belonging to the family Fabaceae that occurs profusely in many tropical regions, including Southeast Asia, South Asia, north-eastern Brazil, Australia and Africa. It is also called a phreatophyte: a deep-rooted plant that depends for its water supply upon ground water lying within reach of its roots. This deep-rooted bush/tree is widely propagated in Asia, particularly in India and Pakistan. In many parts of the world it is a well-known plant species for its use as fuel, shade, timber and forage. It also has many ethnomedicinal values for human beings. It has been reported to be resistant to salinity, heat and drought. Moreover, being a leguminous plant, it can also help to fix atmospheric nitrogen in the soil and improve the soil nutrient status. In spite of its allelopathic and phreatophytic effects upon other plant species, the potential of the tree against various abiotic stresses has been explored only recently. PGPR, or stress homeostasis-regulating bacteria (PSHB), enhance the growth of many different crops even under stressed agricultural environments. These microbes confer drought tolerance to plant species other than those from which they were originally isolated, and in some circumstances they enhance plant growth only under moisture-limiting conditions. There are many reports that inoculation of beneficial rhizobacteria imparts drought tolerance to plants. Drought resistance in plants is induced by PGPR through an elicitation called the Rhizobacterial-Induced Drought Endurance and Resilience (RIDER) process, which promotes modifications in the phytohormonal content and antioxidant defense of plants. Many studies have reported that inoculation of beneficial rhizobacteria imparts drought tolerance to plants.
The mechanism behind the drought tolerance of rhizobacteria is associated with the production of osmolytes, antioxidants, volatile compounds, stress proteins and exopolysaccharides, and the up-regulation or down-regulation of stress-responsive genes (Kaur and Asthir, 2017). With the available knowledge on moisture stress mitigation, the present investigation was planned to explore potential xerophytic microflora that could survive under severe moisture stress conditions; as a result, abiotic stress tolerant microorganisms were isolated from the rhizosphere of Prosopis juliflora, a xerophytic tree species of the Fabaceae. Materials and Methods Isolation and screening of drought tolerant xerophytic microflora Rhizosphere soil samples were collected from young saplings/bushes of Prosopis juliflora located in Melur, Madurai (10°1' N, 78°20' E). Soil samples were serially diluted up to 10^-6 dilution and bacteria were isolated using tryptic soy agar (TSA), nutrient agar (NA) and Luria-Bertani agar (LB) by the pour plate method. Isolates differing in morphology were purified. Moisture stress was induced artificially using PEG 6000 in nutrient broth, and the isolates were screened in vitro based on their drought tolerance potential. Rhizosphere isolates were inoculated in nutrient broth containing different concentrations of PEG 6000 (0, 2, 4, 6, 8 & 10%) and incubated for five days at room temperature. The concentrations of PEG 6000 and the corresponding osmotic potentials (MPa), in terms of the negative water potential created in solution, are given below. Characterization of rhizobacteria and identification The bacterial isolates (MLSB 2 & MLSB 6) exhibiting high drought tolerance in PEG 6000-amended nutrient broth were screened and identified by phenotypic and biochemical characterization. For molecular characterization, the genomic DNA of each isolate was extracted following Green and Sambrook. The 16S rRNA amplification of the bacterial isolates was done using the universal primer set consisting of fD1 (5'-AGAGTTTGATCCTGGCTCAG-3') and rD1 (5'-AAGGAGGTGATCCAGCC-3'), and the amplicons were purified and sequenced. The obtained bacterial sequences were compared with the 16S rRNA gene sequences available in the GenBank databases of NCBI by BLASTn search. The 16S rRNA sequences were submitted to the GenBank database and accession numbers were assigned: Bacillus altitudinis (MLSB 2), MT729974; Bacillus pumilus (MLSB 6), MT729998. Plant growth promoting properties Estimation of indole acetic acid using the Salkowski method The bacterial isolates were grown in LB broth for 24 h. The isolates were inoculated into 10 mL of LB broth without PEG and with PEG (4, 6 and 8%), amended with 0.1% tryptophan. They were then incubated for 72 h in a shaker at 28 ± 2 °C (120 rpm). The IAA production was determined using the Salkowski method. After 72 h of growth, the cultures were centrifuged and the cell-free supernatants were used for IAA determination. To 10 mL of supernatant, 2 mL of Salkowski reagent was added and incubated for 10 min. The blank was prepared using sterile broth with Salkowski reagent. The samples were then read for absorbance at 530 nm. IAA standard curves were calibrated using indole acetic acid (Himedia) in LB broth at different concentrations (5, 10, 20, 50 and 100 µg mL^-1), and the sample IAA concentration was calculated by plotting the values against the standard graph.
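The calibration step described above (a linear IAA standard curve read at 530 nm, then conversion of sample absorbance to concentration) can be sketched as in the Python snippet below; the absorbance values are invented for illustration only, and µg mL^-1 units are assumed for the standards.

```python
# Sketch of the Salkowski-assay calibration: linear standard curve from IAA
# standards (5-100 ug/mL, A530), then conversion of a sample reading.
import numpy as np

std_conc = np.array([5, 10, 20, 50, 100], dtype=float)   # ug/mL IAA standards
std_abs  = np.array([0.05, 0.10, 0.19, 0.48, 0.95])       # A530, hypothetical readings

slope, intercept = np.polyfit(std_conc, std_abs, 1)       # A530 = slope*C + intercept

def iaa_conc(a530):
    """Convert a sample A530 reading to IAA concentration (ug/mL)."""
    return (a530 - intercept) / slope

print(f"sample at A530 = 0.32 -> {iaa_conc(0.32):.1f} ug/mL IAA")
```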
Estimation of endogenous osmolyte production by the bacterial isolates The major osmolytes produced by the bacterial isolates were estimated under both stressed and non-stressed conditions (Qurashi and Sabri, 2013). Day-old cultures of the drought tolerant bacteria were inoculated into 10 mL LB broth at 4, 6 and 8% PEG and without PEG (0%). The cultures were incubated for 48 h at 32 °C on a shaker (180 rpm) and the OD of each culture was read at 600 nm. The cells were harvested by centrifugation at 5000 rpm for 5 min and resuspended in sterile distilled water. To estimate endogenous proline, the bacterial cells were boiled for 20 min and the respective cell extracts were used for osmolyte estimation. Endogenous proline content of the drought tolerant isolates For proline estimation, 2 mL of cell-free culture extract was transferred to separate tubes and kept in a water bath at 100 °C for 20 min. To each tube, 2 mL of acid ninhydrin and 2 mL of glacial acetic acid were added, mixed gently and placed in a water bath at 100 °C for 1 h. The appearance of a red colour indicates the presence of proline in the sample. The red chromophore of proline was separated from the culture samples by the addition of 4 mL of toluene with vigorous mixing; within a few seconds the proline was transferred to the toluene layer. The concentration of proline was determined from the absorbance at 520 nm in a UV-Vis spectrophotometer. Standard curves were prepared with pure L-proline in sterile distilled water (0.2, 0.4, 0.6, 0.8 and 1 mg proline mL⁻¹) and the sample proline concentration was calculated using the standard curves. An earlier study isolated 65 Bacillus spp., of which 10 potential species grew at a minimal water potential of −0.73 MPa and were screened in vitro for PGP traits under stressed and non-stressed conditions. In the present study, among the 10 isolates, MLSB2 and MLSB6 recorded growth at up to 10 per cent PEG 6000 (−1.03 MPa), whereas earlier work reported that Bacillus spp. tolerate a minimal water potential of −0.73 MPa and that the Bacillus isolate HYD-17 produced the highest amount of IAA (16.2 and 32.5 µg mL⁻¹ protein under stressed and non-stressed conditions, respectively). IAA production was reduced during artificial induction of moisture stress by PEG 6000 at 4, 6 and 8% compared with the non-stressed condition (0% PEG). Fig. 1 shows that IAA production was higher at the 6% concentration and reduced at the 8% PEG concentration irrespective of the isolate. The per cent reduction of IAA was lowest in Bacillus pumilus (59.9% at 6% PEG), followed by Bacillus altitudinis (62.9% at 6% PEG). The standard strain recorded a higher per cent reduction (69.46% at 6% PEG) compared with the two isolates (Table 3). Estimation of proline under moisture stressed and non-stressed conditions Proline is an osmoprotectant produced when a plant is subjected to stress conditions such as drought, high temperature and salinity, and it is an adaptive mechanism that helps the plant withstand harsh stress conditions. Similarly, when plant growth promoting rhizobacteria are subjected to drought induced in solution by PEG 6000, they also produce osmoprotectants such as proline, glycine betaine and trehalose, which provide stress tolerance to the rhizobacteria. Estimating the internally produced proline concentration of rhizobacteria under moisture stress conditions, in comparison with the non-stressed condition, gives insight into the potential of the microorganisms to counteract the effects of abiotic stress.
Results and discussion In the present investigation, the bacterial strain Bacillus altitudinis MLSB2 produced a significant amount of proline (698.1 µg mL⁻¹) at −0.66 MPa osmotic potential compared with the standard strain Bacillus megaterium MTCC 453 (685.2 µg mL⁻¹). The proline concentration increased at 4 and 6% PEG and decreased at 8% PEG compared with the non-stressed condition (Table 4, Fig. 2). Similarly, in the study conducted by Sandhya et al., a significant increase in proline concentration and total soluble sugar content was observed under stressed conditions as compared with non-stressed conditions. A study conducted by Aswathy et al. reported that B. altitudinis FD48 produced indole acetic acid (IAA) (2.82 µg/mL) compared with two other isolates (Bacillus pumilus FS20 and Bacillus aquimaris MD02) even under PEG-induced drought conditions; under normal conditions, B. altitudinis FD48 produced 8.0 µg/mL. Conclusion Moisture stress is one of the major abiotic stresses posing a serious problem to crop cultivation, and several mitigation and adaptation strategies have been followed worldwide to resolve this problem. Inoculation with PGPR is one technology for alleviating the ill effects created by moisture stress. In the present study an attempt was made to explore xerophytic microflora from Prosopis juliflora to alleviate moisture stress, and two potential strains having essential adaptive mechanisms against moisture stress were isolated from the rhizosphere of Prosopis juliflora (Bacillus altitudinis, MT729974, and Bacillus pumilus, MT729998).
On the strong regularity of degenerate additive noise driven stochastic differential equations with respect to their initial values Recently, stochastic differential equations (SDEs) with smooth coefficient functions have been constructed in the literature which have an arbitrarily slowly converging modulus of continuity in the initial value. In these SDEs it is crucial that some of the first order partial derivatives of the drift coefficient functions grow at least exponentially and, in particular, quicker than any polynomial. However, in applications SDEs typically have coefficient functions whose first order partial derivatives are polynomially bounded. In this article we study whether arbitrarily bad regularity phenomena in the initial value may also arise in the latter case, and we partially answer this question in the negative. More precisely, we show that every additive noise driven SDE which admits a Lyapunov-type condition (which ensures the existence of a unique solution of the SDE) and which has a drift coefficient function whose first order partial derivatives grow at most polynomially is at least logarithmically Hölder continuous in the initial value. Introduction The regularity analysis of nonlinear stochastic differential equations (SDEs) with respect to their initial values is an active research topic in stochastic analysis. In particular, it has recently been revealed in the literature that there exist SDEs with smooth coefficient functions which have very poor regularity properties in the initial value. More precisely, it has been shown that there exist additive noise driven SDEs with infinitely often differentiable drift coefficient functions which have a modulus of continuity in the initial value that converges to zero slower than with any polynomial rate. Moreover, additive noise driven SDEs with infinitely often differentiable drift coefficient functions have been constructed which even have an arbitrarily slowly converging modulus of continuity in the initial value. In these SDEs it is crucial that the first order partial derivatives of the drift coefficient functions grow at least exponentially and, in particular, quicker than any polynomial. However, in applications SDEs typically have coefficient functions whose first order partial derivatives grow at most polynomially. In particular, in many applications the coefficient functions of the SDEs under consideration are polynomials. In view of this, the natural question arises whether such arbitrarily bad regularity phenomena in the initial value may also arise in the case of SDEs with coefficient functions whose first order partial derivatives grow at most polynomially. It is the subject of the main result of this article to partially answer this question in the negative. More precisely, the main result of this article, Theorem 1.1 below, shows that every additive noise driven SDE which admits a Lyapunov-type condition (which ensures the existence of a unique solution of the SDE) and which has a drift coefficient function whose first order partial derivatives grow at most polynomially is at least logarithmically Hölder continuous in the initial value. Theorem 1.1.
Let d, m ∈ N, T, ∈ [0, ∞), ∈ [0, 2), ∈ C 1 (R d, R d ), ∈ R dm, V ∈ C 1 (R d, [0, ∞)), let : R d → [0, ∞) and |||||| : R m → [0, ∞) be norms, assume for all x, h ∈ R d, z ∈ R m that (x)h ≤ 1 + x h, V (x)(x + z) ≤ (1 + |||z||| )V (x), and x ≤ V (x), let (, F, P) be a probability space, and let W : → R m be a standard Brownian motion with continuous sample paths. Then (i) there exist unique stochastic processes X x : → R d, x ∈ R d, with continuous sample paths such that for all x ∈ R d, t ∈ , ∈ it holds that X x (t, ) = x + t 0 (X x (s, )) ds + W (t, ) and (ii) it holds for all R, q ∈ to be locally -Hlder continuous in the initial value. Even more, we show that under the hypotheses of Theorem 1.1 the upper bound in can not be substantially improved in general. In the following we briefly sketch the key ideas of our proof of inequality in Theorem 1.1. A straightforward approach to estimating the expectation of the Euclidean distance between two solutions of the SDE with different initial values (cf. the left hand side of ) would be (i) to apply the fundamental theorem of calculus to the difference of the two solutions with the derivative being taken with respect to the initial value, thereafter, (ii) to employ the triangle inequality to get the Euclidean norm inside of the Riemann integral which has appeared due to the application of the fundamental theorem of calculus, and, finally, (iii) to try to provide a finite upper bound for the expectation of the Euclidean operator norm of the derivative processes of solutions of with respect to the initial value. This approach, however, fails to work in general under the hypotheses of Theorem 1.1 as the derivative processes of solutions may have very poor integrability properties and, in particular, may have infinite absolute moments. A key idea in this article for overcoming the latter obstacle is to estimate the expectation of the Euclidean distance between the two solutions in terms of the expectation of a new distance between the two solutions, which is induced from a very slowly growing norm-type function. As in the approach above, we then also apply the fundamental theorem of calculus to the difference of the two solutions. However, in the latter approach the derivative processes of solutions appear only inside of the argument of the very slowly growing norm-type function and the expectation of the resulting random variable is finite. We then estimate the expectation of this random variable by employing properties of the derivative processes of solutions and the assumption that the first order partial derivatives of the drift coefficient function grow at most polynomially and, thereby, finally establish inequality. The remainder of this article is organized as follows. In Section 2 we establish an essentially well-known existence and uniqueness result for perturbed ordinary differential equations. In Section 3 we recall well-known facts on measurability properties of function limits and in Section 4 we establish a well-known measurability result for solutions of additive noise driven SDEs. In Section 5 we prove existence, uniqueness, and pathwise differentiability with respect to the initial value and in Section 6 we present a few elementary integrability properties for solutions of additive noise driven SDEs with a drift coefficient function which admits a Lyapunov-type condition. In Section 7 we establish an abstract regularity result for solutions of certain additive noise driven SDEs with respect to their initial values. 
This result together with the results of Sections 5 and 6 is then used to prove the main result of this article, Theorem 8.4, in Section 8. 2 Existence of solutions of perturbed ordinary differential equations (ODEs) In this section we employ suitable Lyapunov-type functions to establish in Lemma 2.2 in Subsection 2.2 below an essentially well-known existence and uniqueness result for a certain class of perturbed ordinary differential equations (ODEs). Our proof of Lemma 2.2 employs the essentially well-known a priori estimate in Lemma 2.1 in Subsection 2.1 below. Our proof of Lemma 2.1 uses a suitable Lyapunov-type function (denoted by V : R d → R in Lemma 2.1 below). A priori estimates for solutions of perturbed ODEs, and Then it holds that sup t∈J (w(t)) + w(t) < ∞ and Proof of Lemma 2.1. Throughout this proof assume w.l.o.g. that sup J > 0, let I ⊆ be the set which satisfies I = (0, sup J), let K ∈ satisfy and let z : J → R d be the function which satisfies for all t ∈ J that Observe that the fact that and w are continuous functions ensures that This and the hypothesis that w is a continuous function ensure that Next note that and imply that for all t ∈ J it holds that The hypothesis that and y are continuous functions and the fundamental theorem of calculus hence ensure that for all t ∈ I it holds that z| I ∈ C 1 (I, R d ) and (z| I ) (t) = (y(t)). This, the assumption that V ∈ C 1 (R d, [0, ∞)), and the chain rule imply that for all t ∈ I it holds that V (z| I ) ∈ C 1 (I, [0, ∞)) and Furthermore, note that the hypothesis that V ∈ C 1 (R d, [0, ∞)) and the hypothesis that y, w, and are continuous functions establish that J ∋ t → V (z(t))(y(t)) ∈ R is a continuous function. Combining this and with the fundamental theorem of calculus and the fact that z = shows that for all t ∈ I it holds that The hypothesis that for all x ∈ R d, u ∈ R m it holds that V (x)(x + u) ≤ (u)V (x) and hence prove that for all t ∈ I it holds that The assumption that sup J > 0 and the fact that J ∈ t → V (z(t)) ∈ [0, ∞) is a continuous function therefore imply that for all u ∈ {s ∈ J : s = sup J} it holds that This,, and the fact that V (z) = V () demonstrate that for all t ∈ J it holds that Combining this and with Gronwall's integral inequality (see, e.g., Grohs et al. for t ∈ J in the notation of Grohs et al. )) proves that for all t ∈ J it holds that The triangle inequality and the hypothesis that for all The proof of Lemma 2.1 is thus completed. Existence of solutions of perturbed ODEs locally Lipschitz continuous function, and assume for all. Then there exists a unique y ∈ C(, R d ) such that for all t ∈ it holds that Proof of Lemma 2.2. Throughout this proof assume w.l.o.g. that T > 0. Note that the hypothesis that is a locally Lipschitz continuous function, the hypothesis that w is a continuous function, and [12, in the notation of ) ensure that there exists an interval J ⊆ with 0 ∈ J and sup J > 0 such that there exists a unique x ∈ C(J, R d ) which satisfies for all t ∈ J that Lemma 2.1 hence proves that sup t∈J < ∞ and Combining this with ensures that sup J = T. Therefore, we obtain that J = [0, T ) or J = . This, the hypothesis that is a locally Lipschitz continuous function,, and [12, In the next step we observe that and the fact that sup The fact that y is a continuous function and the fact that Combining this with establishes. The proof of Lemma 2.2 is thus completed. 
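For orientation, the Gronwall-type estimate invoked repeatedly in the proofs above and below is recalled here in its classical integral form; the precise variant cited from Grohs et al. may differ in its formulation, so this is only a reminder of the standard fact. If u, β: [0, T] → [0, ∞) are continuous, α ∈ [0, ∞), and
\[
u(t) \;\le\; \alpha + \int_0^t \beta(s)\, u(s)\, \mathrm{d}s \qquad \text{for all } t \in [0, T],
\]
then
\[
u(t) \;\le\; \alpha \exp\!\Bigl( \int_0^t \beta(s)\, \mathrm{d}s \Bigr) \qquad \text{for all } t \in [0, T].
\]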
Measurability properties In this section we recall in Lemmas 3.1-3.4 in Subsection 3.1 and in Lemmas 3.5 and 3.6 in Subsection 3.2 below a few well-known facts on measurability properties of suitable function limits. For completeness we also include in this section proofs for Lemmas 3.1-3.6. Proof of Lemma 3.1. Note that the hypothesis that for all i ∈ I it holds that X i is an F/B()-measurable function and the hypothesis that I is at most countable establish that for all c ∈ R it holds that The proof of Lemma 3.1 is thus completed. Lemma 3.2. Let (, F) be a measurable space, let Y : → R be a function, and let X n : → R, n ∈ N, be a sequence of F/B(R)-measurable functions which satisfies for all ∈ that lim sup n→∞ |X n ()−Y ()| = 0. Then it holds that Y is an F/B(R)-measurable function. Proof of Lemma 3.2. First, observe that the assumption that for all ∈ it holds that lim sup n→∞ |X n ()−Y ()| = 0 implies that for all ∈ it holds that N ∋ n → X n () ∈ R is a convergent sequence and lim Moreover, note that Lemma 3.1 ensures that for all n ∈ N it holds that This and show that for all c ∈ R it holds that The proof of Lemma 3.2 is thus completed. let X n,i : → R, n ∈ N, i ∈ {1, 2,..., d}, be the functions which satisfy for all n ∈ N, ∈ that X n () = (X n,1 (), X n,2 (),..., X n,d ()), and let Y i : → R, i ∈ {1, 2,..., d}, be the functions which satisfy for all ∈ that Observe that the fact that all norms on R d are equivalent ensures that K < ∞. This implies that for all n ∈ N, i ∈ {1, 2,..., d}, ∈ it holds that The assumption that for all ∈ it holds that lim sup n→∞ X n () − Y () = 0 and the fact that K < ∞ hence show that for all i ∈ {1, 2,..., d}, ∈ it holds that lim sup Furthermore, observe that the assumption that for all n ∈ N it holds that X n is an Proof of Lemma 3.4. Throughout this proof let D n : → R d, n ∈ N, be the sequence of functions which satisfies for all n ∈ N, ∈ that and let : R d → [0, ∞) be the d-dimensional Euclidean norm. Note that for all ∈ it holds that lim sup Furthermore, observe that the assumption that for all z ∈ R d it holds that y z is an F/B(R d )-measurable function ensures that for all n ∈ N it holds that D n is an F/B(R d )measurable function. Combining this and with Lemma 3. The proof of Lemma 3.4 is thus completed. Lemma 3.5. Let T ∈ [0, ∞), let (, F, P) be a probability space, and let Y : → R be a stochastic process with continuous sample paths. Then Measurability properties for stochastic processes Proof of Lemma 3.5. Observe that the hypothesis that for all ∈ it holds that ∋ t → Y (t, ) ∈ R is a continuous function and the fact that ∩ Q is dense in imply that for all ∈ it holds that Combining this with Lemma 3.1 (with (, F) ← (, F), The proof of Lemma 3.5 is thus completed. Proof of Lemma 3.6. Throughout this proof let Observe that Lemma 3.5 implies that for all x ∈ R d, ∈ it holds that The assumption that for all t ∈ , ∈ it holds that Hence, we obtain that for all ∈ it holds that This establishes (i). In the next step we combine and the fact that I is an at most countable set with Lemma 3.1 (with (, F) ← (, F), ) (x,t)∈I in the notation of Lemma 3.1) to obtain (ii). The proof of Lemma 3.6 is thus completed. Measurability properties for solutions of SDEs In this section we establish in Lemma 4.5 in Subsection 4.3 below the well-known fact that pathwise solutions of certain additive noise driven SDEs are stochastic processes. 
4.1 Time-discrete approximations for deterministic differential equations (DEs) Then it holds for all n ∈ {0, 1,..., N } that Proof of Lemma 4.1. Throughout this proof let u 0, u 1,..., u N ∈ R be the real numbers which satisfy for all n ∈ {0, 1, 2,..., N } that Hence, we obtain that for all n ∈ {0, 1,..., N − 1} it holds that This implies that for all n ∈ {0, 1,..., N } it holds that Moreover, observe that induction shows that for all n ∈ {0, 1,..., N } it holds that Combining this with establishes. This completes the proof of Lemma 4.1. and Then it holds that Proof of Lemma 4.2. Throughout this proof let R ∈ [0, ∞) be the real number which satisfies let L ∈ [0, ∞) be the real number which satisfies let N ∈ {0, 1,..., N }, N ∈ N, be the numbers which satisfy for all N ∈ N that and let N ∈ [0, ∞), N ∈ N, be the real numbers which satisfy for all N ∈ N that Hence, we obtain that for all N ∈ N, n ∈ {0, 1,..., N } it holds that This and imply that for all N ∈ N, Combining this with shows that N ∈ N, n ∈ {0, 1,..., N } it holds that In the next step we observe that the fact that f is a continuous function ensures that there Therefore, we obtain that lim sup This and prove that there exists M ∈ N such that for all Combining this with shows that for all N ∈ {M, M + 1,... } it holds that This and show that for all Combining this with proves. The proof of Lemma 4.2 is thus completed. Then it holds that Proof of Lemma 4.3. Throughout this proof let Z : Note that and the hypothesis that w is a continuous function imply that for all In addition, observe that the hypothesis that Y, w, and f are continuous functions shows that Moreover, note that the triangle inequality ensures that for all t This demonstrates that for all t ∈ , r ∈ (0, ∞), x, y ∈ R d with x = y and x + y ≤ r it holds that Combining this with ensures that for all r ∈ (0, ∞) it holds that Moreover, observe that the hypothesis that for all t Next we combine,, and to obtain that for all N ∈ N, n ∈ {0, 1,..., N − 1} it holds that Furthermore, observe that the assumption that for all N ∈ N it holds that Y N = Y ensures that for all N ∈ N it holds that Combining this,,,, and Moreover, note that for all N ∈ N, n ∈ {0, 1,..., N } it holds that Combining this with establishes that This completes the proof of Lemma 4.3. Time-continuous approximations for deterministic differential equations and assume that Then it holds that Next observe that the hypothesis that Y : In the next step we observe that for all ∈ (0, ∞), This,,, and the triangle inequality show that for all ∈ (0, ∞), The triangle inequality and hence demonstrate that for all ∈ (0, ∞), This establishes. The proof of Lemma 4.4 is thus completed. Measurability properties for solutions of SDEs and let Y : and Then it holds that Y is a stochastic process. In the next step we observe that implies that for all N ∈ N, Combining this with the fact that for all This and prove that for all N ∈ N, Combining this,, and, with Lemma 4. for ∈ in the notation of Lemma 4.4) establishes that for all ∈ it holds that lim sup Next observe that the assumption that is an F/B(R d )-measurable function. Furthermore, observe that, the hypothesis that f is a continuous function, and the hypothesis that is an F/B(R d )-measurable function. Combining this and with the induction principle proves that for all N ∈ N, n ∈ {0, 1,..., N } it holds that is an F/B(R d )-measurable function. This and demonstrate that for all is an F/B(R d )-measurable function. The proof of Lemma 4.5 is thus completed. 
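The time-discrete approximations employed in this section are Euler-type schemes along a fixed path of the driving noise. The following Python sketch illustrates such a scheme for an additive noise driven SDE of the form considered in this article; it is an illustration only, not the construction used in the proofs, and the drift in the usage example is a hypothetical placeholder.

import numpy as np

def euler_path(mu, x0, increments, dt):
    # Euler scheme X_{k+1} = X_k + mu(X_k) * dt + (W(t_{k+1}) - W(t_k)) for the
    # additive noise SDE X(t) = x0 + int_0^t mu(X(s)) ds + W(t), evaluated along
    # one fixed sample path of the driving Brownian motion.
    x = np.empty((len(increments) + 1, len(x0)))
    x[0] = x0
    for k, dw in enumerate(increments):
        x[k + 1] = x[k] + mu(x[k]) * dt + dw
    return x

# usage: one-dimensional example with the (hypothetical) polynomial drift mu(x) = x - x**3
rng = np.random.default_rng(0)
T, N = 1.0, 1000
dt = T / N
brownian_increments = rng.normal(0.0, np.sqrt(dt), size=(N, 1))
path = euler_path(lambda x: x - x**3, np.array([0.5]), brownian_increments, dt)
print(path[-1])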
Differentiability with respect to the initial value for SDEs In this section we establish in Lemma 5.4 in Subsection 5.3 below an existence, uniqueness, and regularity result for solutions of certain additive noise driven SDEs. Our proof of Lemma 5.4 exploits the related regularity results for solutions of certain ODEs in Lemmas 5.1-5.3 below. For the reader's convenience we include in this section also detailed proofs for Lemmas 5.1-5.4. Local Lipschitz continuity for deterministic DEs Proof of Lemma 5.1. Throughout this proof assume w.l.o.g. that T > 0, let D : let R ∈ (0, ∞) be the real number which satisfies and let x ∈ , x ∈ R d, be the real numbers which satisfy for all x ∈ R d that Note that In addition, observe that the hypothesis that The fundamental theorem of calculus hence implies that for all t Moreover, note that ensures that for all The triangle inequality,, and hence show that for all Gronwall's integral inequality (see, e.g., Grohs et al. The triangle inequality therefore proves that for all In addition, observe that for all v ∈ R d with v − w < (2 exp(KT )) −1 it holds that This, the assumption that T > 0, and the hypothesis that for all x ∈ R d it holds that y x is a continuous function ensure that for all Combining this and the hypothesis that for all x ∈ R d it holds that y x is a continuous function with implies that for all v ∈ R d with v − w < (2 exp(KT )) −1 it holds that The fact that for all x ∈ R d it holds that y x is a continuous function and therefore ensure that for all t ∈ The proof of Lemma 5.1 is thus completed. Differentiability with respect to the initial value for deterministic DEs and Proof of Lemma 5.2. Throughout this proof let :, be the real numbers which satisfy for all r ∈ [0, ∞) that let R x ∈ [0, ∞), x ∈ R d, be the real numbers which satisfy for all x ∈ R d that and let K r ∈ , r ∈ [0, ∞), satisfy for all r ∈ [0, ∞) that Observe that for all r ∈ [0, ∞) it holds that In addition, observe that the fact that Note that Lemma 5.1 proves that there exist In the next step we observe that implies that for all w ∈ R d, t ∈ , u ∈ it holds that This,, and the triangle inequality hence prove that for all v, w ∈ R d, t, u ∈ with v − w < r w it holds that Therefore, we obtain that for all v, w ∈ R d, t, u ∈ , ∈ (0, ∞) with v − w < min{r w, (2L w ) −1 } and |t − u| < (2C Rw + 1) −1 it holds that This establishes that is a continuous function. Next note that there exist unique v This implies that for all Combining this with proves that for all This shows that for all x ∈ R d, t ∈ it holds that is a linear function. Next observe that the fact that [0, In addition, note that implies that for all, ∈ (0, ∞), Combining this with shows that for all ∈ (0, ∞), z, The triangle inequality therefore implies that for all ∈ (0, ∞), z, The fundamental theorem of calculus and hence prove that for all ∈ (0, ∞), = 1 0 D t, y z (t) + u(y z+k (t) − y z (t)) (y z+k (t) − y z (t)) − D(t, y z (t))(v z,k (t)) du ≤ 1 0 D t, y z (t) + u(y z+k (t) − y z (t)) (y z+k (t) − y z (t)) − D(t, y z (t))(v z,k (t)) du Combining this with and shows that for all ∈ (0, ∞), Gronwall's integral inequality (see, e.g., Grohs et al. This establishes that for all ∈ (0, ∞), Therefore, we obtain that for all z ∈ R d, t ∈ it holds that Combining this with shows that for all t ∈ This and establish (ii). Next note that the triangle inequality and imply that for all Gronwall's integral inequality (see, e.g., Grohs et al. 
In addition, observe that and the triangle inequality imply that for all This ensures that for all x, z ∈ R d with x − z < min{1, r z } it holds that Combining this with proves that for all Next note that implies that for all ∈ (0, ∞), This, Gronwall's integral inequality (see, e.g., Grohs et al. Moreover, and show that for all Combining this with proves that for all ∈ (0, ∞), z, Combining this with establishes that is a continuous function. This and prove (i). The proof of Lemma 5.2 is thus completed. Proof of Lemma 5.3. Throughout this proof let f : and let z x : Observe that the hypothesis that ∈ C 1 (R d, R d ) and the hypothesis that w is a continuous function show that for all t ∈ , x ∈ R d it holds that The hypothesis that ∈ C 1 (R d, R d ) and the hypothesis that w is a continuous function Next we combine In addition, note that the assumption that for all x ∈ R d it holds that y x is a continuous function and the assumption that w is a continuous function imply that for all x ∈ R d it holds that z x is a continuous function. Combining this,,, and with and Observe that and imply that. This and establish that for all Combining this with proves that for all The proof of Lemma 5.3 is thus completed. Differentiability with respect to the initial value for SDEs, let (, F, P) be a probability space, and let W : → R m be a stochastic process with continuous sample paths. Then Proof of Lemma 5.4. First, observe that Lemma 2. ∈ in the notation of Lemma 2.2) proves that there exist unique y In addition, note that the hypothesis that ∈ C 1 (R d, R d ) ensures that for all r ∈ (0, ∞) it holds that Combining this and with Lemma 4. x ∈ R d in the notation of Lemma 4.5) shows that for all x ∈ R d it holds that ∋ (t, ) → y x (t) ∈ R d is a stochastic process. This and establish (i). Next note that and Lemma 5. 6 Integrability properties for stochastic differential equations (SDEs) In this section we present in Lemma 6.1 in Subsection 6.1 below, in Lemmas 6.3-6.5 in Subsection 6.2 below, and in Lemma 6.6 in Subsection 6.3 below a few elementary integrability properties for standard Brownian motions (see Lemmas 6.1 and Lemmas 6.3-6.5) and solutions of certain additive noise driven stochastic differential equations (see Lemma 6.6). Lemma 6.1 establishes exponential integrability properties for one-dimensional standard Brownian motions and is a straightforward consequence of Ledoux-Talagrand . Lemmas 6.3 and 6.4 establish exponential integrability properties for multidimensional standard Brownian motions. Our proof of Lemma 6.3 uses Lemma 6.1 and an application of the well-known inequality for real numbers in Lemma 6.2 below. Lemma 6.4, in turn, is an immediate consequence of Lemma 6.3. Lemma 6.5 establishes polynomial integrability properties for multi-dimensional standard Brownian motions and is a direct consequence of Lemma 6.4. Lemmas 6.1 -6.5 are essentially well-known and for the reader's convenience, we include in this section full proofs for these lemmas. Integrability properties for scalar Brownian motions Proof of Lemma 6.1. Throughout this proof assume w.l.o.g. that T > 0 and > 0 and let Note that the fact that 2− 2 + 2 = 1 and the fact that for all a, b ∈ [0, ∞), p, q ∈ (0, ∞) with 1 p + 1 q = 1 it holds that ab ≤ a p p + b q q (Young inequality) implies that for all ∈ it holds that c sup Furthermore, observe that Lemma 3.5 ensures that for all ∈ [0, ∞) it holds that is an F/B(R)-measurable function. 
In addition, note that for all ∈ [0, 1 2T ) it holds that This completes the proof of Lemma 6.1. Integrability properties for multi-dimensional Brownian motions Proof of Lemma 6.2. Throughout this proof let : R → R, ∈ [1, ∞), be the functions which satisfy for all ∈ [1, ∞), x ∈ R that Note that for all ∈, m ∈ N, a 1, a 2,..., a m ∈ R it holds that Next observe that for all ∈ [1, ∞) it holds that is a convex function. Jensen's inequality hence establishes that for all m ∈ N, a 1,..., a m ∈ R it holds that This implies that for all m ∈ N, a 1,..., a m ∈ R it holds that The proof of Lemma 6.2 is thus completed. and let K ∈ satisfy Note that the fact that all norms on R m are equivalent ensures that K < ∞. Hence, we obtain that for all ∈ it holds that sup This, the fact that < 2, and Lemma 6.2 show that for all ∈ it holds that c sup Hence, we obtain that for all ∈ it holds that exp c sup In the next step we note that Lemma 3.5 (with T ← T, (, F, P) ← (, F, P), Y ← ( ∋ (t, ) → W (t, ) ∈ [0, ∞)) in the notation of Lemma 3.5) ensures that for all ∈ it holds that Combining this with shows that for all ∈ it holds that In addition, observe that Lemma 6.1 (with T ← T, c ← 2 m cK, ←, (, F, P) ← (, F, P), W ← W i for i ∈ {1, 2,..., m} in the notation of Lemma 6.1) proves Note that the fact that W 1, W 2,..., W m are independent stochastic processes,, and establish that The proof of Lemma 6.3 is thus completed. (, F, P) be a probability space, let W : → R m be a standard Brownian motion with continuous sample paths, and let : Proof of Lemma 6.4. Note that the assumption that for all z ∈ R m it holds that (z) ≤ C(1 + z ) implies that for all c ∈ [0, ∞), ∈ it holds that Combining this with Lemma 6.3 establishes that for all c ∈ [0, ∞) it holds that This completes the proof of Lemma 6.4. Note that y. Integrability properties for solutions of SDEs, let (, F, P) be a probability space, let W : → R m be a stochastic process with continuous sample paths, let X Then (i) it holds for all R, r ∈ [0, ∞) that is an F/B()-measurable function and Proof of Lemma 6.6. Throughout this proof let Y, Z : → [0, ∞) be the functions which satisfy for all ∈ that Note that Lemma 2., ∈ in the notation of Lemma 2.1) ensures that for all x ∈ R d, ∈ it holds that The hypothesis that for all x ∈ R d, ∈ it holds that ∋ t → X x (t, ) ∈ R d is a continuous function and the fact that for all a, b ∈ R, r ∈ [0, ∞) it holds that |a + b| r ≤ 2 r (|a| r + |b| r ) hence ensure that for all ∈, R, r ∈ [0, ∞) it holds that Next we combine the assumption that for all ∈ it holds that ∋ t → W (t, ) ∈ R m is a continuous function and the assumption that is a continuous function with Lemma 3.5 to obtain that (a) for all ∈ it holds that Moreover, note that Lemma 5. for ∈ in the notation of Lemma 5.3) ensures that for all ∈ it holds that Combining this with Lemma 3.6 shows that for all R, r ∈ [0, ∞) it holds that is an F/B()-measurable function. In the next step we observe that the assumption that for all c ∈ [0, ∞) it holds that E sup t∈∩Q exp c (W (t)) < ∞,, and the fact that Y is an F/B([0, ∞))-measurable function ensure that for all r ∈ [0, ∞) it holds that In addition, note that the hypothesis that V is a continuous function implies that for all R ∈ [0, ∞) it holds that sup Furthermore, observe that, the fact that Z is an F/B([0, ∞))-measurable function, and the hypothesis that for all c This completes the proof of Lemma 6.6. 
Conditional regularity with respect to the initial value for SDEs In this section we study in Lemmas 7.4 and 7.5 in Subsection 7.2 below regularity properties of solutions of certain additive noise driven SDEs with respect to their initial values. In particular, in Lemma 7.5 we establish in inequality Conditional local Lipschitz continuity for deterministic DEs Then it holds for all Proof of Lemma 7.1. Throughout this proof let D Note that the assumption that for all t ∈ it holds that ( Moreover, observe that the fact that for all x ∈ R d it holds that z x is a continuous function, the fact that for all t ∈ it holds that ( is a B()/B(R d )-measurable function. This and the hypothesis that for all In addition, observe that the hypothesis that for all w ∈ R d it holds that z w is a continuous function and the hypothesis that is a continuous function ensure that for all w ∈ R d it holds that sup This,, and ensure that for all w, h ∈ R d, t ∈ it holds that Combining this,, and with Gronwall's integral inequality (see, e.g., Grohs et al. Combining this with shows that for all x, y ∈ R d, t ∈ it holds that This completes the proof of Lemma 7.1. Conditional sub-Hoelder continuity for SDEs Proof of Lemma 7.2. Throughout this proof let f : (1, ∞) → [0, ∞) be the function which satisfies for all z ∈ (1, ∞) that Note that f is a continuously differentiable function and for all z ∈ [e q, ∞) it holds that Hence, we obtain that f | [e q,∞) is an increasing function. This establishes. The proof of Lemma 7.2 is thus completed. Proof of Lemma 7.3. Throughout this proof assume w.l.o.g. that q > 0. Observe that Lemma 7.2 (with q ← q, a ← e q a, b ← e q b in the notation of Lemma 7.2) implies. The proof of Lemma 7.3 is thus completed. assume that E sup x∈{z∈Q d : z ≤R+1} sup t∈∩Q 4q+4 ≤ K, and assume for all let Y : → be the function which satisfies for all ∈ that let A ⊆ be the set which satisfies and let Z : → [0, ∞) be the function which satisfies for all ∈ that Note that implies that for all y ∈ [0, ∞) it holds that Hence, we obtain that for all This and show that for all y ∈ [0, ∞) it holds that G(y) ≤ 1 + y. Next note that the fact that Z is an F/B([0, ∞))-measurable function, the fact that for all ∈ it holds that Z() ≥ 1, and the fact that for all y ∈ [0, ∞) it holds that ln(1 + y) ≤ y show that for all h ∈ {v ∈ R d \ {0} : v < 1} it holds that The fact that for all ∈ it holds that Z() ≥ 1 and Lemma 7.3 hence prove that for all This and (with C ← e q, r ← 2q in the notation of ) establish that for all In the next step we observe that (with C ← 1, r ← 4q in the notation of ) and the fact that for all Furthermore, observe that (with C ← 2, r ← 4 in the notation of ) shows that This and ensure that for all h ∈ {v ∈ R d \ {0} : v < 1} it holds that Combining this,,, and the fact that for all ∈ it holds that Z() ≥ 1 with the Cauchy-Schwarz inequality establishes that for all h ∈ {v ∈ R d \ {0} : v < 1} it holds that Hence, we obtain that for all R, q ∈ [0, ∞), Moreover, observe that the hypothesis that for all ∈ it holds that Furthermore, note that and demonstrate that for all Combining this,,, and with Lemma 7. This completes the proof of Lemma 7.5. In addition, observe that Lemma 6.5 (with d ← d, m ← m, T ← T, r ← c, ←, ←, (, F, P) ← (, F, P), W ← W for c ∈ [0, ∞) in the notation of Lemma 6.5) ensures that for all c ∈ [0, ∞) it holds that E sup and assume that C = sup x∈{v∈R d : v ≤R} sup t∈ E X x (t). Then it holds for all x, y ∈ {v ∈ R d : v ≤ R} with 0 < x − y = 1 that Proof of Lemma 8.3. 
First, note that the preceding estimate implies the claimed bound for all x, y ∈ {v ∈ R^d : ‖v‖ ≤ R} with x ≠ y and ‖x − y‖ < 1. Furthermore, observe that the triangle inequality and the hypothesis that C < ∞ show that for all x, y ∈ {v ∈ R^d : ‖v‖ ≤ R}, t ∈ [0, T] it holds that E[‖X^x(t) − X^y(t)‖] ≤ E[‖X^x(t)‖] + E[‖X^y(t)‖] ≤ 2C. The fact that for all q ∈ [0, ∞) the function [1, ∞) ∋ z → |ln(z)|^q ∈ R is increasing and the fact that for all x, y ∈ {v ∈ R^d : ‖v‖ ≤ R} it holds that ‖x − y‖ ≤ 2R hence show that for all x, y ∈ {v ∈ R^d : ‖v‖ ≤ R} with ‖x − y‖ > 1 it holds that sup_{t∈[0,T]} E[‖X^x(t) − X^y(t)‖] ≤ 2C = 2C |ln(‖x − y‖)|^q / |ln(‖x − y‖)|^q ≤ 2C |ln(2R + 1)|^q / |ln(‖x − y‖)|^q. The proof of Lemma 8.3 is thus completed.
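Combining Lemma 8.3 with the preceding moment and integrability estimates yields the logarithmic Hölder continuity announced in Theorem 1.1, whose conclusion is garbled in the statement above. In a hedged reconstruction inferred from the abstract and from Lemma 8.3 (the exact constants and the admissible range of the parameters q and R are as in the original article and may differ from what is written here), the estimate is of the form
\[
\sup_{t \in [0,T]} \mathbb{E}\bigl[ \lVert X^x(t) - X^y(t) \rVert \bigr]
\;\le\;
\frac{C_{R,q}}{\bigl| \ln\!\bigl( \lVert x - y \rVert \bigr) \bigr|^{q}}
\]
for all x, y ∈ {v ∈ R^d : ‖v‖ ≤ R} with 0 < ‖x − y‖ ≠ 1, where C_{R,q} ∈ [0, ∞) depends on R and q but not on x and y.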
A Multicenter, Prospective, Randomized, Single-Blind, Controlled Clinical Trial Comparing VASER-Assisted Lipoplasty and Suction-Assisted Lipoplasty Background: No scientific comparative study has demonstrated any statistically significant clinical improvement attributable to a new lipoplasty technology relative to traditional suction-assisted lipoplasty. This prospective study used a contralateral study design to evaluate postoperative differences between vibration amplification of sound energy at resonance (VASER)-assisted lipoplasty and suction-assisted lipoplasty. Methods: Twenty female patients between the ages of 20 and 48 years received contralateral treatment with suction-assisted lipoplasty and VASER-assisted lipoplasty in one or more anatomical regions for a total of 33 regions. Patients received suction-assisted lipoplasty on one side of the body and VASER-assisted lipoplasty on the contralateral side. Patients were blinded to technology application. Aspirate was analyzed for blood content, and skin retraction was analyzed by measuring changes in ultraviolet light tattoos. Results: Regarding skin retraction, the VASER-assisted lipoplasty-treated side showed a statistically significant improvement in skin retraction of 53 percent relative to suction-assisted lipoplasty (17 percent per liter versus 11 percent per liter, p = 0.003) with 33 paired sites using a two-tailed t test. Regarding blood loss, VASER-assisted lipoplasty treatment resulted in a statistically significant reduction in blood loss of 26 percent (11.2 versus 14.0 cc blood/100 cc) relative to the suction-assisted lipoplasty side (p = 0.019 with n = 20 using a two-tailed t test). Subjective measures (i.e., pain, swelling, appearance, and patient and physician preference) showed no statistical difference between the two methods at the 6-month evaluation. Conclusions: The VASER-assisted lipoplasty method demonstrated improved skin retraction and reduced blood loss compared with suction-assisted lipoplasty. This is the first study to demonstrate statistically significant and clinically relevant improvements in a new lipoplasty technology relative to suction-assisted lipoplasty. CLINICAL QUESTION/LEVEL OF EVIDENCE: Therapeutic, I.
A generalized entropy characterization of N-dimensional fractal control systems The general properties of N-dimensional multi-component or many-particle systems exhibiting a self-similar hierarchical structure are presented. Assuming there exists an optimal coarse-graining scale at which the quality and diversity of the (box-counting) fractal dimensions exhibited by a given system are optimized, the generalized entropy of each hypercube of the partitioned system is computed, and its shape is shown to be universal, as it also exhibits self-similarity and hence does not depend on the dimensionality N. For certain systems this shape may also be associated with the large-time stationary profile of the fractal density distribution in the absence of external fields (or control). I. INTRODUCTION Multi-component, strongly correlated systems often exhibit non-linear behavior at the microscale leading to emergent phenomena at the macroscale. As P. W. Anderson stated back in 1972, it often happens that "the whole becomes not only more but very different from the sum of its parts". Fingerprints of such emergent phenomena can be identified in hierarchical behavior, or constraints, sometimes associated with fractal behavior. Hierarchical behavior exhibiting self-similarity has been identified in physical, social, biological and technological systems. Employing theoretical tools such as the singularity spectrum or its equivalent, multiscaling exponents (via a Legendre transformation), etc., fractal analysis has been applied to geophysics, medical imaging, market analysis, voice recognition, solid state physics, and so on. Recently, high quality spatiotemporal fractal behavior has been unravelled in connection with built-up areas in planar embeddings, whose diversity of fractal dimensions covered the entire dimensionality spectrum, reflecting the presence of self-organizing principles which strongly constrain the spatial layout of the urban landscape. Here I investigate in detail the general behavior of multicomponent complex systems constrained to exhibit fractal behavior in a space of arbitrary dimensionality N. This work is organized as follows. In Section II the central quantity of this work, the entropy S(D) of a cell as a function of its fractal dimension, is defined. In Section III some estimates for S(D) and for the total number of cell fractal configurations are presented. Section IV concerns self-similarity properties of S(D) which show that its shape is virtually independent of N. Some possible applications of this N-dimensional generalization are commented on in Section V. II. COARSE GRAINING ASSUMPTIONS AND THE FUNCTION S(D). It will prove convenient to represent our multi-component system in terms of black pixels (each a hypercube of side 1) embedded in a space of dimensionality N otherwise filled with white pixels. Let us assume the entire system fits into a single hypercube of side ∆ (and volume ∆^N), and let us divide the system-wide hypercube into a grid of smaller hypercubes (or cells) of side ε, to each of which the standard box-counting method (BCM) is applied in order to assess the fractal dimension of the system at every cell (location). Fig. 1 (caption): Top: the different panels illustrate the diversity in fractal dimensions exhibited by the same N-dimensional, hierarchical system, depending on the level of coarse-graining at which the fractal analysis is carried out. Bottom: the same idea, in terms of the number of fractal dimensions used (for a given precision in D). It is assumed that there exists an intermediate level at which fractal information is optimal in the sense defined in the main text. Clearly, the number of scales involved in the fractal behavior will depend on ε and will always be finite.
In general, however, there is no reason to expect to find the same fractal dimension at every location, and hence in the following I shall refer to this diversity of fractal dimensions as multi-fractal behavior. The choice of ε should not be arbitrary (see Fig. 1). If ε = ∆, there will be only one fractal dimension for the entire system. On the other hand, if ε = 1 each cell encompasses a single pixel, resulting in D = 0 for the (white) pixels that do not belong to the system and D = N > 0 for those that do. In between these opposite cases, it is reasonable to assume the existence of some set of coarse-graining levels at which the number of fractal dimensions spanned by the BCM is maximized (for a given precision in D), because the system is multifractal. In addition, the larger the value of ε, the larger the number of steps that become possible when the BCM is performed, so that the fractal dimension at every cell is shared across a larger number of scales and is, naturally, more properly defined. Thus, we define the optimal coarse-graining level ε (1 < ε < ∆) as the one providing an optimal portrayal of the multi-fractal behavior of the system. For each cell, the total number of possible configurations is 2^(ε^N). The volume V of the pixels in each cell that belong to the system can in principle vary between 0 and ε^N. However, if the system exhibits a local fractal dimension D, the number of system configurations is strongly reduced, and it will also depend on the fraction of the cell volume occupied by the system. This conforms with the notion that an emerging property of a complex system provides a constraining condition regarding its "entropy" (in a generalized sense), viewed here as the log-number of possible configurations of the system compatible with the specified value of the emergent macroscopic variable. Before dwelling on the problem of the number of configurations, we shall start by determining lower L(D) and upper U(D) bounds for the volume of a given cell compatible with a pre-defined fractal dimension D. According to the BCM, there exists a minimal number N_k of hyper-boxes of side 2^k (k = 0, 1, 2, ...) that cover the pixels of the system inside the cell, which satisfy log N_k = B − kD (throughout this work, log will be used as a shorthand for log₂), where D is the fractal dimension and B is a constant which is not associated with the fractal behavior. In our case, B just shifts the linear fitting up or down, according to the multiple values that V can have. Taking k = 0 in this relation one has that log N_0 = B, where N_0 is the number of pixels that belong to the system, i.e. its volume V; thus B = log V. Recalling that we are dealing with a finite number of scales, we will assume that the relation holds for k = 0, ..., m, with m = ⌊log(ε/2)⌋. Hence, in particular, we have that log V = Dm + log N_m. On the other hand, N_m is just the number of boxes of edge 2^m: 1 ≤ N_m ≤ (ε/2^m)^N. Put together, we obtain L(D) = 2^(mD) ≤ V ≤ 2^(mD) (ε/2^m)^N = U(D). Thus, there is no configuration with fractal dimension D and a volume V outside of the bounded region defined by these inequalities (Fig. 1 of the cited reference is an example of this, for N = 2). It is quite remarkable that the lower bound L(D) is independent of the embedding dimension N.
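As an illustration of the box-counting fit just described, the following Python sketch estimates D for a single two-dimensional cell represented as a binary array; it assumes the cell side ε is a power of two and that the cell contains at least one occupied pixel, and it is not code from the paper.

import numpy as np

def box_counting_dimension(cell):
    # Estimate D for a binary 2-D cell of side eps = 2**p by fitting
    # log2(N_k) = B - k*D, where N_k is the number of boxes of side 2**k
    # (k = 0, ..., m, with m = floor(log2(eps/2))) that contain occupied pixels.
    eps = cell.shape[0]
    m = int(np.log2(eps)) - 1
    ks, counts = [], []
    for k in range(m + 1):
        b = 2 ** k
        # collapse each b x b block to True if it contains any occupied pixel
        blocks = cell.reshape(eps // b, b, eps // b, b).any(axis=(1, 3))
        ks.append(k)
        counts.append(blocks.sum())
    slope, _ = np.polyfit(ks, np.log2(counts), 1)
    return -slope

# usage: a completely filled cell gives D close to 2, a single occupied pixel gives D = 0
cell = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(cell), 2))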
Knowledge of these constraints on the cell volume compatible with a given fractal dimension D considerably simplifies the determination of the total number of configurations accessible to the system in a cell exhibiting a fractal dimension D. Let us denote this quantity by (V, D), where V is the volume occupied by the system in the cell. From Eq., taking B = log V, the number of boxes covering the part of the system inside the cell at the k th stage satisfies N k = N k−1 2 −D. N k is thus defined by V and D, but does not depend on the final configuration. In addition, the previous equation defines a recursive sequence in which each of the N k boxes is sub-divided in 2 N sub-boxes, N k−1 of which will be picked up to enclose the system at the previous k − 1 th stage. The number of ways this procedure can be done is equal to f (2 N, N k, N k−1 ) (see, ), leading to whereas the total number of configurations having dimension D is given by: For a given cell of the grid, the function Let us conclude this section by remarking something about the function S(D). For another size of the grid, say which, likely, will not correspond to the optimal scale (i.e. ), one can similarly define the number of configurations as well as the associated entropy, say(D) andS(D). Then, if and do not differ in orders of magnitude (that is, if m remains invariant), one easily finds that: and hence S(D) andS(D) will be proportial to each other; consequently, one can say the definition of S(D) is robust to the choice of. III. SOME ESTIMATES. In the following, some estimates for S(D) are computed. These will turn out to be important both to visualize which will be the usual orders of magnitude involved in (D), as well as to understand some of its main properties. For simplicity, I shall use U = U (D) and L = L(D) except where explicitly indicated. From, and, one has that: then, maximizing: As usual, replacing (D) by the largest of (V, D) constitutes a good approximation. Indeed, the relation a b t ≤ ta tb and imply that given D, is an increasing function of V (k = 0,..., m) as well as (V, D) (see Eq. ); hence, from Eqs. and, one has that: Then, given that L ≥ 1: Thus, from Eq., one can see that provides a good estimate for S(D), by just assuming the summation in to be approximately equal to its greatest summand (U, D), i.e.: The only problem of this estimate occurs at D = N : since (U (N ), N ) = 1 one will have from that S(D = N ) = 0 which is a excellent estimate, despite the fact that S(D) never vanishes. The equation says that (D) will usually be a very large number (see Figs. 2 and 3). For instance, with N = 2 and = 100, it turns out that max D (D) ≈ 10 1300. Nevertheless, the multi-fractal behavior assumed here, together with the existence of an optimal coarse-graining scale and associated S(D), strongly constrains the number of possible overall configurations the system may explore, dramatically reducing this number compared to an uncorrelated multi-component system: Theorem 1. If our multi-fractal analysis is carried out with a precision -in dimensionality -of D, then the total number of configurations of a given cell exhibiting a fractal behaviour satisfies Proof. From, and one has that: From, one has that for k = 1, 2,..., m. From and one obtains: for D ≥ D * = log ≈ 1.585. In the following section, which is independent of this theorem, it is shown that the maximum of S(D) is reached at D = D 1 ≈ N −1/ ln(/2). Thus, D 1 > D * for reasonable N and m (i.e. for every N > 2 and ≥ 5, even for N = 2 and ≥ 23). 
Then, given that 2 ≤ 2 m+1, it follows that: On the other hand, where D is the precision with which one studies the fractal properties of the system. Then, the theorem follows from Eqs., and. Indeed, the theorem above says that for a given cell, the number of fractal configurations will be just a tiny fraction of the total number of possible, uncorrelated configurations, given by 2 N. Let us work out a numerical example. According to, for N = 2 and = 100 (m = 5), one finds that a 0.77013 and 0≤D≤N (D) 10 2300 /D, which despite being a very big number, it is absolutely negligible with respect to 2 N ≈ 10 3000 (for every reasonable D). The theorem explains why almost every random configurations of pixels is not fractal, quantitatively. This result may have applications in Pattern Recognition Theory. The following theorem says that the shape of S(D) is virtually independent of the dimensionality N of the embedding space, actually: Proof. Denoting by U D N the upper bound of, one has from and that Note that the most important summands of the Eq. above are the ones with U D N 2 −kD and 2 D U D N 2 −kD being large numbers (k = 1, 2,...). Then, given that D ≥ 3, from and one can subsequently get to: (the last equation is obtained by comparison with ). Finally, by iterating the above equation, one will obtain Eq.. Clearly, the result of the above theorem talks about the existence of self-similarity properties, because the entropies S N (D) and S N +∆N (D) of Eq. are related via a translation in D and a multiplication by −∆N, which are self-similar transformations (see Fig. ). Something Since this equation holds for a range of h, one can choose this parameter in order to minimize the quadratic norm of both members, simultaneously. A good anstz for this, is to take h = 1/ ln(/2) which decreases with the number of orders of magnitude involved in the fractal behavior (i.e. h = (m ln 2) −1 ) and it also vanishes the right member of. Clearly, the Eqs. and follow from and. In Control Theory, it is well-known the fact the controllability of a system depends on the knowledge of the full set of variables that describes the state of the system at a given time. In theory, this principle also applies to Complex Systems but, for almost every system of this kind, there exists a high uncertainty about the numbers of variables that control the system as a whole. In this context, let us assume a kind of very bad possible scenario in which every part of the system is out-of-control, namely, evolving in time towards a state in which the number of cells with dimension D will be ultimately determined by the entropy S(D) and, having the following density of cells: Given that the entropy is a meassure of the ignorance/uncertainty, and assuming that what cannot be controlled is precisely what is ignored, S(D) may provide the degree of uncontrollability of the cells having dimension D. Thus, Eq. may be seen as a metastable state, result of an hypotetical evolution in the absence of controls (fields/constraints of any kind). In addition, this state may also be recognized as a self-organized one, because s(D) is a non-uniform density and it is assumed that the system can reach it, being driven by its own entropy (purely). If the system is not dominated by the entropy, the quadratical norm || − s|| 2 (where (D) is the density of the system) defines how further the system's dynamics is from the one corresponding to the ideal out-of-control worst situation (constituting s(D) as a point of reference). 
If the system's state is given by (or converging to it), one can be sure that the final state will have associated a risky lack of robustness defined as follows. First, it is remarkable the way in which s(D) is concentrated for high dimensions. Actually, independently of N, the variance 2 of the fractal dimension reads 2 ≈ 2b 2 and, the intervals and approximately incorporate 54% and 76% of the cells, respectively (this follows from the fact the solution of is approximately proportional to (N − D)e (D−N )/b ). Thus, specially for large N, an entropical evolution of the system could ultimately imply a potential lack of robustness and fault-tolerance, in the sense that some external force could affect a big part of the system, despite acting on the cells of a small interval of dimensions. On the other hand, it is noteworthy that the rate of growth, as a function of N, of the number of possible configurations is by far faster than exponential (see Eq. ). This means that every time that the embedding dimension increases, the diversity of configurations for fixed D will increase significantly. Thus, assuming that -which is not necessarily the case -the density of the system (D) had reached a self-organized equilibrium mimicking s(D) and, if it was possible to lessen the inherent constraints associated with the embedding dimension, increasing the dimensionality from N to N + ∆N, one could let the components of the system to evolve following higher dimensional patterns, opening the possibility for significant diversification. Furthermore, in some cases, even under an entropical evolution, the approach of the system to the new equilibrium given by S N +∆N (D) will act to increase the variance 2, improving the robustness of the system, in the sense mentioned above. Indeed: Theorem 4. Let us denote by s N and s N +∆N to the densities defined by Eq. associated to embedding spaces of dimensionality N and N + ∆N, respectively. Let us suppose that the initial cell distribution is given by Then, if follows the shortest track towards s N +∆N, the variance of the fractal dimension will have a maximum in between. Proof. Let us parametrize the evolution of the cell distribution as t, with 0 ≤ t ≤ 1 ( 0 = s N and 1 = s N +∆N ). In Functional Analysis, it is shown that is the shortest track between s N and s N +∆N (independently of the functional norm). In addition, provides a explicit formula of the variance 2 t and one can differenciate it twice with respect to t, obtaining: (because of Theorem 2). Then: which is just a parabola having a maximum at t = 1/2. The shortest track of Eq. is by no means the unique track that makes the variance to increase. Actually, the map t → 2 t can be considered as a continuous one. This means that, if ∆N is not excessively small, at least there will exist a positively measurable set of tracks, consisting of paths which are close to the shortest one between s N and s N +∆N and consequently, with associated variances 2 t looking like the one of Eq.. I conclude by providing some suggestions of problems and systems to which I believe concrete applications of the general ideas developed here could be realized, addressing also the issue of how increasing complexity may evolve in this realm. Let us consider a given cell of the multifractal system, having fractal dimension D an volume V. Per se, the hyperpixels of the cell encode all the information about this particular part of the system. 
This encoding is what we may designate by a strategy, in the sense that it may describe a kind of pattern of behaviour of the part (or agent) of the system associated with the cell (sometimes it could be related to a way to accomplish some goal). For instance, in urban planning, each strategy encodes a specific pattern of urban layout and land use. In other words, we may associate to each strategy a given (interval of) fractal dimension(s) D. If we assume a population of individuals, then different individuals may adopt different varieties of a given strategy, that is, different sequences of hyper-pixels leading to the same D. As I have demonstrated here, the number of configurations accessible to each D is quantified by S(D) and constrained by L(D) and U(D). Furthermore, it is clear that, at least in some systems, and in the absence of control, the system will evolve in time in such a way that the population density of strategies ρ(D) will tend to mimic the normalized S(D). Given the behavior of S(D) mentioned above, this means that the majority of the population will employ strategies lying in a short interval, which corresponds to a low degree of diversity in the strategies used. If we think in terms of economics, this means that, countrywide, the natural tendency will be to specialize in a narrow set of activities (say, related to farming and/or textiles, or to oil production). In the context of international commerce, this may reflect a tendency for the majority of the commercial actors to find partners or allies in the same area of the world. This, in turn, may be undesirable, and governments and/or agencies may implement policies which constitute the necessary external field ensuring that the system remains "far from equilibrium", with an associated ρ(D) significantly more uniform than S(D) (that is, diversifying the system's portfolio), be it by fostering an increase in "production means" or by increasing the number of "partners", so as to increase the system's robustness and fault tolerance. It is also worth mentioning that by increasing the dimensionality of a (hierarchical) system from N to N + ∆N, one not only increases its inherent complexity but, given the rise of a newly available set of strategies, also paves the way for a potential diversification. This may be related to the open problem of understanding the major transitions in evolution. These last ideas are not the main issue of this paper, but they could naturally be the topic of future research. VI. CONCLUSION. The main assumption of this work was the existence of a coarse-graining level that provides an optimal portrayal of the multi-fractal behavior of the system. This work specifies neither which part of a given real complex control system can be modelled as a multifractal, nor how to define the associated N-dimensional space in which it is embedded. Even so, hierarchical systems are ubiquitous in the natural world and, given that the hypotheses underlying the ideas studied here are so general, it is likely that the multicomponent systems we have been dealing with comprise a non-negligible fraction of the observed hierarchical systems to which the principles discussed here apply.
Carcinoid Tumors of the Extrahepatic Bile Ducts: A Study of Seven Cases The authors report seven patients with carcinoid tumors of the extrahepatic bile ducts (EHBDs). All patients were women, with an average age at diagnosis of 49.8 years (range, 37–67 yrs). The most common presenting symptom was painless jaundice with or without pruritus. Although one patient had peptic ulcer disease before the onset of obstructive jaundice, none had systemic endocrine manifestations. These neoplasms were most often located in the common bile duct. Grossly, the carcinoid tumors were usually nodular and poorly demarcated, and ranged from 1.1 to 2.7 cm in size. Only one of the neoplasms was polypoid. Microscopically, the tumors had a trabecular or nesting pattern with occasional tubule formation, and were composed of relatively small cells with granular chromatin. All of the neoplasms expressed chromogranin and two expressed synaptophysin. Three expressed serotonin, and two of these three were also immunoreactive for pancreatic polypeptide or somatostatin. Two tumors were focally positive for gastrin, and one of these two was also positive for serotonin and pancreatic polypeptide. All seven carcinoid tumors showed no immunoreactivity for p53, and p53 loss of heterozygosity assays were negative in two, suggesting that p53 mutations do not play a role in the pathogenesis of EHBD carcinoids. A mutation in codon 12 of K-ras was found in one carcinoid tumor, whereas two of two showed immunoreactivity for Dpc4 protein. In view of the small number of carcinoids studied, the importance of these findings in the pathogenesis of these tumors is unclear. Ultrastructural examination of three of the tumors revealed numerous membrane-bound, round neurosecretory granules. Clinically, these lesions had an indolent course. Even in the presence of lymph node metastases (noted in two patients), all of the patients remained disease free 2 to 11 years (average follow-up, 6.6 yrs) after segmental resection or pancreaticoduodenectomy (Whipple's procedure). Because carcinoid tumors of the EHBD are of low malignant potential, they should be separated from the more common adenocarcinomas in this location.
Fluorinated Nanocarbons Cytotoxicity. As research in nanotechnology progresses, there will eventually be an influx in the number of commercial products containing different types of nanomaterials. This could damage our health and the environment if the nanomaterials used are found to be toxic and are released into the water when the products degrade. In this study, we investigated the cytotoxicity of fluorinated nanocarbons (CXFs), a group of nanomaterials which can find applications in solid lubricants and lithium primary batteries. Our cell viability findings indicated that the toxicological effects induced by the CXFs are dependent on the dose, size, shape, and fluorine content of the CXF. In addition, we verified that CXFs have insignificant interactions with the cell viability assays, methylthiazolyldiphenyl-tetrazolium bromide (MTT) and water-soluble tetrazolium salt (WST-8), thus suggesting that the cytotoxicity data obtained are unlikely to be affected by CXF-induced artifacts and that the results are reliable.
A Risk Based Approach to Evaluating the Impacts of Zayanderood Drought on Sustainable Development Indicators of Riverside Urban in Isfahan-Iran: In recent years, the Zayanderood River in Isfahan, Iran has encountered hydrological imbalance and drought. The literature shows that long-term climate change, drought, and disruption of the river's water supply have led to depletion of underground aquifers and, consequently, gradual subsidence along the river and serious damage to old buildings and structures along the riverbank. This would be followed by adverse environmental, social, and economic effects that could threaten the sustainable development of urban space. Therefore, it is necessary to use efficient risk identification and assessment approaches toward more effective risk management. The goal of this study is to identify and prioritize the risks of river drought with regard to all three sustainable development areas: environmental, social, and economic. The research methodology was a mixed field method that included a set of questionnaires and interviews. To evaluate the collected data, the analytic network process (ANP) method was used. Eighteen important risks were identified. Based on the results, decrease in the groundwater level, climate change, and gradual soil degradation were ranked first, second, and third, respectively. As this study examined the impacts of river drought on all three areas of sustainable development simultaneously and comprehensively, it is expected that the results will fill the existing theoretical and practical gap affecting improvements in the assessment and management of sustainable development risks. Introduction The Zayanderood River, which is a vital vein of fertility in the city of Isfahan, has experienced drought and hydrological imbalances for the last two decades. The Zayanderood is the biggest river in the central desert of Iran. The Zayanderood plays an important role in supplying drinking, industrial, and agricultural water resources in Isfahan province. Unfortunately, the river's flow has been interrupted by the hydrological drought of the last few years. As water is one of the most pressing human needs, drought and water scarcity are among the biggest challenges facing the development of the country at present and in the future. Therefore, the Zayanderood drought is one of the most important environmental, social, and economic crises in Iran in recent years. A continuing trend of drought will increase the intensity of ecological changes in Isfahan, endangering its life and future. Development in the vicinity of the river is dependent on the riverbed and, in fact, the two interact, as changes and instability in either one systematically affect the other. The severe decline in groundwater resources, social tensions, and the drying up of the Zayanderood River and Gavkhooni wetland are the major consequences of drought in Isfahan, and a serious threat to the sustainable development of the city. Although the Zayanderood drought apparently has nothing to do with construction, it should be noted that the long-term interruption of water in the Zayanderood has been associated with a decrease in the level of underground aquifers and a gradual subsidence of the earth, which can play a major role in damaging the structures and foundations of existing buildings, especially historical sites.
On the other hand, consideration of the possibility of gradual subsidence due to the drought is necessary in the calculation of new buildings, and construction engineers should pay particular attention to the design and implementation of buildings (in the context of structures or installations). All of these can be restraining factors in achieving the sustainable development goals of Isfahan, which have been an intense focus of local authorities and managers in recent years. The occurrence of any of the environmental, social, and economic consequences (as the three main areas of sustainable development) of the emerging crisis certainly follows a series of long-, medium-, and short-term causal relationships. In principle, the continuation of the Zayanderood drought process has such a negative effect on environmental, social, and economic dimensions that it can be critical to the sustainability of Isfahan. Among previous research work, there is no comprehensive study that simultaneously examined the effects of the river drought on all three areas of sustainable development. Therefore, the aim of this study was to prioritize the effects of the drought on sustainable development indicators in the buildings and urban space located in the vicinity of the river (as identified risks). To achieve this goal, this study was organized to thoroughly examine all three environmental, social, and economic aspects of sustainable development using a risk assessment model and the Analytical Network Process (ANP) approach (as a multi-criteria decision-making method), so as to support better management and improvements through the identification, prioritization, and assessment of the risks. An in-depth review of previous research showed that relatively little research has investigated the effects of the drought on all three sustainable development indicators. Throughout the literature, it can be seen that only one or two aspects of sustainable development have been considered in any single study, and none has taken all three aspects into account. This is the main gap left by previous studies. The distinction between the economy, environment, and community reflected by the sustainability indicators has been studied in the literature, although the relative importance of the three dimensions in the assessment of sustainable development needs wider scientific agreement and standardization. There is an indication that the environmental objectives and indicators of sustainable development are more coherent than the social ones. Also, most of the sustainable development literature deals with either socio-economic or socio-environmental development factors of nations. In addition, no research that represents a comprehensive approach of experts (experienced in areas such as geotechnical engineering, civil engineering, architecture, water resource engineering, economics, and other related specialties) has been found. In fact, this study attempts to bridge that gap. Literature Review Extensive study of the Zayanderood River drought indicates that, despite the considerable concern over managing the Zayanderood drought crisis, and because various dimensions and consequences of this crisis have been disregarded, adequate control measures are not in place. Therefore, it is necessary to carry out a specific study on the different dimensions of the Zayanderood drought and its effective risk management.
As such, this study categorized the investigation into two major groups: the first group examined the main roots and causes that have led to the Zayanderood drought, while the second group considered the consequences of this drought. Drought Causes Disregarding the effects of climate change has a negative impact on sustainable development. In an article on the effects of climate change on the flow of the Zayanderood River in Esfahan, Bowani and Murid stated that the results showed an overall decrease in precipitation and an increase in temperature. Research performed by Moradi and Nozari also investigated climate change as one of the causes of Zayanderood water scarcity and drought. On the other hand, it has been claimed that although climate change has had no effect on temperature and rainfall in Isfahan, the relative humidity has decreased, the number of dry months has increased, and, in fact, the climate has become drier. Globally, climate change has raised serious concern for many researchers. In this vein, the research developed by Rajkovich and Okour highlighted the importance of planning for the future building stock by considering rapid global climate change instead of relying solely on historical data. A resilience-based sustainable development approach with regard to climate change is globally attractive. Drought Effects The effects of drought may influence environmental, social, and economic indicators. Environmental impacts include damage to air and water quality, degradation of landscape quality, and soil erosion. Some effects are only short term, but other environmental impacts can create long-lasting or even permanent effects in many different aspects. Drought may be driven by regional climate change and geology. The Mediterranean region is affected by climate change, which is mainly reflected in its effects on water supplies and reduced flows. In Lebanon, the so-called "hydrological drought" caused a significant decline in water resources (surface and groundwater) of 23 to 29 percent over the past four decades. Hydrological droughts reduce the water level of lakes and reservoirs, reduce the quality of water, reduce the supply available for electricity generation, and result in financial and social damages. Prediction and timely alerts may enable suitable water resources management. In fact, changes in the annual natural precipitation table indicate that the decrease in groundwater level is due to the interaction of precipitation and river flow, which has reduced the natural productivity of groundwater. Evaluation of the relationship between drought and its impact on surface and groundwater resources shows that this relationship is significant, with high correlation. Variations of flow in the study area have contributed significantly to the decline in groundwater levels. Therefore, considering the decreased or unchanged groundwater discharge in recent years, it can be concluded that the reduction of water flow in the Zayanderood has been significantly influenced by changes in groundwater level, and in the areas with the greatest reduction in water flow, the reduction in groundwater level near the river is more severe than in other areas. The results of the research by Mirasi et al. suggest that a 23 m drop in groundwater level is one of the major causes of soil subsidence. The results of the study by Diaz et al.
indicate that buildings located in areas affected by subsidence are vulnerable, which can reduce livability and, in some cases, endanger the safety of the structures erected in the area. Any inhomogeneity can intensify the consequences of subsidence and cause serious damage to a building. In addition, soil inhomogeneities can worsen damage through changes in the thickness or properties of the underlying non-uniform layers. Reduced soil moisture results in a decrease in groundwater levels. On the other hand, changes in groundwater level and soil moisture are expected as a result of environmental change, and these changes affect the structural stability of buildings. Most of the damage reported is to buildings with shallow foundations where groundwater depletion occurs; however, deep piles can also be affected. Simulation of the effect of climatic changes caused by the increase in greenhouse gases on the Zayanderood flow has been carried out in other research, in which a decrease of flow in April and May was reported, the reasons being a decrease in precipitation in these months and an increase in temperature. There is also a social impact during periods of drought. According to the results of Maleki and Ahmad Pour's research, during the period under study, drought had negative effects on the number of annual visits to Isfahan. Citizens' collective memories of the river and its landscape are changing, and the identity of the river and the city is under serious threat. The Zayanderood drought has also had economic, social, and psychological impacts on Isfahan businesses, as well as environmental impacts. The Zayanderood drought has an increasing impact on reducing tourism in Isfahan as well. In recent years, the crisis caused by fine dust in Isfahan province and city has been one of the most tangible natural disasters affecting the daily life of citizens and the economy of the area. If the river drying process continues and the wetland's water right is not supplied in the near future, the Gavkhuni wetland could also become a major source of fine dust affecting Isfahan and even other provinces. The incidence and spread of disease are among the other consequences created by the drought. The case becomes more critical due to the effects of an infected society on the environment, as pointed out by Antronico et al. The economic impact occurs in sectors that depend on water resources, in addition to sudden damage from wind erosion. Climate change in general imposes an excessive load on the electricity supply (due to the need to provide cooling in houses during the hot season). Climate change affects many aspects of building performance, as many parts of existing and future buildings are likely to be affected. Hydrological drought, the drying out of the river base, and climate change in the Zayanderood, and their impact on sustainable development indices, have been addressed in a relatively large number of studies (e.g., [1,3,22,27]). However, only one or at most two indicators of sustainable development have been addressed in each of these studies; none of them has addressed the issue comprehensively with all three sustainability indicators. Cramer et al. emphasized the importance of considering a comprehensive and coherent assessment of risks affecting sustainable development as well. Figure 1 was developed by the authors through an extensive study of the literature.
It shows the causes and effects of drought in the Zayanderood, which are categorized into three groups (social, environmental, and economic related causes). The provided causes and effects are explicitly expanded by the authors and analyzed quantitatively, as explained in Section 4.1. Risk Management Formulating an efficient risk management system is a major challenge for construction project managers. In general, risk management is the process of assessing risk and then developing strategies for managing it. Commonly used risk management strategies include transferring the risk to other sectors, avoiding the risk, mitigating its negative effects, and accepting some or all of the consequences of a particular risk. In the occupational health and safety assessment series (OHSAS) 18001, risk is a function of the probability and consequences of a specified hazardous event. The overall process of estimating risk and making decisions about risk tolerance is called risk assessment. The risk assessment process acts as a bridge between proper risk analysis and the balanced management of major risks. The risk assessment carried out in this study followed the UK Health and Safety Executive (UK HSE) model, which includes four steps: 1. Identify the risks; 2. Determine who may be harmed and how; 3. Evaluate the risks; 4. Record the findings. Risk assessment is a rational way to quantify risks and examine the consequences of potential accidents on individuals, materials, equipment, and the environment. Unfortunately, due to the predominance of physical factors, social aspects and their effects are typically ignored by many risk assessors, whereas, in order to make existing risk control and mitigation methods effective, it is necessary to consider and identify all aspects and possible risks. In general, risk assessment requires the calculation of two risk components, namely the severity of the event's outcome and the probability that the event will occur. There are three ways to obtain probability or severity weights: 1. Numerical methods, which result in a number; 2. Qualitative methods, which result in a certain quality of risk; 3. Semi-quantitative methods. Most of these methods use the risk matrix, and this study used a semi-quantitative approach to risk assessment. In this context, the theoretical framework developed by Connelly et al. illustrates a risk management approach for climate change adaptation (see Figure 2). This research defined probability as the chance of occurrence of the risk, while the presence of a hazard does not in itself indicate risk; rather, a hazard only becomes a risk when a system is exposed to the hazard and is vulnerable to it should it be exposed. The following formula was addressed in advance: Research Methodology The research method consists of two stages. The first step is to identify the major risks of river drought. This phase consists of a comprehensive study of previous work in the field, to identify and classify all risks associated with river drought. Important risks were identified through structured interviews and questionnaire distribution among experts. In the next step, the identified risks are weighted and prioritized. The analytic network process (ANP) is used to weigh the risks identified in the previous step. The ANP method was selected based on a comprehensive study of methods that could capture the relationships between risks and their feedback.
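As a hedged illustration of the semi-quantitative scoring described above, in which probability and severity ratings are combined on a risk matrix, a minimal sketch follows. The 1 to 5 rating scales and the band cut-offs are assumptions made only for illustration; they are not the study's actual values, and the study's own formula is not reproduced in the text.

```python
# Hedged sketch of a semi-quantitative risk score: ordinal probability and
# severity ratings combined multiplicatively, as in a classic risk matrix.
# The 1-5 scales and the band cut-offs are illustrative assumptions.
def risk_score(probability: int, severity: int) -> int:
    assert 1 <= probability <= 5 and 1 <= severity <= 5
    return probability * severity

def risk_band(score: int) -> str:
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

score = risk_score(probability=4, severity=5)
print(score, risk_band(score))   # 20 high
```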
The ANP approach is preferable because it captures interdependence and feedback between the various risk-ranking alternatives. Typically, the experts consulted in the ANP method determine the relationships between the indicators (here the indicators are referred to as risks) based on their experience and expertise. However, failure to assign appropriate relationships between risks, and the sheer number of such relationships, increase the likelihood of errors in the experts' judgments and calculations while answering the pairwise questionnaire used by ANP. During this phase, the network of relationships between risks is formed. Thereafter, the interdependence of risks is established based on sustainable development indicators. Finally, the weight of each risk is determined by pairwise comparisons based on the questionnaire. Figure 3 shows the two main steps of this research. Data Collection Data collection is one of the most important parts of any research project. Carrying it out systematically improves the speed and accuracy of data analysis. The data collection in this study was a combination of library and field methods, as described next. The information needed was collected in two ways: 1. Receiving expert opinions through questionnaires and interviews; 2. Providing the research questionnaire to the research experts and obtaining their comments. In this study, the sample size was selected using the Cochran formula for an unknown population. To calculate the sample size, there must be an estimate of the value of p, which can be obtained from previous studies. It can also be estimated based on the experience of experts in the field or a pilot study. If none of the above methods is feasible, p = 0.5 is assumed to obtain the maximum possible sample size. In this study, the value of p was taken as 0.5. Wherein: N is the population size; z = 1.96; p = q = 0.5; and d is the allowed error. Thus, given the unknown population, 65 people were selected as the sample size. Cronbach's alpha coefficient was calculated with SPSS software to determine the reliability of the questionnaire based on the collected data. The results are visible in the table below (Table 1). The validity of the questionnaire was checked against expert opinions. Selection of Expert Panel The selection of the panel list and the formulation of the survey questions play a significant role in determining the reliability of the research. Experience and knowledge in the field of sustainable development and an understanding of its issues are the most important criteria in deciding the credibility of the study. In order to ensure the credibility of this study, the respondents were carefully selected based on criteria such as their degree, level of experience, and profession (civil engineering, architecture, urban planning, academic economics, and urban management). The questionnaire was distributed to approximately 65 respondents. A total of 48 questionnaires completed by experts were collected, representing a success rate of 74%. Table 2 shows the background information of the respondents. These experts represent a broad spectrum of expertise on environmental, social, and economic issues, and provide a balanced view for the questionnaire survey.
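For reference, the Cochran calculation for an unknown population mentioned above can be sketched as follows, using the listed values z = 1.96 and p = q = 0.5. The allowed error d is not stated in the excerpt, so the value below is an assumption chosen only to show how a sample size of roughly 65 can arise.

```python
import math

# Cochran's formula for an unknown (effectively infinite) population:
#   n0 = z^2 * p * q / d^2
# z = 1.96 and p = q = 0.5 follow the variable list above; d is an assumed
# allowed error, chosen here only to illustrate the order of magnitude.
def cochran_sample_size(z: float = 1.96, p: float = 0.5, d: float = 0.122) -> int:
    q = 1.0 - p
    return math.ceil(z ** 2 * p * q / d ** 2)

print(cochran_sample_size())   # 65 with the assumed d
```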
As shown in Table 2, the professional backgrounds of the participants mainly include civil engineering, architecture, and urban planning, although the presence of respondents from urban management and academic economics reflects the comprehensiveness of the panel across all three aspects of sustainable development, namely the economic, social, and environmental aspects. Furthermore, more than 93% of them have more than five years of experience in their sectors. Risk Assessment Risk assessment determines the quantitative and qualitative value of the risks. It is clear that the results of this step determine the ability to properly manage the identified risk factors according to the circumstances. Risk assessment and prioritization determine the areas on which risk management should be more focused. In this study, evaluation and prioritization are based on multi-criteria decision-making, and an ANP technique was used for this purpose. The ANP model was developed by Saaty to address the limitations of the analytic hierarchy process (AHP) model, and is an advanced model for decision making and analysis. This model is capable of calculating the consistency of judgments and offers flexibility in the number of levels of judgment criteria. The ANP model is in fact the generalization of the AHP method, and it does not adopt the AHP assumption that there are no relationships between different levels of decision making. In this study, the Saaty judgment scale (Table 3) is used to express the significance of each risk: 3 = Moderate importance (experience and judgment slightly favor one activity over another); 4 = Moderate plus; 5 = Strong importance (experience and judgment strongly favor one activity over another); 6 = Strong plus; 7 = Very strong or demonstrated importance (an activity is favored very strongly over another, and its dominance is demonstrated in practice); 8 = Very, very strong; 9 = Extreme importance (the evidence favoring one activity over another is of the highest possible order of affirmation). Description of Using ANP for Risk Assessment In order to account for the dependencies and feedback between the risks and the criteria, it is necessary to examine risk assessment tools, techniques, and their capabilities. Given the high potential of the ANP method in decision-making applications, and its ability to consider dependency among factors, it was used for data analysis. The important risks identified were weighted based on their dependence and feedback. The weights and final ranking of the risks were obtained through the Super Decisions software. Based on the assumptions and the research method, data analysis started after identifying the criteria influencing risk prioritization. To do this, a questionnaire survey was distributed among experts in the fields of architecture, civil engineering, and urban planning based in municipalities and affiliated companies. The research process and the ranking of the risks of the Zayanderood drought by the ANP method are summarized as follows: Step 1: Determine the clusters, elements, and sub-elements to be initially used in the recommended model. The key selection elements and sub-elements are determined in this step by experts. In the network decision model, one set of elements is involved: one element cluster and one sub-cluster are determined for the criteria. These elements are identified as optimal risk allocation criteria. Step 2: Build an ANP network structure, including clusters, elements, sub-elements, and alternatives, utilizing the Super Decisions software.
Step 3: Obtain pairwise comparison matrices between the various groups and between the various risk factors within the same group. These comparisons were collected in comparison matrices. The expert team was asked to compare each criteria group and each criterion with respect to their impact on the assessment of each risk. The experts were asked to perform pairwise comparisons using the ANP scale. To reflect the interdependencies of this simple network, pairwise comparisons among all the groups and risk factors were performed, and these relationships were evaluated. The averages of the answers were entered into the Super Decisions software to calculate the consistency of the pairwise comparison matrices. The consistency rate (CR) was used to check the consistency of the pairwise comparisons; if the value of CR was less than 0.1, the pairwise comparison matrix was considered satisfactorily consistent. Step 4: The next step is to create the un-weighted, weighted, and limit super-matrices of all the elements within the network structure. The un-weighted super-matrix contains the local priorities derived from the pairwise comparisons. An influence priority of zero is assigned when an element has no influence on another element. Multiplying the cluster weights by their corresponding blocks in the un-weighted super-matrix yields the weighted super-matrix; in this way, each component is weighted with its corresponding cluster-matrix weight. Then, the weighted super-matrix is converted to a limit matrix by raising it to powers, and the priorities are extracted from the limit matrix. The above computing process is accomplished using the Super Decisions software. Finally, the final ranking of each risk factor using the ANP weights is obtained in this stage. Super Decisions implements the ANP. It is decision-making software which works based on two multi-criteria decision-making methods: AHP and ANP. The Super Decisions software is used for decision-making with dependence and feedback. This software provides tools to create and manage ANP models, enter judgments, obtain results, and perform sensitivity analysis on those results. The Super Decisions software has been applied by many researchers in the fields of risk management and sustainable decision-making, such as water safety and health, social and economic risk assessment, and flood hazard (e.g., [42,45]). In this study, the second, third, and fourth steps were all conducted with the Super Decisions software. Area of the Research Iran is located in an arid zone and has faced a serious water shortage crisis over the past several years. Its precipitation is approximately one third of the global average, and the distribution of monthly rainfall has changed in recent years. The drought in Iran has become one of the most important problems in the country, which is experiencing a range of drastic environmental, social, and economic problems that urgently need to be addressed. The Zayanderood is the largest river of the Iranian Plateau in central Iran. The Zayanderood riverside has always been the center of social and economic activity in Isfahan, one of Iran's main tourist attractions. Of the water extracted from the Zayanderood, 80% is used for agriculture, 10% for human consumption (drinking and the domestic needs of a population of 4.5 million), 7% for industry, and 3% for other uses. The Zayanderood once had significant flow all year long, unlike many of Iran's rivers, which are seasonal.
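Returning to Steps 3 and 4, the core ANP arithmetic can be illustrated with a small sketch: local priorities are derived from a pairwise comparison matrix as its principal eigenvector, the consistency ratio is checked against 0.1, and a column-stochastic weighted supermatrix is raised to powers until it converges. This is a generic illustration, not the study's matrices and not the internals of the Super Decisions software.

```python
import numpy as np

# Saaty's random consistency indices for matrices of order 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def priorities_and_cr(A):
    """Local priority vector (principal eigenvector) and consistency ratio."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)
    return w, ci / RI[n]                      # CR < 0.1 is taken as consistent

def limit_supermatrix(W, power=200):
    """Approximate the limit matrix by raising a column-stochastic supermatrix to a high power."""
    return np.linalg.matrix_power(np.asarray(W, dtype=float), power)

# Illustrative 3x3 pairwise comparison on the 1-9 scale.
A = [[1, 3, 5],
     [1 / 3, 1, 2],
     [1 / 5, 1 / 2, 1]]
w, cr = priorities_and_cr(A)
print("local priorities:", np.round(w, 3), "CR:", round(cr, 3))

# Tiny column-stochastic weighted supermatrix and its limit.
W = [[0.6, 0.3],
     [0.4, 0.7]]
print("limit matrix:\n", np.round(limit_supermatrix(W), 3))
```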
In the early 2010s, the lower reaches of the river dried out completely after several years of seasonal dry-outs. After 14 consecutive years of hydrologic drought and climate change, parts of the river in areas near Isfahan have turned into dry riverbed. Drought has damaged the agriculture sector severely in this region, because when drought occurs, the residential and industrial sectors are given priority. On the other hand, the upstream section of the river above the Zayanderood dam does not experience any water limitation in times of drought; therefore, the full impact of drought pressure is imposed on the downstream section of the river. Moreover, drought causes many problems for the population, as dust storms are frequently observed in those areas. In addition, the water shortage problem further complicates the daily life of the people. The area of research is shown in Figure 4. In principle, continuation of the Zayanderood drought process has negative effects on social, economic, and environmental aspects. Therefore, considering the importance of this issue, it can be concluded that the issue is vital for the sustainability of Isfahan. The study area comprises, on average, a 500 m radius of the northern and southern margins of the Zayanderood River in Isfahan. This area includes buildings with different uses that are clearly displayed on the land-use map. As shown in the map (Figure 5), the uses of buildings in the study area include residential, commercial, educational, administrative, religious, health, cultural, hotel, tourism, and outdoor uses, with residential use being the most common in this area. Results and Discussion In order to examine the risks affecting the sustainable development indicators, an attempt was made to examine the relevant literature thoroughly and comprehensively. Although there has been good research on the subject under study, it can be said that, in comparison with other natural crises, the issue of river drought and its effects on sustainable development has scarcely been studied. However, using the available research background and theoretical foundations, 26 risks arising from the effects of surface and groundwater drought on the river were identified. These risks are divided into the three main sustainable development groups (environmental, social, and economic). Out of the 26 identified risks, 17 are related to the environmental category, 6 are social, and 3 are economic. Therefore, it can be said from a general perspective that, according to previous studies, the effects of hydrological drought of the river base have the greatest impact on environmental and social indices, while there are also many economic risks with regard to urban sustainable development (Table 4). A questionnaire (Appendix A) was designed to identify the sub-criteria and was distributed among specialists. The work was conducted through structured interviews to ensure that respondents had a full understanding of the questions. The final sub-criteria were then identified by collecting the questionnaires. It should be noted that each risk has been calculated as the product of three dimensions (effects, proneness, and exposure) that were included in the distributed questionnaires. The risks and their attributed weights are illustrated in a scatter diagram, shown in Figure 6. The blue dots in Figure 6 represent the risk factors, and the number attributed to each is the number of the risk factor according to Table 4.
As shown in Figure 6, eight risks are located to the left of the risk limit line. The risk assessors in this research opted to eliminate risks scoring below 18 as unimportant. The risk limit line is obtained by calculating the standard deviation of all values, which is 18.18. (From the accompanying table, the listed sub-criteria include: reducing the quantity and quality of drinking water (disapproved); wastewater problems due to drought (disapproved); air pollution, dust.) According to the results of the initial questionnaire, 8 sub-criteria were not approved by the experts related to the problem. As a result, 18 sub-criteria (of these 18 factors, 10 are related to the environmental category, 5 are social, and 3 are economic) influence the evaluation, and these were then examined using ANP. After confirming the effective criteria, the second questionnaire (Appendix B) addressing the experts was administered. The relationships between the sub-criteria were determined, and then the opinions of 48 research experts were collected based on the second questionnaire (Figure 7 is an example of the relationships depicted in the ANP model using the Super Decisions software). In this section, we compiled tables combining the 48 respondents who answered this questionnaire; each cell of the pairwise comparison matrices was obtained as the geometric mean of the 48 responses. The geometric mean of the different views makes the inconsistency rate of each matrix smaller than the inconsistency rate of the pairwise comparisons of any individual. Criteria and Sub-Criteria Prioritization As can be seen in Table 5, the criteria and sub-criteria were ranked according to their final weights in the Super Decisions software. Figure 8 also shows the weighting of the main environmental, social, and economic sustainability indicators. Data Validation Numerous methods have been used to validate prioritization results from the ANP model, including the statistical method in the study by Fuertes et al., comparison with other methods, as in the studies by Sun and Meng and Juan et al., and application of VIKOR (Vlse Kriterijumska Optimizacija Kompromisno Resenje, which means multi-criteria optimization and compromise solution in Serbian) in the study by Mohammadi et al. For data validation, the authors found the use of a statistical method to be appropriate for this study. For this purpose, a survey was developed to evaluate expert satisfaction with the ranking from the ANP model, comparing it with the primary risk assessment carried out through the first questionnaire. The respondents of the evaluation survey included 2 members with civil engineering backgrounds, an economist, and 3 urban planning professionals (designers and planners). The evaluators were assessed and selected based on their level of experience, background, and authority. The results showed that the rate of satisfaction among the evaluators is 0.89. The evaluation criteria and responses of the experts are presented in Table 6. Conclusions Drought is one of the climatic events that takes different forms across the vast expanse of Iran, influencing the natural life of its inhabitants. Drought, rising temperatures and evapotranspiration, increased consumption patterns, and poor management are the fundamental elements of a water crisis. For these reasons, the Zayanderood River is faced with severe economic and social challenges and with difficulties in the management of water resources.
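Stepping back to the screening step described above, each risk value is stated to be the product of three rated dimensions (effects, proneness, and exposure), and risks scoring below the limit line were dropped. A minimal sketch of that screening could look as follows; the example ratings are invented and only the mechanics follow the text.

```python
import statistics

# Each risk value is the product of three expert ratings (effects, proneness,
# exposure); the cut-off is the standard deviation of all risk values, as in
# the text. The ratings below are invented examples, not the study's data.
ratings = {
    "decrease in groundwater level": (5, 5, 4),
    "wastewater problems due to drought": (2, 3, 2),
    "air pollution, dust": (4, 3, 3),
}
scores = {name: e * p * x for name, (e, p, x) in ratings.items()}
limit = statistics.pstdev(scores.values())
kept = {name: s for name, s in scores.items() if s >= limit}
print("limit line:", round(limit, 2))
print("retained risks:", kept)
```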
Although much research has been done by previous researchers on the origins of drought and its associated risks, the originality and innovation of the present study lie in adopting a risk-based approach to identify and prioritize the risks of drought impacts on the sustainable development indicators of the surrounding buildings. This study seeks to improve on previous results by examining dependence, feedback, and interaction between risks. The results of this study can play a significant role in the management of the drought crisis. Such results enable the decision maker to make better-informed decisions, such as focusing on important priorities and finding possible alternative solutions. Drawing up risk-based policies can reduce the level of damage in this regard. The key findings and main results of the present study are summarized as follows: The first objective of this study was to identify and classify the risks associated with the Zayanderood drought affecting sustainable development indices. First, a comprehensive study of past literature was conducted to identify the risks associated with river drought, and in particular the risks of groundwater depletion. Several face-to-face interviews were held with experienced civil, architectural, and urban design professionals. Based on the findings of previous studies, 26 risks were identified, which were reduced to 18 important risks according to the results of the distributed questionnaire. These risks were classified into three groups according to the sustainable development indicators: environmental, social, and economic. The most important results for this purpose are summarized below: 1. A comprehensive study on the Zayanderood was conducted to identify related risks; 2. To identify the significant risks, structured interviews with industry experts were conducted; 3. In general, 18 important and key risks associated with drought were identified; 4. The risks were categorized into three groups: environmental, social, and economic. The second purpose of this study was to determine the weight of each of the major risks based on the dependence and feedback between the risks and the indicators. The Analytic Network Process (ANP) method was chosen for data analysis because of its ability to consider the dependence between criteria and sub-criteria. The network structure was formed by a panel of experts in different fields, including civil engineering, architecture, and urban planning, to illustrate the interaction between risks. At the same time, a pairwise comparison questionnaire was developed to determine the degree of importance of each risk. After the questionnaires were distributed and collected, the Super Decisions software was used for data analysis. The weight of each risk was obtained according to the three environmental, social, and economic indicators. In fact, the weight of each risk reflects the impact of that risk on sustainable development indicators. Among the 18 identified risks, environmental risks carried the most weight, and social and economic risks ranked in second and third place. The identified risks were then assessed and prioritized, and the resulting weights were taken as their final ranking. The analysis of the ranking of the risks associated with the Zayanderood drought showed a significant impact of the drought on the underground water level: this sub-criterion had a weight of 0.1717 in the model. Climate change, with a weight of 0.1325, and gradual subsidence of land, with a weight of 0.1219, ranked in second and third place.
The devastating effects on structures, which were considered through three sub-criteria (structural defects especially in old buildings, defects in installations, and the creation of cracks between structure and foundation), attained the fourth, sixth, and eighth rankings. On the other side, the effects on immigration, with a weight of 0.0078, the drying of well sheds, with a weight of 0.0095, and the negative impact on river identity, with a weight of 0.0013, were identified as the least important impacts of the drought, respectively. The analysis confirms the interplay between climate change and the Zayanderood drought, in the sense that each gives rise to the other; this can also be seen in Figure 1. A decrease in precipitation and an increase in temperature act as both cause and effect of the drought: as precipitation decreases and the temperature rises, the drought intensifies, and with the continuing trend of drought and river evaporation the weather becomes hotter and precipitation is further reduced. In later stages, this can lead to secondary effects, such as an increased disease burden and lower quality of life. On the other hand, as the effect of the Zayanderood drought crisis has been noted in many previous studies (including ), the impact of this crisis on the level and storage of groundwater was determined as very high by this study. The issue requires serious consideration and specific control measures to manage the risk, since, in addition to the destructive impact of this factor on other environmental factors such as climatic conditions, soil subsidence, damage to the installation network, and the structure of buildings, it causes other consequences in social contexts and material damages. Therefore, it can be concluded that the negative impact of the drought crisis on the groundwater table is the most important and fundamental consequence of this crisis. In this regard, particular attention should be paid to the issue of river drought from the standpoint of sustainable development concerns. Since the city of Isfahan is facing the problem of climate change, it is suggested that urban buildings be adapted to current and future conditions through construction, refurbishment, and improvement in accordance with the circumstances. Appendix A (excerpt of the risk list): Creating gaps in the building and the ground; 9. Reducing the quantity and quality of drinking water; 10. Wastewater problems due to drought. Appendix B. An Example of Pairwise Comparison Questionnaire Dear Expert, The purpose of this pairwise comparison is to determine the importance of each sub-criterion relative to the sub-criterion of air pollution and dust in the social group. For example: What is the importance of increasing residents' illness affliction relative to degrading the quality of life, with respect to air pollution and dust? (Intensity of Importance: 1 = Equal Importance; 2 = Weak/Light; 3 = Moderate Importance; 4 = Moderate Plus; 5 = Strong Importance; 6 = Strong Plus; 7 = Very Strong or Demonstrated Importance; 8 = Very, Very Strong; 9 = Extreme Importance).
The influence of ion hydration numbers in aqueous solutions of electrolytes on the activation energies of molecular motions according to NMR relaxation data The temperature dependences of the rate of spin-lattice relaxation on protons for some aqueous solutions of electrolytes of half-molar concentration were processed using an approximating function written as the sum of three exponential functions. It was suggested that the activation energies E_i characterize water molecule motions in various regions of the solution structure. The possibility of separately determining the activation energies of molecular motions in the regions of short- and long-range hydration in various solution substructures under the influence of ions was demonstrated. A semiquantitative description of the observed changes in E_i for various ions is given.
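A hedged sketch of the kind of fit described in this abstract is given below: a relaxation-rate curve is approximated by a sum of three exponential terms and the activation energies E_i are read off the fitted parameters. An Arrhenius-type form is assumed here, and the data are synthetic placeholders rather than the measured temperature dependences.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # J/(mol*K)

# Assumed Arrhenius-type sum of three exponentials for the relaxation rate;
# the synthetic "data" below stand in for the measured temperature dependences.
def rate(T, A1, E1, A2, E2, A3, E3):
    return (A1 * np.exp(E1 / (R * T))
            + A2 * np.exp(E2 / (R * T))
            + A3 * np.exp(E3 / (R * T)))

T = np.linspace(278.0, 348.0, 30)
true_params = (0.05, 12e3, 0.02, 18e3, 0.01, 25e3)     # A_i (a.u.), E_i (J/mol)
data = rate(T, *true_params)

p0 = (0.04, 10e3, 0.03, 16e3, 0.02, 22e3)              # starting guess
popt, _ = curve_fit(rate, T, data, p0=p0, maxfev=20000)
print("fitted activation energies E_i (kJ/mol):", np.round(popt[1::2] / 1e3, 1))
```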
UZBEKISTAN AND RUSSIA. NEW STEPS TOWARDS MUTUAL UNDERSTANDING. BOOK REV.: PIVOVAR, E.I. (ED.), PUBLICATIONS OF THE INSTITUTE OF POST-SOVIET AND INTERREGIONAL STUDIES, ISS. 4: UZBEKISTAN STUDIES, RGGU, MOSCOW, RUSSIA This publication considers the latest issue of the Proceedings of the Institute of Post-Soviet and Interregional Studies. The Proceedings of the Institute of Post-Soviet and Interregional Studies is an annual publication that examines issues in the history of post-Soviet states closely connected to Russia in terms of policy, economy, and culture. It has been published since 2018; the first issue was dedicated to Ukraine, the second one to Kazakhstan, the third one to Azerbaijan, and, finally, the issue dedicated to the Republic of Uzbekistan was prepared and published in 2021. The author of the review emphasizes that Uzbekistan has a special place in the Central Asian region and, consequently, in Russian policy concerning Central Asia, as well as in Russian policy in the post-Soviet space as a whole. Moreover, the author notes the importance of cooperation between Russia and Uzbekistan in the scientific and educational spheres, and the importance of studying Russian history in Uzbekistan and the history of Uzbekistan in Russia. The author notes particularly that Uzbekistan Studies is a part of the educational cycle implemented by the Institute of Post-Soviet and Interregional Studies within the framework of a bachelor degree programme (Russia and Turkey in Modern Eurasia: Foreign Policy, Society, Culture) and a master degree programme (History and Geopolitics of Modern Eurasia). Analysing the publications that make up the issue under study, the author emphasizes that the issue was prepared by the Institute of Post-Soviet and Interregional Studies in collaboration with the Institute of World History and the Institute of Oriental Studies of the Russian Academy of Sciences, Lomonosov Moscow State University, and scientific and educational institutions of Uzbekistan. According to the author, the articles in this collection provide a new perspective on the history of Uzbekistan and contribute to its deeper understanding by the Russian academic community.
Review on Floating Offshore Wind Turbines This paper provides a literature review of the research work done on floating offshore wind turbines, discussing their technical, economic, and environmental aspects. Through this study, research work on this technology is reviewed and recommendations for future work are suggested. Centuries ago, wind energy paved our way into the vast oceans: its efficient utilization in the form of sails helped us conquer the oceans with ships. Unfortunately, wind energy lost its charm in the oil era. But now, as we realign our priorities for a greener future, wind energy is yet again turning out to be a reliable energy source. It can be our tool to shift to a cleaner energy supply and realize global renewable energy targets. To make the fossil-to-wind transition possible, the innovative concept of floating offshore wind energy provides a sophisticated mechanism to harness wind energy at scale and will help mankind reinforce a sustainable grip on the oceans once again. Floating wind turbines present an economical and technically feasible approach to accessing deeper-water sites and their rich wind power resource. Therefore, they have the potential to be the next generation of wind technology. With installed floating wind power capacity projected to increase to 250 GW by 2050 (DNV GL Report - Floating Wind: The Power to Commercialize, 2020), it is safe to say the future is floating.
Authoritarian Traits as Predictors of Preference for Candidates in the 1980 United States Presidential Election Responses of 42 males and 42 females to a questionnaire containing items from the dogmatism and F scales showed that supporters of the right-wing candidate were higher on authoritarianism than adherents of either the moderate Republicans or President Carter; supporters of the left-wing liberal candidate were lowest. The data support the findings of previous research, attest to the reliability and validity of both the D scale and the F scale, are consistent with the contention that the D scale is confounded by political ideology, and are inconsistent with Gergen's contention that findings in social psychology are a function of history.
High physical activity is associated with post-traumatic stress disorder among individuals aged 15 years and older in South Africa Background Some research seems to suggest that physical activity (PA) is beneficial for post-traumatic stress disorder (PTSD). Aim This study examined the association between levels of PA and PTSD among individuals 15 years and above in South Africa. Setting Community-based survey sample representative of the national population in South Africa. Methods In all, 15 201 individuals (mean age 36.9 years) responded to the cross-sectional South African National Health and Nutrition Examination Survey (SANHANES-1) in 2012. Results One in five (20.1%) of participants reported exposure to at least one traumatic event in a lifetime, and 2.1% were classified as having PTSD, 7.9% fulfilled PTSD re-experiencing criteria, 3.0% PTSD avoidance criteria and 4.3% PTSD hyperarousal criteria. Almost half (48.1%) of respondents had low PA, 17.4% moderate PA and 34.5% high PA. In logistic regression analysis, adjusted for age, sex, population group, employment status, residence status, number of trauma types, problem drinking, current tobacco use, sleep problems and depressive symptoms, high PA was associated with PTSD (odds ratio = 1.75, confidence interval = 1.11–2.75), PTSD re-experiencing symptom criteria (OR = 1.43, CI = 1.09–1.86) and PTSD avoidance symptom criteria (OR = 1.74, CI = 1.18–2.59), but high PA was not associated with PTSD hyperarousal symptom criteria. In generalised structural equation modelling, total trauma events had a positive direct and indirect effect on PTSD mediated by high PA, and high PA had a positive indirect effect on PTSD, mediated by psychological distress and problem drinking. Conclusion After controlling for relevant covariates, high PA was associated with increased PTSD symptomatology. Introduction Globally, the prevalence of post-traumatic stress disorder (PTSD) is significant and impacts morbidity and mortality. 1,2 Compared with the general population, individuals with PTSD are more likely to have low physical activity (PA). 3 In a systematic review of eight studies, four consistently found associations with lower PA in individuals with 'PTSD symptoms of hyperarousal'. 3 In additional studies, Whitworth et al. 4 found that 'strenuous intensity exercise' directly decreased 'avoidance/numbing and hyperarousal symptoms', and total exercise directly decreased avoidance and numbing symptoms. LeardMann et al. 5 found that engaging in PA, particularly high PA, decreased PTSD. All studies investigating levels of PA in relation to PTSD have been conducted in industrialised countries. In a previous review, Atwoli et al. 6 note that 'trauma and PTSD-risk factors may be distributed differently in lower-income countries compared with high-income countries'. In several intervention studies, PA seems to be able to reduce symptoms of PTSD and depression among individuals with PTSD. 2,7 Where access to traditional PTSD treatment modalities, such as psychotherapy and pharmacotherapy, is limited, as in low-resourced settings such as South Africa, PA intervention as an adjunct to PTSD treatment could be relevant. 2 Based on prior studies, 3,8 it was hypothesised that greater moderate and high PA levels would be associated with reduced overall PTSD symptoms and the three PTSD symptom clusters.
The study aimed to examine the association between PA levels and PTSD among individuals aged 15 years and above in South Africa. Sample and procedure Cross-sectional data of the South African National Health and Nutrition Examination Survey (SANHANES-1) conducted in 2012 were analysed. 9 Household members aged 15 years and above were 'interviewed using a structured questionnaire on demographic and health variables'. 9 The individual study survey response rate was 92.6%. 9 Measures Trauma event exposure. Participants were asked, 'Have you ever experienced any of the following events?' (14 events, e.g. 'severe automobile accidents' and 'learned about the sudden, unexpected death of a family member or a close friend?'; Yes or No). 9 Post-traumatic stress disorder was measured with the '17-item Davidson Trauma Scale (DTS)' that assesses 'all primary DSM-IV symptoms of PTSD related to intrusion, avoidance and hyperarousal symptoms'. 10 Participants had PTSD 'if they score at least one re-experiencing, three avoidance/numbing and two hyperarousal phenomena at a frequency of at least twice in the previous week' 10 (Cronbach's alpha 0.94). Physical activity was assessed with the validated 'General Physical Activity Questionnaire (GPAQ)'. 11,12 'It assessed days and duration of PA at work, for transport, and during leisure time in a usual week'. 12 Results were grouped into 'low, moderate and high PA according to GPAQ guidelines'. 12 Domain-specific PA (work, transportation and leisure time) was 'classified into three groups, no (or low) activity, and low and high groups by the median metabolic equivalent (METs) of those having performed such activities'. 13 Sleep problems were defined as 'severe or extreme/can't do' having the 'problem with sleeping, such as falling asleep, waking up frequently during the night, or waking up too early in the morning?' 14 Depressive symptoms were defined as 'severe or extreme/can't do' having the 'problem with feeling sad, low or depressed'. 9 Problem drinking was defined as scoring 3 or more in women and 4 or more in men on the Alcohol Use Disorders Identification Test-Consumption (AUDIT-C) 15 (Cronbach's alpha 0.89). Demographic data included sex, age, population group, employment and residence status. Current tobacco use included the use of 'tobacco smoking and use of other tobacco products'. 9 Psychological distress was defined as scores of 20 or more on the 10-item Kessler scale 16 that was validated in South Africa 17 (Cronbach's alpha 0.93). Body pains were defined as 'moderate, severe, extreme/can't do' having bodily discomfort. 9 Data analysis Data analyses were conducted in STATA software version 15.0 (Stata Corporation, College Station, TX, USA), taking into account the complex study design. Multivariable logistic regression was used to estimate the effects of PA (and its domains) on PTSD (and PTSD symptom criteria), adjusted for age, sex, population group, employment status, residence status, number of trauma types, problem drinking, current tobacco use, sleep problems and depressive symptoms. Covariates were included based on the literature review. 3,4,5,8 Possible two-way interactions were tested, but no significant indirect effects of low, moderate and high PA on PTSD symptoms or any of the individual PTSD symptom clusters (ps > 0.05) were detected. To investigate pathways of associations between total trauma events and high PA and PTSD, we built generalised structural equation models (GSEMs).
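Before turning to the model specification, note that several of the derived covariates above are simple threshold rules; a compact sketch of how two of them might be coded is shown below, using only the cut-offs stated in the text (AUDIT-C of 3 or more for women and 4 or more for men, Kessler-10 of 20 or more). The record layout is hypothetical.

```python
# Two covariates derived from the cut-offs quoted in the text; the record
# layout (dict keys) is hypothetical, only the thresholds come from the study.
def problem_drinking(audit_c_score: int, sex: str) -> bool:
    # AUDIT-C: 3 or more for women, 4 or more for men
    return audit_c_score >= (3 if sex == "female" else 4)

def psychological_distress(kessler10_score: int) -> bool:
    # Kessler-10: scores of 20 or more
    return kessler10_score >= 20

record = {"sex": "female", "audit_c": 5, "k10": 14}
print(problem_drinking(record["audit_c"], record["sex"]),   # True
      psychological_distress(record["k10"]))                # False
```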
Models included variables, such as bodily pain, sleep quality, psychological distress, alcohol consumption and substance use, that were previously used in assessing indirect effects of PA on PTSD. 4,18,19 A maximum likelihood function with observed-information-matrix standard errors was used to fit the models, which were compared using the Akaike Information Criterion (AIC). Missing data were not included in the analysis, and no collinearity was found. Ethical considerations Informed written consent was obtained from participants. The study protocol was approved by the research ethics committee (REC) of the Human Sciences Research Council (REC 6/16/11/11). One in five respondents (20.1%) had exposure to at least one traumatic event in a lifetime based on DTS criteria. Similar proportions of PTSD levels were found for the three PA domains (work, travel and leisure) (see Table 1). Associations between physical activity levels and post-traumatic stress disorder The associations between PA levels and PTSD are presented in Table 2. Associations between domains of physical activity levels and post-traumatic stress disorder In logistic regression analysis, adjusted for age, sex, population group, employment status, residence status, number of trauma types, problem drinking, current tobacco use, sleep problems and depressive symptoms, high work-related PA and moderate travel-related PA were associated with PTSD, while leisure-related PA was not associated with PTSD. High work- and leisure-related PA was positively associated with all three PTSD symptom criteria, while moderate and/or high travel-related PA was positively associated with all three PTSD symptom criteria (see Table 3). Structural equation model analysis Total trauma events had a positive direct and indirect effect on PTSD mediated by high PA (see Figure 1). High PA had a positive indirect effect on PTSD, mediated by psychological distress and problem drinking (see Figure 2). Discussion This is one of the first investigations in a middle-income country, South Africa, to assess the relationship between PA and PTSD. The conditional prevalence of PTSD after trauma exposure found in this study was 2.1%, which seems a little lower than in the previous South African Stress and Health Study (3.5%). 20 While in this study 20.1% reported experiencing at least one lifetime trauma, the previous study reported a much higher exposure of 73.8%. 20 Differences may stem from the more comprehensive trauma exposure measure of the South African Stress and Health Study, which used 27 different types of trauma exposure, 20 while this study only had 14 different types of traumatic events. This investigation found an association between high PA, after controlling for significant covariates, and PTSD total, PTSD re-experiencing and PTSD avoidance symptom criteria, but not PTSD hyperarousal symptom criteria. These findings seem to be contrary to what previous studies found, namely associations between low PA participation and increased PTSD symptoms of hyperarousal, 3 and between high PA and decreased PTSD symptoms 5 and decreased avoidance and numbing symptoms. 4 One possible explanation for this difference could be that trauma and PTSD-risk factors, as well as ameliorating factors such as PA, may be distributed differently in low-income countries, such as South Africa, compared with the high-income countries where the previous studies originated. Several studies 2,4,8 found a beneficial effect of intensive exercise behaviour on PTSD and PTSD symptoms.
When we analysed domain-specific PA, similar results were found: in all three PA domains (work-, travel- and leisure-related PA), a positive association with PTSD and/or PTSD symptoms was observed. Although some other studies found an indirect effect of PA, for example via smoking 18 and poor sleep quality, 19 this study found positive indirect effects of PA on PTSD symptoms, mediated by psychological distress and problem drinking. Previous research 4,21,22 has found that PA has beneficial effects on psychological distress and alcohol use, which was not supported by the findings of this study. However, several other studies 23,24 found a positive relationship between PA and alcohol use. Furthermore, this study did not confirm previous findings 18,19,25 suggesting beneficial effects of PA on PTSD through tobacco use, sleep quality and bodily pain. Clearly, more longitudinal studies are needed to establish the direct and indirect links between PA and PTSD and PTSD symptoms. Study limitations Because the investigation was based on cross-sectional data, no causal inferences can be made. Our data were based on self-report, including PA, which may have led to an overestimation of PA levels. 26 Conclusion This investigation found, in a national community-based sample in South Africa, that, after controlling for relevant confounders, high PA was associated with overall PTSD symptoms, PTSD re-experiencing symptom criteria and PTSD avoidance symptom criteria. Future investigations in low- and middle-income countries are needed to replicate these results. |
Complication during robotic PCI: Iatrogenic guiding catheter dissection Robotic-assisted percutaneous intervention (RPCI) is a revolutionary technology designed to improve operator safety and procedural precision. The second-generation CorPath GRX (Corindus) RPCI platform allows operators to manipulate the guiding catheter using robotic joystick controls. We report a case where robotic guide catheter manipulation caused a dramatic left main stem dissection. We highlight important concepts learned following this complication. |
Psychosocial adjustment, health-related quality of life, and psychosexual development of boys with hypospadias: a systematic review. OBJECTIVE To systematically review studies on the psychosocial adjustment, health-related quality of life (HRQoL), and psychosexual development of boys with hypospadias. METHODS Searches were conducted in several online bibliographic databases. Articles were selected on the basis of predefined criteria. Methodological quality was assessed by two independent reviewers who applied a standardized checklist. When possible, data analyses were performed by calculating effect sizes. RESULTS Thirteen studies met the inclusion criteria; their methodological quality ranged from low to high. None of them had focused on HRQoL. Findings with regard to psychosocial and psychosexual adjustment were inconsistent, though they clearly showed that boys with hypospadias suffer from negative genital appraisal and sexual inhibitions. Overall, medical factors exerted a rather small influence. Psychosocial risk factors have hardly been examined so far. CONCLUSIONS The identification of psychosocial risk factors in methodologically sound studies is necessary to guarantee comprehensive treatment for boys with hypospadias. |
Spatio-temporal rainfall variability in the Amazon basin countries (Brazil, Peru, Bolivia, Colombia, and Ecuador) Rainfall variability in the Amazon basin (AB) is analysed for the 1964-2003 period. It is based on 756 pluviometric stations distributed throughout the AB countries. For the first time it includes data from Bolivia, Peru, Ecuador, and Colombia. In particular, the recent availability of rainfall data from the Andean countries makes it possible to complete previous studies. The impact of mountain ranges on rainfall is pointed out. The highest rainfall in the AB is observed in low windward regions, and low rainfall is measured at leeward and elevated stations. Additionally, rainfall regimes are more diversified in the Andean regions than in the lowlands. Rainfall spatio-temporal variability is studied based on a varimax-rotated principal component analysis (PCA). Long-term variability with decreasing rainfall since the 1980s prevails in June-July-August (JJA) and September-October-November (SON). During the rainiest seasons, i.e. December-January-February (DJF) and March-April-May (MAM), the main variability is at decadal and interannual time scales. Interdecadal variability is related to long-term changes in the Pacific Ocean, whereas decadal variability, opposing the northwest and the south of the AB, is associated with changes in the strength of the low-level jet (LLJ) along the Andes. Interannual variability characterizes more specifically the northeast of the basin and the southern tropical Andes. It is related to the El Niño-Southern Oscillation (ENSO) and to the sea surface temperature (SST) gradient over the tropical Atlantic. Mean rainfall in the basin decreases during the 1975-2003 period at an annual rate estimated to be −0.32%. Break tests show that this decrease has been particularly important since 1982. Further insights into this phenomenon will make it possible to identify the impact of climate on the hydrology of the AB. Copyright 2008 Royal Meteorological Society Introduction The Amazon basin (AB) extends between 5°N and 20°S and from the Andes to the Atlantic Ocean, covering approximately 6 000 000 km². Its fresh water contribution to the global ocean is 15% and its average discharge at the delta is 209 000 m³/s. The basin is divided into three great morphological units: 44% of its surface belongs to the Guyanese and Brazilian shields, 45% to the Amazon plain, and 11% to the Andes. This basin covers seven countries: Brazil (63%), Peru (16%), Bolivia (12%), Colombia (6%), Ecuador (2%), and Venezuela and Guyana (1%). The AB is one of the regions with the highest rainfall in the world and a major water vapour source (Johnson, 1976; Ratisbona, 1976; Figueroa and Nobre, 1990). It can also undergo dramatic droughts, as observed in 2005. Nonetheless, owing to a lack of information, few studies describe the spatio-temporal rainfall variability in the AB countries, except for Brazil. Cooperation programmes between the Institut de Recherche pour le Développement/Institute for Research and Development (IRD) and local institutions have permitted, for the first time, the integration of data from the different Amazonian countries, highlighting a group of pluviometric stations unavailable so far, especially in the Amazon regions of the Andean countries (Bolivia, Peru, Ecuador, and Colombia). Nevertheless, the need for a comprehensive data set remains important in the Andean regions.
Rainfall tends to decrease with altitude, but the windward or leeward exposure of the stations to the dominant moist wind makes it difficult to find a simple relationship between rainfall and altitude (Johnson, 1976; Guyot, 1993; Ronchail and Gallaire, 2006). In contrast, in Brazil, the spatio-temporal rainfall variability has been more widely studied and published than in the Andean countries. The highest values (3000-3500 mm/year) may be found in the northwest of the basin, on the border between Brazil, Colombia, and Venezuela, where the general large-scale relief, such as the large concavities of the eastern slope of the Andes, creates favourable conditions for air convergence and abundant rainfall (Ratisbona, 1976; Nobre, 1983; Salati and Vose, 1984; Figueroa and Nobre, 1990). Abundant rainfall is also registered near the Amazon River delta, where the sea-breeze effect is important. Salati et al. calculated a mean of 2400 mm/year in the central region of the AB, and Marquez et al. and Fisch et al. a mean rainfall of 2300 mm for the Brazilian AB. Different studies, for the whole AB, give values from 2000 to 3664 mm, with the greater part between 2000 and 2200 mm, as found by Marengo and Nobre. Callède et al. report a 2230-mm mean annual rainfall for the AB down to Óbidos (1.93°S, 55.50°W, 800 km from the Amazon River delta), based on 163 rainfall gauges, including stations in the Andean countries, for the 1943-2003 period. Rainfall regimes in the Brazilian Amazon show an opposition between the north and the south, with rainy months in austral winter and summer, respectively (Ratisbona, 1976; Figueroa and Nobre, 1990, among others). A rainy period is observed in MAM in regions close to the Amazon River delta. A better distribution of rainfall over the year characterizes regions towards the border of Peru, Colombia, and Brazil. Among the limited number of studies devoted to the spatial variability of rainfall regimes in the Andean AB, that of Johnson is worth mentioning, as this author analyses the seasonal regime of 107 rainfall gauges in Bolivia, Peru, and Ecuador. In Bolivia and southern Peru there exists a rainy period in austral summer and a dry period in winter, which is more intense in the west, inside the Andes (Johnson, 1976; Guyot, 1993; Aceituno, 1998). Laraque et al. complement the work by Johnson and detail the wide variability of regimes in the Ecuadorian AB based on 47 rainfall gauges, with opposite regimes in nearby zones. A better yearly rainfall distribution can be observed in the lowlands in the northeast of Peru. Interannual rainfall variability in the AB partially depends on the El Niño-Southern Oscillation (ENSO; Aceituno, 1988; Marengo, 1992; Marengo and Hastenrath, 1993; Liebmann and Marengo, 2001, among others). In particular, below-normal rainfall is recorded in the north and northeast of the AB during El Niño events, whereas excess rainfall occurs during La Niña. This signal decreases towards the west and the south of the basin, and an inverse and weak signal can be observed in the Amazon plain of Bolivia (Ronchail, 1998; Ronchail et al., 2005; Ronchail and Gallaire, 2006), which may be related to the ENSO signal observed in the southeast of South America (the south of Brazil, Uruguay, and the northeast of Argentina).
In the tropical Andes of Bolivia and the southern Andes of Peru, rainfall is below normal during El Niño events (Francou and Pizarro, 1985; Aceituno, 1988; Tapley and Waylen, 1990; Rome and Ronchail, 1998; Ronchail, 1998; Garreaud and Aceituno, 2001; Ronchail and Gallaire, 2006), and the glacier meltdown accelerates during these years, while no clear signal can be found during La Niña events. In the north of the Peruvian Andes, no clear signal is found (Tapley and Waylen, 1990; Rome and Ronchail, 1998). The rainfall anomaly is not so pronounced in Ecuador, with a slight rainfall increase during El Niño events for Ronchail et al. and Bendix et al., and a deficit for Vuille et al. The signal is also weak in the Colombian Amazon, where rainfall is abnormally abundant during La Niña events (Poveda and Mesa, 1993). Long-term variability in the AB has been extensively reported in the literature. Chen et al. find a rainfall increase in the AB countries since the 1960s using data from the Global Historical Climatology Network (GHCN). This is in line with the increase in humidity convergence described by Chu et al. and Curtis and Hastenrath. Nevertheless, this trend is not valid for Callède et al., who rebuilt a pluviometric series for the period 1945-1998 based on 43 pluviometric posts and observe a slightly decreasing trend for the period, with the exception of high values recorded from 1965 to 1975. Marengo also finds this slight rainfall decrease in Brazil for the same period, using data from the Climatic Research Unit (CRU), the Climate Prediction Center Merged Analysis of Precipitation (CMAP), and 300 pluviometric stations from different local institutions. Also, Marengo and Nobre and Marengo show an opposition between the long-term rainfall evolution in the northern and southern Amazon. In general, less rainfall has been recorded in the north since the late 1970s, whereas the opposite occurs in the south. These results are consistent with Ronchail with respect to rainfall in Bolivia, and with Ronchail et al., who show an increase in the water level of the Madeira River during the 1970s. These findings may also be observed at the centre of Argentina (among others), and in the discharge of the Paraná River in Paraguay (Robertson and Mechoso, 1998, etc.). Marengo attributes the rainfall increase in the southern Amazon to an intensification of the northeast trade winds and to the increase in water vapour transport from the tropical North Atlantic to the centre of the Amazon. For a shorter period, using CMAP data, Matsuyama et al. also present a decreasing rainfall trend in the north and an increasing trend in the south. Conversely, Zhou and Lau report a rainfall decrease from 1986-1987 onwards in the southwest of the basin, and an increase in the north. To account for this, the authors put forward the warming of the tropical South Atlantic and the shift of the intertropical convergence zone (ITCZ) to the south. Rainfall variability is related to changes in the ocean and the atmosphere, as mentioned before. However, it has also been linked to deforestation. In the AB, deforestation has been considered virtually non-existent until 1960 and the beginning of the 1970s (0.34% of total land area deforested in 1976). A compilation of the major works on the impact of deforestation on AB rainfall has been presented by D'Almeida et al.
It shows that the models developed at a macroscale (>10⁵ km²), simulating a general deforestation, evaluate a 0.40-1.70 mm/day rainfall decrease (Dirmeyer and Shukla, 1994; Polcher and Laval, 1994; etc.). Deforestation also causes the dry season to extend, and a strong rainfall decrease during the dry season (Silva). Nevertheless, present human activity in the AB generates intense deforestation mainly in the southern and eastern basin and little deforestation in other regions, in particular in the NW (Le Tourneau, 2004). That is why meso-scale deforestation models (10²-10⁵ km²) are relevant. They point out a rainfall decrease (Eltahir and Bras, 1994; etc.), but also a rainfall increase, as a result of increased albedo causing convergence and convection in deforested zones (Chen and Avissar, 1994; Avissar and Liu, 1996; etc.), particularly during the dry season. The aim of this paper is to provide a comprehensive study of spatio-temporal rainfall variability, using a new set of enriched data mainly originating from Peru, Bolivia, Ecuador, and Colombia. Likewise, it aims to identify the trend and evolution over time of the average annual rainfall in the basin countries. Within the framework of the Hydrology and Geodynamics of the Amazon Basin (HYBAM) programme, a rainfall variability analysis has been developed to assess the impact on discharge and sediment transport in the AB (Guyot, 1993). This article first presents the data and the related spatial distribution, as well as an explanation of the different methods applied. The first part of the results focusses on spatial rainfall variability, then on regimes. In both cases the analysis is more detailed for the Andean regions. Then, the space-time interannual and pluriannual variability is analysed in relation to atmospheric circulation and to regional modes of ocean and atmosphere variability. Finally, the mean rainfall variability and trends are described for the whole AB during the 1975-2003 period. Discussions and conclusions are provided in the last section. Data and methods The HYBAM programme (http://www.mpl.ird.fr/hybam) has elaborated a monthly rainfall database from in situ stations belonging to different institutions in charge of meteorological and hydrological monitoring: Agência Nacional de Águas (National Water Office - ANA, Brazil), Servicio Nacional de Meteorología e Hidrología (National Meteorology and Hydrology Service - SENAMHI, Peru and Bolivia), Instituto Nacional de Meteorología e Hidrología (National Meteorology and Hydrology Institute - INAMHI, Ecuador) and Instituto de Hidrología, Meteorología y Estudios Ambientales (Hydrology, Meteorology, and Environmental Studies Institute - IDEAM, Colombia). Brazilian data are freely available at http://www.ana.gov.br. Data from SENAMHI, IDEAM, and INAMHI are available on request. The database, made up of a total of 1446 pluviometric stations on a monthly basis, has been submitted to the regional vector method (RVM) (Hiez, 1977 and Brunet-Moret, 1979) to assess its quality. Thus, for the same climatic zone experiencing the same rainfall regime, it is assumed that annual rainfall at the stations of the zone is proportional between stations, with little random annual variation as a result of rainfall distribution in the zone.
The basic idea of the RVM is as follows: instead of comparing pairs of stations by correlation or double mass, a fictitious station is created as a 'sort of average or vector' of all stations in the zone, to be compared with every station. To calculate this 'vector' station, the RVM applies the concept of extended average rainfall over the work period, which is an estimation of the average value that would have been obtained through continuous observations during the study period. On this basis, the least squares method is applied to find the regional annual pluviometric indexes Z_i and the extended average rainfall P_j. These may be calculated by minimizing the sum given (in reconstructed form) below, where i is the year index, j the station index, N the number of years, and M the number of stations. P_ij stands for the annual rainfall at station j in year i; P_j is the extended average rainfall over the period of N years; and Z_i is the regional pluviometric index of year i. The series of chronological indexes Z_i is called the 'regional annual pluviometric indexes vector'. Two methods have been developed in parallel by Brunet-Moret and Hiez, the main difference being the way in which the calculation of the extended average rainfall P_j is carried out. The first one considers that the extended average of a station is calculated using the mean observed values, after deleting outliers, i.e. data differing most from those of nearby stations for a particular year. The second one considers that the extended average of a station is calculated on the basis of the most frequent values (the mode) in accordance with the neighbouring stations. Therefore, there is no need to eliminate the data that differ considerably from the average, as is done in the first method. In this study, Brunet-Moret's method has been applied, and the comparison with the other method has not yielded noticeable differences. On the basis of these concepts, it is possible to analyse the data following an iterative process of station selection within a specific climatic region. The selection is supported by climatological maps and the description of rainfall regimes, as reported in previous studies. The iterative process calculates the vector, revises the results, separates inconsistent stations, calculates the vector once more, etc. Rejected stations close to the border of a region may present the behaviour of a neighbouring region. As a result, they are taken into account to calculate the vector of a new climatic region. Each resulting region is associated with a 'regional vector' that represents the interannual pluviometric variability in the region and is also similar to the behaviour of all the stations which are part of this region. Consequently, this vector is a good indicator of the climatic variability in the region. For each year, this index requires data from at least five stations, in order to find the longest analysis periods per region. The application of the RVM in the AB led to 756 stations (52% of the total) with more than 5 years of continuous data and a lower probability of errors in their series (Figure 1). On average, the data availability period is from 1975 to 2003, but in the Andean countries the series are generally longer, starting in 1960 in Peru and in 1950 in Bolivia. In Colombia and Brazil, most records started between 1975 and 1980, with very few stations with data prior to 1965.
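The minimized quantity itself did not survive extraction. A plausible reconstruction from the definitions just given (the classical Brunet-Moret least-squares formulation of the RVM) is sketched below in LaTeX; the exact published normalization may differ.

```latex
% Reconstructed RVM objective (assumption): least-squares fit of the regional indexes Z_i
% and the extended station averages \bar{P}_j to the observed annual rainfall P_{ij}.
\min_{Z_i,\ \bar{P}_j} \ \sum_{i=1}^{N}\sum_{j=1}^{M}
\left( \frac{P_{ij}}{\bar{P}_j} - Z_i \right)^{2}
```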
The seasonal variability is analysed by means of the percentage of rainfall on a quarterly basis, from December-January-February (DJF) to September-October-November (SON). The seasonal variation coefficient (sVC) is calculated using the mean monthly rainfall. Likewise, the interannual variation coefficient (iVC) is computed using annual rainfall values. The different seasonal regimes are analysed based on rainfall indexes that relate monthly rainfall to annual rainfall. Thus, stations can be classified according to their annual cycle and not according to an amount of water. The index equation (reproduced in reconstructed form below) defines I_i as the monthly index for month i, with PP_i the monthly rainfall for month i and PP_A the total annual rainfall. An ascending hierarchical classification (AHC) is applied to the monthly rainfall indexes to define the optimum number of clusters. The Ward method is applied to maximize inter-class variance. The K-means method is then applied based on the number of groups found through AHC. This method relies on consecutive iterations that decrease intra-group inertia and increase inter-group inertia. The number of iterations was 10, 15, and 25. Although groups can be created based on AHC, K-means makes it possible to obtain several classifications and to identify stable and unstable stations (belonging to different clusters in different iterations). Only those stations belonging to the same cluster in every iteration were used to define the regimes. To measure the average rainfall in the basin and its interannual evolution, the Kriging interpolation method is applied. This method consists of establishing a variogram for each spatial point. This variogram evaluates the influence of the 16 closest stations according to distance. The Kriging method is the only one to take into consideration a possible spatial data gradient. Spatial and temporal structures of interannual rainfall variability are studied based on a varimax-rotated principal component analysis (PCA) (Dillon and Goldstein, 1984) applied to the RVM pluviometric indexes. The use of the RVM indexes rather than the initial data allows long time series to be considered. The applied PCA is of the varimax type. It circumvents the exaggerated influence of variables (vectors) with a high contribution to the factors. The analysis of rainfall trends relies on correlation coefficients; the Pearson coefficient, which is parametric, measures the linear correlation between variables, whereas the Spearman and Kendall coefficients are non-parametric and based on rank and on the rank probability of the data occurrence order, respectively (Kendall, 1975; Siegel and Castellan, 1988). Breaks and changes in the series are evaluated through different methods. The Bayesian Buishand method is based on changes in the series average; the critical values for the identification of breaks are based on the Monte Carlo method, which remains valid even for variables with a distribution different from normal. The Pettitt method is a non-parametric test based on changes in the average and the rank of the series subdivided into sub-series. It is considered one of the most complete tests for the identification of changes in time series. The Lee and Heghinian Bayesian test uses the average as an indicator of change, based on an a posteriori Student's distribution (Lee and Heghinian, 1977). Finally, the Hubert segmentation is based on the significant difference of the average and standard deviation among periods; it is particularly well suited to the search for multiple changes in series.
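The monthly index formula referenced above is missing from the extracted text; given the stated definitions, it presumably reduces to the ratio of monthly to annual rainfall, sketched here in LaTeX (the published version may multiply by 100 to express a percentage):

```latex
% Presumed monthly rainfall index (assumption based on the surrounding definitions):
I_i = \frac{PP_i}{PP_A}, \qquad i = 1, \dots, 12
```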
Geopotential, wind, and humidity data originate from the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis project. The ECMWF ERA-40 reanalysis data used in this study have been obtained from the ECMWF data server. Reanalysis data result from a short-term operational forecast model and from observations of various sources (land, ship, aircraft, satellite, ...). Data are provided four times a day, on a 2.5° latitude × 2.5° longitude global grid, at 23 pressure levels. The vertically integrated water vapour flux is derived from the specific humidity and the horizontal wind between the ground and 500 hPa. Several regional climatic indexes are used to characterize the temporal patterns resulting from the analysis of annual rainfall. The Southern Oscillation Index (SOI) is the standardized pressure difference between Tahiti and Darwin. The Multivariate ENSO Index (MEI) monitors ENSO in the Pacific using sea-level pressure, zonal and meridional components of the surface wind, sea surface temperature, surface air temperature, and the total cloudiness fraction of the sky (Wolter and Timlin, 1993). Both indexes are from the Climate Prediction Center of the National Oceanic and Atmospheric Administration (CPC-NOAA: http://www.cdc.noaa.gov/). Sea surface temperature (SST) data are also from the CPC-NOAA. Monthly SSTs are provided for the northern tropical Atlantic (NATL, 5-20°N, 60-30°W) and the southern tropical Atlantic (SATL, 0-20°S, 30°W-10°E). The standardized SST difference between the NATL and the SATL is computed to characterize the SST gradient in this oceanic basin (one possible formulation is sketched below). The Pacific Decadal Oscillation (PDO) Index is defined as the leading component of North Pacific monthly SST variability, poleward of 20°N, for the 1900-1993 period (http://jisao.washington.edu/pdo/). When the PDO is positive, water is colder in the central and western Pacific and warmer in the eastern Pacific; with a negative PDO, the reverse is observed. These negative and positive PDO 'events' tend to last from 20 to 30 years. The PDO index has been mainly positive since 1976. The management of the pluviometric database, as well as the application of the RVM and the calculation of the average rainfall in the basin, has been carried out using the HYDRACCESS software, developed within the framework of the HYBAM programme (free download at www.mpl.ird.fr/hybam/outils/hydraccess en.htm; Vauchel, 2005). The calculation of changes in the series is made using the KHRONOSTAT software (free download at www.mpl.ird.fr/hydrologie/gbt/projets/iccare/khronost.htm; IRD, 2002). Spatio-temporal rainfall variability in the Amazon basin Rainfall gauges approved by the RVM display a heterogeneous spatial distribution in the AB countries (Figure 1). In Brazil, stations are evenly distributed. However, as the dense forest leads to poor access, the pluviometric stations have been mainly located along the rivers and highways. In the Andean countries there is a great number of stations, often featuring long series, especially in mountainous regions, where access is easier than in the lowlands. On the contrary, stations are few and far between in the lowland regions of Peru, Ecuador, and Bolivia, on the border of Peru and Brazil, and in the northeast region of the basin, on the Brazilian border with Guyana and Surinam (Figure 1).
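As a worked illustration of the Atlantic index mentioned above, one plausible formulation is sketched below; the text does not specify whether each basin is standardized before differencing or whether the difference itself is standardized, so this is one reasonable reading rather than the authors' exact definition.

```latex
% One plausible form of the tropical Atlantic SST gradient index (assumption):
% standardize each basin's monthly SST, then take the NATL minus SATL difference.
G_t = \frac{T^{\mathrm{NATL}}_t - \overline{T}^{\mathrm{NATL}}}{\sigma^{\mathrm{NATL}}}
    - \frac{T^{\mathrm{SATL}}_t - \overline{T}^{\mathrm{SATL}}}{\sigma^{\mathrm{SATL}}}
```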
Spatial variability of annual rainfall Particularly rainy regions (3000 mm/year and more) are located in the northeast, in the Amazon delta, close to the Atlantic Ocean (Figure 2), exposed to the ITCZ, and in the northwest of the basin (Colombia, north of the Ecuadorian Amazon, northeast of Peru, and northwest of Brazil). Rainfall is also abundant towards the southeast, close to the average position of the South Atlantic Convergence Zone (SACZ), established during austral summer from the northwest of the Amazon to the subtropical South Atlantic. Rainfall decreases towards the Tropics, reaching more than 2000 mm/year in the southeast of Brazil and less than 1500 mm/year in the Peruvian-Bolivian plain and in the Brazilian state of Roraima, which is protected from the humid Atlantic flows by the Guyanese shield. This distribution is consistent with the results of Ratisbona, Salati et al., Marquez et al., Figueroa and Nobre, Fisch et al., and Marengo, among others. However, our rainfall map yields more information about the Andean countries. Figure 2 clearly shows lower rainfall in the high Andes regions, mainly in the centre and south. Figure 3 displays the relationship between annual rainfall and altitude for 391 stations located in the Andes. Only a limited number of stations located over 2000 m asl receive more than 1500 mm/year and, in general, less than 1000 mm/year is measured over 3000 m asl. The same situation is found by Guyot and by Ronchail and Gallaire in Bolivia, and by Laraque et al. in Ecuador. At low elevation, abundant rainfall is related to the moist warm air and to the release of a high quantity of water vapour over the first eastern slope of the Andes. The stations registering more than 3000 mm/year are located at less than 1500 m asl (Figure 3). As a result, rainfall diminishes with altitude. Nonetheless, the least rainy stations, such as Caracato (2650 m asl) in the Bolivian Andes with 255 mm/year and Sondorillo (1850 m asl) in the Andes of northern Peru with 345 mm/year, are not the highest (Figure 3). Indeed, the prevailing eastern direction of the moist trade winds and the exposure of the stations on the leeward side of the mountains account for the low precipitation levels measured at low altitudes. For example, little rain is registered in Jaén (620 m asl, 700 mm/year), which is surrounded by high mountains, mainly towards the east (Figure 3). This is why a strong spatial variability is observed under 2000 m asl, where rainfall varies from 500 to 3000 mm/year (Figure 3). Extreme values approved by the RVM analysis are found in positions that favour strong air uplift, such as Churuyacu (500 m asl) in Colombia with 5500 mm, close to a steep slope, and Reventador (1470 m asl) in Ecuador with 6200 mm, located on a remote volcano. There also exists a very rainy zone in the southeast of the Peruvian Amazon. For example, the San Gabán station (820 m asl) receives an average of 6000 mm (Figure 3), and maximum values may be as high as 9000 mm/year (in 1967). It is located in a concavity of the Carabaya Mountain Range (south of Peru), close to steep slopes. It should also be mentioned that the RVM analysis has resulted in the rejection of several stations, particularly in the very humid regions of the Andean countries. These stations, located in remote areas prone to scree and mudslides, have kept very scarce records. Thus, values in excess of 5000 mm in the Chapare, east of Cochabamba, mentioned by Roche et al. for Bolivia, have not been included on the map (Figure 2).
Then, it is clear that both the highest and the lowest annual rainfall values in the AB are registered in the Andean region (Figures 2 and 3). Some cases illustrate this high spatial rainfall variability. In Ecuador, the Reventador station (1470 m asl; 6200 mm) is 80 km from Oyacachi (3200 m asl), whose annual rainfall is 1400 mm; the spatial variation between the two stations is thus 58 mm/km. Also, between Puyo (960 m asl, with 4500 mm), on the border of the Andes, and Alao (3200 m asl, with 1000 mm), situated in an embanked valley at a distance of 55 km, there is a 63 mm/km difference. In Peru, San Gabán (820 m asl; 6000 mm) is 110 km from Paucartambo (2030 m asl, with 530 mm), which is situated in a valley behind the Carabaya Mountain Range. In this case there is a 50 mm/km difference between the two stations. In Bolivia, Cristal Mayu (880 m asl, with 4000 mm) is located 46 km away from Colomi (3280 m asl and 630 mm); the difference is still higher, 73 mm/km. The preceding examples show the important role of relief in determining the annual rainfall (Figure 3). Seasonal cycle The seasonal cycle is assessed with maps showing the quarterly percentage of rainfall (Figure 4) and using the AHC and K-means cluster analysis based on monthly rainfall indexes (Figure 5). The AHC analysis enables the definition of an optimum number of nine clusters corresponding to nine regimes, and the K-means technique gathers together stations experiencing the same regime. The seasonal cycle is also described using quarterly maps showing the mean 1979-1998 geopotential height at 850 hPa and the vertically integrated water vapour transport (Figure 6). Tropical regimes are also depicted in Figure 5(b) (Northern Hemisphere tropical regime) and in Figure 5(c)-(e) (Southern Hemisphere tropical regimes). In the Northern Hemisphere, particularly in the State of Roraima (Brazil), the rainfall peak in JJA is related to the warming of the continent and of the tropical Atlantic and eastern Pacific Ocean surface temperature. To the south, the rainy season in austral summer is related to continent warming, to a low geopotential height in the Chaco region, and to the onset of the South American monsoon system (SAMS) and the related low-level jet (LLJ) along the Andes (Figure 6). On the contrary, the dry season in JJA is related to high geopotential height values and to the retreat of the SAMS (Figure 6(c)). In the south, tropical regimes differ according to the length of the dry season. In the tropical Andes, it lasts from May to September (Figure 5(c)); only 5% of the annual rainfall is registered during this period. In the lowlands the dry season is shorter, lasting from June to August. In the Bolivian plain the dry season is rainier (Figure 5(d)) than in the Mato Grosso (Figure 5(e)). This is because extratropical perturbations skim through the Bolivian lowlands during winter (Figure 6(c)) (Oliveira and Nobre, 1986; Ronchail, 1989; Garreaud, 2000; Seluchi and Marengo, 2000). In the northeastern AB, autumn (MAM) and spring (SON) are the most contrasting seasons (Figure 4(b) and (d), respectively); more than 50% of annual rainfall is measured in MAM, whereas less than 10% occurs in SON. This 'tropical maritime' regime involves a region extending from the Amazon delta to approximately 1000 km into the centre of the basin, at the confluence of the Amazon and Madeira Rivers (Figure 5(f)). In this region, seasonality is mainly controlled by the Atlantic Ocean.
In particular, the precipitation peak in austral autumn is related to the heating of the equatorial Atlantic and to the southernmost position of the ITCZ. On the contrary, in austral spring, the dry season is associated with the northward shift of warm waters and of the ITCZ. In the northwest of the basin, in regions close to the equator, rainfall distribution over the year is more uniform, with percentages close to 25% during each quarter (Figure 4). In Ecuador, the very low rainfall seasonality is related to deep convection over the always-warm surface and to the geopotential height, which is very low from austral spring to austral autumn (Figure 6(b)). However, two different regimes can be highlighted from the upper Negro basin to the lowlands of Ecuador: on the windward slopes of the Andes, a unimodal regime with a slight peak at the end of austral autumn (Figure 5(a)) is due to enhanced convection after the equinox and to a strong zonal water vapour transport (Figure 6(b)). A bimodal regime, with peaks near the equinoxes (April and October) and a slight decrease in austral winter, is depicted in the intra-Andean basins in Peru and Ecuador, and in the Amazon plain, on the border of Peru, Brazil, and Colombia (Figure 5(h)). The semi-annual rainfall cycle results from the zonal oscillation of the continental ITCZ, associated with the semi-annual cycle of radiation and temperature (Figueroa and Nobre, 1990; Poveda, 2004). These results are similar to those described in previous studies for Brazil (Ratisbona, 1976; Marengo, 1992; Zhou and Lau, 2001). However, new pieces of information are provided for the Andean regions, which had remained poorly documented. The sVC (Figure 7) shows the important seasonal variability of rainfall, with values over 0.6 in the inner and tropical Andean regions, in the southern Andes of Peru (in the region of Apurímac, in the upper Ucayali basin) and in southwestern Bolivia (in the region of Sucre, in the upper Mamoré basin). From the south of the Bolivian lowland to southern Peru, in a corridor between the Andes and the Brazilian shield, the relatively low seasonal variability is due to winter rainfall related to extra-tropical perturbations. A strong sVC may be noticed in other tropical regions of the basin, particularly in the southeast (Mato Grosso) and in the north of Brazil (Roraima). In the northeast of the basin, close to the Amazon delta, there is also a major seasonal variation, with an sVC value in excess of 0.5. Between 5°N and 5°S, a strong decrease in sVC is observed from 60°W towards the west, with values under 0.1, mainly in the lowland forests of Peru and Colombia and in the west of the Brazilian Amazon (Figure 7). This evidences the constant presence of rainfall in this region, confirming what is shown in Figures 4 and 5(a) and (h). In the northern part of Peru, there is an important east-west increase in sVC between the Amazon plain and the regions close to the Andes, as well as between the north and the south (throughout the Ucayali basin). The Amazon basin of Peru and Ecuador down to Tamshiyacu (4.00°S, 73.16°W) extends over a surface of 726 400 km², with 53% over 500 m asl. It experiences a high spatial variability of annual rainfall regimes (Figure 8).
The southern part of the basin displays a clear southern tropical regime with a long dry season from May to September, as at the Antabamba station (14.37°S, 72.88°W; 3900 m asl, Figure 8(a)), with an annual cycle beginning in August and a rainy period from December to March. In the upper basins of the Huallaga and Ucayali Rivers, a humid tropical regime at the Quillabamba station (12.86°S, 72.69°W; 1128 m asl, Figure 8(b)) features a much longer and more intense rainy period (from December to May). At Pozuzo (10.05°S, 75.55°W; 258 m asl) in the north, at a lower altitude, a higher rainfall value and a shorter dry period (JJA) are observed (Figure 8(c)). In the north, in the upper Marañón River basin (Figure 8(d)), an intermediate regime between the southern Tropics and the equator features a very rainy period from January to April, as at the Julcán station (8.05°S, 78.50°W; 3450 m asl). In the regions close to the equator, longer rainy seasons are noticed; for example, the Gualaquiza station (3.40°S, 78.57°W; 750 m asl, Figure 8(e)), close to the Andes, presents a rainy season from February to July and no dry period. Towards the east, in Iquitos (3.75°S, 73.25°W; 125 m asl, Figure 8), a regime with only a slight rainfall decrease from June to September and a very weak sVC is depicted, as shown in Figure 7. The spatial variability of rainfall regimes may be even greater, as shown in studies about Ecuador. Stations with different regimes coexist in the same basin because of their different exposures to the easterlies. For example, the minimum rainfall in Guaslan, in an intra-Andean basin, coincides with the rainfall peak in Baños, located on a windward slope (Figure 8(g)). This is due to an increase in the water vapour transport in austral winter, which causes rainfall peaks at windward stations (Bendix, personal communication). The average monthly rainfall calculated for the whole AB (Figure 5(j)) presents a rainy period from December to April (between 220 and 270 mm/month) and less rainfall from July (105 mm) to August (95 mm). The sVC (0.34) is low and shows the influence of the northwest region, which, although not so extended, is very rainy and exhibits a low seasonality (Figures 2 and 5). Nevertheless, this rainfall cycle, with a drier season in winter, also reflects the influence of the extended southern tropical regions, from 5°S to the south of the basin, characterized by a marked dry season around July and August (Figure 5). Interannual variability The interannual rainfall variability resulting from the iVC is particularly important in the mountainous regions of the Andean countries (Figure 9(a)), in the Tropics (Chaco and Roraima), and close to the Amazon delta. High values of iVC are also found on the elevated border of Peru and Brazil (Fitzcarrald Arch, 400-500 m asl, upper Juruá and Purus Rivers, see Figure 9(a)). Regions with lower interannual variability are situated along the northwest-southeast axis of the AB, where rainfall is abundant. Isolated high values may be related to particular local conditions. The interannual-to-seasonal variability ratio (iVC/sVC) highlights a major uniformity of rainfall distribution during the year in the western equatorial regions of the AB (0°-5°S and 65°-77°W) (Figure 9(b)). In this region, interannual variability is up to three times higher than seasonal variability (iVC/sVC up to 3.0). On the contrary, in the south and east of the Amazon, seasonal variability exceeds interannual variability.
Interannual variability is also addressed using a varimax-rotated PCA on the rainfall index vectors resulting from the RVM analysis. On the one hand, the advantage of this procedure lies in the use of data summarizing the interannual variability of homogeneous zones already specified by the RVM. Thus, 25 different regions are defined, of which 9 belong to the Brazilian Amazon plain and 16 are located in the Andean countries (Figure 1). In Brazil, the regionalization is similar to that found by Hiez et al. On the other hand, the use of annual pluviometric indexes from the RVM allows the analysis period to be extended to 1964-2003 (see the Data and methods section). PCAs are computed on quarterly rainfall, i.e. DJF, MAM, JJA, and SON. The first three components of the PCAs generally summarize 45-50% of the total rainfall variability. In JJA and SON, which experience little rainfall except in the northwest, the main variability is pluridecadal, with a change at the end of the 1970s in JJA (Figure 10) and at the beginning of the 1980s in SON (not shown). The first principal components (PCs) account for 26 and 18% of the explained variance in JJA and SON, respectively. High rainfall is registered during the first period in the whole AB. The signal is very strong in the northwest, whereas it is weak in the south. Low rainfall characterizes the second period. We use the ERA-40 reanalysis to take into account the differences in atmospheric circulation between the two periods. Figure 11(a) displays the differences in the 850 hPa geopotential height and wind between 1986-1997, the driest period, and 1967-1976, the rainiest period. After the 1970s, an enhanced geopotential height can be observed over the western Amazon and the tropical Atlantic. Water vapour diverges from these regions, leading to reduced rainfall. Interestingly, as a low geopotential height prevails over eastern Brazil, water vapour converges towards this region (Figure 11(b)). Given that El Niño events are related to dryness in the northern Amazon and that a higher frequency of El Niño events has been observed since the end of the 1970s (Trenberth and Hurrell, 1994), it is assumed that the rainfall decrease in the north of the basin after that date can be attributed to the warming of the tropical Pacific. The time series of the first PC in JJA is negatively correlated with the JJA multivariate ENSO index (MEI) and PDO indexes (r = −0.69 and −0.66, respectively, both correlations being significant at the 99% level; Figure 12). Partial correlations show that both indicators combine to account for 65% of the total rainfall variance. At an interannual time scale, positive MEI values are associated with very low rainfall over the basin, whereas negative MEI values are concomitant with high rainfall. At a pluriannual time scale, low PDO values during the 1960s and 1970s are associated with high rainfall. The opposite can be observed during the 1980s and 1990s. Marengo and Marengo et al. already mentioned connections between the long-term rainfall variability in the AB and the PDO. The aforementioned long-term variability is also present in the MAM and DJF seasons, but it is not the main mode of variability. Pluriannual variability in DJF and MAM, the rainiest seasons in many regions (Figure 5), is observed at a decadal time scale. PC1 in DJF (27% of variance) and PC2 in MAM (16%) show the same space-time modes of variability.
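As an illustration of the analysis step described above, the sketch below shows one common way to compute a varimax-rotated PCA on a years-by-regions matrix of RVM indexes in Python. The file name and matrix layout are assumptions, and the rotation routine is a generic textbook varimax iteration rather than the authors' own code, so results would only approximate the published decomposition.

```python
import numpy as np
import pandas as pd

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Generic varimax rotation of a loadings matrix (variables x factors)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    var_sum = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated**3 - (gamma / p) * rotated
                          @ np.diag((rotated**2).sum(axis=0)))
        )
        rotation = u @ vt
        new_var_sum = s.sum()
        if new_var_sum < var_sum * (1 + tol):
            break
        var_sum = new_var_sum
    return loadings @ rotation

# Hypothetical input: rows = years (1964-2003), columns = the 25 regional RVM
# vectors for one season (e.g. JJA indexes).
X = pd.read_csv("rvm_indexes_jja.csv", index_col=0).values
X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize each regional index

# PCA via SVD of the standardized matrix
u, s, vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / (s**2).sum()
n_comp = 3                                       # first three PCs, as in the text
loadings = vt[:n_comp].T * s[:n_comp]            # loadings of the 25 regions
rotated_loadings = varimax(loadings)

print("Explained variance of first 3 PCs:", explained[:3].round(2))
```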
In MAM, rainfall is abundant (weak) in the northwest (southeast) of the basin during the 1970s and 1990s, and the opposite can be noticed from the beginning of the 1980s to the beginning of the 1990s, with higher than normal rainfall in the southeast (Figure 13). The rainfall increase in the southeastern Amazon at the end of the 1970s is related to a negative geopotential height anomaly (Figure 14(a)) over the southern Amazon, where there is an intensification of the northwest wind along the Andes and of the LLJ, and to the convergence of water vapour from the Atlantic and the northwest Amazon (Figure 14(b)). On the contrary, a stronger than normal geopotential height prevails over the northwestern Amazon, where water vapour diverges. The rainfall increase in the northwestern Amazon during the last decade is related to a reduced northwest wind and LLJ and to an increased water vapour convergence over the north (Figure 14(d)), promoted by a positive geopotential anomaly over most of the continent south of the equator (Figure 14(c)). In DJF and MAM, the PC loadings are very weakly correlated to the PDO (r = −0.38, p > 0.95 in MAM). These results are consistent with Ronchail, who finds a similar pluriannual variability in Bolivia, and with Marengo and Nobre and Marengo, who show opposite long-term evolutions in the north and south of the Brazilian AB. Also, Lau and Wu describe a similar spatio-temporal pattern, with an increase in the annual rainfall along the tropical Andes, whereas the annual rainfall decreases in the eastern and southern parts of the Amazon between 1979-1990 and 1991-2002. However, our study yields some insights into the seasonality of the pluriannual rainfall evolution. In DJF and MAM, an interannual variability represented by PC2 in DJF and PC3 in MAM accounts for 13 and 10% of the rainfall variance, respectively (Figure 15). Strong positive values are displayed during the 1970s and in 1984-1985-1986 and 1989 (many of them La Niña years), and negative values in 1983 and 1992. An opposition is pointed out between, on the one hand, the south of the Andean region (Peruvian and Bolivian Altiplano) and the northeast (in DJF) and east (in MAM) of the AB, and, on the other hand, the southeast of the basin in DJF (the southwest in MAM) and the northwest of the AB. The two PCs are related to the interannual variability of ENSO and of the SST gradient between the NATL and the SATL (Figure 16). Correlation values between the DJF and MAM PCs and the seasonal MEI are −0.55 (significant at the 99% level), indicating that during El Niño events rainfall is less abundant in the tropical Andes and in the east of the AB, as already described by Kousky et al., Aceituno, Marengo, Marengo and Hastenrath, Moron et al., Ronchail, Liebmann, and Ronchail et al., among others. El Niño events are associated with a rising motion over the eastern regions of the equatorial Pacific Ocean and subsidence over the northern AB. Additionally, Garreaud and Aceituno show that the northward position of the Bolivian High during El Niño events prevents the uplift of moist air towards the Altiplano, inhibiting rainfall in this region. On the contrary, rainfall tends to be slightly more abundant during El Niño events in the western and
southern Amazon, as reported by Ronchail, Ronchail et al. (2002, 2005), Bendix et al., Grimm (2003, 2004), and Ronchail and Gallaire. The correlation between these PCs and the annual difference between the NATL and SATL SSTs is also significant at the 99% level (r = −0.59 in DJF and r = −0.48 in MAM). Figure 16 shows that when this gradient is positive, i.e. when the north tropical Atlantic is warmer and/or the south tropical Atlantic is colder than usual, rainfall is less abundant in the northeast of the basin, as previously pointed out and explained by Molion (1987, 1993), Marengo, Moron et al., Nobre and Shukla, and Ronchail et al., among others. Tropical Atlantic Ocean warming causes a rising motion over this ocean and subsidence in the south of the AB, a shift to the north of the ITCZ, and less rainfall over the northeastern Amazon. The opposite can be noticed when the Atlantic SST gradient is negative. The partial correlations between PC2 in DJF, on the one hand, and SOI or MEI and NATL-SATL, on the other, are significant, indicating that both climatic indicators are complementary in accounting for interannual variability. Together they make up 50% of the rainfall variability described by PC2 in DJF. This is consistent with the negative trend reported by Marengo in Brazil. The annual rainfall decrease percentage is −0.30%/year (−30% rainfall in 100 years). This is lower than the average calculated in the Peruvian and Ecuadorian Amazon: −0.83%/year for the 1970-1997 period. All break tests applied to the mean annual rainfall agree with a change in 1982 (Table I), related to the time evolution of the JJA and SON rainfall PC1s (Figure 10), which shows lower rainfall values since 1983 in the north of the basin. The first period, before 1982, averages 2296 mm/year, and the second one, after 1982, 2160 mm/year. Another change is reported by the Buishand and Pettitt tests in 1989 (with slightly lower values after the break), in partial agreement with the rainfall increase in the northwest observed in PC1 in DJF and PC2 in MAM at the beginning of the 1990s (Figure 13). The first period, before 1989, averages 2250 mm/year, and the second, after 1989, 2139 mm/year. At a quarterly time scale, it clearly appears that rainfall decreases in DJF, JJA, and SON during the 1975-2003 period, with trends significant at the 95, 90, and 99% level, respectively (Figure 17(b)). In other words, the annual rainfall decrease is due to the strong negative trend observed in JJA and SON (Figure 10) in the extreme northwest of the basin, which remains rainy during these seasons (Figure 5(a), (b), and (h)). At the end of the century, a positive trend developed from 1992 to 2003 in MAM (at the 95% significance level), which is consistent with the MAM PC2 (Figure 13), whereas a weak negative trend was found in SON and no trend in DJF and JJA (Figure 17(b)). From a hydrological standpoint, a major finding is the increasing rainfall amplitude observed between SON and MAM since 1992. Conclusions For the first time, a database with in situ pluviometric information gathers together 1446 original rain gauges from the five countries that form the better part of the AB.
Monthly rainfall data have been collected for the 1964-2003 period within the HYBAM programme. The distribution of the selected stations (1964-2003 period) shows that the main data contribution is from the highlands of the Andean countries (Peru, Bolivia, Ecuador, and Colombia). Additionally, the stations are unevenly distributed, with a smaller number of posts in the plains of the Andean countries because of the remoteness of these regions. In the Andean regions of the AB, very high and very low rainfall values (between 6000 and 250 mm/year) are recorded at nearby stations, as observed in the Himalaya chain by Dobremez. The strong spatial variability is due to the decrease of rainfall with altitude and to the leeward or windward position of the stations. The highest rainfall in the AB is observed in low windward regions (over 6000 mm/year) and, conversely, low rainfall is measured at leeward and elevated stations (under 530 mm/year). In the lowlands, the northwest and northeast equatorial regions are the rainiest zones, with values over 3000 mm/year. Less rainfall is measured in the tropical regions. These results complement what is shown in many studies about rainfall distribution in the Brazilian Amazon, and in particular a focus is given to the east-west and north-south rainfall gradients in Peru. Rainfall regimes evidence the strong opposition between the northern and southern Tropics, owing to the alternating warming of each hemisphere and to the American monsoons. Next to the Amazon delta, a MAM maximum and a SON minimum are associated with the seasonal migration of the ITCZ. In the northwest equatorial region there is a better rainfall distribution within the year, with quarterly percentages of rain close to 25%. In the equatorial Andes, the distribution of rainfall regimes is highly complex and associated with the stations' exposure: bimodal regimes in intra-Andean basins are found close to unimodal regimes at windward stations. This particular subject is more widely developed in Laraque et al. Various intermediate regimes are described between equatorial and tropical regions; a focus on Peru is also proposed, as very little information is available to this day. The RVM has supported not only the analysis of data quality, but also the creation of homogeneous regions exhibiting the same interannual rainfall variability, and the computation of 25 indexes (vectors) that summarize the pluviometric variability of 25 regions. PCA has been performed on the quarterly indexes to identify the main spatial and temporal rainfall patterns. Three main modes of spatio-temporal variability have been defined, and the related spatial patterns are largely dependent on the Andean country indexes. A long-term variability characterizes the rainfall evolution from June to November. It shows a rainfall decrease since the end of the 1970s-beginning of the 1980s in the whole basin, and especially in the northwest. This change is due to the long-term increase of the near-surface geopotential height over the western part of the Amazon. It is also associated with the long- and short-term variability in the Pacific Ocean (PDO and ENSO). During the rainiest seasons, DJF and MAM, the long-term variability is interrupted at the beginning of the 1990s, featuring a clear NW-SE opposition, with more rainfall in the NW during the 1970s and 1990s and less rainfall during the 1980s, the opposite occurring in the SE. This variability is driven by a reduced water vapour transport by the northwest wind along the Andes and the LLJ during the 1990s, which promotes rainfall in the northwest.
The opposite conditions, causing enhanced rainfall in the south, are observed during the 1980s. Finally, an interannual variability in DJF and MAM is related to the Pacific and Atlantic interannual variability. Rainfall is less (more) abundant in the northeastern AB during El Niño (La Niña) events and when the SST gradient is positive (negative) in the tropical Atlantic. Rainfall is also less abundant over the southern tropical Andes during El Niño, whereas, on the contrary, it tends to be more abundant in the western and southern AB. The mean rainfall over the basin, computed down to its outlet, averages 2200 mm/year for the 1975-2003 period. This value is consistent with different results yielding values between 2000 and 2200 mm for the AB (Marengo and Nobre, 2001; Marengo, 2004). The trend during this period is significantly negative, and break tests indicate changes in 1982 and 1989, with less rainfall afterwards. The seasonal mean rainfall over the basin shows different evolutions for the 1975-2003 period. Rainfall diminishes dramatically during the drier seasons (JJA and SON) and not so much in DJF and MAM. Opposite trends appear after 1992; rainfall increases in MAM, whereas it decreases in SON. The resulting increase in rainfall amplitude is consistent with the pluriannual variability shown by the MAM PC2, i.e. with high rainfall values in the NW and low rainfall values in the south after 1992, and with the break detected in 1989 in the mean rainfall of the basin. The rainfall decrease is related to changes in the ocean and atmosphere, as seen before. However, it may also be associated with deforestation. Contrary to what might have been expected, a strong 1975-2003 rainfall decrease is observed during the dry season in the north of the basin, which is very rainy and largely undeforested, whereas the decrease is weak in the south, the most deforested region. To conclude, the assumed deforestation impact on rainfall does not seem to have taken place as expected in the most deforested areas. (Table I reports the results of the break-detecting tests applied to the mean annual rainfall in the AB; 'X' indicates a break in the series, and the mean, standard deviation, and variation coefficient are given for the 1975-1982 and 1983-2003 periods.) Nevertheless, this issue will have to be further addressed in the future. Our results are in line with those of Zhou and Lau, who reported interannual, decadal, and interdecadal rainfall variability in South America during the 1979-1995 period. Nonetheless, the introduction of data from the Andean countries, where variability reaches a peak, has a major impact on the spatial structure of rainfall variability. In particular, our study complements the north-south rainfall variability reported by Marengo (1992, 2004). The description of two modes of long-term rainfall variability leads to a better understanding of the runoff evolution in the main stream of the Amazon River, particularly with respect to the intensification of runoff extremes, without taking into account the changes in land use. These results make it possible to identify the locations with the main spatio-temporal rainfall variability in the AB and, as a consequence, to highlight those regions where future research aiming to define the causes of rainfall variability will be conducted. This will be done in order to address such issues as whether rainfall variability is related to climate variability, to climate change, or to changes in land use such as deforestation.
A better insight into regional rainfall variability is also conducive to a greater understanding of the regional runoff variability in the sub-basins of the Amazon, and especially of the frequent major floods and very weak low flows that have recently been observed. |
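As a hedged illustration of the kind of break test summarized in Table I, the following minimal Python sketch applies a non-parametric Pettitt-type change-point test to an annual rainfall series; this is a common choice for detecting a single break, not necessarily the exact test used in the study, and the rainfall values in the example are invented for illustration.

import numpy as np

def pettitt_change_point(series):
    # Pettitt's non-parametric test for a single change point in a series.
    # Returns the index closing the first segment and an approximate p-value.
    x = np.asarray(series, dtype=float)
    n = len(x)
    u = np.array([np.sign(x[t + 1:, None] - x[:t + 1][None, :]).sum()
                  for t in range(n - 1)])
    k = np.abs(u).max()
    t_break = int(np.abs(u).argmax())
    p_value = 2.0 * np.exp(-6.0 * k ** 2 / (n ** 3 + n ** 2))
    return t_break, p_value

# Hypothetical basin-average annual rainfall (mm/year), for illustration only
rain = [2350, 2290, 2410, 2300, 2380, 2240, 2180, 2150, 2120, 2200,
        2090, 2160, 2100, 2050, 2130, 2020, 2080, 2060]
idx, p = pettitt_change_point(rain)
print(idx, round(p, 3))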
Layered Li(Ni0.2Mn0.2Co0.6)O2 synthesized by a molten salt method for lithium-ion batteries Sub-micron size Li(Ni0.2Mn0.2Co0.6)O2 was synthesized by the molten salt method at 800 °C and 900 °C using a LiOH:LiNO3 eutectic salt for the first time, and the phases were characterized by X-ray diffraction (XRD), SEM, density and BET surface area. Rietveld refinement of the XRD data showed 5 and 3% cation mixing in the compounds synthesized at 800 °C and 900 °C, respectively. Galvanostatic charge-discharge cycling at 30 mA g−1 between 2.5 and 4.4 V vs. Li at room temperature showed second-cycle discharge capacities of 119 and 133 mA h g−1 for the phases synthesized at 800 °C and 900 °C, respectively. The capacity retention was 81% and 87%, respectively, between the 2nd and 50th cycles. After reheating the 900 °C sample for another 2 hours at 900 °C, the XRD pattern shifted noticeably to lower angles, which indicated a reduction of Ni3+ to Ni2+ and was further confirmed by X-ray photoelectron spectroscopy (XPS) measurements. The re-heated compound showed an improved discharge capacity of 159 mA h g−1 (2nd cycle) and retained a capacity of 123 mA h g−1 at the end of the 50th cycle, corresponding to a capacity retention of 77%. Cyclic voltammetry studies on the above compound clearly showed the redox peaks due to Ni2+/4+ and Co3+/4+, and the 4.5 V structural transition was not suppressed. The cathodic performance of the phases improved upon cycling to the cut-off voltage of 4.3 V. |
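As a small check on the capacity-retention figures quoted above, the arithmetic can be reproduced in a couple of lines of Python; the helper function name is ours, and the capacities are taken from the abstract.

def capacity_retention(c_early, c_late):
    # Capacity retention (%) between an early and a late cycle
    return 100.0 * c_late / c_early

# Re-heated 900 °C sample: 159 mA h/g at the 2nd cycle, 123 mA h/g at the 50th cycle
print(round(capacity_retention(159, 123), 1))  # ~77.4 %, consistent with the quoted 77 %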
External Dose to Recovery Teams Following a Wide-area Nuclear or Radiological Release Event Supplemental digital content is available in the text. Abstract The common radionuclide 137Cs is a gamma-ray source term for nuclear reactor accidents, nuclear detonations, and potential radionuclide dispersal devices. For wide-area contamination events, one remediation option integrates water washing activities with on-site treatment of water for its immediate reuse. This remediation option includes washing buildings and roadways via firehose, collecting the wash water, and passing the contaminated water through chemical filtration beds. The primary objective of this study was to quantify the dose incurred by workers performing a remediation recovery effort for roadways and buildings following a wide-area release event. MicroShield® was employed to calculate the dose to workers at the roadway level and to calculate total dose rates while performing washing activities. This study finds that for a realistic contamination scenario for a wide area of a large urban environment, decontamination crews would be subjected to <220 µSv per person, much less than the 50,000 µSv limit for occupational dose. By extrapolation, one decontamination team of 48 people could continue washing operations on a total of 2.8 km² before reaching their incurred annual dose limits. Though it is unrealistic to assign one team that entire area, we can conclude external dose will not limit worker deployment given the range of contamination levels adopted in this study. |
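The MicroShield scenario parameters are not reproduced above, so only the proportional scaling implied by the quoted numbers can be sketched; the per-operation reference area below is an inference from the abstract's own figures, not a stated input.

annual_limit_uSv = 50_000      # 50 mSv annual occupational dose limit, as cited above
dose_per_operation_uSv = 220   # upper-bound dose per worker for one washing operation
total_area_km2 = 2.8           # area one 48-person team could cover before the limit

# Implied reference area covered per washing operation (an inference)
reference_area_km2 = total_area_km2 * dose_per_operation_uSv / annual_limit_uSv
print(round(reference_area_km2, 4))                   # ~0.0123 km^2 per operation
print(annual_limit_uSv // dose_per_operation_uSv)     # ~227 operations before the annual limit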
INTERNATIONAL DISPUTES AND CRISES AND METHODS OF OVERCOMING THEM After a short introduction, the author deals with international disputes (their concept and resolution) and then turns his attention to international crises (concept, resolution, crisis management). He notes that unresolved disputes often lead to crises, and these, in turn, if not resolved in time, lead to various conflicts in the political, ideological, economic, and other fields, and sometimes even end in armed conflict. Therefore, it is necessary to resolve international disputes as soon as possible, preventing them from becoming crises; and, if a crisis has already arisen, it is important not to allow it to develop into a conflict. All this must be done taking into account the specificity of each situation, and only by peaceful means. |
Hannah Arendt's political thinking on emotions and education: implications for democratic education ABSTRACT This paper asks: when political emotions are invoked in the classroom, can this be done without the process of democratic education degenerating into a form of emotional and/or political indoctrination? The source of inspiration for addressing this question is Hannah Arendt's political thought on emotion and education. The aim of the article is to show that despite the tensions and weaknesses that have been identified over the years about Arendt's views on both emotions and political education, she provides compelling insights against the possibilities of political education degenerating into moral-emotional rhetoric. Arendt highlights the dangers of constructing political emotions in the classroom as the foundation for political action, while acknowledging the constructive role for the emotions in the development of political agency. The paper concludes that Arendt's insights on emotions and political education can help educators avoid potential pitfalls in efforts that (re)consider the place of political emotions in the classroom. |
Does an extended stroke unit service with early supported discharge have any effect on balance or walking speed? OBJECTIVE To evaluate the effect of an extended stroke unit service with early supported discharge on balance and walking speed, and to explore the association between initial leg paresis, initial movement ability and balance one year after stroke. DESIGN A randomized controlled trial comparing early supported discharge with ordinary stroke unit service. PATIENTS A total of 62 eligible patients after stroke. METHODS The outcome measures were Berg Balance Scale and walking speed at 1, 6, 26 and 52 weeks after stroke. RESULTS We found no significant differences between the 2 groups during follow-up. There was a significant improvement on Berg Balance Scale (p=0.013) and walking speed (p=0.022) in the early supported discharge group, but not in the ordinary service group, from 1 to 6 weeks' follow-up. All patients with initial severe leg paresis suffered from poor balance one year after the stroke. The odds ratio for poor balance was 42.1 (95% confidence interval; 3.5-513.9) among patients with no initial walking ability. CONCLUSION These results do not conclusively indicate that early supported discharge has an effect on balance. A strong association was found between initial severe leg paresis, initial inability to walk and poor balance after one year. |
Acquired intracranial arterial aneurysm and stroke after vessel dissection in a child with coarctation of the aorta Vascular events in patients with coarctation of the aorta have been extensively reported and account for the majority of morbidity and mortality in untreated patients. The exact mechanism for this association is not completely understood and may include acquired anomalies or congenital abnormalities of intracranial vessel. Here we report a case of intracranial internal carotid artery dissection with subsequent formation of acquired large carotid aneurysm in a child with severe systemic hypertension and coarctation of the aorta. Endovascular aneurysm exclusion was pursued and it was able to control this potentially lethal complication. This case supports the notion of acquired nature of intracranial vessel abnormalities and underscores the clinical role of interventional neuroradiology in a subset of patients with congenital heart disease. |
. OBJECTIVE To assess the natural pregnancy and to determine the morphofunctional aspects of ovaries of rabbits submitted to bilateral oophorectomy and orthotopic allogeneic or autologous intact and sliced ovarian transplantation without a vascular pedicle. METHODS Fifty-six female New Zealand White and California rabbits were studied. The ovaries were removed and orthotopically transplanted or replaced without vascular anastomoses: Group 1 (n = 8), only laparotomy and laparorrhaphy were performed; Group 2A (n = 8) intact ovaries were reimplanted on both sides; Group 2B (n = 8) both ovaries were sliced and orthotopically reimplanted; Group 2C (n = 8), an intact ovary was reimplanted on one side and a sliced ovary on the other side; Group 3A (n = 8) intact ovaries were transplanted on both sides, Group 3B (n = 8) both ovaries were sliced and orthotopically transplanted, Group 3C (n = 8), an intact ovary was transplanted on one side and a sliced ovary on the other side. Three months later, the females were paired with males for copulation. Estradiol, progesterone, follicle stimulating hormone and luteinizing hormone levels were assessed. The morphological aspect of the ovaries was studied and the number of pregnancies and litters were also determined. The number of successful pregnancies and the number of litters was compared between the groups by the chi-square test. One-way ANOVA and the Tukey-Kramer tests compared the hormonal dosages. The significance was of p < 0.05. RESULTS Pregnancies occurred in seven (87.5%) rabbits of Group 1, in 37.5% in Groups 2A and 3A, in 50% of groups 2B, 2C and 3B, and in 62.5% of group 3C. Hormone levels and histology confirmed the vitality of all ovaries. CONCLUSION Intact or sliced orthotopic allogeneic and autologous ovarian transplantation without a vascular pedicle is viable in rabbits, and preserves their fertility and hormonal functions. |
Design Parameters for Diesel Hydro Desulfurization (DHDS) Among gaseous air contaminants, sulfur dioxide is considered the principal pollutant. Its emission is generally caused by the combustion of sulfur-bearing fuels, and it is further converted into sulfuric acid in the atmosphere through reaction with oxygen and moisture. This airborne acid damages steel buildings, bridges and machinery. Hence there is a major need to decrease the sulfur released into the environment in order to ensure better and healthier living conditions. The world runs on energy, for which fossil fuels are the main source. Because fossil fuels are complex networks of hydrocarbons, their combustion or use yields products and effluents that are harmful to the atmosphere. Sulfur is one such effluent, which seriously harms environmental conditions, so it has to be removed to appreciable levels. In India, diesel accounts for up to 70% of fuel use, so diesel is chosen as the fuel oil for sulfur reduction. The reduction of sulfur is done by hydrodesulfurization, and the process is termed diesel hydro desulfurization (DHDS). It is achieved by converting mercaptan sulfur to hydrogen sulfide (H2S) by reaction with hydrogen over a cobalt-molybdenum catalyst. By this route the sulfur content is reduced from 1 wt% to 0.25 wt%, and it is further reduced to 0.05 wt% using trickle bed reactors. |
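As a quick check on the concentrations quoted above, the implied sulfur removal can be computed as follows; this is a trivial sketch and the function name is ours.

def sulfur_removal_pct(feed_wt_pct, product_wt_pct):
    # Percentage of feed sulfur removed across the hydrotreating step
    return 100.0 * (feed_wt_pct - product_wt_pct) / feed_wt_pct

print(sulfur_removal_pct(1.0, 0.25))  # 75 % removal over the cobalt-molybdenum stage
print(sulfur_removal_pct(1.0, 0.05))  # 95 % overall after the trickle bed reactors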
Masked by annotation: Minor declarative complementizers in parsed corpora of historical English This article discusses some of the potential problems derived from the syntactic annotation of historical corpora, especially in connection with low-frequency phenomena. By way of illustration, we examine the parsing scheme used in the Penn Parsed Corpora of Historical English (PPCHE) for clauses introduced by so-called minor declarative complementizers, originally adverbial links which come to be occasionally used in complementizer function. We show that the functional similarities between canonical declarative complement clauses introduced by the major declarative links that and zero and those headed by minor declarative complementizers are not captured by the PPCHE parsing, where the latter constructions are not tagged as complement clauses, but rather as adverbial clauses. The examples discussed reveal that, despite the obvious advantages of parsed corpora, annotation may sometimes mask interesting linguistic facts. |
Spirulina culture trial for better resilience to COVID-19 in Toamasina A first experimental study on the production of spirulina (Arthrospira platensis) has been carried out at the Multifunctional Laboratory of the ISSEDD-University of Toamasina. Its purpose is to verify the feasibility of such a crop in an area where climatic parameters could be a limiting factor in the production of spirulina. With the aim of monitoring the growth of this alga under laboratory conditions intra muros and in a controlled greenhouse extra muros, this manuscript starts from the hypothesis that it would be possible to practice spirulina cultivation in Toamasina. Cultivation was carried out successively from a 1.5-liter inoculum, then in 30-liter containers, before transfer to a large 3 m3 extra muros greenhouse container. Periodic checks of the temperature, turbidity and salinity of the culture medium, as well as regular monitoring of the growth and productivity of the algae, were carried out over a period of 180 days (d). The algae grew at an average rate of 2.073 g.m-2.d-1, or the equivalent of 5.01 mg.l-1.d-1. A harvest of 4.31 g.m-2.d-1, with a specific growth rate of 0.0028 h-1 and a generation time of 251.73 h, was recorded during the experiment. Compared with the values obtained in Tulear, one of the spirulina-producing areas in southwestern Madagascar, these values turn out to be low but promising, given the climatic conditions of Toamasina, where the sky is often overcast, with less brightness, more humid air and a rainy climate. 
For better growth and sustained productivity, controlling climatic parameters, coupled with the recovery of local materials are recommended in the case of extra mural cultivation. This trial constitutes an interesting avenue in the fight against COVID-19 insofar as spirulina is known for its immune-stimulatory and antiviral actions. Improving the nutritional quality of a predominantly vulnerable population of Toamasina via this alga will thus contribute to increasing its social resilience in the face of this pandemic. Introduction Spirulina is known for its richness in proteins, minerals, iron and vitamins, as well as its content of vitamins A and B12; it strengthens man's immune system in a powerful way. Depending on its origin, it contains 55 to 70% of excellent quality proteins (;Falquet, 2000). It also contains an interesting quantity of unsaturated fatty acid of the omega 6 family, chlorophyll (which has a positive influence on the manufacture of red blood cells and purifies the blood), minerals and trace elements. Studies also show the antiviral action of spirulina in vitro and in mice, especially in the case of influenza A (). Spirulina plays a role in the inhibition of virus penetration and inhibition of the replication phase of viruses. A study published in the journal Nature () shows the effectiveness of a spirulina extract to stop the spread of the influenza virus. Spirulina extract prevents the formation of viral plaques and thus stops the infection that could result. Chen et al confirm that spirulina is most effective in containing the virus when it is used in the early stages of infection. These claims have been supported by McCarty & DiNicolantonio (in press), who argue that apart from the anti-inflammatory and antioxidant properties of blue-green algae such as spirulina, it would contribute to the reduction in the mortality rate of mice infected with RNA viruses, including influenza and coronavirus. Given these nutritional and medical virtues of spirulina, it is interesting to turn to this alga in the fight against the coronavirus, especially in Toamasina, a Malagasy city where the COVID-19 pandemic has been rife since March 2020, and where over 75% of the population live below the poverty line defined by the World Bank (Banque Mondiale, 2019;).According to official figures as of August 7, 2020, Madagascar, including Toamasina, has been recording since March 2020, 12,526 positive cases for the novel coronavirus, with 134 deaths (WHO, 2020a). In addition to the criteria of vulnerability to COVID-19 linked to smoking (WHO, 2020b;Vardavas & Nikitara, 2020;Cai, 2020;), age (CDC COVID-19 Response Team, 2020) and co morbidities (), it appears that more than 3 out of 4 positive individuals are socio-economically vulnerable to this pandemic (). Affecting respectively 42% and 51% of Malagasy children under 5 years old, the deficiencies in vitamin A and iron, as well as the importance of the ratio of vulnerable people / coronavirus positive cases in Toamasina (), as well as the increase of the malnutrition rate to 50.1%, justify the promotion of spirulina production in the city. Strengthening the physical resilience of the population through a supply of vitamins and protein via this alga would help reduce the vulnerability of the most disadvantaged groups. Thus, the local production of spirulina not only makes this alternative a reality, but also facilitates access to this product, which has so far been imported from other producing localities such as Tulear and Majunga. 
Therefore, this manuscript is based on a hypothesis according to which spirulina culture would be technically feasible in Toamasina, with the possibility of having a good yield. Its purpose is to verify the feasibility of such a culture in an area where climatic parameters could be a limiting factor in the production of spirulina., the coolest months are found in July -August (16.1 to 18.3 °C for 2017), while the hottest period of the year is between December and February (30.9 °C to 31.9 °C in 2017). The city is located on the east coast of Madagascar, a country where more than 75% of the population earns less than USD 1.9 per day (Banque Mondiale, 2019). Socio-economically, the majority of the population can be described as "vulnerable" due to its low income, limited access to basic medical care (prevention and treatment) and education, as well as poor nutritional quality, negatively impacting its standard of living and socioeconomic well-being (). Experimental protocol The small cultivated aquatic being was an unbranched, spirally coiled, filamentous prokaryote 0.3 mm long, consisting of juxtaposed cells (Jourdan, 2006;Fox, 1999 ;Fox, 1996). Having the shape of a tiny coil spring, the species Arthrospira platensis object of the present protocol, has an average of 7 turns with a filament diameter of about 10m. However, its morphology and length could vary according to the conditions of the culture medium, including light intensity, temperature, mineral content, etc. The starting inoculum was a strain of Arthrospira platensis from Tulear (South-West of Madagascar). The test took place between mid-August 2017 and mid-February 2018, Cultivated usually in vitro in a 1.5 liter balloon, the culture was carried out over time, by contribution of 'new medium', in 5 liter containers after 30 days (D30) of culture, then in 10 liter containers after 60 days (D60). Transfer to 30-liter B1, B2 et B3 tanks took place between the 60th (D60) and the 90th day (D90) of cultivation. Finally, controlled greenhouse production in a large 3 m3 tank (45 cm high and 2.9 m in diameter) began at D90, until the end of the trial (D180) in February 2018. It should be noted that the renewal of the medium consists of a supply of culture water (subjected to room temperature) rich in N, P and K, with sodium bicarbonate dosed (NaHCO3) at 8 g.l -1, sea salt (NaCl) at 5 g.l -1, urea (CO(NH2)2) at 0.1 g.l -1, phosphoric acid (H3PO4) at 1 ml.l -1, sea water at 16 ml.l -1, and iron (Fe2(SO4)3) dosed at 0.2 ml.l -1. Daily checks of the temperature, turbidity and salinity of the culture medium, as well as regular monitoring of the growth and productivity of the algae were carried out during the experimentation period. The essential physicochemical parameters (temperature, salinity were measured using a portable water analysis kit WTW Multi 3430 SET F IDS. Other parameters such as turbidity and concentration of the medium were measured using the Secchi disc. The shape and number of filaments were assessed by microscopic observations and counts. 
The development of the culture was assessed from the Specific Growth Rate (SGR, μ) and the splitting time or generation time G; these parameters are given by the following formulas: μ = (ln xf - ln xi) / (tf - ti) and G = ln 2 / μ. With: μ: specific growth rate (in h-1); xf and xi: final and initial number of filaments; tf - ti: time interval between two measurements (in h); G: time of filament splitting or regeneration time (in h). Productivity, which is the increase in biomass per unit volume or area per unit time, was also calculated to measure dry weight per liter per day and dry weight per m2 per day; it was obtained by dividing the dry weight harvested by the culture volume (or surface area) and by the duration in days. Variable physico-chemical parameters but kept acceptable Subjected to the same culture conditions, the three 30-liter tanks B1, B2 and B3 recorded temperatures between 27.1 °C and 21.3 °C during the first 2-3 months (figure 1). Between D60 and D90, the values given by the Secchi disc vary between 3 and 7 cm, and are inversely proportional to the concentration of spirulina. As for the salinity, it oscillates between 9.8 and 18.7‰. Between D90 and D180, the temperatures of the culture medium hover around 25.2 and 29.5 °C in the controlled greenhouse extra muros. Contrary to the case of the 30-liter tanks, the salinity in the large 3 m3 container decreases, oscillating between 9.4 and 11.5‰ (figure 2). Furthermore, it was noted that the variation in the value of the three essential parameters (temperature, turbidity and salinity) of the culture medium occurs after each addition of liquid. The evolution of the physico-chemical parameters also follows that of the season during which the test was carried out: the transition to the hottest months of the year (from mid-August-mid-October to mid-November-February, i.e., from D60-D90 to D90-D180) is marked by exposure of the extra muros devices to higher temperatures, and therefore to more marked evaporation, resulting in an increase in the salinity of the culture medium and a decrease in the value of the Secchi disc. Despite these various parametric variations, the measured values are acceptable in the absence of significant mortality (<10-15%) of the algae cultivated both in vitro and in the controlled greenhouse. Promising growth parameters With an average Specific Growth Rate of μ = 0.0012 h-1, the 30-liter tanks recorded an SGR between 0.0010 and 0.0013 h-1 and a dry weight production of 0.11 g.m-2.d-1, that is 0.00029 g.l-1.d-1, after 60 to 90 days of culture, with a generation time oscillating between 519.47 and 667.18 h and an average G of 600.02 h. Between D90 and D180, the mean values reach μ = 0.0028 h-1 and G = 251.73 h, with a dry weight production of 2.073 g.m-2.d-1, that is 0.0051 g.l-1.d-1. At the end of the trial, that is to say on D180, the harvest from the large 3 m3 extra muros container yielded 4.31 g.m-2.d-1 of dry spirulina, i.e., the equivalent of 889.18 g fresh weight for the whole container of 6.67 m2 surface area, or 215.93 g dry weight for a desiccation rate of 23.63%. (Figure 3: daily evolution of the number of filaments in the culture medium between D60 and D90.) Regarding the growth rate of the culture, interpreted through the Secchi disc values obtained, it turns out that between D90 and D180 the algae grow at the rate of 1.91 cm.d-1 ± 0.36 (95% confidence interval), compared to 0.56 cm.d-1 ± 0.37 (95% confidence interval) when the culture was in its first 60-90 days. 
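To make the growth formulas above concrete, here is a minimal Python sketch of the same calculations; only the formulas for μ, G and productivity follow the definitions given in the text, while the filament counts and masses in the example are invented for illustration.

import math

def specific_growth_rate(x_i, x_f, dt_h):
    # mu = (ln x_f - ln x_i) / (t_f - t_i), in h^-1, from filament counts
    return (math.log(x_f) - math.log(x_i)) / dt_h

def generation_time(mu):
    # Doubling (generation) time G = ln 2 / mu, in hours
    return math.log(2) / mu

def areal_productivity(dry_mass_g, area_m2, days):
    # Dry-weight productivity in g.m-2.d-1
    return dry_mass_g / (area_m2 * days)

mu = specific_growth_rate(x_i=1.0e4, x_f=2.1e4, dt_h=11 * 24)  # hypothetical counts
print(round(mu, 4), round(generation_time(mu), 1))             # ~0.0028 h^-1, ~247 h
print(round(areal_productivity(28.7, 6.67, 1.0), 2))           # ~4.3 g.m-2.d-1 for an illustrative daily harvest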
Student's statistical test shows that the two speeds of growth differ significantly at the probability threshold of 5% (d = 1.36 >t0,05 ; Sd = 0.36). Unfavorable abiotic factors but possibilities for improvement with promising results The luminous intensity of solar origin, one of the parameters which are involved in the success of a spirulina culture, being less provided in Toamasina where one registers 162 hours of sunshine per month, that is to say 5.4 hours per day, against 3600 hours per year in Tulear, or 10 hours per day (Infoclimat, one of the potential producing sites of spirulina in Madagascar. Containing chlorophyll, spirulina needs light to develop. Average annual temperature is around 24.4 °C in Toamasina, it rises to 25.5 °C in Tulear where the air is drier (<60-65% average annual humidity against >75-80% in Toamasina). Compared to results obtained by Rambolarimanana and Noniarimalala in Tulear, Toamasina presents: a SGR 2 times lower than that of Tulear ; a G 2 times lower than that of Tulear; a number of filaments per milliliter 2 times lower than that of Tulear; a daily yield in dry weight per liter 3.8 times lower than that of Tulear; a daily yield in dry weight per area 1.2 times lower than that of Tulear. Niangoran Requiring a moisture content of less than 9%, the crop should be stored and protected from prolonged exposure to the open air. Rambolarimanana, in its prefeasibility study on spirulina cultivation in Toliara, demonstrates that the valorization of local materials and production techniques is a promising socio-economic orientation. For example, product drying devices, such as greenhouses, lighting systems and production bins, can easily be made locally, making the crop inexpensive, easily duplicated and reproducible among a wide range of sufficiently trained populations. However, the supervision of researchers, the proper conservation and control of strain production are essential in the context of the action research to be undertaken. With such technical arrangements, spirulina culture in Toamasina could be even more promising than that carried out in the present trial. Sustained production of spirulina with socio-economic and health stakes of COVID-19 in Toamasina As part of the fight against COVID-19 in Toamasina, the stakes and challenges related to spirulina cultivation consist of profitable and less expensive production, with products accessible to a wide range of affluent populations. Increasing the nutritional and health resilience of vulnerable groups should be a priority strategy for dealing with the pandemic. In addition to the economic poverty and vulnerability criteria put forward by the World Bank (Banque Mondiale, 2019) and, other medical criteria such as being diabetic, suffering from renal failure, or hypertensive, in short those qualified as co-morbid factors (), are also important medical issues in the fight against COVID-19. The value of producing spirulina accessible to vulnerable people lies, among other things, in the fact that phycocyanin and phycocyanobilin, obtained from this alga protect against diabetic nephropathy and renal failure; they have an inhibitory power against oxidative stress (;Baynes, 1991). In addition, spirulina strengthens the human immune system, thanks to polysaccharides; it could also exert a certain antiviral activity, which is linked to the sulfoquinovosyldiacylglycerol rich in sulfolipids (Falquet & Hurni, 2006;Girardin-Andrani, 2005). 
Rabe demonstrates in his prefeasibility study on the production of spirulina in 5 basins of 75 m2 in Toamasina that such a culture would have an Internal Rate of Return (IRR) of 24.37% and a Profitability Index of 1.12. Sold on the market at a price of 200 Ariary per gram, spirulina from other localities in Madagascar remains difficult to access for a vulnerable household in Toamasina with 5-6 dependents, whose daily income does not exceed 1.9 USD (Banque Mondiale, 2019). By producing spirulina locally (see also the recommendations above), a product roughly 1.2 times cheaper could be offered (Rabe). Conclusion This paper tried to show to what extent the production of spirulina could be technically feasible in a coastal zone with (a priori) unfavorable climatic parameters such as Toamasina. Its interest lies in the fact that such a practice could be adapted not only to local ecological conditions, but could also be reconciled with local socio-economic, nutritional and health needs, in particular to deal with COVID-19. Also, for a sustained production of spirulina in Toamasina, the popularization of techniques compatible with local socio-ecological realities, as well as the promotion of a Research-Action approach, makes it possible to better meet the spirulina needs of vulnerable people in the face of the COVID-19 challenges. Conflicts of Interest The authors have no conflict of interest to declare. Although a member of the Board of the Journal, the 2nd author did not participate in the review process of this manuscript. |
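The cash flows behind the IRR and profitability index cited from Rabe are not reproduced above; as a hedged illustration of how a profitability index is obtained, the sketch below discounts a purely hypothetical five-year cash-flow profile.

def profitability_index(initial_outlay, cash_flows, rate):
    # PI = present value of future cash flows / initial investment
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return pv / initial_outlay

# Hypothetical figures (currency units are arbitrary), for illustration only
print(round(profitability_index(10_000, [3_000, 3_200, 3_400, 3_400, 3_400], 0.15), 2))  # ~1.09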
Faith Is Three Parts Formaldehyde, One Part Ethyl Alcohol Mercy keeps her finger in a jar on the nightstand. In the morning, it twists to feel the lines of sunlight that slip through her blinds. She likes to watch its gentle convulsions and holds her other fingers up to share the warmth. Since she cut off her finger, she has worked in the diocese business office, filing and answering phones. Mostly, she answers questions from parents about the parish schools and fields requests for priestly appearances. While at work, she doesn't think about her finger too much. It is just her left pinkie finger, so it was never very useful anyway; she can still type seventy-five words a minute. In fact, some people don't even notice it is missing. Those who do usually look appalled and ask, almost reverently, how it happened. Then she has to lie, all the while praying for the Lord to forgive her, and tell them that she had her hand slammed in a screen door as a child and they had to amputate. This invariably provokes Oh, what a shame and you such a pretty young woman. Usually, she tries to keep her hand close to her side, hidden inside the loose cuff of her shirt because of the shame this falsehood |
Disappearance of non-trivial net baryon density distribution effect on the rapidity width of $\Lambda$ in p+p collisions at Large Hadron Collider energies Pseudorapidity distributions of all primary charged particles produced in p+p collisions at various Relativistic Heavy Ion Collider (RHIC) and Large Hadron Collider (LHC) energies using UrQMD-3.4 and PYTHIA8-generated events are presented and compared with the existing results of the UA5 and ALICE collaborations. With both sets of generated data, the variation of the rapidity widths of different mesons and baryons in p+p collisions at various Super Proton Synchrotron (SPS) and LHC energies with the rest masses of the studied hadrons is presented. An increase in the width of the rapidity distribution of $\Lambda$, similar to heavy-ion data, could be seen from SPS to the highest LHC energies when the entire rapidity space is considered. However, at LHC energies, in the rapidity space where $B-\bar{B} = 0$, the rapidity distribution of $\Lambda$ takes the same Gaussian shape as that of $\bar{\Lambda}$ and the widths of both distributions become the same, confirming the disappearance of the net baryon density distribution effect on the rapidity width of $\Lambda$. Further, a multiplicity dependent study confirms that the jump in the width of the rapidity distribution of $\Lambda$ disappears for the highest multiplicity class at LHC energy. This observation confirms that the light flavoured spectator partons play a significant role in $\Lambda$ production in p+p collisions at LHC energies. Introduction Estimation of the widths of the rapidity distributions of identified charged particles is of considerable significance, as they are believed to carry information about the dynamics of high-energy nuclear collisions. A systematic study of the variation of the widths of the rapidity distributions of various identified particles with their rest masses at different beam rapidities from AGS to SPS energies reveals a jump in the rapidity width of $\Lambda$ with both UrQMD-generated and available experimental data. The universal mass ordering of the rapidity widths of the identified particles seems to be violated for $\Lambda$, resulting in separate mass scalings for mesons and baryons. Such a jump in the rapidity width of $\Lambda$ was attributed to the net baryon density distribution effect. Production of $\Lambda$(uds), having two leading quarks, and not of $\bar{\Lambda}$($\bar{u}\bar{d}\bar{s}$), having all produced quarks, is influenced by the net baryon density distribution, as a considerable fraction of $\Lambda$ at these collision energies is due to associated production. Subsequently, when this study was extended to higher SPS, RHIC, and LHC energies with UrQMD-generated and available experimental results [3], where the collisions are much more transparent but $B-\bar{B}$ is still greater than zero, this jump in the width of the rapidity distribution of $\Lambda$ was still found to exist and is considered to be a universal feature of heavy-ion collision data from the AGS and SPS to RHIC and LHC energies. Thus the width of the rapidity distribution of $\Lambda$ produced in heavy-ion collisions is found to have a non-trivial non-kinematic contribution due to its associated production. In A+A collisions, up to the highest available LHC energies, with UrQMD-generated data, a situation could never be reached with $B-\bar{B} = 0$, and thus it could not be ascertained whether the rapidity width of $\Lambda$ is free from the net baryon density distribution effect, or otherwise whether the $\Lambda$s are produced through pair production only. 
It has been reported by the ALICE collaboration that the $\bar{\Lambda}/\Lambda$ ratio becomes unity in p+p collisions at 7 TeV at mid-rapidity. It may, therefore, be expected that in such a situation $B-\bar{B}$ would become zero and thus the $\Lambda$s are produced through pair production only. Here it is worth mentioning that the colliding protons at ultra-relativistic energies are considered to be not only composite but also extended objects consisting of many partons, which resembles the situation in heavy-ion collisions, where the collisions range from central (high multiplicity) to peripheral (low multiplicity). While in A+A collisions the spectator regions are composed of hadronic matter, in the case of p+p collisions the spectators are made up of partonic matter only. As it has been seen for heavy-ion collisions that the spectator hadrons play a significant role in particle production, particularly $\Lambda$ production, it would be interesting to see how the spectator partons of p+p collisions play their role in particle production. Thus, a multiplicity-dependent study of the width of the rapidity distribution of $\Lambda$ ($\bar{\Lambda}$) produced in p+p collisions at LHC energies could be of considerable significance. It has been claimed in a number of reports that UrQMD is quite successful in describing heavy-ion collision data over a wide range of energies. It has also been shown that UrQMD is equally successful in describing the experimental results of p+p, Au+Au, and Pb+Pb collisions from 17.3 GeV at the SPS to 1.8 TeV at Fermilab. Further, it has been shown that UrQMD is quite successful in describing the pseudorapidity distributions of all charged particles in Pb-Pb collisions at LHC energies. It is therefore expected that UrQMD could be successfully applied to a small system like p+p at LHC energies as well. On the other hand, PYTHIA has been found to be quite successful in explaining the p+p experimental results, particularly at higher energies like those of the LHC. In this work an attempt has therefore been made, with UrQMD-3.4 and PYTHIA8 (Monash and 4C tuned) generated p+p events, to estimate the widths of the rapidity distributions of various produced particles, including $\Lambda$ and $\bar{\Lambda}$, at different beam energies, to examine how the rapidity widths of $\Lambda$ (and $\bar{\Lambda}$) behave in p+p collisions in a situation for which $B-\bar{B} = 0$. Further, a study of the widths of the rapidity distributions of different hadrons for various multiplicity classes at the highest LHC energy, using PYTHIA8 Monash-generated data, has been carried out to evaluate the role of spectator partons, particularly in $\Lambda$ production. MC event generators For the present study, events were generated using the latest version of the Ultra-relativistic Quantum Molecular Dynamics (UrQMD) Monte Carlo (MC) event generator for p+p collisions at various colliding energies from SPS to the highest LHC energies, and the event statistics are presented in Table 1. UrQMD is a many-body microscopic Monte Carlo event generator based on the covariant propagation of color strings, constituent quarks and diquarks with mesonic and baryonic degrees of freedom [14]. It includes PYTHIA to incorporate the hard perturbative QCD effects. The current version of the UrQMD model also includes the excitation and fragmentation of color strings, formation and decays of hadronic resonances, and rescattering of particles. 
On the other hand, PYTHIA, a standalone event generator, consists of a coherent set of physics models, which describes the evolution of high-energy collisions from a few-body hard-scattering processes to a complex multiparticle final state. The present version of the PYTHIA model contains a library of hard processes, models for initial and final state parton showers, multiple parton-parton interactions, string fragmentation, particle decays and beam remnant. The available high precision experimental data, allow detailed model comparisons and motivate the effort on model development and tuning of the existing models towards more precise predictions. One of the dedicated tunings of PYTHIA8 event generator is PYTHIA8 4C, which is used to predict the Run 1 LHC data and the modified parameters are reported in. As reported in, the latest PYTHIA8 version is tuned with Monash with the present available LHC data of Run 2. For the present study, we also generated 50 to 80 million p+p inelastic events at each studied energies using PYTHIA8 4C and Monash tuning. Results In our earlier work, it has been shown that UrQMD-3.4-generated (pseudo)rapidity distributions of produced particles show a good agreement with the experimental results at all the SPS, RHIC, and LHC energies for Au+Au and Pb+Pb systems. With the present sets of generated data, pseudorapidity distributions of all primary charged particles are compared with the various existing experimental results of p+p collisions of UA5 and ALICE collaborations. Pseudorapidity distributions of all primary charged particles using UrQMD-3.4 and PYTHIA8-generated events of p+p collisions at √ s = 53 (only UrQMD), 200, 546, and 900 GeV are compared with the results of UA5 collaboration and is shown in the left panel of Fig. 1. Though the MC data reproduces well the shape of the experimental pseudorapidity distributions Table 1. Event statistics of UrQMD-3.4-generated p+p collision data at all the studied energies. of UA5 collaboration, around mid-rapidity, the models somewhat under-predict the experimental values at all the studied energies of UA5 collaboration. However, in the right panel of Fig. 1, all sets of generated data (UrQMD-3.4, PYTHIA8 4C, and PYTHIA8 Monash) at √ s = 900 GeV show a better agreement with the experimental results of ALICE collaboration. It may be mentioned here that the cause of observed small difference in the pseudorapidity distributions of UA5 and ALICE collaborations (e.g., at √ s = 900 GeV) is discussed in and is attributed to the facts that− (i) UA5 collaboration used a 1/M x variation of single diffractive cross sections and (ii) inconsistent UA5 internal data as discussed in. It may further be noted that, while the comparison of PYTHIA8 4Cand PYTHIA8 Monash-generated pseudorapidity distributions of all primary charged particles with experimental data of ALICE collaboration could also be found in, UrQMD-3.4-generated pseudorapidity distributions at LHC energies, to the best of our knowledge, have been compared for the first time in this report. From Fig. 1 it is clearly seen that PYTHIA8 Monash tuned pseudorapidity distribution of primary charged particles lies between PYTHIA8 4C and UrQMD-3.4 predicted results. Considering the fact that all the three sets of model-predicted pseudorapidity distributions are in good agreement with the experimental results of appropriate energies, further analysis is carried out with all the three, i.e., UrQMD-3.4, PYTHIA8 4C, and PYTHIA8 Monash model-generated data. 
UrQMD-3.4- and PYTHIA8 (4C and Monash)-generated rapidity distributions of $\pi^-$, $K^-$, $\bar{p}$, $\Lambda$, $\bar{\Lambda}$, $\Xi^-$, and $\Omega^-$ at all the studied energies for p+p collisions are parameterized by a double Gaussian function, i.e., the sum of two Gaussians of equal width placed symmetrically about mid-rapidity, where the symbols have their usual significance (a fitting sketch is given below). Using this fitting function, the widths of the rapidity distributions of all the studied hadrons are estimated from the generated data and listed in Tables 2 and 3 (Table 2: widths of the rapidity distributions of all the studied hadrons with UrQMD-3.4-generated p+p events at all the SPS, lower RHIC, and all the LHC energies; Table 3: widths of the rapidity distributions of all the studied hadrons with PYTHIA8 Monash-generated inelastic p+p events from 546 GeV to the top LHC energies; at LHC energies, the rapidity widths of $\Lambda$s are estimated using the double Gaussian function excluding the two extreme peaks, whereas the widths estimated using a triple Gaussian function including all three peaks are shown in parentheses). It can be readily seen from Figs. 2 and 3 that, as reported for heavy-ion collisions from AGS to LHC energies, a similar jump in the rapidity width of $\Lambda$ also exists in the case of minimum bias p+p collisions from low SPS to the highest LHC energies for both UrQMD-3.4- and PYTHIA8 (4C and Monash tuned)-generated data. Such an increase in the width of the rapidity distribution of $\Lambda$(uds), and not of $\bar{\Lambda}$($\bar{u}\bar{d}\bar{s}$), was attributed to the dependence of $\Lambda$ production on the net baryon density distribution, or otherwise on the associated production of $\Lambda$. In p+p collisions, as shown in the left panel of Fig. 4, the net baryon density is found to be maximum at extreme rapidities even at low SPS energies and minimum, but not zero, at zero rapidity. Unlike in heavy-ion collisions, the spectator regions of p+p collisions consist of partonic rather than hadronic matter, and therefore the observed maximum net baryon density in the extreme rapidity space in p+p collisions is not due to the leading hadrons but to the hadrons produced out of leading partons (partons of the spectators). As the collision energy increases, the minimum value of the net baryon density decreases, reaching zero at the lower LHC energies (√s = 0.9 TeV) (Figs. 4 and 5). Further, with the increase of the collision energy, the minimum net baryon density region extends over more rapidity space around mid-rapidity. At SPS and RHIC energies, $B-\bar{B}>0$ over the entire rapidity space, and thus the rapidity distribution of $\Lambda$ follows the net baryon density distribution pattern, resulting in a jump in the rapidity width vs. mass plot. At LHC energies, $B-\bar{B} = 0$ over a wide rapidity range, and the rapidity distributions of both $\Lambda$ and $\bar{\Lambda}$ follow the same Gaussian pattern over the rapidity space for which $B-\bar{B} = 0$. Thus, in p+p collisions at LHC energies, the rapidity distribution of $\Lambda$ becomes independent of the net baryon density distribution pattern and exactly resembles that of $\bar{\Lambda}$ over the rapidity space for which $B-\bar{B} = 0$. Thus, in a scenario where the net baryon density is zero, the widths of the rapidity distributions of $\Lambda$ and $\bar{\Lambda}$ become equal; in other words, the rapidity width of $\Lambda$ does not exhibit any non-trivial (non-kinematic) contribution in a situation for which $B-\bar{B} = 0$. 
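The double Gaussian parameterization referred to above is not written out in the text; a commonly used symmetric form, and a minimal scipy fitting sketch built on it, is given below. The functional form, the toy data and the choice of quoting an RMS width are illustrative assumptions, not necessarily the authors' exact procedure.

import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(y, amp, y0, sigma):
    # Sum of two equal-width Gaussians placed symmetrically about mid-rapidity
    norm = amp / (2.0 * sigma * np.sqrt(2.0 * np.pi))
    return norm * (np.exp(-(y - y0) ** 2 / (2 * sigma ** 2)) +
                   np.exp(-(y + y0) ** 2 / (2 * sigma ** 2)))

# dN/dy values would come from the generated events; toy data are used here
y = np.linspace(-6, 6, 61)
dndy = double_gaussian(y, 2.0, 1.5, 1.2) + np.random.normal(0, 0.005, y.size)

popt, _ = curve_fit(double_gaussian, y, dndy, p0=[1.0, 1.0, 1.0])
amp_fit, y0_fit, sigma_fit = popt
rms_width = np.sqrt(y0_fit ** 2 + sigma_fit ** 2)  # one possible definition of the overall width
print(round(sigma_fit, 3), round(rms_width, 3))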
To have a cross-check on the above observation, another 80 million p+p inelastic events were generated at the highest LHC energy (√s = 13 TeV) using the PYTHIA8 Monash tuned event generator. The rapidity distributions of $\Lambda$ and $\bar{\Lambda}$, the net baryon density, and the variation of the widths of the rapidity distributions of various hadrons as a function of their rest masses are plotted in Figs. 6 and 7, respectively. It is evident from Fig. 6 that the net baryon density, which arises due to the difference between produced hadrons and their anti-particles created out of spectators' partons, i.e., beam remnants, is larger in the extreme rapidity regions for low multiplicity classes. A non-zero value of $B-\bar{B}$ at extreme rapidity for low multiplicity p+p collisions suggests the presence of more quarks than antiquarks in the extreme rapidity regions, favouring the production of more baryons than antibaryons. Thus, at LHC energies, the beam protons, which are considered to be extended balls of partons, seem to be composed of more quarks than antiquarks, favouring a baryon asymmetry in the extreme rapidity regions. The hadron (particularly $\Lambda$) production in such low multiplicity p+p events has a substantial contribution from the partons of the spectators, which could be the main reason for the increase in the width of the rapidity distribution of $\Lambda$, as shown in Fig. 7. Further, it is clearly seen from Fig. 7 that the widths of the rapidity distributions of all the studied hadrons decrease with increasing multiplicity (or, otherwise, centrality), and for the highest multiplicity class, when there are no beam remnants, the jump in the rapidity width of $\Lambda$ almost disappears. Therefore, for the most central collisions, in the absence of spectators, the production of $\Lambda$s becomes independent of the nature of the net baryon density distribution and hence the width of the rapidity distribution of $\Lambda$ becomes the same as that of $\bar{\Lambda}$ (Fig. 8). Such an observation reveals that in high multiplicity p+p collisions, where the spectator part is either absent or negligible, the $\Lambda$s might be pair produced. Summary In this work UrQMD-3.4 and PYTHIA8-generated pseudorapidity distributions of all primary charged particles of p+p collisions are compared with the existing experimental results of the UA5 and ALICE collaborations, which show good agreement between model-generated and experimental results for the studied energies. An increase in the rapidity width of $\Lambda$, as observed in heavy-ion data, could be seen with the generated data of p+p collisions as well at all the SPS energies. When the study is extended to RHIC and LHC energies, a similar increase in the rapidity width of $\Lambda$ could be observed when the full rapidity space is considered, indicating that the increase in the rapidity width of $\Lambda$ is a general characteristic of both p+p and A+A collisions from SPS to RHIC and LHC energies. For p+p collisions at various RHIC and LHC energies, even though $B-\bar{B} = 0$ at mid-rapidity, a small non-zero $B-\bar{B}$ in the higher rapidity regions is responsible for the observed jump in the rapidity width of $\Lambda$. A plot of the widths of the rapidity distributions of $\Lambda$ and $\bar{\Lambda}$ as a function of the rapidity window (Fig. 9) shows that for the rapidity windows for which $B-\bar{B} = 0$, the widths of the rapidity distributions of $\Lambda$ and $\bar{\Lambda}$ become the same. Also, the widths of the rapidity distributions of $\Lambda$ and $\bar{\Lambda}$ become the same for high multiplicity p+p events over the entire rapidity region (Fig. 8), where the system has the maximum probability of multiple parton interactions. 
Such observations suggest that at LHC energies, at and around mid-rapidity, as well as for high multiplicity p+p events over the entire rapidity region, $\Lambda$ and $\bar{\Lambda}$ are mostly pair produced. |
Exergy Analysis of an Ejector Cooling System by Modified Gouy-Stodola Equation In this paper, exergy destruction analysis of a heat-assisted ejector cooling system has been carried out using a modified Gouy-Stodola equation. The modified Gouy-Stodola equation provides a more accurate and realistic irreversibility analysis of the system than the conventional Gouy-Stodola formulation. The coefficient of structural bond (CSB) analysis has also been executed to find the component whose operating variables affect the system's total irreversibility the most. Exergy analysis revealed that the maximum exergy loss happens in the ejector, followed by the generator and condenser. The model predicted 40.84% of total irreversibility in the ejector at the designed conditions. However, total exergy destruction is found to be the most sensitive to the evaporator temperature. A CSB value of 12.97 is obtained for the evaporator using the modified exergy method. The generator appears to be the second most sensitive component, with a CSB value of 2.42, followed by the condenser with a CSB value of 1.628. The coefficient of performance of the system is found to be 0.18 at the designed conditions. The refrigerant R1234yf is considered in the system. |
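For reference, the conventional Gouy-Stodola theorem on which such an analysis builds relates the irreversibility rate to the entropy generation rate at the dead-state temperature; the modified form used in the paper is not reproduced in the abstract, so only the standard relation is shown here (in LaTeX), with symbols as commonly defined.

\dot{I} = T_0 \, \dot{S}_{\mathrm{gen}}

where \dot{I} is the exergy destruction (irreversibility) rate, T_0 the dead-state (ambient) temperature, and \dot{S}_{\mathrm{gen}} the entropy generation rate of the component or system.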
Review: Studying Physics, getting to know Python: RC circuit, simple experiments and coding with Raspberry Pi The article "Studying Physics, getting to know Python: RC circuit, simple experiments and coding with Raspberry Pi" introduces a hands-on, integrative studies approach to teaching electronics, physics, and computer science. I believe it provides all of the steps (and code) needed to reproduce the basic exercises in the classroom and there is room to add on new ideas. The course is designed to use Raspberry Pis; because of the affordable nature of the equipment (less than a textbook). I could see proposing to use this setup as an online lab where students purchase the equipment and run the lab while they are quarantined at home. Unfortunately, as written the article is over 8,0000 words when counting figures which is significantly over the 3000 word limit given in the CISE guidelines for department papers: Up to 3000 words in length, including the abstract, references, bios, figures (see below), and all other text in the article. When counting words, note that tables and figures should be counted as 250 words each. The article is over 8,000 words when counting the figures and I would be concerned that cutting it to 3000 words would take away from one of the article's strengths. A compromise would be to provide the examples in an online git repository and reference the repository in the article. |
Victorian Poetry's Modernity The title of this special issue, "Whither Victorian Poetry?" poses a somewhat paradoxical question. To ask of an object "whither" is to imply the possibility of change, yet the object specified is defined by its temporal closure and completion. How can "Victorian Poetry," now over a century past, change or have a future? The way out of this impasse is, needless to say, through interpretation, criticism, and scholarship--the activities through which we give a future to what would otherwise live in a completed and static existence. In thinking about the question posed by this issue's title, I became convinced that Victorian poetry particularly invokes the paradoxes of temporality, interpretation, and the construction of pastness and futurity. Kathy Psomiades has recently argued that "almost all versions of the standard story about the field in the twentieth century end with the invocation of some point in the recent past, or perhaps just now arising, or anticipated in the near future, when Victorian poetry receives its proper due at last." She confirms my instinct that the question of the "future" of Victorian poetry is particularly over-determined. Here I want not to try to repay this debt supposedly owed Victorian poetry, but to consider the ways that both this work, and the scholarly field devoted to it, define their relation to temporality and to constructions of the present and future. In particular, I want to consider the relation of Victorian poetry to modernity and the modern, and to wonder what it would mean to bring this body of poetry more closely into conversation with both nineteenth-and twentieth-century accounts and theories of European or trans-national modernity. I am thinking of "modernity" in the sense defined by Jurgen Habermas in his essay "Modernity--an Unfinished Project." Habermas explains that "the word 'modern' in its Latin form 'modernus' was used for the first time in the late 5th century in order to distinguish the present, which had become officially Christian, from the Roman and pagan past.' Distinctions between the "modern" era and an ancient past "appeared and reappeared" over the centuries until, after the French Revolution, another and historically new "form of modernist consciousness was formed," a "radicalized consciousness of modernity which freed itself from all specific historical ties." Habermas locates the clearest moment of emergence for this new form of modernist consciousness in the mid-nineteenth century: The spirit and discipline of aesthetic modernity assumed clear contours in the work of Baudelaire.... Aesthetic modernity is characterized by attitudes which find a common focus in a changed consciousness of time. This time consciousness expresses itself through metaphors of the vanguard and the avant-garde. The avant-garde understands itself as invading unknown territory, exposing itself to the dangers of sudden, shocking encounters, conquering an as yet unoccupied future. The avant-garde must find a direction in a landscape into which no one seems to have yet ventured. As Habermas defines it, such a "discipline of aesthetic modernity" possesses a particular relation to time and temporality. No longer, as in pre-nineteenth-century manifestations of the modern, does the modern relate itself "to the past of antiquity, in order to view itself as the result of a transition from the old to new." 
Such a stable, progressive model of temporality gives way instead to the expression of "the experience of mobility in society, of acceleration in history, of discontinuity in everyday life." Such an emergent nineteenth-century modernity "revolts against the normalizing functions of tradition" and takes as its historical given "the transitory, the elusive and the ephemeral" (pp. 3-5). Is Victorian poetry modern in this sense? In the most compelling and authoritative recent full-length study of the genre, Isobel Armstrong says yes. |
Malassezia pachydermatis fungaemia in a neonatal intensive care unit Malassezia pachydermatis, a nonobligatory lipophilic yeast, has occasionally been implicated in nosocomial fungaemias. This study investigated a cluster of eight cases of M. pachydermatis infection and colonization in a neonatal intensive care unit over a 6 mo period. All patients were preterm with very low birthweight and suffered from various underlying diseases. Prolonged use of indwelling catheters and parenteral lipid formulations were important predisposing factors for their infection. All M. pachydermatis strains were susceptible to amphotericin B, fluconazole and itraconazole but resistant against flucytosine. |
Factors Influencing Customer Behavior of Butter Oil Substitute in Vietnam The study was aimed at determining the impacts of factors that influence the purchasing decision of Butter Oil Substitute (BOS) in coffee roasting industry. The study was carried out in Ho Chi Minh City on 88 customers using face to face interview and structured questionnaire as the instruments for data collection. Questions were designed to find out how consumers behave in relation to BOS for coffee roasting. The study showed that the purchase of BOS in coffee roasting industry is influenced mostly by the customers price consciousness, relationship between buyer and seller, and customer service. The study can be used as references for the planning of marketing strategies and as the basis for future researches in the customer behavior with regard to bakery customers (another application of BOS) and specialty fats in general. |
Investigating Mercury's South Polar Deposits: Arecibo Radar Observations and High-Resolution Determination of Illumination Conditions There is strong evidence that Mercury's polar deposits are water ice hosted in permanently shadowed regions. In this study, we present new Arecibo radar observations of Mercury's south pole, which reveal numerous radar-bright deposits and substantially increase the radar imaging coverage. We also use images from MESSENGER's full mission to determine the illumination conditions of Mercury's south polar region at the same spatial resolution as the north polar region, enabling comparisons between the two poles. The area of radar-bright deposits in Mercury's south is roughly double that found in the north, consistent with the larger permanently shadowed area in the older, cratered terrain at the south relative to the younger smooth plains at the north. Radar-bright features are strongly associated with regions of permanent shadow at both poles, consistent with water ice being the dominant component of the deposits. However, both of Mercury's polar regions show that roughly 50% of permanently shadowed regions lack radar-bright deposits, despite some of these locations having thermal environments that are conducive to the presence of water ice. The observed uneven distribution of water ice among Mercury's polar cold traps may suggest that the source of Mercury's water ice was not a steady, regular process but rather that the source was an episodic event, such as a recent, large impact on the innermost planet. |
Migration - a mixed blessing. Regarding migration, the current thinking is that certain aspects of migration have important implications for population planning. The focus here is on the role of migration and its influence on integrated development programs. Although individuals who migrate to cities are generally from the more privileged socioeconomic groups within the rural area, it is not accurate to identify them as the "cream of the rural population." Present population policies do consider the fact that 70 to 80% of the people live in rural areas, yet they give only lip service to migration policies. In response to a question as to whether urbanization is conducive to pro- or anti-natal tendencies in migrating families, the responses varied. One opinion was that there is no evidence that urbanization and the natality behavior of migrating families are significantly related, while other opinions identified a relationship between anti-natal behavior and migration. Rural development and rural growth centers do seem to help alleviate population problems of rural and urban areas, but their success is very dependent on the kind of rural development programs and the extent of services provided through the growth center. The following are among the advantages of "planned migration" that can be used to strengthen population policies: 1) effective utilization of manpower; 2) balanced regional development; 3) further exploitation of natural resources; and 4) reducing the various problems in urban regions. Many do believe that international migration is a feasible solution to population problems in the global context. |
Design of a 324 MHz 200 kW CW Waveguide-to-Coaxial Adaptor for Radio Frequency Quadrupole Microwave System A 324 MHz 200 kW waveguide-to-coaxial adaptor has been designed and fabricated for a microwave coupler in a radio frequency quadrupole system. Optimization of the adaptor is performed by numerical study and experimental testing. High-power measurements show that the reflection coefficient of the adaptor is less than −30 dB at the RFQ operating frequency, and there is no breakdown in the 228 kW pulse test. The measurement results are consistent with the simulation results, indicating that the adaptor has good high-power transmission performance. This work provides theoretical and experimental bases for a rectangular-to-coaxial adaptor design, especially in high-power steady-state operation. Introduction. Linear particle accelerators play an important role in many fields such as radiological medicine and nuclear experiments. The radio frequency quadrupole (RFQ), first proposed by the Soviet scientists Kapchinskiy and Tepliakov in 1969, is an important prefocusing accelerating structure in linear accelerators due to its longitudinal and lateral focusing and longitudinal acceleration of the particle bunch. The accelerating energy in the RFQ cavity comes from the coupling loop of the coaxial antenna. Thus, a waveguide-to-coaxial adaptor which connects a transmission line with a coaxial antenna is needed to satisfy high-power steady-state operation of the RFQ accelerator. A rectangular-to-coaxial adaptor for the high-power RFQ system has been studied. It must have small reflection at the operating frequency and must also be capable of handling up to 200 kW. Although waveguide-to-coaxial adaptors are reported widely for broadband and compact structures, a hundred-kW-order high-power adaptor study has been performed only for this RFQ system. In this study, the requirement for power transmission is more stringent: higher power requires a larger coaxial line, which imposes more limitations on the design. In conventional adaptors, Teflon is often used to fix the internal structures. This material limits the power capacity of the adaptor design. In this article, the newly designed adaptor is fixed by a metal strip and matched by a multistepped structure. The size of the coaxial waveguide is determined by the coupling antenna of the RFQ cavity. The position and size of each part have been optimized with computer simulation software. Finally, a high-power test was conducted. Structure Design. The schematic diagram of a rectangular-to-coaxial adaptor is shown in Figure 1. A Teflon structure filling the space between the inner and outer conductors is used to fix the inner conductor. The inner conductor and the waveguide form an open-circuited section, which exhibits a capacitive reactance, whereas the section between the short-circuit surface and the inner conductor presents an inductive reactance. In this design, microwave reflection would be reduced by appropriate adjustment of the position and depth of the inner conductor. In addition, better impedance matching would be achieved with a tuner at the end of the inner conductor. In high-power adaptors used in RFQs, power capacity is limited by the Teflon structure. Thus, a redesign in which a metal strip fixes the inner conductor has been realized. 
The introduction of the metal strip leads to an impedance mismatch, and consequently it is necessary to add an additional matching structure, which reduces microwave reflection while avoiding microwave breakdown at high-power input. The new design of the high-power adaptor is presented in Figure 2. The outer diameter of the coaxial inner conductor is 65.3 mm, while the inner diameter of the outer conductor is 150 mm. The size of the rectangular waveguide is 584.2 mm × 292.1 mm. The symbols h and l indicate the depth and distance which determine the position of the inner conductor. The inner conductor is supported by a metal strip whose two ends are connected to the waveguide walls, as depicted in Figure 2(b). Therefore, the size of the wide side is taken as the length of the strip, while w_st and h_st represent its width and height, respectively. The microwave performance of the adaptor is affected by the presence of the metal strip, so its dimensions must be optimized to reduce this effect. In order to achieve impedance matching at the frequency of 324 MHz, a multistepped pillar structure that consists of two parts is placed on the waveguide wall. Part one is a pillar located on the midline of the wide side, with a diameter of 50 mm, slightly smaller than that of the coaxial inner conductor; h_sc and p_sc represent its height and its distance to the short-circuit surface, respectively. Part two is a multistepped pillar consisting of two concentric cylinders with different radii; the heights and diameters of these cylinders are h_1, h_2 and d_1, d_2, respectively. As shown in Figure 2(b), the position of part two is indicated by p_x and p_y. The reflection coefficient of the adaptor can be reduced to −40 dB by adjusting the dimensions of the matching structures. In order to avoid a high electric field at the tips, all corners are chamfered. RF Transmission Analysis. The proposed design and its optimization are performed with the finite element method. The reflection coefficients before and after optimization of the position and size of the tuning pillars are shown in Figure 3. Curve (a) shows the reflection coefficient of the adaptor without the copper strip and the two matching pillars. The best value at the operating frequency is higher than −15 dB, which is too high for microwave transmission. Curve (b) shows the reflection coefficient of the adaptor with the copper strip. Compared with curve (a), it can be seen that the optimized copper strip plays the role of fixing the inner conductor and does not significantly affect the reflection coefficient. Curve (c) is obtained after optimization of the matching pillar on the wide waveguide wall, and the minimum value reaches −46.9 dB. In order to achieve impedance matching at 324 MHz, the multistepped pillar is optimized, and the result is shown in curve (d). It is apparent that the designed adaptor shows good performance, with a reflection coefficient lower than −30 dB at the frequency point of 324 MHz. All the optimized parameters are listed in Table 1, in which the units are millimeters. Steady-State Thermal Analysis. Thermal stability of the structure is checked up to 200 kW with simulation software. The steady-state thermal analysis is performed with COMSOL Multiphysics (trial version), considering air convection with an environmental temperature of 20°C, and the related results are shown in Figure 4. The maximum temperature reached is 50°C, on the surface of the inner conductor, once the temperature of the adaptor has stabilized. 
The thermal deformation of the internal structure is negligible at this temperature. Since the heat generated by the internal structure can be quickly dissipated through the waveguide wall, the newly designed adaptor can be operated stably at a high-power input of 200 kW. Power Capacity Calculation. A calculation of power capacity is needed to prevent microwave breakdown when designing high-power microwave devices. The critical electric field of air breakdown should be calculated first, because it is the primary factor limiting the power capacity of the adaptor. The attachment-controlled breakdown criterion for CW operation gives E_B/p* ≈ 30 V/(cm·Torr), where E_B is the critical breakdown field and p* is the effective pressure. Thus, the critical electric field for breakdown would be 18.2 kV/cm at 324 MHz when the temperature is 50°C and the atmospheric pressure is 760 Torr (1.01 × 10^5 Pa). The simulated electric field amplitude distribution with a maximum electric field of 18.2 kV/cm is shown in Figure 5. The input power to the adaptor in this simulation is 608 kW, and the maximum electric field is located on the top of the multistepped pillar. Considering the working environment of this adaptor, we adopted a safety factor of three as the ratio of power capacity to transmission power, and the above results show that the new adaptor satisfies the demand for high-power microwave transmission. Low Power Test. According to the above analysis, the high-power waveguide-to-coaxial adaptor was fabricated and tested. A photograph of the test system is shown in Figure 6. The coaxial port is connected to the matching load, and the rectangular port is connected to the network analyzer through another coaxial-to-rectangular adaptor. A comparison between measured and simulated results is shown in Figure 7; the frequency range over which the reflection coefficient stays below −20 dB includes the RFQ operating frequency. The result shows that the adaptor has sufficient performance at the operating frequency of the RFQ microwave system. The measured curve is in rather good agreement with the simulation results. The difference in the frequency of the minimum reflection coefficient is attributed to the fabrication process and the measuring accuracy. High Power Test. High-power testing is essential to verify the electrical performance of the proposed adaptor. The adaptor was tested only with a pulsed microwave source, since there are no CW microwave sources in our laboratory. The pulse width is 500 μs, and the duty cycle is 1.25%. Test experiments at 2.2 kW and 228 kW were carried out, and the results are illustrated in Figure 8. No breakdown occurred during the pulse time. The adaptor can run for a long time under the condition of 500 μs pulse width and 1.25% duty cycle. Figure 9 is a photograph of a 200 ms detection signal recorded during long-time operation; that is to say, there is no breakdown of the adaptor in long-time operation. Conclusion. In this work, a high-power waveguide-to-coaxial adaptor for a 324 MHz 200 kW RFQ microwave system is designed, fabricated, assembled, and tested. The structures and dimensions are first investigated and optimized through electromagnetic simulation to reduce microwave reflections. Then, the thermal analysis is conducted to ensure that additional water-cooled construction is unnecessary under the operating conditions. 
Finally, the high-power tests show no breakdown in long pulsed operation. The critical-breakdown-field simulation shows that the power-capacity requirement can be met theoretically with a structure in which a copper strip, instead of Teflon, fixes the inner conductor. Because all matching structures are in good contact with the waveguide walls, the maximum temperature of the inner conductor is no more than 50°C at a 200 kW power input. Furthermore, the adaptor has been validated with a high-power pulsed source under laboratory conditions. Further multifactor analyses and tests, for example a long-duration high-power CW test, have not been carried out; such factors could also limit the working power regime of the adaptor. Synthesizing the simulation analysis and the test results, it can be concluded that the performance of the 200 kW CW waveguide adaptor is sufficient to meet the real requirements of the pulsed-source microwave transmission system. Data Availability. The simulation and experimental data used to support the findings of this study are available from the corresponding author upon request. Conflicts of Interest. The authors declare that they have no conflicts of interest. |
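As a quick, independent sanity check of the coaxial dimensions quoted in the structure-design section above (inner-conductor outer diameter 65.3 mm, outer-conductor inner diameter 150 mm), the standard impedance formula for an air-filled coaxial line can be evaluated with the short sketch below. The ≈50 Ω result is an inference from those dimensions, not a value stated in the text.

```python
import math

def coax_impedance_ohms(inner_d_mm: float, outer_d_mm: float) -> float:
    """Characteristic impedance of an air-filled coaxial line: Z0 = (eta0 / 2*pi) * ln(b/a)."""
    eta0 = 376.730  # impedance of free space, ohms
    return eta0 / (2 * math.pi) * math.log(outer_d_mm / inner_d_mm)

# Dimensions quoted for the adaptor's coaxial section.
print(round(coax_impedance_ohms(65.3, 150.0), 1))  # ~49.9 ohms, i.e. essentially a 50-ohm line
```

A value near 50 Ω is consistent with common high-power RF transmission-line practice, which supports the stated choice of coaxial dimensions driven by the RFQ coupling antenna.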
Altered sensorimotor cortical oscillations in individuals with multiple sclerosis suggests a faulty internal model Multiple sclerosis (MS) is a demyelinating disease that results in a broad array of symptoms, including impaired motor performance. How such demyelination of fibers affects the inherent neurophysiological activity in motor circuits, however, remains largely unknown. Potentially, the movement errors associated with MS may be due to imperfections in the internal model used to make predictions of the motor output that will meet the task demands. Prior magnetoencephalographic (MEG) and electroencephalographic brain imaging experiments have established that the beta (15-30 Hz) oscillatory activity in the sensorimotor cortices is related to the control of movement. Specifically, it has been suggested that the strength of the postmovement beta rebound may indicate the certainty of the internal model. In this study, we used MEG to evaluate the neural oscillatory activity in the sensorimotor cortices of individuals with MS and healthy individuals during a goal-directed isometric knee force task. Our results showed no difference between the individuals with MS and healthy individuals in the beta activity during the planning and execution stages of movement. However, we did find that individuals with MS exhibited a weaker postmovement beta rebound in the pre/postcentral gyri relative to healthy controls. Additionally, we found that the behavioral performance of individuals with MS was aberrant, and related to the strength of the postmovement beta rebound. These results suggest that the internal model may be faulty in individuals with MS. Hum Brain Mapp 38:4009-4018, 2017. © 2017 Wiley Periodicals, Inc. |
Correlates of hospitalized patients' perceptions of service quality. The purpose of this study was to examine hospital patients' perceptions of service quality in relation to four independent variables: (a) nurses' perceptions of human resource practices, (b) nurses' perceptions of autonomy in practice, (c) patient satisfaction with nursing care, and (d) patients' perceptions of organizational climate for service. The sample consisted of 102 nurse-patient dyads in an acute care hospital. Patients responded to the Modified Health Care Service Performance Instrument, the revised LaMonica-Oberst Patient Satisfaction Scale, and the Organizational Climate for Service Semantic Differential. Nurses responded to the Employee Turnover Diagnostic and the Dempster Practice Behaviors Scale. Two of the four correlational hypotheses were supported. Patient satisfaction with nursing care and patients' perceptions of organizational climate for service were each positively related to patients' perceptions of service quality. A multivariate regression hypothesis was not supported. Failure to support two theoretically based correlational hypotheses may be related to methodological problems experienced with dyadic research. |
LOCAL ASYMPTOTIC BEHAVIOR OF DENSITIES The paper generalizes and strengthens results on sufficient conditions for local asymptotic normality by Hajek, Ibragimov and Has'minskij, Roussas, and a result by Jeganathan on local asymptotic mixed normality. In examples, local asymptotic normality is shown for regression problems, for a family of distributions related to a Robbins-Monro type approximation method, and for a certain family of stochastic processes. The latter includes examples by Stomatelos. |
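For context, the local asymptotic normality (LAN) property referred to in this abstract has a standard textbook formulation, reproduced below in generic notation; the rate δ_n, score Δ_n, and information Γ are illustrative symbols and are not taken from the paper itself.

```latex
% Local asymptotic normality (LAN), standard formulation.
% A sequence of experiments {P_{n,\theta}} is LAN at \theta_0 with rate \delta_n \to 0 if
\log \frac{dP_{n,\theta_0 + \delta_n h}}{dP_{n,\theta_0}}
  = h^{\top} \Delta_n - \tfrac{1}{2}\, h^{\top} \Gamma\, h + o_{P_{n,\theta_0}}(1),
\qquad \Delta_n \xrightarrow{\; d \;} \mathcal{N}(0,\Gamma) \ \text{under } P_{n,\theta_0}.
% Local asymptotic mixed normality (LAMN), as in Jeganathan's result, allows \Gamma to be a
% random, almost surely positive definite matrix, with \Delta_n asymptotically
% N(0,\Gamma)-distributed conditionally on \Gamma.
```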
Effects of biodegradable plastics on soil properties and greenhouse gas production ABSTRACT Microplastics cause environmental problems. Biodegradable plastics have become popular as a way to avoid such problems; however, their decomposition in the soil may itself have an impact. This study aims to investigate the effects of biodegradable plastics on the physicochemical properties of soil, specifically the production of CO2 and N2O in the soil, and on plant growth. Three kinds of biodegradable plastics, in the forms of 1) nonwoven fabric sheets made of poly-lactic acid (PLA) and polybutylene-succinate (referred to hereafter as fabric), 2) laminate sheets made of polybutylene adipate terephthalate (PBAT) and pulp (hereafter laminate), and 3) drinking cups made of PLA (hereafter cup), were cut into small pieces (<5 mm) and added to soil; the water-holding capacity was then determined, and the soils were incubated aerobically for 4 weeks at 30°C in the dark. Soil and gas samples were collected weekly to measure soil pH, nitrate-nitrogen content, and CO2 and N2O production. These plastics were also tested in a pot experiment with Komatsuna (Brassica rapa var. perviridis), in which seed germination, plant growth, leaf color, and fresh weight at harvest were assessed. Results showed that the water retention capacity was higher with the fabric plastics than with the cup plastics or the control. Soil pH with the fabric plastics dropped during the initial 2 weeks of incubation, then recovered to a level similar to the control (without plastic). Nitrate contents in the soil with the laminate plastics were lower than those in the control, while CO2 production in the soil with the laminate plastics was higher than that in the control and the other plastics throughout the incubation period, and even exceeded the amount of carbon added as plastic. N2O was produced rapidly within 1 week of incubation in the soil with the laminate plastics, and cumulative N2O production over the incubation was greater than that of the control. Seed germination and plant growth tended to be suppressed in the pot experiment with the fabric and laminate plastics. The results indicate that the influence of these biodegradable plastics on soil properties, greenhouse gas production, and plant growth depends on the kind of plastic and the timing. |
On the occurrence and ecological features of deep chlorophyll maxima (DCM) in Spanish stratified lakes Deep chlorophyll maxima (DCM) are absolute maxima of Chlorophyll-a concentration along the vertical profile that can be found in deep layers of stratified lakes. In this manuscript I review the principal mechanisms that have been argued to explain the formation of DCM, which include, among others, in situ growth of metalimnetic phototrophs, differential impact of grazing between the different lake strata, and passive sedimentation to the layers where water density and cell density are equalized. The occurrence of DCM in Spanish lakes, as well as the main ecological characteristics of the oxygenic phototrophs that form DCM in these lakes, is also reported. Cyanobacteria, either filamentous or unicellular, and cryptophytes are the main components of most DCM found in the reported Spanish lakes, although diatoms, chrysophytes, dinoflagellates, and chlorophytes also contribute to these chlorophyll maxima. These organisms cope with strong physical and chemical gradients, among which those of water density, light and inorganic nutrient availability, and sulphide concentrations appear to be the most important factors influencing planktonic community structure. |
Algorithm and architecture co-design of Mixture of Gaussian (MoG) background subtraction for embedded vision Embedded vision is a rapidly growing and challenging market that demands high computation with low power consumption. Carefully crafted heterogeneous platforms have the possibility to deliver the required computation within the power budget. However, to achieve efficient realizations, vision algorithms and architectures have to be developed and tuned in conjunction. This article describes the algorithm / architecture co-design opportunities of a Mixture of Gaussian (MoG) implementation for realizing background subtraction. Particularly challenging is the memory bandwidth required for storing the background model (Gaussian parameters). Through joint algorithm tuning and system-level exploration, we develop a compression of Gaussian parameters which allows navigating the bandwidth/quality trade-off. We identify an efficient solution point in which the compression reduces the required memory bandwidth by 63% with limited loss in quality. Subsequently, we propose a HW-based architecture for MoG that includes sufficient flexibility to adjust to scene complexity. |
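Since this abstract centers on the Mixture of Gaussians background-subtraction algorithm, a minimal software sketch of the standard per-pixel MoG update (in the spirit of Stauffer and Grimson) is given below for orientation. It is not the paper's hardware architecture, nor its Gaussian-parameter compression scheme; parameter names and defaults (k, alpha, bg_threshold, var0) are illustrative, and the learning rate is applied directly rather than scaled by the component likelihood, a common simplification.

```python
import numpy as np

class MixtureOfGaussians:
    """Per-pixel Mixture-of-Gaussians background model (Stauffer-Grimson style sketch).

    Each pixel is modelled by K Gaussians over grayscale intensity. Parameter
    names and defaults (k, alpha, bg_threshold, var0) are illustrative only.
    """

    def __init__(self, height, width, k=3, alpha=0.01, bg_threshold=0.7, var0=36.0):
        self.k = k                        # Gaussians per pixel
        self.alpha = alpha                # learning rate
        self.bg_threshold = bg_threshold  # cumulative weight treated as background
        self.var0 = var0                  # variance assigned to newly created components
        self.weights = np.full((height, width, k), 1.0 / k)
        self.means = np.random.uniform(0, 255, (height, width, k))
        self.vars = np.full((height, width, k), var0)

    def apply(self, frame):
        """Update the model with one grayscale frame and return a boolean foreground mask."""
        frame = frame.astype(np.float64)[..., None]        # shape (H, W, 1)
        dist2 = (frame - self.means) ** 2                   # squared distance to each mean
        matched = dist2 < 6.25 * self.vars                  # within 2.5 standard deviations
        has_match = matched.any(axis=-1)
        best = np.argmin(np.where(matched, dist2, np.inf), axis=-1)
        idx = np.eye(self.k, dtype=bool)[best] & has_match[..., None]  # best matching component

        # Weight update: the matched component is pulled toward 1, the others toward 0.
        self.weights = (1 - self.alpha) * self.weights + self.alpha * idx
        self.weights /= self.weights.sum(axis=-1, keepdims=True)

        # Mean/variance update of the matched component only.
        self.means = np.where(idx, (1 - self.alpha) * self.means + self.alpha * frame, self.means)
        self.vars = np.where(idx, (1 - self.alpha) * self.vars + self.alpha * dist2, self.vars)

        # Pixels with no match: replace their weakest component with a fresh Gaussian.
        weakest = np.argmin(self.weights, axis=-1)
        replace = np.eye(self.k, dtype=bool)[weakest] & ~has_match[..., None]
        self.means = np.where(replace, frame, self.means)
        self.vars = np.where(replace, self.var0, self.vars)

        # Background components: highest-weight components whose cumulative weight reaches the threshold.
        order = np.argsort(-self.weights, axis=-1)
        sorted_w = np.take_along_axis(self.weights, order, axis=-1)
        before = np.cumsum(sorted_w, axis=-1) - sorted_w     # cumulative weight before each component
        is_bg = np.zeros_like(matched)
        np.put_along_axis(is_bg, order, before < self.bg_threshold, axis=-1)

        # Foreground: no component matched, or the matched component is not a background component.
        return ~(has_match & (idx & is_bg).any(axis=-1))
```

Called once per grayscale frame, e.g. `MixtureOfGaussians(480, 640).apply(frame)`, the sketch returns a boolean mask marking pixels that deviate from the modelled background; the per-pixel state (weights, means, variances) corresponds to the background model whose memory bandwidth the article aims to reduce.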