Bridging the Gap Between Research and Policy and Practice Far too often, there is a gap between research and policy and practice. Too much research is undertaken with little relevance to real-life problems, or it is reported in ways that are obscure and impenetrable. At the same time, many policies are developed and implemented but are untouched by, or even contrary to, evidence. An accompanying paper describes an innovative programme in Canada to help bridge this gap. This commentary notes the growing acceptance of such initiatives but highlights the challenges of sustaining their benefits. In the summer of 1967, the city of Detroit erupted in violence. In the course of a single week, 43 people were killed and over 2000 buildings were destroyed. The predominant view, in statements by politicians and media coverage, was that this was simply a reflection of immaturity and deviancy among the African-American population involved in the riots. A professor of psychology at the University of Michigan, Nathan Caplan, was not so sure. He went to the neighbourhoods worst affected by the rioting, even while it was still taking place, along with a local journalist, Philip Meyer. Within a week they had recruited and trained 30 African-American researchers to undertake a survey. Within a month, the results were analysed and published. 1 Today, the importance of doing policy-relevant research is widely accepted. Then, the reaction was very different. Caplan describes how: "My academic colleagues had a habit of interpreting reality as though it's just a special case within theory. God forbid that anything they did became useful or that they actually spoke to anybody." 2 His findings flatly contradicted many of the assumptions that had been taken for granted. The likelihood that someone would riot was not, as had been assumed, associated with economic status, education or, as many believed, recent migration from the southern states. Instead, it reflected experience of police brutality, overcrowding, and lack of jobs. The findings were very influential in the work of the National Advisory Commission on Civil Disorders, or Kerner Commission, set up by President Johnson to investigate the race riots that took place that year. Later, Meyer would write what has become a seminal text on the use of social science methods for journalists. 3 There were, however, even then, researchers who bemoaned the lack of contemporary relevance of much research. Indeed, as one paper at the time noted, it almost seemed as if researchers did not want anyone to read what they were writing, producing texts that were essentially incomprehensible to anyone outside their discipline, 4 communicated at conferences attended by a few like-minded individuals, and only appearing in print years later. The attitudes that Caplan confronted still, however, persist in some quarters. Of course, there will always be research whose importance is unclear at the time. As long ago as 1892, a correspondent in the journal Nature noted how the importance of some research may only be recognised much later, asking "if universities do not study useless subjects, who will?" 5 Yet, even if the research is speculative, that is no excuse for failing to communicate it. Too often papers are written in language that seems designed to render the findings obscure, producing results that emerge years after the problem they were intended to address, if indeed there ever was an actual problem, has been solved. 
Even when those involved say that they want their research to have an impact in the real world, many are unwilling to set aside time to engage with those who might use it. And while the situation has improved in recent years, even those researchers who see the importance of engagement face substantial barriers to doing it. 6 The inaccessibility of research findings matters. Policies and practices are often enacted either in apparent ignorance of the evidence or even in direct opposition to it. The examples are numerous, with the experience of a single country, the United Kingdom, justifying an entire book filled with examples tellingly titled "The blunders of our governments." 7 The problem, as Lindblom and Cohen have noted, is that "in public policy-making, many suppliers and users of social research are dissatisfied, the former because they are not listened to, the latter because they do not hear much that they want to listen to." 8 The accompanying paper by Sim et al. describes a novel attempt to overcome this problem. 9 The Canadian Institutes of Health Research have established a health system impact fellowship, co-locating postdoctoral fellows in a health system organisation and an academic institution. The goal was to enable the fellows to understand the challenges facing health system organisations, develop their competencies in working in a policy environment, and strengthen the health system's ability to use evidence. The authors of the paper describe a range of positive experiences, and it is clear that the programme brought benefits for both the individuals concerned and the organisations in which they were working. However, they conclude with a rather concerning observation: "it is unresolved how the training fellowship will impact future academic success." This, surely, is the challenge. In many countries, the academic career structure is such that many of those obtaining PhDs will not, ultimately, pursue a career in academia. Many will use the skills and knowledge they have acquired in other ways, from scientists working in the pharmaceutical industry to those with training in the arts working in the culture and heritage sector. In many respects, these occupations are a continuation of what they studied at university. But many others will move into jobs that are far removed from what they spent years studying. As a result, in many countries there is an appreciable group of people with the ability to understand complex research but who are not using their skills, coexisting with an even larger number of policy-makers and practitioners whose work would benefit from much of the existing research, were it not for the large gap that divides them. What are needed are people who can close this gap. Crucially, this gap should be closed from both directions. Ideally, those policy-makers and practitioners who can use the research should have it translated for them, while the researchers who generate the knowledge should be informed of the needs of users so that they can ensure that what they are doing is actually useful. This role is increasingly being recognised by research funders, not just in Canada, who encourage practical placements within research training fellowships and who expect evidence of impact from the research they are funding. In the United Kingdom, for example, part of the assessment of universities for core government funding is based on impact case studies. 
10 It is also being recognised by some policymakers, such as the international organisations and health ministries that have come together with universities in the European Observatory on Health Systems and Policies 11 and its counterparts in North America and the Asia Pacific region. In parallel, there has been sustained growth in research on knowledge translation. This has several strands that, to some extent, reflect the diverse disciplinary backgrounds of those involved and which are not always as well developed as they might be. One set, led mainly by clinical epidemiologists and health service researchers, addresses the process of communication between researchers and practitioners and policy-makers, stressing the importance of mutual dialogue and trusted relationships, ensuring that researchers understand the questions being asked and practitioners understand what questions researchers can answer. An example is the SUPPORT project, a stepwise process of getting evidence into practice. 12 Others reflect critiques of the concept of knowledge translation, 13 focusing on thinking from other disciplines including philosophy, sociology, and political science, which argue for more attention to be devoted to the social construction of knowledge, power relationships, 14 and the role of tacit and contextually specific knowledge. 15 One of these, largely led by psychologists, focuses on cognitive biases, noting how two people given the same information may interpret it completely differently. 16 This strand also includes research on how the dominant narratives, which often shape how people interpret evidence, are framed and propagated. 17 Another, involving a mix of political scientists, information specialists, and public health researchers, among others, is exploring how some groups actively seek to undermine the communication of accurate evidence. These include vested corporate interests, with most attention having focused on the tobacco industry, 18 although it is now recognised that its tactics are employed by others, such as alcohol, food, and soft drink manufacturers. 19,20 More recently, they have begun to turn their attention to those who exploit concerns about health evidence, for example that relating to vaccines, for other purposes. These include pursuit of political goals, seeking to undermine trust in authorities and, hence, democratic institutions, or to exploit the money-making opportunities of the internet. 21 In these ways, the science of knowledge translation has become much more complex. The report by Sim and colleagues shows the benefits that can accrue from a scheme to bridge the gap between research and policy. The challenge now is to find a way of sustaining it, ensuring that the individuals who emerge from this programme find employers, on whichever side of the divide, that truly appreciate what they bring, as well as the creation of organisations that can occupy the middle ground, interpreting and translating the messages that should flow both ways between researchers and policy-makers and practitioners. Ethical issues Not applicable. |
An IoT-Based Rapid Detection and Response System for Vehicular Collision with Static Road Infrastructure Road accidents take place everywhere and involve a collision either between two moving objects or between a moving and a static object. The proposed project is designed to detect, as well as respond to, accidents between moving vehicles and stationary road infrastructure such as traffic lights and street lamps. The system is therefore installed on this standalone road infrastructure and is powered by a rechargeable lithium-ion battery. The system uses sound, pressure, and motion sensors for detection purposes. The sensors are calibrated using the average values observed in previous accidents as their operating threshold values for detection and determination. They are connected to an Arduino and a Wi-Fi module, which communicates with nearby medical facilities and highway authorities as a response mechanism in the event of a detected accident. If implemented, it has the potential to effectively reduce the loss of life and limb in the event of road accidents, especially on isolated highway stretches. |
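To make the detection-and-response flow concrete, here is a minimal Python sketch of the logic the abstract describes: poll the sound, pressure, and motion sensors, flag a collision when readings cross calibrated thresholds, and notify nearby facilities over the network. The threshold values, the two-of-three voting rule, the endpoint URL, and the function names are illustrative assumptions; the actual system runs on an Arduino with a Wi-Fi module and derives its thresholds from averages observed in previous accidents.

```python
import json
import time
import urllib.request

# Hypothetical calibrated thresholds; the paper derives these from average
# values observed in previous collisions but does not report the numbers.
SOUND_THRESHOLD_DB = 110.0      # impact noise level
PRESSURE_THRESHOLD_KPA = 50.0   # force on the pole-mounted pressure sensor
TILT_THRESHOLD_DEG = 15.0       # sudden change in pole inclination

ALERT_ENDPOINT = "http://example.invalid/collision-alert"  # placeholder URL

def read_sensors():
    """Placeholder for reading the sound, pressure, and motion sensors;
    returns simulated idle values here."""
    return {"sound_db": 60.0, "pressure_kpa": 0.0, "tilt_deg": 0.5}

def collision_detected(s):
    # Flag a collision only when at least two channels exceed their
    # thresholds, which reduces false alarms from passing traffic noise.
    hits = [
        s["sound_db"] >= SOUND_THRESHOLD_DB,
        s["pressure_kpa"] >= PRESSURE_THRESHOLD_KPA,
        s["tilt_deg"] >= TILT_THRESHOLD_DEG,
    ]
    return sum(hits) >= 2

def send_alert(readings, infrastructure_id="LAMP-042"):
    # Notify the nearest medical facility / highway authority endpoint.
    payload = json.dumps({"id": infrastructure_id, "readings": readings}).encode()
    req = urllib.request.Request(ALERT_ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)

if __name__ == "__main__":
    while True:                       # simple daemon-style polling loop
        readings = read_sensors()
        if collision_detected(readings):
            send_alert(readings)
            time.sleep(60)            # avoid flooding the endpoint with repeats
        time.sleep(0.1)               # ~10 Hz polling rate
```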
A Super-Resolution Probe To Monitor HNO Levels in the Endoplasmic Reticulum of Cells. Selective detection of nitroxyl (HNO), which has recently been identified as a reactive nitrogen species, is a challenging task. We report a BODIPY-based luminescence ON reagent for detection of HNO in aqueous solution and in live RAW 264.7 cells, based on the soft nucleophilicity of the phosphine oxide functionality toward HNO. The probe shows high selectivity to HNO over other reactive oxygen/nitrogen and sulfur species. Luminescence properties of the BODIPY-based chemodosimetric reagent make it an ideal candidate for use as a reagent for super-resolution structured illumination microscopy. The viability of the reagent for biological in vivo imaging application was also confirmed using Artemia as a model. |
Efficacy and safety of premedication with oral ketamine for day-case adenoidectomy compared with rectal diazepam/diclofenac and EMLA® Background: Because of its pain-attenuating and sedative properties, oral ketamine has been used as a premedication in children and adults. We wanted to compare, in children scheduled for adenoidectomy, the safety and efficacy of oral ketamine with a premedication that causes similar preoperative sedation and relief of pain at the venepuncture site. We also evaluated the effect of i.v. glycopyrrolate added to these combinations. |
On the steady-state drop size distribution in stirred vessels. Part II: Effect of continuous phase viscosity In Part I we used silicone oils with viscosities across six orders of magnitude to investigate the effect of the dispersed phase viscosity on the drop size distribution (DSD) of dilute emulsions. In this study we extended Part I by using three aqueous glucose solutions to thicken the continuous phase by approximately an order of magnitude while keeping the Power number constant. It was found that increasing the continuous phase viscosity decreases the maximum drop size despite the drops being well above the Kolmogorov length scale. Our results are in disagreement with the mechanistic models for the turbulent inertial regime. The results were explained using the full turbulent energy spectrum proposed by Pope 2 instead of the Kolmogorov -5/3 spectrum. Our analysis revealed that most of the steady-state drop sizes do not fall in the isotropic turbulence size range. |
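For readers who want to see how a full model spectrum departs from the bare Kolmogorov -5/3 law invoked above, a minimal sketch follows. It uses the commonly cited functional form of the model energy spectrum given in Pope's Turbulent Flows with illustrative constants; the dissipation rate, integral scale, and Kolmogorov scale below are assumed values for a stirred vessel, not figures taken from this study.

```python
import numpy as np

def pope_model_spectrum(kappa, epsilon, L, eta, C=1.5, p0=2.0, beta=5.2,
                        c_L=6.78, c_eta=0.40):
    """Model spectrum E(k) = C eps^(2/3) k^(-5/3) f_L(kL) f_eta(k*eta);
    constants here are illustrative, not fitted to the experiments."""
    f_L = (kappa * L / np.sqrt((kappa * L) ** 2 + c_L)) ** (5.0 / 3.0 + p0)
    f_eta = np.exp(-beta * (((kappa * eta) ** 4 + c_eta ** 4) ** 0.25 - c_eta))
    return C * epsilon ** (2.0 / 3.0) * kappa ** (-5.0 / 3.0) * f_L * f_eta

# Assumed stirred-vessel scales (SI units): dissipation rate, integral scale,
# Kolmogorov scale. Compare the model spectrum with the pure -5/3 scaling.
epsilon, L, eta = 0.5, 0.05, 3e-5
kappa = np.logspace(1, 6, 400)                       # wavenumbers [1/m]
E_model = pope_model_spectrum(kappa, epsilon, L, eta)
E_kolm = 1.5 * epsilon ** (2.0 / 3.0) * kappa ** (-5.0 / 3.0)

# Drop sizes whose wavenumber lies outside the range where the ratio is ~1
# are not in the isotropic inertial subrange, which is the point made above.
ratio = E_model / E_kolm
print(f"model / -5/3 ratio at kappa = 2/L:     {np.interp(2 / L, kappa, ratio):.2f}")
print(f"model / -5/3 ratio at kappa = 0.1/eta: {np.interp(0.1 / eta, kappa, ratio):.2f}")
```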
Tools and processes for tracking IRB approvals as a coordinating center for large multicenter clinical research networks INTRODUCTION A data coordinating center (DCC) for a large multicenter research network takes on a variety of roles and responsibilities that contribute to the smooth and successful function of large clinical trials. Major responsibilities of a DCC can be categorized into four broad areas: trial operations, data management and analysis, quality control/quality assurance, and human subjects protection and regulatory affairs. 1 Findings from a Federal Demonstration Partnership survey of more than 6,000 faculty members who were lead investigators on federally funded research grants suggested that investigators spend almost half of their time allotted for research projects on administration-related activities. 2 This echoes the administrative burden on both DCC staff and research network staff to ensure smooth network functioning. The survey also found that Institutional Review Board (IRB) approval processes constituted the highest administrative burden for research involving human subjects. 2 By nature, research networks often conduct multiple complex trials concurrently and involve large numbers of staff across all participating sites. Networks that involve human subjects research have an additional responsibility to protect the rights, integrity, and confidentiality of clinical trial subjects and follow Good Clinical Practice guidelines. 3 Research coordinators at each clinical center are often assigned the responsibility of obtaining approval from their local IRBs to conduct each research trial in addition to attending to many other matters essential to trial operations. Thus, a DCC plays a key oversight role in ensuring that all participating centers have consistently met all requirements from their local review boards to be able to ethically conduct the research activities at their center. Tracking all active IRB approvals for all studies and across all participating clinical centers to ensure that all centers have obtained the necessary and appropriate permissions from their local IRBs to conduct each trial is a pivotal ethical obligation in clinical human subjects research. RTI International serves as the DCC for multiple multicenter research networks and has thus prioritized this responsibility and developed innovative ways to reduce the burden associated with this important task. Historically, tracking of regulatory documentation had to be done on paper and involved complex filing systems for many of these research networks; however, the advent of recent technological developments and capabilities has expanded the platform and ability of DCCs such as RTI to develop advanced means of carrying out this responsibility. 
Given the increasing magnitude of the workload at clinical centers that are managed by staff with diminishing resources, it is of utmost importance to maintain regulatory oversight of the centers participating in each trial. Of greatest significance to the DCC is ensuring that all clinical centers have the appropriate permissions to be able to transfer study participant-related data to the DCC. This is a critical responsibility that must be carried out to promote the safety and wellbeing of human subjects, ensure adherence to the ethical values and principles underlying research, and guarantee that only ethical and scientifically valid research is implemented. Thus, it is important to have adequate documentation that all IRB approvals were obtained to allay concerns from regulatory agencies and the general public about the responsible conduct of research. 4 From a practical protocol implementation and conduct standpoint, it is also of critical importance for the DCC to ensure that site IRB approvals are valid throughout the entire life cycle of the protocol because failure to do so can also limit the scientific quality of the study. In general, all clinical center IRB-related activities are time-consuming, and those activities often divert funds that were originally intended for scientific aspects of the protocols. 5 Thus, strategies and tools to reduce the burden of IRB-related tasks for clinical research center staff could result in cost savings for the entire multicenter research network. Having to conduct the review process at each local IRB in a multicenter research network also results in additional frustration associated with cumulative increases in time, cost, and IRB activity-related workload across all centers. 6 Federal regulations surrounding the expiration dates of IRB approval are not at all forgiving; thus, lapses in IRB approval are detrimental to the study as a whole because all research activity must be halted, and in some cases existing subjects must be discontinued from the study when IRB approval lapses. 7,8 Additionally, there is often significant variation in the date that each clinical center actually receives approval from its IRB 9 ; this means that ensuring that the approval periods are valid for all research sites for a single protocol is always a moving target. The challenge of trying to manage those approval periods and prevent lapses across multiple centers is compounded when carrying out the same tasks for multiple protocols being conducted concurrently in a single research network. Automated tracking tools could also be especially helpful in practice-based research networks because clinicians in those research centers may be less familiar with the regulatory requirements and procedures. 10 There is relatively little knowledge regarding the overall compliance of centers in large multicenter research networks because there are few reports in the literature regarding IRB continuing reviews and lapses in approval. 7 However, Tsan et al. looked at the compliance rates for continuing review requirements for protocols at the Department of Veterans Affairs and found that the rate of lapse in IRB continuing reviews between 2010 and 2013 remained relatively high and constant (around 6%-7%). 7 Another practice-based research network with 19 IRBs had study enrollment interrupted at four sites for periods of time between 2 and 13 weeks because they did not receive renewed, approved, and stamped consent forms back from their IRB in time. 
11 Given the integral importance of this responsibility, it is surprising that there is limited information on both the extent of IRB compliance and documentation of strategies and techniques that other research networks and coordinating centers have implemented to carry out this responsibility. Collins et al. described the experience of a DCC for the DIG trial. 12 Abbott et al. have identified factors that have been associated with a reduced study cycle time and elaborated on collaborative efforts to effectively maintain multicenter clinical trials. 13 In Table 1 we identify challenges with traditional DCC processes, which were generalized from the activities described by Collins et al. in the DIG trial and Abbott et al. 12,13 Table 1. Traditional DCC processes 12 and associated challenges:
- Traditional DCC process: study start-up includes a long checklist to be completed. 13 Challenge: costly delays between protocol development and first study participant enrollment.
- Traditional DCC process: acquire written confirmation directly from the IRB that the center had valid approval to conduct research on an annual basis, and send reminders directly to each of the research centers' IRBs. 12 Challenge: increases the number of individuals associated with the study.
- Traditional DCC process: seek copies of the minutes from the IRB meeting where approval was granted. 12 Challenge: increases the number of communications that must be documented, tracked, and updated over time.
- Traditional DCC process: the DCC enters dates for the initial review and scheduled continuing review into a database, creates a program to generate a list of centers due to expire within 2 to 3 months, and manually sends letters to the research staff reminding them that the continuing review is coming up. 12 Challenge: not feasible to implement in a large research network conducting numerous protocols at any given time with any significant resource constraints.
- Traditional DCC process: provide basic study progress information for each research site that can be used to provide updates to the local IRB. 12 Challenge: difficult to standardize the type of information the DCC would provide to the research sites. 11
Especially in a research network conducting multiple protocols at the same time, it is essential to look for techniques to increase efficiencies while still ensuring the same level of compliance. Although the Office for Human Research Protections (OHRP) has indicated that it is ultimately the responsibility of the investigator at the clinical center to ensure that they remain in compliance with the regulatory requirement and approval periods, OHRP and others have also suggested that the IRBs themselves should develop administrative procedures such as a computerized tracking system to reduce local lapses and noncompliance. 7,8 Many IRBs are starting to use electronic systems to manage their own regulatory and approval activities. However, most of these systems do not currently address the issue of coordinating and conveying the approval information across multiple clinical centers and to DCCs if the clinical center is participating in a larger research network. METHODS In close collaboration and consultation with the DCC study coordinators, RTI informatics specialists developed an in-house Microsoft Access database 14 that can be used to track receipt of IRB approvals from multiple clinical centers. The database is not only capable of tracking whether a current and valid IRB approval for the clinical center is on file at the DCC, but it has also been designed to track the actual approval start and expiration dates within the system. 
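As an illustration of the tracking logic just described, the sketch below stores approval start and expiration dates per clinical center and protocol and flags records that have lapsed or will lapse within a look-ahead window. The actual system is a Microsoft Access database; the SQLite schema, table and column names, sample data, and the six-week window here are assumptions for illustration only.

```python
import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect(":memory:")  # stand-in for the Access back end
conn.execute("""
    CREATE TABLE irb_approval (
        center    TEXT NOT NULL,   -- clinical center (or hospital under its own IRB)
        protocol  TEXT NOT NULL,   -- study protocol identifier
        approved  TEXT NOT NULL,   -- IRB approval start date (ISO format)
        expires   TEXT NOT NULL,   -- IRB approval expiration date (ISO format)
        PRIMARY KEY (center, protocol, approved)
    )
""")
conn.executemany(
    "INSERT INTO irb_approval VALUES (?, ?, ?, ?)",
    [
        ("Center A", "Protocol 1", "2023-01-15", "2024-01-14"),
        ("Center B", "Protocol 1", "2023-03-01", "2024-02-29"),
        ("Center B", "Protocol 2", "2022-05-10", "2023-05-09"),
    ],
)

def expiring_report(window_days=42):
    """List approvals that have expired or will expire within the window,
    mirroring the report that highlights soon-to-expire approvals."""
    cutoff = (date.today() + timedelta(days=window_days)).isoformat()
    rows = conn.execute(
        """SELECT center, protocol, MAX(expires) AS latest_expiry
           FROM irb_approval
           GROUP BY center, protocol
           HAVING latest_expiry <= ?
           ORDER BY latest_expiry""",
        (cutoff,),
    ).fetchall()
    for center, protocol, expiry in rows:
        status = "EXPIRED" if expiry < date.today().isoformat() else "expiring soon"
        print(f"{center:10s} {protocol:12s} {expiry}  <- {status}")

expiring_report(window_days=42)  # e.g., the six-week look-ahead described below
```

Only the most recent approval per center and protocol is compared against the cutoff, so renewals entered later automatically clear an earlier flag; that mirrors the way a renewal record supersedes the expiring one in the reports described here.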
The customizable and easy-to-use system has been set up to receive input for an unlimited number of centers and so that the multiple individual participating hospitals within the center that are governed by separate IRBs can be tracked separately. The database has the capacity to generate automatic reports showing all IRB approvals the DCC has received and entered into the system, but it also has the functionality to produce separate reports organized by individual protocol or clinical center. The Access database tracking system is also able to highlight any IRB approvals that will expire within a user-defined time period (e.g., within 6 weeks) and any individual IRB approvals that may have already expired on the automated reports. In these cases where the clinical center may be delinquent in submitting its IRB documentation to the DCC, the DCC has oversight to disengage access to privately managed data entry systems until active documentation of IRB approval is submitted. The database has been constructed in a way that enables user-friendly data entry and navigation to generate reports for the DCC coordinators who may have limited experience with programming or maneuvering through large databases. The user menu is straightforward and simple, requiring only minimal training for new users who wish to use the database. Upon the opening of the database, users are directed to the menu that displays the basic functions that can be performed. Users can add or change the master lists of protocols and clinical centers whenever a new protocol becomes active or a new clinical center joins the research network. There is a form to enter DCC IRB approval information if needed so that all IRB information for the entire research network can be stored in a single location. The most commonly used functions in the database are the "Site IRB Information" and "Reports" options. These are the areas in which users can enter updated information or generate printouts of the information that has been entered to date. The database has been structured in such a way that the first screen the user sees after opening the database file is a report of all expired or soon to expire IRB approvals by clinical center. Any expired IRB approvals are highlighted in yellow so DCC coordinators are easily reminded of clinical centers that they need to follow up with to obtain updated IRB documentation (Figure 1). Informatics specialists also designed several features where the individual Site IRB Reports and Site Expiration IRB Reports can be created and saved in PDF format for posting on the private website with the click of a single button. This built-in feature results in efficiency for the coordinators because they do not have to repeat the report-generation process for each individual clinical center. Other reports that are available for printing within the system are copies of all active protocols, all active centers, IRB Approval status for all clinical centers by protocol, IRB Approval status for all protocols by clinical centers, and cumulative listings of all site IRB approvals organized by clinical center or protocol ( Figure 2). Additionally, there are multiple formats in which DCC coordinators can view and enter data. RTI informatics specialists created a "Snapshot View" form where all IRB approval date information records can be seen for all centers and all protocols at the same time/in the same screen. 
This facilitates the data entry process for the DCC coordinator, especially when a single clinical center has submitted multiple updated IRB approval documents and information for multiple studies should be entered at the same time (Figure 3). Alternatively, there are separate features and forms for a more controlled data entry environment, and specific pieces of information (e.g., dates for a specific protocol for a specific clinical center) can be entered in a paper form-like page. This is advantageous because it helps to ensure that data are being entered in the correct location for the correct clinical center and protocol and that other records are not being altered by mistake. Once the database and automated reporting structure were set up, the DCC study coordinators established more formal procedures and processes for both collecting updated IRB approvals from clinical centers and informing the clinical centers of the status of the IRB approvals that the DCC currently has on file (Figure 4). A sample IRB information report for an individual clinical center is shown in Figure 5. 15 RESULTS The development and implementation of the IRB tracking database has increased efficiencies for both the participating clinical centers and the coordinators at the DCC tasked with the responsibility of ensuring that all clinical centers have sent in valid IRB approvals for all studies in which they are participating and transmitting data. The database and process of posting the automated IRB reports to the private website has drastically reduced the volume of e-mail regarding IRB approvals sent to the clinical centers for DCCs managed by RTI. The DCC coordinator no longer has to send individual e-mails to the clinical centers notifying or reminding them that an IRB approval is about to expire. Instead, they send a single monthly e-mail to all clinical centers at the same time notifying them that the site IRB reports have been updated and research staff should review them to see if any additional action is required from their center. This benefits the staff at the clinical centers as well because they no longer have to monitor their e-mail for reminders about specific IRB approvals that are about to expire and can instead visit the central report listing on the private website and check periodically for updates needed for their center at that single location. The automated reports that highlight approvals that are about to expire have also reduced the burden on the DCC coordinator to manually check the expiration dates of numerous studies. The automated process reduces the likelihood of oversight of the dates that have already passed or are rapidly approaching because the records that are affected are automatically highlighted by the Access database system. Previously, spot-checking the dates by hand was associated with an increased chance for oversights simply because by nature the document contained a large number of dates. Implementation of the database and report generation processes also facilitated preparation of the annual IRB renewal submission to the DCC IRB. For this annual submission to the DCC IRB, the DCC must submit renewal dates for each of the clinical centers and for all of the open protocols; thus, the automated reports that can be generated by specific protocol greatly facilitated this process. In the past, DCC coordinators had to manually look up and supply this information to the DCC IRB. 
However, the DCC IRB has since allowed the reports generated from this system to suffice as records of the dates each center received IRB approval for each protocol. Having a central IRB database as a DCC also makes management of the regulatory processes easier for DCC staff internally. If a question arises about an IRB approval for a specific clinical center, all DCC staff can go directly to the database to look up the specific information needed. Instead of looking through multiple folders and sifting through old e-mails for documentation and records of IRB approval that were sent in, DCC staff can run a customized report to look up the information in question, or they can navigate within the database to drill down to the specific information they may need for a particular study and particular center. The database is advantageous because it creates a central location that can be updated, maintained, and accessed by multiple users simultaneously. If needed users can share the burden of entering and updating data. Users can also be more confident that they are reviewing the most up-todate IRB approval information because the process has been set up in such a way that allows efficient accrual, processing, and entry of new information ( Figure 6). DISCUSSION Although an initial investment is needed to design a database system to track IRB approvals, the development and formalization of the process to use the database will result in significant time and cost savings throughout the tenure of the DCC. The very nature of an Access database allows for flexibility in the number of studies being tracked, the number of clinical centers involved in the research network, and the changes in composition of the research network over time. These inherent capabilities make the database a low-cost resource over time that can be used to provide both a current and historical picture of the IRB landscape across clinical centers. Ultimately, both the database and processes that have been developed at RTI to track all IRB approvals from clinical centers as a DCC are assets for helping all clinical centers avoid gaps in IRB approvals for numerous studies and a means of ensuring adherence to the ethical standards and requirements for participation in human subject research. On January 25, 2018, the National Institutes of Health (NIH) issued a policy indicating that all of its agencies' research networks should move toward the single IRB model to lessen the burden of seeking approval from local IRBs and to enable the research to "proceed as quickly as possible without compromising ethical principles and protections for human research participants." 16 Although NIH has laid out this goal of moving toward this new streamlined review process for the future, it will likely still take time for all research networks to effectively implement the single IRB model in practice. As research networks begin to make this transition toward the single IRB model, they may do so in stages or in pilot studies. Such stages may involve a subset of consenting clinical centers to test out approval for a newly implemented research study using a rotating Lead IRB for each study. Rather than having one single IRB to approve all new studies, the Lead IRB responsibility rotates between centers to distribute the burden of review across all participating centers. 
Tools such as the database will still be relevant in this environment because research networks will still need to track which site IRB is serving as the Lead IRB, and the approval and expiration dates, for each study. In the pilot stages when only a subset of centers might be participating in the single IRB process the database can also be used to keep track of that information to show which clinical centers are participating in the single IRB process and which are still operating under the regulations of their own local institutions. Furthermore, multicenter studies that are not necessarily conducted in an NIH-funded research network or a formal research network with another sponsor might not necessarily be able to move to the single IRB approach yet either. Even in light of the recent policy shifts, IRB tools and tracking processes such as those developed by RTI are still relevant and useful in the current research environment, especially considering the number of ongoing multicenter studies that were not set up under the single IRB approach model and where local IRB compliance needs to be monitored through completion. NIH has also recently collaborated with other agencies to develop a single IRB platform for multisite clinical studies: the NCATS Streamlined, Multi-site, Accelerated Resources for Trials (SMART) IRB platform. Both the RTI IRB Tracking Access database and SMART IRB platforms were developed with the same primary objective: "to provide flexible resources that investigators nationwide can use to harmonize and streamline IRB review for their own multi-site studies." 17 If the single IRB approach is successfully adopted by all multicenter studies and research networks, the digital tools and processes for IRB tracking will still be used in that environment to carry out similar monitoring functions with the major difference being that there may just be fewer clinical centers to monitor. |
The disposition and metabolism of 3,4',5-tribromosalicylanilide and 4',5-dibromosalicylanilide in the rat. 1. The metabolism of two pesticides, 3,4',5-tribromosalicylanilide (TBS) and 4',5-dibromosalicylanilide (DBS), has been studied after oral administration to rats. 2. Approximately 65% of the dose of TBS is absorbed, and then excreted as glucuronide and sulphate conjugates of two hydroxylated metabolites. One of these has been identified as 4'-hydroxy-3,5-dibromosalicylanilide, while evidence suggests that the other metabolite is 5-hydroxy-3,4'-dibromosalicylanilide. 3. In contrast, only 11% of the dose of DBS is absorbed; it is then excreted mainly as glucuronide and sulphate conjugates of the parent molecule. |
Studies of the pathology of velogenic Newcastle disease: virus infection in non-immune and immune birds. The pathology of velogenic viscerotropic Newcastle disease virus infection was compared in 7-and 20-week-old groups of non-immune birds and birds with two levels of immunity as determined by the haemagglutinin inhibition test. In non-immune birds the bursa at 7 and 20 weeks was the only lymphoreticular organ to show sustained reticular and lymphoid cell reactions until death took place. Caecal tonsil and spleen were extensively necrotized on day 4 after contact exposure, and similar changes occurred in lung and proventriculus. There was evidence of lymphoid recovery in birds which survived for 18 days. In immune birds the spleen showed two main responses. The first, acute reticular cell response around the ellipsoids indicated that renewed exposure to antigen was often associated with localized cell degeneration. The second, immunological, reaction was rapid formation of germinal centres which occurred somewhat earlier in 20-week-old birds (4-5 days). Especially from the second week, reticular (dendritic) cell and lymphoid hyperplasia occurred diffusely in the bursal medulla of both age groups although marked atrophy and cellular depletion, probably of physiological origin, was a feature of 20-week-old birds with high antibody levels. In the gastro-intestinal tract, germinal centre formation was most marked in the caecal tonsils at 20 weeks. With the Indonesian ITA strain of ND virus, degenerative and inflammatory changes in the brain were mild in all groups up to day 18 irrespective of immune status. |
Weekday-Weekend Sedentary Behavior and Recreational Screen Time Patterns in Families with Preschoolers, Schoolchildren, and Adolescents: Cross-Sectional Three Cohort Study Background: Excessive recreational screen time (RST) has been associated with negative health consequences that are already apparent in preschoolers. Therefore, the aim of this study was to reveal parent-child sedentary behavior and RST patterns and associations with respect to gender, the age category of children, and days of the week. Methods: Our cross-sectional survey included 1175 parent-child dyads with proxy-reported RST data collected during a regular school week during the spring and fall between 2013 and 2019. The parent-child RST (and age-RST) relationships were quantified using Pearson's correlation coefficient (r_P). Results: Weekends were characterized by longer RST for all family members (daughters/sons: +34/+33 min/day, mothers/fathers: +43/+14 min/day) and closer parent-child RST associations than on weekdays. The increasing age of children was positively associated with an increase in RST on weekdays (+6.4/+7.2 min per year of age of the daughter/son) and weekends (+5.8/+7.5 min per year of age of the daughter/son). Conclusions: Weekends provide a suitable target for implementation of programs aimed at reducing excessive RST involving not only children, but preferably parent-child dyads. Introduction Sedentary behavior and screen time, as components of repetitive 24-h behavior alongside physical activity, eating, and sleep, are closely linked to the health of children and young people. A longer duration of screen time (including television viewing, video games, and computer use) is significantly associated with higher cardiometabolic risks and adiposity and with lower fitness and self-esteem in preschoolers and schoolchildren, with obvious differences between girls and boys. Although studies in preschoolers are less frequent than in schoolchildren and adolescents, it is documented that excessive screen time is associated with higher risk for overweight and obesity, and decreased scores of psychosocial health and cognitive development. An updated systematic review of the associations between sedentary behavior and health indicators in preschoolers confirms the results of a previous systematic review and continues to support the minimization of screen time for disease prevention. Moreover, this review encourages the discovery of moderators of health promotion, with respect to gender, and highlights the potential cognitive benefits of interactive non-screen time behaviors, such as reading and storytelling. It should be noted that only studies from Northern and Western European countries were represented in these systematic reviews. Central and Eastern European countries were not included at all, although the prevalence of overweight and obesity, as well as the screen time of preschoolers from Central and Eastern Europe, is higher than among preschoolers from Northern and Western European countries. Parents as "gatekeepers" of children's health-related behavior can best support their children in compliance with sleep rules and restricting excessive RST, while support in compliance with physical activity guidelines is weaker. Nevertheless, the direct involvement of parents in the process of treating childhood obesity in accordance with Family Development Theory leads to an effective reduction of the excessive Body Mass Index (BMI) of schoolchildren and adolescents. 
Subsequently, only compliance with the RST recommendations of the 24-h movement guidelines was significantly associated with reduced odds of a high BMI z-score, excess fat mass%, and visceral adipose tissue. Although the above findings concerned schoolchildren and adolescents, it can be reasonably assumed that the amount of daily RST as well as the involvement of parents in controlling sedentary behavior of their offspring also play an important role in the prevalence of overweight in preschool children. Similar to schoolchildren and adolescents, preschoolers spend more than 50% of their objective monitored waking hours being sedentary regardless of the day of the week; there is an observable gender difference though, as girls are more sedentary than boys. While many correlates of sedentary behavior have been highlighted in schoolchildren and adolescents, recent systematic review was unable to identify any consistent correlates of sedentary time in preschoolers other than positive associations with parental sedentary behavior. Repeatedly reported correlates associated with excessive screen time in schoolchildren and adolescents include the number of accessible screen-time devices, the presence of a television in the bedroom, fewer family rules about television viewing, infrequent family meals, less walkable neighborhoods, fewer appealing outdoor areas, and concerns about neighborhood safety. However, objective instrumental monitoring of sedentary behavior in preschool children is not yet widespread, as in schoolchildren and adolescents, or does not cover weekdays and weekend days with a limited differentiation of sitting and reclining positions. From the existing parent-reported screen time studies, positive relationships between parents' and their preschooler children's screen time, with differences by gender and days of the week, are evident. However, these studies used only a non-continuous categorized screen time score primarily for the analysis of compliance with screen time-related guidelines: max. one hour per day for preschoolers and max. two hours per day for older children and parents. In the countries of Central Europe, including the Czech Republic, there are incomplete findings about sedentary behavior and especially the RST of families with preschool children, regarding the days of the week or the gender of family members. Therefore, the aims of this study are to: (i) reveal parent-child sedentary behavior and weekdayweekend RST patterns in family members with preschoolers in comparison with families with schoolchildren and adolescents; (ii) find out the associations between RST/BMI/age of family members; (iii) quantify the change in RST with increasing age of children. In this study, we will test the hypotheses: (i) whether there is a difference in the daily sedentary time ((ii) in the daily RST) in daughters and sons (mothers and fathers) between weekdays and weekends in families separated by age categories of offspring; (iii) whether the parentchild association in daily RST varies with respect to the days of the week and gender; (iv) whether there is a significant association between daily RST and BMI ((v) daily RST and age) of children with respect to the days of the week. 
The sedentary behavior of children is influenced by many more variables besides parental behavior on weekdays (e.g., kindergarten program, school regimen, or participation in leisure-time organizations, clubs, sports), while at weekends their parents apparently might have more chances to stimulate health-enhancing behaviors in their offspring. Therefore, we expect a closer association between the RST of parents and their children on weekends than on weekdays. Design of the Study and Ethics This is a cross-sectional study in which, using the same methodology, parent-child indicators were monitored in three groups of families (the first group, families with preschool children; the second group, families with children aged 7-11.9 years; and the third group, families with adolescent children aged 12-16 years) from the Czech Republic. The study design, content and format of the family logbook, feedback for participants, and method used for all the measurements were approved by the Ethics Committee of the Faculty of Physical Culture review board for families with school-aged children (reference number 50/2012) on 12 December 2012, and for families with pre-school children (reference number 57/2014) on 21 December 2014. The ethical principles of the 1964 Helsinki Declaration and its later amendments were adhered to throughout the research. Written informed consent was obtained from all participants and their parents (guardians) prior to the start of the data collection. Selection of the Participants and Procedure Invitation letters were sent to 3540 families from the Czech Republic, of which 65.3% agreed to take part in the research (written informed consent received). Families were selected through two-stage random sampling. In the first stage, nine out of 14 administrative regions, three each from the highest, middle, and lowest terciles for gross domestic product in the Czech Republic, were randomly recruited. In the second stage, three public kindergartens located in rural areas and seven in urban locations, and 15 public "basic schools" located in rural areas and 36 public "basic schools" in urban locations, were randomly selected. Random selection of administrative regions and subsequently kindergartens and basic schools was performed using a random number generator from tables in computer memory. The "basic schools" in the Czech Republic provide compulsory education from grades 1 to 9. The participants were largely white Caucasian (>98%), which is representative of the ethnic demographics of the Czech Republic. All the participants who confirmed their participation in the research in writing were acquainted in detail with the course of the monitoring of sedentary behavior and the method used for determining the body height and weight during an information meeting with the researchers in schools. Each family received an envelope with a family logbook for recording the anthropometric data (gender, calendar age, body height, and weight) and sedentary behavior of all family members. Data were collected in three stages during the spring and autumn months between 2013 and 2019 under comparable daily climate conditions. In the first stage, sedentary behavior was observed in families with children aged 7-11.9 years (April-May and September-October, 2013). In the second stage, the research was conducted in families with preschoolers (September-October, 2014 and April-May, 2015). 
Lastly, the research was carried out in families with children aged 12-16 years (September-October, 2018 and 2019 and April-May, 2019). Study flowchart of participants can be found at Supplementary Material Scheme S1. Measures Together with the researchers, the parents filled in the age and gender identifiers of the family members in the first part of the family logbook and were introduced to the second part of the family logbook for recording sedentary behavior. The second part of the logbook, concerning sedentary behavior, consisted of seven items ( Table 1). The RST represents the sum of two types of recorded sedentary behavior, sitting/lying watching television (videos) and sitting/lying in front of a computer screen (notebook, tablet, smartphone) not for school/work purposes. The duration and type of sedentary behavior were recorded in the second part of the family logbook by the parents, together with their children, each evening. The accuracy of the recording of the duration of each type of sedentary behavior was fixed at 10 min. The parents' proxy reporting of the time their five-to-six-year-old children spent watching television each day exhibits an acceptable 7-4-day test-retest reliability (ICC = 0.78, p < 0.001) for capturing sedentary behavior on regular weekdays and weekend days. The full list of anthropometric and sedentary behavior-related items of family members recorded by parents is depicted in Table 1. Anthropometric measurements were conducted in the participants' homes. The parents were instructed in detail by the researchers how to measure their own body height and weight, as well as the height and weight of their children, through an enclosed illustrated in-struction leaflet for home measurement. The instruction leaflet for home measurement of the body height and weight, the correct upright posture against a wall (barefoot), and the correct reading of the resulting body height were depicted. The body weight measurements were illustrated barefoot and with the participants only in their underwear. The parents recorded the body height and weight values to the nearest 0.5 cm and 0.5 kg in the first part of the family logbook. Parental measurements of children's body height and weight at home demonstrate almost perfect agreement with direct measurements of body height (using a portable rigid stadiometer) and body weight (Tanita weight scale) and the BMI derived from home measurement of body height and weight shows good diagnostic ability for identifying underweight and overweight/obesity categories in children. Data Management and Statistical Analysis The criteria for inclusion of the data in the final set were as follows: (i) complete anthropometric data (year and month of birth, gender, body height, and weight); (ii) attendance in kindergarten or basic school according to schedule (children), paid employment (parents other than maternity leave) on at least four school/working days a week; (iii) recording of the structure of sedentary behavior (type, duration) on at least four school/working days and one weekend day; (iv) at least one parent-child pair (mother-child, father-child) per family. Inclusion criteria for the minimum number of days per week for a valid capture of sedentary behavior were determined according to the recommendations of thematically related studies in children and adults. 
After checking for extreme and erroneous values for the anthropometric indicators, the BMI values for each family member were calculated separately as the body weight (kg) divided by body height (m) squared. Similarly, the daily sedentary time/RST values were checked for extreme and erroneous data, and the average daily sedentary time/RST was calculated separately for weekdays and weekends. To maintain the comparability of family-related sedentary behavior data with previous studies focused on pedometer-based family physical activity, the same procedure for controlling and calculating anthropometric indicators and sedentary time/RST variables was retained. If daily sedentary time/RST was recorded on only four weekdays, data for the one missing weekday, based on the participant's personal mean scores, were added. Those participants whose daily sedentary time/RST data were missing for more than one day were excluded from the analysis. The average daily sedentary time/RST was calculated separately for weekdays and for weekends as the sum of the individual daily sedentary time/RST divided by the appropriate number of days. The normality of the distribution of daily sedentary time/RST variables was tested using the Shapiro-Wilk and Kolmogorov-Smirnov tests. Neither the Shapiro-Wilk test nor the Kolmogorov-Smirnov test confirmed the normal distribution of daily sedentary time/RST variables. As a result of the non-normal distribution of daily sedentary time and daily RST, the Wilcoxon test was used to compare weekday-weekend daily sedentary time/RST in each of the participants of families with preschoolers, 7-11.9 year old children, and 12-16 year old adolescents. Descriptive characteristics for the daily sedentary time/RST are presented in the form of means and a 95% confidence interval. Bivariate Pearson correlations (r_P) were conducted that examined the association between the parents' and children's RST/BMI/age separately according to the gender of the participants and days of the week. To quantify the change in daily RST with increasing age of children, linear regression analyses were performed separately for weekdays and weekends. The Statistical Package for the Social Sciences (SPSS) for Windows v.22 software (IBM Corp. Released 2013. Armonk, NY, USA) was used for all data management and all statistical analyses. The alpha level of significance was set at 0.05 for all the statistical analyses. Results Research data were received from 1,899 families; 89 records from distant relatives, teachers, grandmothers, etc., were excluded from the study. Of the total number of families, 724 families were excluded for non-compliance with any of the following inclusion criteria: (i) the impossibility of linking the parent-child sedentary behavior/RST record (n = 88), (ii) children <4 years old or ≥16 years old (n = 151), (iii) missing data about gender (n = 64), (iv) missing data about body height or weight (n = 169), (v) insufficient number of days with a sedentary behavior/RST record, covering less than four working days and one weekend day (n = 252). The final dataset contained 1,175 families (179 families with preschoolers, 665 families with children aged 7-11.9 years, and 331 families with adolescents aged 12-16 years) with complete and correct anthropometric and sedentary/RST data. The summary sample anthropometric characteristics of the final set of participants are presented by means and standard deviations in Table 2. 
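A minimal sketch of the main analytical steps described above (weekday-weekend comparisons with the Wilcoxon test, Pearson parent-child correlations, and linear regression of RST on age) is shown below. The file name, column names, and role labels are hypothetical; the sketch illustrates the analysis flow rather than reproducing the SPSS procedures used in the study.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format file: one row per person with average daily RST
# (min/day) on weekdays and weekends, plus role, gender, age, and BMI.
df = pd.read_csv("family_rst.csv")  # columns: family_id, role, gender, age, bmi,
                                    #          rst_weekday, rst_weekend

# 1) Weekday vs. weekend RST within each family role (paired, non-normal data).
for role, grp in df.groupby("role"):
    res = stats.wilcoxon(grp["rst_weekday"], grp["rst_weekend"])
    print(f"{role}: weekday-weekend Wilcoxon p = {res.pvalue:.4f}")

# 2) Parent-child RST correlation, separately by day type (Pearson r_P).
pairs = df[df["role"] == "mother"].merge(
    df[df["role"] == "child"], on="family_id", suffixes=("_mother", "_child"))
for day in ("weekday", "weekend"):
    r, p = stats.pearsonr(pairs[f"rst_{day}_mother"], pairs[f"rst_{day}_child"])
    print(f"mother-child RST, {day}: r_P = {r:.3f}, p = {p:.4f}")

# 3) Linear regression of children's RST on age: the slope estimates the change
#    in daily RST (min/day) per additional year of age.
children = df[df["role"] == "child"]
for day in ("weekday", "weekend"):
    slope, intercept, r, p, se = stats.linregress(children["age"],
                                                  children[f"rst_{day}"])
    print(f"children, {day}: +{slope:.1f} min/day per year of age (p = {p:.4f})")
```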
Daughters and Sons Different weekday-weekend patterns of overall sedentary behavior were detected in offspring in different age categories. While for preschoolers, both daughters and sons, there is no obvious difference in the duration of daily sedentary time between school and weekend days, for schoolchildren, both girls and boys, significantly (p < 0.001) more sedentary time is evident on school days than on weekends (Figure 1). Moreover, the overall daily sedentary time of the 7-11.9 year old schoolchildren on school days exceeds the sedentary time on weekends by more than 75 min on average, and for the 12-16 year old adolescents this difference is already more than 100 min (Figure 1). However, in the case of daily RST, the weekday-weekend patterns in all the age categories of children that were analyzed are very different from the weekday-weekend patterns of their sedentary time (Figures 1 and 2). The weekend daily RST is significantly longer in all age categories of children (p < 0.001) than the daily RST on school days (daughters: on average by 32 to 38 min per day; sons: on average by 24 to 39 min per day). Moreover, comparing the daily RST from the youngest to the oldest age category of the offspring, an upward trend is typical, i.e., the amount of all-day RST for preschoolers on weekends corresponds to the all-day RST for 7-11.9 year old schoolchildren on school days, and their significantly higher (p < 0.001) amount of RST on weekends corresponds to the amount of RST for adolescents of older school age on school days (Figure 2). 
Mothers and Fathers
For the parents of children in all the age categories that were analyzed, the total daily sedentary time on weekdays is significantly (p < 0.001) longer than the total daily sedentary time on weekends (mothers: on average by 84 to 126 min per day; fathers: on average by 94 to 141 min per day) (Figure 3). Regarding the daily RST of parents, as with children, a longer daily RST on weekends than on weekdays is also evident, but with some differences between mothers and fathers (Figure 4). Among the parents of preschool children, only the mothers showed a significantly higher RST on weekends than on weekdays. However, for both the mothers and fathers of older children and adolescents, the RST was significantly longer on weekends than on weekdays (mothers: on average by 45 to 55 min per day; fathers: on average by 14 to 23 min per day) (Figure 4).
Associations between Daily Parent-Child RST, and Children's RST, BMI, and Calendar Age
The analysis of the associations between the daily RST of parents and their offspring pointed to closer associations between mothers and children of both genders than between fathers and their offspring, and to closer associations between the daily RST of parents and their offspring on weekends than on school days/workdays (Figure 5). The closest associations between the daily RST of mothers and their offspring were found in preschoolers, and the least close mother-offspring RST associations were found in families with adolescents. In mothers and their offspring, both daughters and sons, a significant (p < 0.01) association between daily RST and BMI was confirmed (r_P mother-daughter = 0.228, r_P mother-son = 0.181). In addition, a significant (p < 0.01) association between daily RST and calendar age was also confirmed in the children (r_P daughters = 0.251, r_P sons = 0.295).
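A minimal sketch of how such gender-stratified parent-child correlations could be computed is given below; this is not the authors' code, and the wide-format table, its column names, and the illustrative values are assumptions. In practice each family contributes only the dyads it actually contains, which the dropna step handles.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical wide-format table: one row per family, weekend RST in minutes (illustrative values).
family_table = pd.DataFrame({
    "mother_rst_weekend":   [150, 120, 180, 90, 200, 140],
    "father_rst_weekend":   [160, 110, 170, 100, 210, 150],
    "daughter_rst_weekend": [130, 100, 160, 80, 190, 120],
    "son_rst_weekend":      [140, 105, 150, 95, 185, 130],
})

def dyad_correlation(df: pd.DataFrame, parent_col: str, child_col: str):
    pairs = df[[parent_col, child_col]].dropna()        # keep only families with this dyad
    r, p = pearsonr(pairs[parent_col], pairs[child_col])
    return r, p, len(pairs)

for parent in ("mother_rst_weekend", "father_rst_weekend"):
    for child in ("daughter_rst_weekend", "son_rst_weekend"):
        r, p, n = dyad_correlation(family_table, parent, child)
        print(f"{parent} x {child}: r_P = {r:.3f}, p = {p:.3f}, n = {n}")
```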
Associations between Children's Daily RST and Calendar Age
The results of the linear regression analysis of the association between RST and children's age, which allows the change in daily RST (in minutes) with increasing age of the children to be quantified, are presented in Figure 6. Given the positive significant associations between daily RST and children's age (Figure 5), a significant increase in daily RST is expected with increasing age of the children. The linear regression analysis confirms a significant increase in daily RST with increasing age of both daughters and sons. The course of the daily increase in RST is similar on school and weekend days, for daughters/sons on average by about six or seven minutes per calendar year. Thus, during childhood, between four and 16 years of calendar age, the daily RST potentially increases by more than 70 min per day for daughters and by approximately 90 min per day for sons (Figure 6).
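As a rough check of these magnitudes, the following is illustrative arithmetic only: the slope values 6 and 7.5 min per year are assumed round numbers consistent with the reported "about six or seven minutes per calendar year" and the 90-min figure for sons, not values taken from the paper's tables.

\[
\Delta \mathrm{RST} \approx b \times (16 - 4)\ \text{years}: \qquad
6\ \tfrac{\text{min}}{\text{yr}} \times 12 = 72\ \text{min (daughters)}, \qquad
7.5\ \tfrac{\text{min}}{\text{yr}} \times 12 = 90\ \text{min (sons)} .
\]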
Discussion
The key findings springing from the results are as follows: (a) completely different weekday vs. weekend patterns of sedentary time versus RST in all family participants; (b) the lowest daily time spent on RST in preschool children, with a clear increase in daily RST in older children aged 7-11.9 years and 12-16 year old adolescents on both school and weekend days; (c) significantly close parent-child RST associations on weekends, regardless of the gender of family members. Consistent monitoring and analysis of the sedentary behavior of offspring, and especially their ST, is important because of the 24-h movement guidelines (i.e., physical activity, sedentary behavior, and sleep); parents can best support the sleep guidelines and ST restriction rules. In addition, in 14-18 year old Czech adolescents, it was revealed that compliance with only the ST recommendations of the 24-h movement guidelines was associated with reduced odds of a high BMI z-score (odds ratio = 0.38, 95% confidence interval: 0.17-0.89), excess fat mass % (OR = 0.34, 95% CI: 0.13-0.93), and visceral adipose tissue (OR = 0.27, 95% CI: 0.10-0.74). Previous studies have shown that preschool children have a pedometer-measured daily number of steps comparable to those of school-aged children and adolescents, and the closest association in parent-child daily steps among families with preschool and older children and adolescents. However, daily sedentary time and RST have not yet been compared across the age spectrum of children in the Czech Republic with regard to parental sedentary behavior. Similar to the results from Australia, the United States, or Canada, the lowest levels of sedentary time and RST were found in preschool children compared to older children and adolescents, as well as a positive association between age and RST in children. Consistent with previous studies, different weekday-weekend patterns of sedentary time and RST in older children and adolescents have been demonstrated, but no significant gender-related differences in sedentary time and RST in older children and adolescents were revealed. Following a comprehensive study of the sedentary behavior of adolescents, which reveals that adolescents are the most sedentary pediatric population and the one most involved in RST, we add that this finding also applies to the parents of adolescent offspring. However, the RST of children and adolescents has undergone a significant change in the last 20 years; traditional television/video viewing has been replaced by computer- or video game-based screen time and social media-based screen time. Following these changes in children's RST patterns, many important family, home, and neighborhood environment correlates have been identified. The presence of a computer, television, or video game system in the bedroom is positively associated with children's RST, as is the number of computers, televisions, or game consoles in the household.
A lower frequency of family meals and eating meals in front of the television are associated with longer ST in adolescents. On the other hand, for example, more outdoor play, the application of parental television viewing rules, or living in neighborhoods with more walking infrastructure, services, and parks has been associated with shorter RST in children. However, a limitation of these studies is the non-inclusion of children's sex in the analysis; in one study, estimates were similar for boys and girls, although some associations were no longer significant in the sex-stratified models. In mothers of preschool children, there is a close association between their own television viewing and the screen media use of their offspring; in addition, maternal distress or depression and less cognitive stimulation in the home environment are associated with longer screen time for preschoolers. Although we did not focus on detecting correlates other than parental sedentary time in this study, it is important to point out the above-mentioned correlates of RST, especially in families with preschool children, as the increase in sedentary time (or RST) between four and 16 years of age may be more than 100 min for daughters and 120 min for sons. RST in children and adolescents is associated with adiposity [14] and metabolic syndrome, and this association often persists after adjustment for physical activity and diet. Although the adiposity of participants in families was not evaluated, the significant positive relationship between BMI and RST in daughters, sons, and mothers supports the above findings. It is clear that every child engages in some RST each day, but for the primary prevention of obesity, it may be important to promote sedentary habits in short intervals and to limit prolonged time spent in front of a device screen/display. A representative set of Czech families, the recording of sedentary behavior-related items as continuous variables, and the relatively strict criteria for inclusion in the cross-sectional study are the strengths of the study. However, it is necessary to acknowledge the limitations (the potential effect of social desirability and the degree of conscientiousness in the proxy-recording of anthropometric and ST-related variables) that arise from the non-instrumental way in which the participants' sedentary behavior was recorded. Although the variables related to sedentary behavior and RST were clearly differentiated, possible multitasking was not observed in detail (i.e., simultaneous use of multiple screen devices or different activities on one screen at the same time). However, the methodology was applied uniformly for all the participating families, which allowed a cross-sectional comparison of sedentary behavior and screen time. The associations of parent-child sedentary behavior and RST patterns could have been influenced by other lifestyle factors, such as parents' occupation and education or family eating habits, which were not monitored in the present study. Future studies should, therefore, account for these factors too. New opportunities for detecting parent-child gender-separated patterns of behavior are opened up by 24-h monitoring based on multifunctional devices, which enable the accurate capturing of sedentary behavior and movement activities with regard to speed or intermittent execution, localization, or joint implementation.
Conclusions
Completely different weekday vs. weekend patterns of sedentary time and daily RST in all family members, the specifics of sedentary behavior in families with preschoolers, and the closest parent-child RST associations on weekends should be respected and taken into account when creating and applying programs to reduce sedentary behavior. A significant increase in daily RST on weekends compared to school days for daughters and sons of all ages, which in schoolchildren takes the place of the sedentary time accumulated on school days, is a critical indicator of a sedentary lifestyle. Thus, weekends provide a suitable target for the implementation of programs aimed at reducing excessive RST, involving not only children but preferably parent-child dyads.
Author Contributions: D.S. obtained funding, performed the data collection, and processed all the data analysis. E.S. validated the methodology, conceptualized and designed the study, drafted the initial manuscript, and coordinated the writing of the manuscript. D.S. reviewed and edited the text, table, and figures of the manuscript. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The data are owned by Palacký University Olomouc and are not to be made freely publicly available during the project investigation, but they are available from the corresponding author E.S. upon reasonable request.
Research on cooperative innovation strategy of multi-agent enterprises considering knowledge innovation and environmental social responsibility
Based on the context of the enterprise green innovation ecosystem, this paper discusses the decision-making of knowledge innovation and social environmental responsibility in the multi-agent enterprise R&D innovation system composed of core enterprises and satellite enterprises, and it provides a new perspective for comprehensively improving enterprise innovation ability. Using the differential game method and considering the joint influence of knowledge innovation and environmental social responsibility, this paper analyzes the cooperative innovation of multi-agent enterprises under a dynamic framework and discusses the decision-making process and optimal returns of core enterprises and satellite enterprises under centralized decision, decentralized decision and Stackelberg master-slave game modes. The conclusions are as follows: when the multi-agent enterprise R&D and innovation system moderately fulfills its social environmental responsibilities, its income increases significantly, and the green innovation capability of the green innovation ecosystem is enhanced effectively; when the profit ratio of core enterprises is controlled above 1/3 and the government's subsidy rate for satellite enterprises is controlled below 2/3, the green innovation ecosystem is in a balanced state and the innovation initiative of multi-agent enterprises reaches a high level; under centralized decision making, the revenue proportion of core enterprises decreases, but the overall revenue of the R&D innovation system reaches the Pareto optimum. Finally, the differential game decision-making process is analyzed with an example to verify the validity of the model conclusions, which provides a scientific basis for the knowledge innovation decisions of the multi-agent enterprise R&D innovation system in the green innovation ecosystem and accelerates the process of carbon neutrality.
I. INTRODUCTION
"Carbon peak and carbon neutral" has become a major development strategy for most countries around the world. According to the Intergovernmental Panel on Climate Change, the world must become carbon neutral by 2050 to meet the Paris Agreement's 1.5°C temperature target. At present, 137 countries have proposed specific targets to achieve carbon neutrality. Among them, Sweden proposed to become carbon neutral by 2045, while others such as the European Union, the United States and the United Kingdom proposed to become carbon neutral by 2050. Under the "dual carbon" strategic goal, the new mission for enterprises is to take the initiative in undertaking green technology innovation and environmental social responsibility, and the key approach to promoting high-quality development of enterprises is to realize the synergistic coupling of the two. The essence of innovation is knowledge innovation, and knowledge innovation is a dynamic development process of knowledge acquisition, storage and sharing. From the perspective of the enterprise innovation ecosystem, all main enterprises actively participate in innovation interaction and create common value beyond their individual values, and the form of innovation is changing from individual innovation to network collaborative innovation. Most of the world's industry leaders, such as Apple, IBM, Google, Microsoft, BYD, BOE and GREE, have established or are in the process of establishing enterprise innovation ecosystems with themselves as the core.
These ecosystems help core enterprises to complete knowledge acquisition and knowledge storage and to share knowledge with other members, thus improving the efficiency of the ecosystem's knowledge innovation research and development and creating a good environmental social responsibility effect. Some large European and American companies took the lead in the wave of the social responsibility movement. They not only demanded the implementation of social responsibility standards internally, but also required suppliers and exporters to assume social responsibility. Enterprises such as Avon, Best Buy, McDonald's, and New Balance have made it clear that they will terminate their relationships with suppliers that fail to fulfill their social responsibilities. Meanwhile, multi-core green innovation ecosystems jointly constructed by core enterprises and satellite enterprises, such as the innovation ecosystem in the field of new energy established by Chongqing Changan Automobile Company Limited, Chongqing Changan New Energy Automobile Co., LTD, Tsinghua University and other colleges and universities, have better promoted the development of new energy vehicles and the construction of a green ecological environment. With the rapid increase of enterprise economic benefits, environmental problems have become increasingly prominent, and the environmental and social responsibilities undertaken by enterprises are particularly significant, as they directly affect the social image, value enhancement and sustainable development of enterprises. As enterprises face increasing pressure from environmental protection, the status of ecological innovation in enterprise strategy continues to rise, but the input required for knowledge innovation directly increases the cost of enterprises, weakening their enthusiasm and possibly even leading them to ignore environmental effects in production activities. In this case, the subsidies and incentive measures formulated by the government constitute the guidance and support of the innovation ecosystem and play a key role in enterprise knowledge innovation and green environmental social responsibility. The heterogeneity of enterprises' knowledge innovation is an important reason for differences in industry competitiveness. Enterprises that improve their knowledge innovation level by fulfilling environmental social responsibility may be more likely to gain competitive advantages. In addition, the process of carbon neutrality provides a good opportunity for enterprises to fulfill their environmental and social responsibilities and improve their level of knowledge innovation. There is a balanced relationship between knowledge innovation and environmental social responsibility. If an enterprise overperforms or ignores its environmental social responsibility, this balanced relationship will be broken, which will have a serious negative impact on the sustainable development of the enterprise. Then, from the perspective of the innovation ecosystem, what are the decision-making situations of knowledge innovation and environmental social responsibility in the multi-agent R&D innovation system composed of core enterprises and satellite enterprises? What is the overall optimal revenue of the multi-agent enterprise R&D innovation system, and what is the optimal revenue of each enterprise under different decision-making conditions?
What are the effects on the optimal returns of core enterprises and satellite enterprises after the introduction of government subsidies for knowledge innovation and incentives for environmental effects? Is the centralized decision the optimal choice? All these questions are worth considering.
1) THE ROLE OF ENTERPRISES IN THE INNOVATION ECOSYSTEM
Combining the characteristics of green innovation and the innovation ecosystem, the green innovation ecosystem aims at improving green innovation ability and promoting the emergence of green innovation. It is a complex system of symbiotic competition and dynamic evolution formed between innovation subjects and the innovation environment through the flow connection of knowledge and other elements. The theory of core competence emphasizes that the development of enterprises needs the support of core technical competence, so a series of R&D and innovation activities are needed. The resource-based theory points out that enterprises maintain sustainable development through the acquisition and rational use of specific resources. The enterprises in the innovation ecosystem are always committed to technological innovation and continue to maintain the absolute dominance of product innovation. S. Nambisan and M. Sawhney proposed that the core enterprise in the ecosystem is the enterprise occupying the core position of the system's strategy and resources, and it usually plays a leading and decision-making role in the system, with the goal of coordinating development with other enterprises. L. J. Anne believed that core enterprises are also the guides of the innovation ecosystem, leading enterprises in the ecosystem to carry out collaborative R&D and innovation of core technologies. B. Clarysse, M. Wright, and J. Brunee studied and showed that core enterprises will also coordinate and guide members' technological research and development and social responsibility to achieve governance for the healthy development of the ecosystem. C. Z. Zhang, X. R. Jiang, and H. B. Xu proposed that an innovation ecosystem centered on non-core enterprises with an independent organizational structure embedded in modules can also become an important channel for knowledge and technology sharing. R. Adner believed that good coordination and adaptation between the enterprise and other members is the key to the healthy development of the ecosystem. M. G. Jacobides, C. Cennamo, and A. Gawer further proposed that the main enterprise adjusts its position according to the ecosystem strategy and constantly adjusts with the evolution of the ecosystem.
2) SYMBIOTIC EVOLUTION AND GAME RELATIONSHIP BETWEEN THE ENTERPRISES AND OTHER SUBJECTS
According to symbiotic evolution theory, only by establishing a sustainable relationship of complementary resources with other species can a species maintain a good ecological niche and promote the development of the community to a better situation. Game theory studies the interaction between two or more subjects participating in decision-making. When the decision-making of any party is not independent of the strategies of other subjects, the decision-making and equilibrium process of the participants are analyzed.
Under the premise of symbiosis and win-win cooperation, the enterprises cannot create the optimal income alone, but they share complementary technologies, cutting-edge information and core capabilities with each other to improve the technical level through R&D and innovation, and then realize Pareto optimization. K. P. Hung and C. Chou emphasized that the main enterprises and other subjects, as different stakeholders, realize risk sharing and benefit sharing through the innovation ecosystem, thus realizing a non-zero-sum game of win-win cooperation. R. Adner and R. Kapoor took nine technological changes in the global semiconductor industry as the research background, analyzed and showed that the co-competition and symbiosis relationship between enterprises has a positive impact on the vitality of the ecosystem, and pointed out that the ecological niche of the enterprises is interdependent with the external environment. J. Z. Yang and X. D. Li believed that resource sharing among the main bodies in the ecosystem can form a synergistic effect of the innovation system through the optimal allocation of resources and an effective division of labor, so as to enhance overall income and form core competitiveness. A. Gawer and M. A. Cusumano concluded that core enterprises have a competitive game relationship with upstream and downstream enterprises or competitors in the same industry, so the position of dominant enterprises is easily replaced. A. Mantovani and F. Ruiz-Aliseda further analyzed the joint R&D innovation system constructed by the core enterprises and upstream and downstream enterprises, which forms a new competitive equilibrium state when facing challenges from competitors.
3) RESEARCH ON COLLABORATIVE INNOVATION BETWEEN CORE ENTERPRISES AND SATELLITE ENTERPRISES
Core enterprises usually refer to the leading enterprises with a good development scale based on resource endowment that grasp cutting-edge information, while satellite enterprises refer to enterprises that core enterprises may split off from some of their departments as part of a strategic arrangement. R. Zhang, D. Ling and X. D. Chen proposed that symbiotic evolution is the cornerstone of sustainable development and common prosperity between dual-agent enterprises, providing ideas for the development of the innovation ecosystem. A. Romano, G. Passiante, et al. studied and showed that in the innovation ecosystem, the development of enterprises benefits from the active interaction of knowledge transfer, absorption and innovation among relevant enterprises. Therefore, satellite enterprises can rely on core enterprises to develop core technologies and improve their overall competitiveness. D. L. Wu and X. Jin took Yutong and Beijing Automobile Co., LTD as research objects to analyze the nature and influence mechanism of predation symbiosis and mutualism in the innovation ecosystem led by core enterprises. L. Yan and Q. Q. Zhang analyzed the core group of entrepreneurial enterprises and supporting enterprises and discussed the symbiotic evolution strategy mode of the green innovation ecosystem.
B. RESEARCH ON THE KNOWLEDGE INNOVATION AND ENVIRONMENTAL SOCIAL RESPONSIBILITY EFFECT OF CORE ENTERPRISES
Generally speaking, there are one or more core enterprises in the innovation ecosystem, which form a connection and cooperation relationship with other subjects
through complementary resource dependence. The deeper the cooperation, the more efficient the knowledge flow. Subjects with specific network connections in the innovation ecosystem exchange knowledge resources, which forms the basis of continuous innovation iteration. Internal enterprises gain benefits in innovation iteration and can obtain key technical support. B. Cassiman and G. Valentini believed that knowledge resources flow bidirectionally in the innovation ecosystem, and that the main links are the acquisition, assimilation and integration of external knowledge, as well as the outflow, reorganization and business model reconstruction of internal knowledge. In contrast to the traditional innovation collaboration mode, E. J. Malecki proposed that knowledge flow within the innovation ecosystem is very active and knowledge penetration among subjects is particularly obvious, but knowledge penetration is not equal to an exchange relationship. S. Vida empirically analyzed the influencing factors of enterprise knowledge innovation by taking 150 KIBS as research objects. L. Leydesdorff, C. S. Wagner, et al. proposed that enterprises, as the subject of innovation, take innovation and value-added as the fundamental goal and realize knowledge innovation through the management of subsystems such as knowledge sharing, absorption, transformation, application and integration. Currently, there is abundant research on knowledge innovation in the innovation ecosystem in academic circles. M. H. Boisot studied and showed that the characteristics of knowledge itself and the possibility of knowledge acceptance affect the process of knowledge flow. M. L. Ounjian and E. B. Carne believed that the characteristics of the technology, technology recipient, technology provider and communication channel have a direct impact on knowledge flow. V. D. Michel, M. Cloodt, and G. Romme studied the connection between value formation and knowledge flow in the innovation ecosystem and concluded that efficient knowledge flow can significantly promote value formation. F. Koessler proposed a general model of strategic knowledge sharing in finite Bayesian games when uncertainty was ignored. Taking the community as the research object, Y. M. Li and J. H. Li introduced an incentive mechanism into knowledge sharing and concluded that the incentive mechanism significantly improved knowledge sharing ability. Research on the effect of enterprises' fulfillment of environmental social responsibility has been carried out by scholars from different perspectives. X. Luo and S. Du concluded that environmental social responsibility effectively alleviates the information asymmetry between enterprises and stakeholders, thus achieving good communication in the production process, reducing risks in the process of technological innovation, and further stimulating enterprises' R&D investment. Through empirical analysis, X. Xie, J. Huo, and G. Qi proposed that a good social green image of enterprises is conducive to improving the overall income and has an important impact on the core competitiveness of enterprises. K. S. Bardos, M. Ertugrul and L. S.
Gao further proposed that enterprise social responsibility and enterprise value are synergistic, and the realization of value creation also positively promotes the effect of social responsibility. As an important guide of the innovation ecosystem, the government usually makes corresponding policies to guide the core enterprises to carry out green technology innovation with the help of market signals. S. Caravella and F. Crespi proposed that market-oriented environmental regulations promote the green process innovation and product innovation of enterprises. M. X. Yang and J. Li put forward that environmental sustainability strategies formulated by enterprises for environmental issues are more important than sustainable economic development strategies for improving the corporate social responsibility image. X. Chen and J. H. Ha took Shanghai and Shenzhen A-share listed enterprises as the research object and found that the sharing of corporate social responsibility has a positive effect on investment in technological innovation research and development, and that technological innovation has a mediating effect in the relationship between corporate social responsibility and value creation. Q. Liu and L. Chen analyzed the strategic choice of corporate social responsibility and technological innovation by using an oligopoly competition game model and pointed out that, with limited resources, enterprises' excessive performance of social responsibility would inhibit technological innovation, so government guidance and supervision play an important role.
C. LITERATURE REVIEW AND RESEARCH FRAMEWORK
Based on the existing research, it can be concluded that the innovation ecosystem provides a major theoretical basis for the study of core enterprise knowledge innovation and the fulfillment of social environmental responsibility. Previous research has focused on the innovation activities and the fulfillment of environmental and social responsibilities of single core enterprises, but research on knowledge flow in the innovation ecosystem constructed by multi-core enterprises has been insufficient, and scholars have not paid attention to the decision-making problems of multi-core enterprises when comprehensively considering knowledge innovation and the fulfillment of social and environmental responsibilities in the innovation ecosystem. In fact, the relationship between the two is not simply complementary or alternative. Based on this, this paper mainly considers the following issues: In the context of carbon neutrality, how can the multi-agent R&D innovation system constructed by core enterprises and satellite enterprises balance the effects of knowledge innovation and environmental social responsibility, and then maximize the overall benefits? How does the knowledge innovation R&D system achieve the development goal of symbiotic evolution of the innovation ecosystem under the constraint of environmental social responsibility? If the satellite enterprise is dependent on or follows the core enterprise, can the revenue of the innovation R&D system of both sides reach the optimal state? Can the optimal returns of both the core enterprise and the satellite enterprise be improved by making decisions independently? In summary, this paper comprehensively considers the effects of knowledge innovation and environmental social responsibility.
Based on the theories of core competence, symbiotic evolution and game theory, this paper constructs differential game equations for knowledge innovation of the R&D innovation system under three types of situations, and it abstractly describes the enterprise innovation ecosystem. In addition, the optimal revenues of core enterprises and satellite enterprises under the three decisions are compared with the overall revenue, and then the optimal collaborative innovation mode is selected. Finally, the influence factors of the three scenarios are simulated and analyzed, and specific countermeasures and suggestions are put forward based on the actual situation, in order to provide some reference for selecting knowledge innovation strategies in the multi-core enterprise-led innovation ecosystem.
A. PROBLEM DESCRIPTION
The enterprise green innovation ecosystem is a complex and large system, which is composed of the government, enterprises, universities and institutes, intermediary institutions and other participants under a green economic ecological mode with low pollution, low energy consumption and low emissions as the standard, and which aims to achieve ecological environmental protection. Assume that the research object of this paper is the knowledge innovation R&D system of the enterprise green innovation ecosystem, composed of core enterprises and satellite enterprises that are completely rational and have symmetric information, as shown in Fig. 1. There is a synergistic connection between the core enterprises, with independent behavior and decision-making, and the satellite enterprises. With the goal of maximizing profits, the efficient flow of knowledge can be achieved through collaborative innovation, knowledge capture and sharing, and the R&D level can be continuously improved. Core enterprises have absolute leadership, rich resources and a good social reputation in the innovation ecosystem. In the process of green technology innovation research and development, there will be a spillover effect of learning and imitation by satellite enterprises. Under the carbon neutral strategic goal, the government will subsidize knowledge innovation and reward the fulfillment of social and environmental responsibilities. At this time, the enthusiasm of core enterprises and satellite enterprises will be greatly increased. Therefore, scenario one is that the R&D innovation system composed of core enterprises and satellite enterprises pursues the maximization of overall interests and chooses collaborative knowledge innovation, which is in line with centralized decision-making. The second scenario is that the core enterprise and the satellite enterprise are in an equal position and make decisions independently but together promote the development of the core system of knowledge innovation in the green innovation ecosystem, that is, it corresponds to decentralized decision-making. The third scenario is that the satellite enterprise will follow the core enterprise in conducting green technology research and development, which corresponds to the Stackelberg master-slave game decision.
B. MODEL CONSTRUCTION
On the premise of considering knowledge innovation and social environmental responsibility, this paper uses three differential game methods, namely centralized decision making, decentralized decision making and the Stackelberg master-slave game, to study the different decision-making behaviors of core enterprises and satellite enterprises. The parameters involved in the three differential game models and their meanings are shown in Tab. 1.
The parameters in Tab. 1 include the knowledge innovation research and development level of the core system of the enterprise green innovation ecosystem at moment t, the environmental effect of the core system of the enterprise green innovation ecosystem in fulfilling its social environmental responsibility at moment t, and the (positive) discount rate over the period from 0 to t. Combined with the principle of marginal effect, the cost of knowledge innovation R&D and the cost of fulfilling social environmental responsibility have the same convexity characteristic. This paper uses a quadratic function to describe the cost. At time t, the cost functions of the core enterprise and the satellite enterprise for knowledge innovation R&D and for fulfilling social environmental responsibility are as follows: In order to maintain the core competitiveness of the enterprise green innovation ecosystem, core enterprises and satellite enterprises tend to acquire frontier knowledge, and the core ways of acquiring frontier knowledge are knowledge capture and knowledge sharing. At the same time, in consideration of social reputation and of obtaining more resources through the effect of social environmental responsibility, both core enterprises and satellite enterprises attach more importance to fulfilling social environmental responsibility. In this process, the knowledge level and the social environmental responsibility effect in the R&D system composed of core enterprises and satellite enterprises are in a dynamic process of change. Therefore, the differential equation of collaborative R&D of the decision-making bodies is:
A. CENTRALIZED DECISION MAKING
Centralized decision-making emphasizes the profit maximization of the core enterprise and the satellite enterprise as a whole, that is, the R&D innovation system will cooperate to determine the healthy development level of the enterprise green innovation ecosystem, with the overall profit maximization of both parties as the goal, so as to improve the core competitiveness of the whole system. At this point, the decision-making objective is:
Theorem 1: Under centralized decision-making, the equilibrium results give the optimal equilibrium of the core enterprises and the optimal equilibrium of the satellite enterprises.
Proof: The dynamic random control method is adopted to solve the problem. After time t, the value function of the overall long-term profit of the core enterprises and satellite enterprises satisfies the Hamilton-Jacobi-Bellman (HJB) equation, and the optimal strategies of the core enterprise and the satellite enterprise can be obtained from the first-order conditions. Substituting these back into the HJB equation shows that its solution is a linear function of T and S; letting the value function take this linear form, where a_1, b_1 and c_1 are all constants, the corresponding coefficient equations can be obtained. Substituting back yields the optimal equilibrium strategies, and by substituting these into the state equations, the optimal evolution trajectories of the knowledge innovation level and the social environmental responsibility effect of the enterprise green innovation ecosystem can be obtained. Finally, the optimal profit functions of the core enterprises and the satellite enterprises in the enterprise green innovation ecosystem under centralized decision making, and then the total profit of the system, can be obtained. End of proof.
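The model's own equations did not survive extraction, so, purely as an illustration of the structure the proof describes (quadratic effort costs, linear state dynamics, and an HJB equation whose value function is linear in the two states), a generic sketch is given below. All symbols here (E, R, mu, nu, alpha, beta, delta, sigma, rho, Pi, a_1, b_1, c_1 as placeholders) are assumptions for illustration and are not the paper's exact notation or model.

\[
C_i = \tfrac{1}{2}\mu_i E_i^2 + \tfrac{1}{2}\nu_i R_i^2 \;\;(i = M, S), \qquad
\dot{T} = \alpha_M E_M + \alpha_S E_S - \delta T, \qquad
\dot{S} = \beta_M R_M + \beta_S R_S - \sigma S,
\]
\[
\rho\, V(T, S) = \max_{E_M, E_S, R_M, R_S}\Big\{ \Pi(T, S) - C_M - C_S + V_T\,\dot{T} + V_S\,\dot{S} \Big\}, \qquad
V(T, S) = a_1 T + b_1 S + c_1 .
\]

In such a sketch, a payoff Pi that is linear in the states makes the linear trial value function consistent: the first-order conditions then give constant effort levels, which is the step the proof above relies on when it posits a solution linear in T and S.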
B. DECENTRALIZED DECISION MAKING
Decentralized decision-making emphasizes that the core enterprises and satellite enterprises in the enterprise green innovation ecosystem independently and simultaneously choose their respective strategies to maximize their own objective functions. At this point, their respective objective functions are:
Theorem 2: The optimal strategies of the core enterprise and the satellite enterprise in the case of independent decision-making are as follows.
Proof: The dynamic random control method is adopted to solve the problem, with the return function of enterprise i denoted accordingly. According to the first-order conditions, the optimal strategies of the core enterprises and satellite enterprises are obtained. Substituting these back, it can be deduced that the solution of the HJB equation is a linear function of T and S, of the form V(T, S) = aT + bS + c, where a, b and c are constants. Substituting back yields the optimal equilibrium strategies, and then the optimal evolution trajectories of the knowledge innovation level and the environmental social responsibility effect of the enterprise green innovation ecosystem can be obtained. Finally, the optimal revenue functions of the core enterprises and satellite enterprises in the enterprise green innovation ecosystem under decentralized decision-making, and then the total profit of the system, are obtained. End of proof.
C. STACKELBERG MASTER-SLAVE GAME
Core enterprises have absolute leadership in the enterprise green innovation ecosystem, and they also need to pay more costs to maintain the healthy development of the system. In the Stackelberg master-slave game scenario, in order to encourage the cooperative innovation of satellite enterprises, the core enterprise will bear a certain proportion of the R&D cost of knowledge innovation and of the cost of green environmental responsibility, with the sharing proportion set accordingly.
Theorem 3: In the Stackelberg master-slave game between core enterprises and satellite enterprises, the optimal decisions of both parties are as follows. The relevant objective functions are convex, so the optimal strategies can be obtained from the first-order conditions. Substituting these into the corresponding equations yields the optimal strategies of both parties and the optimal cost-sharing ratio, and substituting each value back gives the optimal revenue functions of the core enterprise and the satellite enterprise, respectively, as well as the optimal total profit of the R&D innovation system of the enterprise green innovation ecosystem under the Stackelberg master-slave game.
Corollary 1: In the case of centralized decision making, core enterprises and satellite enterprises make the largest effort in knowledge innovation research and development and in fulfilling environmental social responsibility, and the optimal return is higher than under the other two modes.
Corollary 2: When the profit ratio of core enterprises equals 1/3, the efforts of satellite enterprises are the same under the three game decisions. When the profit ratio of the core enterprise lies in (1/3, 1), the core enterprise will share costs with the satellite enterprise, and the satellite enterprise will make more effort in the Stackelberg master-slave game than under decentralized decision-making.
Proposition 2: When the government subsidy rate for satellite enterprises lies in (0, 2/3), the core enterprises and satellite enterprises in the enterprise green innovation ecosystem tend to cooperate. However, when the government subsidy rate lies in (2/3, 1), that is, when the government excessively encourages enterprises to fulfill their environmental social responsibilities, this has a certain negative impact on their own knowledge innovation research and development and on the overall benefits. At this point, the dual-agent enterprises adopt the Nash non-cooperative game or the Stackelberg master-slave game, but they do not tend to cooperate. Under centralized decision-making, the higher the government's subsidy rate for satellite enterprises, the higher the enthusiasm of satellite enterprises, and the income of core enterprises increases indirectly. As a result, the income of both dual-subject enterprises increases, but the proportion of the income of core enterprises decreases. The proportion of revenue obtained by satellite enterprises follows from the three decision proof processes. However, when the subsidy rate is higher than a certain range, the dual-agent enterprises are not inclined to collaborate in innovation.
Corollary 4: The core enterprise sharing a certain proportion of costs with the satellite enterprise can improve the overall revenue of the multi-agent enterprise cooperative innovation system. Under centralized decision-making, as the core enterprises and satellite enterprises make the largest efforts in knowledge innovation R&D and in fulfilling environmental social responsibility, the multi-agent enterprises in the green innovation ecosystem achieve the highest level of knowledge innovation R&D and of income from fulfilling environmental social responsibility.
Proposition 4: Under the three decision scenarios, the comparative analysis of the optimal returns of core enterprises and satellite enterprises, and of the overall returns of the multi-agent enterprise innovation system of the enterprise green innovation ecosystem, is as follows.
Proof: Combining Proposition 1, in the Stackelberg master-slave game the optimal returns of core firms and satellite firms and the overall returns of the multi-agent firms' cooperative knowledge innovation system are both higher than the corresponding values in the Nash non-cooperative game, which is a Pareto improvement.
Corollary 6: Under centralized decision making, the overall benefit of the multi-agent enterprise knowledge innovation system is the highest and is Pareto optimal.
V. CASE ANALYSIS
It can be seen from the above three decision-making situations that the optimal strategy, the optimal income, and the additional income brought to the knowledge innovation R&D system by the fulfillment of environmental social responsibility of the core enterprise and the satellite enterprise all depend on the parameter settings in the model. Referring to the literature and combining it with the actual situation, the parameter values are set; by substituting the parameters into Theorems 1, 2 and 3, the solutions are obtained. It can be concluded from Fig. 2-3 that, as the core subject of the enterprise green innovation ecosystem, the knowledge innovation level and the social environmental responsibility effect of the knowledge innovation system composed of core enterprises and satellite enterprises keep a positive correlation with time and eventually stabilize.
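To illustrate the qualitative behavior just described for Fig. 2-3, here is a minimal simulation sketch: with constant equilibrium efforts, linear state dynamics of the form dT/dt = alpha_M*E_M + alpha_S*E_S - delta*T rise and then stabilize at a steady state. The parameter values (alpha_M, alpha_S, delta) and the efforts (E_M, E_S) are hypothetical placeholders, not the paper's calibrated values.

```python
import numpy as np
import matplotlib.pyplot as plt

alpha_M, alpha_S, delta = 0.6, 0.4, 0.3      # assumed effectiveness and decay parameters
E_M, E_S = 1.0, 0.8                           # assumed constant equilibrium efforts
T0, dt, horizon = 0.0, 0.01, 20.0

t = np.arange(0.0, horizon, dt)
T = np.empty_like(t)
T[0] = T0
for k in range(1, len(t)):                    # simple Euler integration of the linear ODE
    T[k] = T[k-1] + dt * (alpha_M * E_M + alpha_S * E_S - delta * T[k-1])

steady_state = (alpha_M * E_M + alpha_S * E_S) / delta   # long-run level implied by the ODE
plt.plot(t, T, label="knowledge innovation level T(t)")
plt.axhline(steady_state, linestyle="--", label="steady state")
plt.xlabel("time")
plt.legend()
plt.show()
```

The same construction applies to the environmental-effect state: any linear accumulation-minus-decay dynamic of this kind converges monotonically to its steady state, which is the "positive correlation with time that eventually stabilizes" pattern reported for Fig. 2-3.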
By sharing costs with satellite enterprises, core enterprises can improve the enthusiasm of satellite enterprises and directly enhance the knowledge innovation research and development level and the social environmental responsibility effect of the innovation system. In the three decision-making situations, the optimal returns of both parties are highest under the centralized decision, followed by the Stackelberg master-slave game, and lowest under the decentralized decision, that is, T_c > T_s > T_n and S_c > S_s > S_n (the subscripts c, s and n denoting the centralized, Stackelberg and decentralized/Nash decisions, respectively), which is consistent with Proposition 3. It can be concluded from Fig. 3-4 that the optimal returns of both core enterprises and satellite enterprises are positively correlated with time and tend to be stable after reaching equilibrium. The core enterprises share costs with satellite enterprises, which significantly promotes the increase of revenue of both parties, that is, the revenues of both parties under the Stackelberg game exceed those under decentralized decision-making. As can be seen from Fig. 6, in the knowledge innovation R&D system composed of the dual-agent enterprises, the overall income also shows a positive correlation with time and then tends to be stable, and the overall income is the highest under the centralized decision, that is, V_c > V_s > V_n, which is consistent with Proposition 4. Figure 7 shows that with the continuous increase of the government subsidy rate to satellite enterprises, the total income of the knowledge innovation R&D system increases, but the proportion of income of core enterprises decreases, which is consistent with Proposition 2. Figures 8 and 9 show that the technical level of the collaborative knowledge innovation R&D system constructed by core enterprises and satellite enterprises is positively correlated with government subsidies and rewards. Within a reasonable range, the higher the government subsidies and incentives for the R&D innovation system, the greater the increase in the technological level and the social environmental effects of the R&D innovation system. It can be seen from Fig. 10 that the knowledge innovation level and the social environmental effect of the R&D innovation system together improve the overall income. This shows that, in order to accelerate the completion of the "dual carbon" strategic goal, the government's active guidance of enterprises not only promotes enterprises to improve their knowledge innovation level, but also strengthens their motivation to fulfill their environmental social responsibilities. After receiving government subsidies and rewards, enterprises will give full play to the advantages of the ecosystem, actively obtain cutting-edge information and high-quality resources from the outside world, and constantly improve the green innovation level of the knowledge innovation R&D system. In addition, with the gradual enhancement of social environmental effects, enterprises will pay more attention to their own virtuous image and reputation, and they will constantly enhance their initiative to fulfill social environmental responsibilities, so as to form a virtuous circle between the technological level and the social environmental effects of the knowledge innovation research and development system.
VI. CONCLUSION AND IMPLICATIONS
From the perspective of the innovation ecosystem, this paper conducts research based on core competence theory, resource-based theory, symbiotic evolution, game theory, etc., embeds the theory of corporate environmental social responsibility into the knowledge innovation R&D system constructed by multi-agent enterprises in the ecosystem, and uses the differential game method.
The paper analyzes the benefits of knowledge innovation R&D and of the fulfillment of environmental and social responsibilities for core enterprises and satellite enterprises. Taking into account both the knowledge innovation R&D effects and the environmental social responsibility effects, companies may face the "innovation paradox", which will have a negative impact on the overall revenue of the entire R&D system. The green innovation ecosystem of enterprises is dominated by core enterprises and coordinated by multiple entities, forming a situation of symbiotic evolution and win-win cooperation under limited resources. After incorporating environmental and social responsibility into the green technology R&D system, the ecosystem has more uncertain factors, and achieving a balance between improving knowledge innovation R&D capabilities and fulfilling environmental and social responsibility has become an important decision-making issue for the main enterprises. This paper constructs a dynamic decision-making problem under the three situations of centralized decision-making, decentralized decision-making and the core-enterprise-led Stackelberg master-slave game, conducts a comprehensive comparative analysis, and then carries out a simulation analysis, obtaining the following conclusions: First, collaborative innovation of the knowledge innovation R&D system under centralized decision-making is the optimal path for multi-agent enterprises to improve their knowledge innovation level and sustainable development. Core enterprises and satellite enterprises share resources, complement each other's information, and actively participate in knowledge innovation research and development, thus enhancing the momentum and vitality of the ecosystem. In the process of improving the level of knowledge innovation, enterprises that actively fulfill environmental and social responsibilities will strengthen their virtuous image and reputation, which in turn increases the convenience of obtaining high-quality resources. Therefore, enterprises can continue to maintain a healthy development trend, actively maintain the network relationships in the ecosystem, engage other subjects in knowledge innovation research and development, and finally achieve symbiotic development. Second, the optimal benefits of core companies and satellite companies and the overall benefits of the knowledge innovation R&D system under decentralized decision-making are the lowest, indicating that decentralized decision-making does not achieve the optimal collaboration mode for the multi-agent R&D system in the ecosystem. Collaboration between core enterprises and satellite enterprises to realize the coupling and balance of green technology research and development and environmental social responsibility is the cornerstone of the ecosystem. Under the constraints of the strategic goal of "carbon peaking and carbon neutrality", enterprises face multiple pressures in knowledge innovation research and development, and building an innovation ecosystem development model has become an inevitable choice. The higher the government's subsidy for the knowledge innovation R&D of satellite enterprises, the higher the proportion of revenue obtained by satellite enterprises.
Although the proportion of core enterprises' income decreases, it does not harm the interests of the green technology research and development system. This shows that there is a marginal increase in the additional benefits generated by centralized decision-making in the knowledge innovation R&D system, and the benefits generated by the same cost and resources are higher than those generated by decentralized decision-making. Therefore, core enterprises should give full play to their leading and guiding roles, establish an efficient collaborative cooperation model with satellite enterprises, and jointly improve the level of knowledge innovation research and development in the ecosystem, and the government should adopt reasonable subsidies to increase the enthusiasm of the research and development system. Third, the government, as an important main body of the green innovation ecosystem of enterprises, provides necessary subsidies and support for enterprises to improve their knowledge innovation R&D level and fulfill their environmental and social responsibilities. This incentive behavior significantly increases the enthusiasm of enterprises to participate and promote the ecosystem, which achieves the key goal of symbiotic evolution. Based on the decision-making model and simulation results, it is found that the knowledge innovation R&D system composed of core enterprises and satellite enterprises has the highest degree of effort in technological innovation and fulfilling environmental and social responsibilities under the centralized decision-making situation. Within a reasonable level of effort, the performance of environmental and social responsibilities by enterprises can positively promote the balance of the knowledge innovation R&D system, and at the same time promote the knowledge innovation R&D system to achieve optimal returns and Pareto optimality. However, when the ecosystem is out of balance, the excessive support of core enterprises will directly lead to the "free-rider" behavior of satellite enterprises. At this time, the income of the knowledge innovation R&D system increases significantly, but the income of core enterprises does not reach one-third of the overall income, which reflects the phenomenon of "innovation paradox" of core enterprises. When the government subsidizes satellite companies excessively, it boosts the parasitic innovation of satellite companies. At this time, the knowledge innovation R&D system changes from Stackelberg's master-slave innovation to parasitic innovation, and it is difficult for the green innovation ecosystem of enterprises to maintain a healthy development trend. Therefore, the government should actively guide the core enterprises, give appropriate subsidies to the green technology research and development system, and promote the construction and healthy development of the green innovation ecosystem. VII. RESEARCH LIMITATIONS AND PROSPECTS The innovation ecosystem and environmental social responsibility involve a wide range of disciplines. This paper only takes the green innovation ecosystem as the research object and uses differential games to analyze the innovation mode and influencing factors of the knowledge innovation R&D system. In future research work, other subjects such as financial institutions and upstream and downstream enterprises should be included in the decisionmaking process for analysis. 
In addition, more variables should be added to the decision-making model, in order to deeply analyze the knowledge innovation R&D system from different perspectives, and to further analyze the optimal equilibrium state of the ecosystem. |
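To make the comparison of the three decision modes discussed above more concrete, the following is a purely illustrative numerical sketch in Python; it is not the paper's differential game model, and the exponential-adjustment form, the parameter values and the regime labels are all hypothetical. It simply reproduces the qualitative pattern reported in the simulations: revenue rises over time, saturates, and settles at regime-specific steady states ordered centralized > Stackelberg > decentralized.

import numpy as np

# Hypothetical steady-state overall incomes for the three regimes (illustrative only).
steady_state = {"centralized": 10.0, "stackelberg": 8.0, "decentralized": 6.0}
adjustment_rate = 0.15   # speed of convergence towards the steady state (hypothetical)
horizon = 60             # number of simulated periods

t = np.arange(horizon)
paths = {regime: v * (1.0 - np.exp(-adjustment_rate * t))
         for regime, v in steady_state.items()}

for regime, path in paths.items():
    # Each path rises monotonically and flattens out, mirroring the reported trajectories.
    print(f"{regime:>13}: early value = {path[5]:.2f}, long-run value = {path[-1]:.2f}")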
The Discovery of HNPC-A3061: a Novel Arylpyrrole Insecticide HNPC-A3061 is a novel arylpyrrole insecticide designed and discovered by the National Engineering Research Center for Agrochemicals, Hunan Research Institute of Chemical Industry, in 2003. More than 10 years of greenhouse and field trials show that HNPC-A3061 effectively controls pests of rice and vegetables, such as Spodoptera litura, diamondback moth, cotton bollworm, Cnaphalocrocis medinalis, rice planthopper and Euproctis pseudoconspersa, at dosages of 12-120 g a.i./ha. HNPC-A3061 has favourable characteristics, including a broad insecticidal spectrum, rapid action, low toxicity, low residue, and safety to crops and to non-target organisms such as soil microorganisms and earthworms. |
DENS: A Dataset for Multi-class Emotion Analysis We introduce a new dataset for multi-class emotion analysis from long-form narratives in English. The Dataset for Emotions of Narrative Sequences (DENS) was collected from both classic literature available on Project Gutenberg and modern online narratives available on Wattpad, annotated using Amazon Mechanical Turk. A number of statistics and baseline benchmarks are provided for the dataset. Of the tested techniques, we find that the fine-tuning of a pre-trained BERT model achieves the best results, with an average micro-F1 score of 60.4%. Our results show that the dataset provides a novel opportunity in emotion analysis that requires moving beyond existing sentence-level techniques. Introduction Humans experience a variety of complex emotions in daily life. These emotions are heavily reflected in our language, in both spoken and written forms. Many recent advances in natural language processing on emotions have focused on product reviews () and tweets ). These datasets are often limited in length (e.g. by the number of words in tweets), purpose (e.g. product reviews), or emotional spectrum (e.g. binary classification). Character dialogues and narratives in storytelling usually carry strong emotions. A memorable story is often one in which the emotional journey of the characters resonates with the reader. Indeed, emotion is one of the most important aspects of narratives. In order to characterize narrative emotions properly, we must move beyond binary constraints (e.g. good or bad, happy or sad). In this paper, we introduce the Dataset for Emotions of Narrative Sequences (DENS) for emotion analysis, consisting of passages from long-form fictional narratives from both classic literature and modern stories in English. The data samples consist of self-contained passages that span several sentences and a variety of subjects. Each sample is annotated by using one of 9 classes and an indicator for annotator agreement. Fewer works have been reported on understanding emotions in narratives. Emotional Arc () is one recent advance in this direction. The work used lexicons and unsupervised learning methods based on unlabelled passages from titles in Project Gutenberg 1. For labelled datasets on narratives, () provided a sentence-level annotated corpus of childrens' stories and (Kim and Klinger, 2018) provided phrase-level annotations on selected Project Gutenberg titles. To the best of our knowledge, the dataset in this work is the first to provide multi-class emotion labels on passages, selected from both Project Gutenberg and modern narratives. The dataset is available upon request for non-commercial, research only purposes 2. Dataset In this section, we describe the process used to collect and annotate the dataset. Plutchik's Wheel of Emotions The dataset is annotated based on a modified Plutchik's wheel of emotions. The original Plutchik's wheel consists of 8 primary emotions: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Trust, Disgust. In addition, more complex emotions can be formed by combing two basic emotions. For example, Love is defined as a combination of Joy and Trust (Fig. 1). The intensity of an emotion is also captured in Plutchik's wheel. For example, the primary emotion of Anger can vary between Annoyance (mild) and Rage (intense). We conducted an initial survey based on 100 stories with a significant fraction sampled from the romance genre. 
We asked readers to identify the major emotion exhibited in each story from a choice of the original 8 primary emotions. We found that readers have significant difficulty in identifying Trust as an emotion associated with romantic stories. Hence, we modified our annotation scheme by removing Trust and adding Love. We also added the Neutral category to denote passages that do not exhibit any emotional content. The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral. Passage Selection We selected both classic and modern narratives in English for this dataset. The modern narratives were sampled based on popularity from Wattpad. We parsed selected narratives into passages, where a passage is considered to be eligible for annotation if it contained between 40 and 200 tokens. In long-form narratives, many nonconversational passages are intended for transition or scene introduction, and may not carry any emotion. We divided the eligible passages into two parts, and one part was pruned using selected emotion-rich but ambiguous lexicons such as cry, punch, kiss, etc.. Then we mixed this pruned part with the unpruned part for annotation in order to reduce the number of neutral passages. See Appendix A.1 for the lexicons used. Mechanical Turk (MTurk) MTurk was set up using the standard sentiment template and instructed the crowd annotators to 'pick the best/major emotion embodied in the passage'. We further provided instructions to clarify the intensity of an emotion, such as: "Rage/Annoyance is a form of Anger", "Serenity/Ecstasy is a form of Joy", and "Love includes Romantic/Family/Friendship", along with sample passages. We required all annotators have a 'master' MTurk qualification. Each passage was labelled by 3 unique annotators. Only passages with a majority agreement between annotators were accepted as valid. This is equivalent to a Fleiss's score of greater than 0.4. For passages without majority agreement between annotators, we consolidated their labels using in-house data annotators who are experts in narrative content. A passage is accepted as valid if the in-house annotator's label matched any one of the MTurk annotators' labels. The remaining passages are discarded. We provide the fraction of annotator agreement for each label in the dataset. Though passages may lose some emotional context when read independently of the complete narrative, we believe annotator agreement on our dataset supports the assertion that small excerpts can still convey coherent emotions. During the annotation process, several annotators had suggested for us to include additional emotions such as confused, pain, and jealousy, which are common to narratives. As they were not part of the original Plutchik's wheel, we decided to not include them. An interesting future direction is to study the relationship between emotions such as 'pain versus sadness' or 'confused versus surprise' and improve the emotion model for narratives. Dataset Statistics The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words. The vocabulary size is 28K (when lowercased). It contains over 1600 unique titles across multiple categories, including 88 titles (1520 passages) from Project Gutenberg. All of the modern narratives were written after the year 2000, with notable amount of themes in coming-of-age, strongfemale-lead, and LGBTQ+. 
The genre distribution is listed in Table 1. In the final dataset, 21.0% of the data has consensus among all annotators, 73.5% has majority agreement, and 5.48% has labels assigned after consultation with in-house annotators. The distribution of data points over labels, with top lexicons (lower-cased, normalized), is shown in Table 2. Note that the Disgust category is very small and should be discarded. Furthermore, we suspect that the data labelled as Surprise may be noisier than other categories and should be discarded as well. Benchmarks We performed benchmark experiments on the dataset using several different algorithms. In all experiments, we discarded the data labelled with Surprise and Disgust. We pre-processed the data using the SpaCy pipeline. We masked out named entities with entity-type-specific placeholders to reduce the chance of benchmark models using named entities as a basis for classification. Benchmark results are shown in Table 4. The dataset is approximately balanced after discarding the Surprise and Disgust classes. We report average micro-F1 scores, with 5-fold cross-validation for each technique. We provide a brief overview of each benchmark experiment below. Among all of the benchmarks, Bidirectional Encoder Representations from Transformers (BERT) achieved the best performance, with a 0.604 micro-F1 score. Overall, we observed that deep-learning-based techniques performed better than lexicon-based methods. This suggests that a method which attends to context and themes could do well on the dataset. Bag-of-Words-based Benchmarks We computed bag-of-words-based benchmarks using the following methods. Doc2Vec + SVM We also used simple classification models with learned embeddings. We trained a Doc2Vec model (Le and Mikolov, 2014) on the dataset and used the document embedding vectors as features for a linear SVM classifier. Hierarchical RNN For this benchmark, we considered a hierarchical RNN. We used two BiLSTMs to model sentences and documents, respectively. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.
The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available GloVe embeddings. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training. Bi-directional RNN and Self-Attention (BiRNN + Self-Attention) One challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations. Self-attention has been adapted to text classification, providing improved interpretability and performance. We used a previously proposed self-attentive architecture as the basis of this benchmark. The benchmark used a layered bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function. Note that we omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain when using the regularizer term in our experiments. ELMo embedding and Bi-directional RNN (ELMo + BiRNN) Deep Contextualized Word Representations (ELMo) have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words. We used the pre-trained ELMo model (v2) available on TensorFlow Hub (https://tfhub.dev/google/elmo/2) for this benchmark. We fed the ELMo word embeddings into a one-layer bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function. Fine-tuned BERT Bidirectional Encoder Representations from Transformers (BERT) has achieved state-of-the-art results on several NLP tasks, including sentence classification. We used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT_LARGE model (https://tfhub.dev/google/bert_uncased_L-24_H-1024_A-16/1) to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%. Conclusion We introduce DENS, a dataset for multi-class emotion analysis of long-form narratives in English. We provide a number of benchmark results based on models ranging from bag-of-words models to methods based on pre-trained language models (ELMo and BERT). Our benchmark results demonstrate that this dataset provides a novel challenge in emotion analysis. The results also demonstrate that attention-based models could significantly improve performance on classification tasks such as emotion analysis. Interesting future directions for this work include: 1. incorporating common-sense knowledge into emotion analysis to capture semantic context, and 2. using few-shot learning to bootstrap and improve performance of under-represented emotions. Finally, as narrative passages often involve interactions between multiple emotions, one avenue for future datasets could be to focus on the multi-emotion complexities of human language and their contextual interactions. |
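As an illustration of the fine-tuned BERT benchmark reported for DENS above, here is a minimal sketch of multi-class passage classification with a pre-trained BERT model. It assumes the Hugging Face transformers and PyTorch libraries rather than the TensorFlow Hub checkpoint used by the authors; the seven labels are those retained after discarding Surprise and Disgust, and the hyperparameters are placeholders rather than the paper's settings.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Seven labels retained after discarding Surprise and Disgust (per the paper).
LABELS = ["Joy", "Sadness", "Anger", "Fear", "Anticipation", "Love", "Neutral"]

tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-large-uncased", num_labels=len(LABELS))

def fine_tune(passages, label_ids, epochs=3, lr=2e-5):
    """One-slice training loop; in practice wrap this in 5-fold cross-validation."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for text, y in zip(passages, label_ids):
            batch = tokenizer(text, truncation=True, max_length=256,
                              return_tensors="pt")
            loss = model(**batch, labels=torch.tensor([y])).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

def predict(passage):
    model.eval()
    with torch.no_grad():
        batch = tokenizer(passage, truncation=True, max_length=256,
                          return_tensors="pt")
        return LABELS[int(model(**batch).logits.argmax(dim=-1))]

The reported evaluation metric can then be obtained per fold with sklearn.metrics.f1_score(y_true, y_pred, average="micro") and averaged over the five folds.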
Notes on unintegration and disintegration from historical and developmental perspectives Abstract This paper contextualises Bick's contribution historically, particularly in relation to Winnicott's work which challenged the Kleinian tradition to provide a developmental account of psychotic phenomena that would give a role to environmental factors. A difference in emphasis between Bick's early and later account of the functions of the skin in object relations is identified, which to some has suggested a greater distancing from a Kleinian position. A review of contemporary developmental psychology evidence argues for sophisticated coordinations in many perceptual, motor and communication systems in human neonates that apparently disappear within the course of the first few months, to be gradually re-coordinated later in development. The attempt to explain these phenomena has demonstrated that it is necessary to incorporate both integrative and differentiation processes into developmental accounts. While many of Bick's observations and conceptualisations have found confirmation in empirical research, it is argued that the significance of the integrations demonstrated for human neonates is greatly illuminated by Meltzer's account of the aesthetic conflict. One of Bick's major contributions remains her identification of a range of existential and catastrophic anxieties as universal aspects of human psychic experience, even if these are not always recognised in consciousness. |
An Interactive AI-Powered Web Healthcare System The Covid-19 pandemic has brought many changes in the healthcare industry lately. As things are going normal with time, many health projects designed and used during emergencies are left unexploited. To make the perpetual use of those technologies, the current need should be taken into consideration along with necessary ideas and frameworks to evolve the existing system into. To demonstrate the same, in this paper, we have presented a package of healthcare services powered by Artificial Intelligence via a chatbot system, where a user can entertain the services either by an interactive Graphical User Interface or a conversational chatbot system. This proposed system showcases how a similar Covid-19 system can be developed into a sophisticated healthcare service. This paper emphasises adding Artificial Intelligence to any conventional software via chatbot services which would broaden the services it provides even further. In order to find out the probable best technology to integrate AI with, about 50 papers have been analysed and out of which 27 relevant papers have been included in the literature review. In future, we intend to add medical support and other intelligence-based services to our system in order to meet user requirements and essential features in the field of healthcare. |
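A minimal sketch of the dual-entry-point idea described in the abstract above, in which the same backend service is exposed both to a graphical form and to a conversational chatbot endpoint. It assumes Flask; the route names, the toy triage logic and all other details are hypothetical and not taken from the proposed system.

from flask import Flask, request, jsonify

app = Flask(__name__)

def assess_symptoms(symptoms):
    # Placeholder for the AI component (e.g. an intent classifier or triage model).
    if "chest pain" in symptoms.lower():
        return "Please seek urgent medical attention."
    return "Your symptoms appear mild; consider booking a routine consultation."

@app.route("/form", methods=["POST"])    # GUI entry point (web form)
def form_entry():
    return jsonify(advice=assess_symptoms(request.form["symptoms"]))

@app.route("/chat", methods=["POST"])    # conversational entry point (chatbot)
def chat_entry():
    return jsonify(reply=assess_symptoms(request.json["message"]))

if __name__ == "__main__":
    app.run(debug=True)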
Genetic Map Refinement Using a Comparative Genomic Approach Following various genetic mapping techniques conducted on different segregating populations, one or more genetic maps are obtained for a given species. However, recombination analyzes and other methods for gene mapping often fail to resolve the ordering of some pairs of neighboring markers, thereby leading to sets of markers ambiguously mapped to the same position. Each individual map is thus a partial order defined on the set of markers, and can be represented as a Directed Acyclic Graph (DAG). In this article, given a phylogenetic tree with a set of DAGs labeling each leaf (species), the goal is to infer, at each leaf, a single combined DAG that is as resolved as possible, considering the complementary information provided by individual maps, and the phylogenetic information provided by the species tree. After combining the individual maps of a leaf into a single DAG, we order incomparable markers by using two successive heuristics for minimizing two distances on the species tree: the breakpoint distance, and the Kemeny distance. We apply our algorithms to the plant species represented in the Gramene database, and we evaluate the simplified maps we obtained. |
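To illustrate the combination step described in the abstract above - merging several individual maps of one species, each a partial order on markers, into a single DAG that is as resolved as possible - here is a small sketch using the networkx library. The marker names and the conflict check are illustrative, and this is not the authors' algorithm; in particular it does not cover the subsequent phylogeny-guided ordering of incomparable markers.

import networkx as nx

def combine_maps(maps):
    """Union the order relations of several individual genetic maps.

    Each map is given as a list of (marker_before, marker_after) pairs.
    The combined DAG keeps every relation; a cycle means the maps conflict.
    """
    combined = nx.DiGraph()
    for relations in maps:
        combined.add_edges_from(relations)
    if not nx.is_directed_acyclic_graph(combined):
        raise ValueError("Individual maps give contradictory marker orders")
    # Transitive reduction keeps only the minimal set of ordering constraints.
    return nx.transitive_reduction(combined)

map1 = [("m1", "m2"), ("m2", "m4")]   # m3 unplaced relative to m2 and m4
map2 = [("m1", "m3"), ("m3", "m4")]   # complementary information from a second map
dag = combine_maps([map1, map2])
print(sorted(dag.edges()))
print(list(nx.topological_sort(dag)))  # one linear extension of the combined partial order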
Nanostructures in biosensor--a review. In the 21(st) century, it is widely recognized that along with information technology (IT) and biotechnology (BT), nanotechnology (NT) will be a key field of science that will drive future developments. NT is expected to allow innovations in industrial fields such as electrical and electronics, biochemistry, environment, energy, as well as materials science by enabling the control and operation of materials at the atomic and molecular levels. In particular, the application of NT in the field of biochemistry is now enabling the realization of previously unachievable objectives.This review discusses the growth, synthesis, and biocompatible functionalization of each materials, with an emphasis on 1D nanomaterials such as CNTs, inorganic nanowires (made of Si, metals, etc.), and conducting polymer nanowires, along with 0D nanomaterials such as nanoparticles. This review also investigates the sensing principle and features of nanobiosensors made using the abovementioned materials and introduce various types of biosensors with nanostructure 0-D and 1-D. Finally, the review discusses future research objectives and research directions in the field of nanotechnology. |
Endoscopic management of postcholecystectomy bile duct injuries: review. Injuries to the biliary tree arising as a consequence of cholecystectomy, continue to be significant complicating factors. The advent of laparoscopic cholecystectomy has resulted in a further rise in the incidence of these injuries. Various means of investigation are available to assess the site, extent and severity of the damage to the biliary tree. Lately, the endoscopic option seems to have gained popularity in the management of these patients as it can combine both the investigative and therapeutic arms in one common procedure. This procedure is recommended as the primary modality of intervention gaining precedence over the radiological and surgical options. Since limitations exist in managing these patients with endoscopic retrograde cholangiopancreatography, a combined approach between the endoscopist, surgeon and radiologist, is the most practical option. In this article, we review the various types of biliary injuries, their reported incidence, etiological factors, the diagnostic means available, and the endoscopic management of each. |
Comparison of one-week and two-week empirical trials with high-dose rabeprazole in non-cardiac chest pain patients Background: In patients with non-cardiac chest pain (NCCP), the optimal duration of an empirical trial with a high-dose proton pump inhibitor (PPI) is unclear. We aimed to compare the efficacy of one-week and two-week PPI trials in patients with weekly or more frequent NCCP and to determine the optimal trial duration for diagnosing gastroesophageal reflux disease (GERD)-related NCCP. |
Prostate cancer prevention with 5-alpha reductase inhibitors: concepts and controversies Purpose of review We review the concepts surrounding prostate cancer prevention strategies with 5-alpha reductase inhibitors (5-ARIs) and the controversies associated with their use. Recent findings Updated data have shown no increased risk of death from the diagnosis of higher risk cancer; however, 5-ARIs remain controversial and not approved for prostate cancer prevention. Summary The main theme of the review identifies the success of reducing insignificant prostate cancer and the controversy with the increased association of higher risk prostate cancer by approximately 20%. The reduction was shown to be most significant reduction in low-grade prostate cancer. The initial concern about 5-ARI use was that it could potentially increase high-risk prostate cancer leading to higher mortality in those men. Higher mortality has not been seen in follow-up data; however, 5-ARIs continue to have a black box warning and are not approved for prostate cancer prevention. |
Anomalous four-fermion processes in electron-positron collisions This paper studies the electroweak production of all possible four-fermion states in $e^+e^-$ collisions with non-standard triple gauge boson couplings. All $CP$ conserving couplings are considered. It is an extension of the methods and strategy which were recently used for the Standard Model electroweak production of four-fermion final states. Since the fermions are taken to be massless the matrix elements can be evaluated efficiently, but certain phase space cuts have to be imposed to avoid singularities. Experimental cuts are of a similar nature. With the help of the constructed event generator a number of illustrative results is obtained for $W$-pair production. These show on one hand the distortions of the Standard Model angular distributions caused by either off-shell effects or initial state radiation. On the other hand, also the modifications of distributions due to anomalous couplings are presented, considering either signal diagrams or all diagrams. Introduction Recently, the electroweak four-fermion production processes relevant for LEP2 and beyond have been studied in a number of ways. One of the objectives is to obtain a description of W-pair production better than an on-shell treatment with W-decay products attached to it. Thus all recent papers contain finite width effects. Some papers only include the three diagrams leading to W-pair production, others include all diagrams giving a specific four-fermion final state. Most of them include some form of initial state QED radiative corrections. There are semi-analytical methods [1, 2] and Monte Carlo approaches [3]-[10]. The former can only give distributions in the virtualities of the W's, but no fermion distributions. The latter can produce any wanted distribution. Among the various Monte Carlo treatments we mention in particular the program EXCALIBUR, since it aims both at completeness and speed. All diagrams for any four-fermion final state are included, and a relatively fast calculation is achieved by assuming massless fermions and by using a multichannel approach to generate the phase space. The details are given in [9], whereas the treatment of initial state radiation (ISR) can be found in [10]. One of the objectives of LEP2 and future electron-positron colliders is a test of non-abelian triple gauge boson couplings. A way to quantify deviations from the Standard Model (SM) Yang-Mills couplings is to set experimental limits on anomalous couplings. Many discussions of the latter can be found in the literature, see e.g. [11, 12, 13]. Theoretical arguments which reduce the a priori large number of non-standard couplings are discussed in [13]. In order to investigate the experimental possibilities to measure limits on anomalous couplings one ideally needs samples of anomalous events, generated by an event generator, and a fitting program containing the anomalous matrix element. The fitting program can then establish whether the input anomalous couplings can really be extracted from the generated anomalous data. Up to now such studies were made with tools which have certain limitations. Usually data are generated for W-pair production containing three diagrams, with W-decay attached to it in the zero width approximation. The fitting programs use the same approximation. Examples of such investigations can be found in [13, 14]. Very recently a Monte Carlo program with anomalous couplings and a finite W-width became available [15]. It covers the semi-leptonic final states.
In view of the advantages of the EXCALIBUR program it is natural to use its structure and strategy as a basis for an anomalous four-fermion generator. Thus it is the aim of the present paper to describe the necessary additions and changes to the approach of [9, 10] to obtain a four-fermion generator with anomalous couplings. This generator has the following characteristics. Any massless four-fermion final state can be produced with CP conserving anomalous couplings. All abelian and non-abelian diagrams contributing to the final state are taken into account. It is also possible to restrict oneself to the "signal" diagrams of a process, e.g. for four-fermion production through W-pairs one takes only three diagrams. Finally, ISR can be switched on or off. The new anomalous matrix elements will be discussed in some detail, since they are the key ingredient of the anomalous EXCALIBUR program and since they would also be required for a fitting program. A number of numerical results for four-fermion production will be presented. On the one hand they serve as an illustration of how the finite W width or ISR can modify SM angular distributions. On the other hand they show how non-standard distributions behave. The very relevant physics application is the one mentioned above: generating anomalous data and studying the extraction of anomalous couplings with a fitting program. It is expected that in the future this question will be addressed. The actual outline of the paper is as follows. In section 2 the anomalous couplings are described. The next section discusses those four-fermion final states which are sensitive to anomalous couplings and gives the required matrix elements. Some illustrative examples of anomalous effects in distributions are shown in section 4, whereas section 5 contains conclusions. Anomalous couplings In this section those non-standard couplings are defined which will be considered for the generation of anomalous four-fermion final states. When one uses only Lorentz invariance as a condition, there exist 14 couplings which lead to deviations from the SM triple gauge boson couplings. Some of them can be immediately discarded since they would either modify the strength of the electromagnetic interaction or introduce C or CP violation in it. At this point there are still 9 parameters left, three of which lead to CP violation through the ZWW interaction. Also these will be omitted. We are then left with a Lagrangian of the form $\mathcal{L} = \mathcal{L}_1$ (C- and P-conserving) $+\ \mathcal{L}_2$ (CP-conserving, C- and P-violating). The corresponding $ZWW$ vertex consists of the standard Yang-Mills momentum structure supplemented by momentum-dependent terms proportional to the anomalous couplings $\delta_Z$, $x_Z$, $y_Z$ and $z_Z$. For the $\gamma WW$ vertex one has to replace $\cot\theta_w + \delta_Z$ by 1, and $x_Z$ and $y_Z$ by $x_\gamma$ and $y_\gamma$; the coupling $z_\gamma$ is zero. With these vertices the matrix elements for four-fermion production will be evaluated. It should be noted that the form chosen for the interaction corresponds to that of [13]. In the Z couplings the signs look different from [13]. This is however compensated by the vector-boson-fermion couplings, which differ between the two papers. For the SM we use here for the photon-electron vertex $ie\gamma^\mu$ and for the Z vertex $ie\gamma^\mu(v - a\gamma_5)$, with $a = (4\sin\theta_w\cos\theta_w)^{-1}$ and $v = a(1 - 4\sin^2\theta_w)$. In [13] the latter is the same but the photon vertex has the opposite sign. The matrix elements In the literature [16] studies have been made of the effect of non-standard triple gauge boson couplings on the following gauge boson production processes: $e^+e^- \to W^+W^-$, single $W$ production and $e^+e^- \to Z\nu_e\bar\nu_e$.
They are described by 3, 9 and 7 diagrams respectively, of which 2, 2 and 1 diagrams contain a triple gauge boson vertex. In practice these processes lead to four-fermion final states. For a specific four-fermion final state not only the "signal" diagrams of the above reactions contribute but also "background" diagrams, some of which also contain triple gauge boson vertices. Thus the anomalous couplings can contribute to the background diagrams as well. Tables 1-3 list the leptonic, semileptonic and hadronic final states which can originate from one of the above signals. Moreover, the number of abelian diagrams ($N_a$) and of non-abelian diagrams ($N_n$) is given. For the actual calculation of the matrix element we proceed as in [9]. We first repeat the SM calculation and then extend it to non-standard couplings. Although many diagrams can contribute to a specific final state, there are only two topological structures (generic diagrams), given in fig. 1. In these diagrams all particles are considered to be outgoing. The actual Feynman diagrams are obtained by crossing those electron and positron lines which were assigned to become the colliding $e^+e^-$ pair. In the abelian diagrams the charges of the fermions determine the character of the two exchanged bosons, which may be $W^+$, $W^-$, $Z$ or $\gamma$. In the non-abelian diagrams, two of the vector bosons are fixed to be $W^+$ and $W^-$, and the third one can be $Z$ or $\gamma$. In this way we avoid double-counting of diagrams. In principle the particles and antiparticles can each be assigned in six ways to the external lines. For the non-abelian diagrams we get at most 8 diagrams, and due to the specific final states considered we get at most 48 abelian diagrams. Fixing a specific four-fermion final state, all possible assignments are tried. Only those which are allowed by the couplings survive in first instance. Since successively all helicity combinations are also tried, certain diagrams do not contribute, as can be seen from the numerator of the generic abelian diagram: $A(\lambda,\mu,\nu;p_1,p_2,p_3,p_4,p_5,p_6) = \bar u_\lambda(p_1)\gamma^\alpha u_\lambda(p_2)\,\bar u_\mu(p_3)\gamma_\alpha(\slashed p_1+\slashed p_2+\slashed p_3)\gamma^\beta u_\mu(p_4)\,\bar u_\nu(p_5)\gamma_\beta u_\nu(p_6)$. Here we have disregarded the particle/antiparticle distinction since it is already implied by the assignment of the external momenta. The helicity labels $\lambda,\mu,\nu = \pm 1$ determine the helicity of both external legs on a given fermion line. Using the Weyl-van der Waerden formalism for helicity amplitudes [17] (or, equivalently, the Dirac formalism of [18]), the expression $A$ can easily be calculated. For instance, for $\lambda=\mu=\nu=+1$ one finds $A(+,+,+;1,2,3,4,5,6) = 4\,\langle 31\rangle\,\langle 46\rangle\,\bigl[\langle 51\rangle\,\langle 21\rangle + \langle 53\rangle\,\langle 23\rangle\bigr]$, where the spinorial product is given, in terms of the momentum components, by $\langle kj\rangle = \left(p^1_j + i p^2_j\right)\left[\frac{p^0_k - p^3_k}{p^0_j - p^3_j}\right]^{1/2} - (k \leftrightarrow j)$. (Figure 1: generic diagrams for four-fermion production. The fermion momenta and helicities, and the bosons, are indicated. The bosons $V_{1,2}$ can be either $Z$, $W$ or $\gamma$; $V$ can be either $Z$ or $\gamma$.) Finally it should be noted that the vector boson propagators are implemented in the form $(q^2 - M_V^2 + iM_V\Gamma_V)^{-1}$, irrespective of whether $q$ is timelike or not. This recipe guarantees the validity of electromagnetic gauge invariance. When this is violated, even by a small amount, forward electron cross sections can be off by orders of magnitude. This is due to photon exchange in the t-channel and was noticed a long time ago as a problem in single W-production [19]. A less ad hoc solution to this problem is underway [20].
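As a small numerical check of the spinorial product defined above, the following sketch evaluates $\langle kj\rangle$ for massless four-momenta. It is a minimal illustration of the Weyl-van der Waerden building block in one common phase convention, not the EXCALIBUR implementation.

import cmath

def spinor_product(pk, pj):
    """<kj> for massless four-momenta p = (p0, p1, p2, p3); antisymmetric in k <-> j."""
    def half(pa, pb):
        # (pb^1 + i*pb^2) * [ (pa^0 - pa^3) / (pb^0 - pb^3) ]^(1/2)
        return (pb[1] + 1j * pb[2]) * cmath.sqrt((pa[0] - pa[3]) / (pb[0] - pb[3]))
    return half(pk, pj) - half(pj, pk)

# Two massless momenta (units: GeV), chosen so that p0**2 = p1**2 + p2**2 + p3**2.
p1 = (5.0, 3.0, 4.0, 0.0)
p2 = (5.0, -3.0, -4.0, 0.0)
s12 = spinor_product(p1, p2)
print(s12, spinor_product(p2, p1))  # antisymmetry: the second value is -s12
print(abs(s12) ** 2)                # equals 2*(p1.p2) = 100 for massless momenta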
Results Whereas at high energies total cross section measurements will give crucial information on the size of possible non-standard couplings, at LEP2 one has to consider angular distributions for this purpose. The natural five-dimensional differential cross section is $d\sigma(e^-e^+\to W^-W^+\to f_1 f_2 f_3 f_4)/(d\cos\theta\,d\cos\theta_1\,d\phi_1\,d\cos\theta_2\,d\phi_2)$, where $\theta$ is the angle between the incoming electron and the $W^-$. The angles $\theta_1$, $\phi_1$ are the polar and azimuthal angles of the particle $f_1$ in the rest system of the parent particle $W^-$, whereas the angles $\theta_2$, $\phi_2$ fulfil a similar role for the antiparticle $f_4$ originating from the $W^+$. The angles are defined with respect to coordinate frames related to the $W^-$ and $W^+$. The z-directions are the directions of the momenta of the vector bosons. The y-axes are defined by $\vec p_-\times\vec q_-$ and $\vec q_+\times\vec p_+$ respectively, where $\vec p_-$, $\vec p_+$ denote the momenta of the incoming electron and positron and $\vec q_-$, $\vec q_+$ the momenta of the $W^-$ and $W^+$. In the zero width limit the above cross section is directly related to the helicity amplitudes for on-shell W-pair production and to functions describing the decay of the vector bosons [12, 13]. In principle direct fits to the above cross section could be performed. In practice one- or two-dimensional distributions will often be used. In the following we shall study $d\sigma/d\cos\theta$, $d\sigma/d\cos\theta_1$ and $d\sigma/d\phi_1$. The main purpose of this section is to illustrate effects of certain phenomena which have so far not been incorporated in anomalous coupling studies. These are the effects of the finite W width, of ISR and of background diagrams. It is useful to define a number of (differential) cross sections evaluated under different assumptions. In the first place we introduce the SM cross sections $\sigma_{\rm SM,on}$, $\sigma_{\rm SM}$, $\sigma_{\rm SM,ISR}$ and $\sigma_{\rm SM,all}$, which are respectively the on-shell and off-shell signal cross sections (i.e. with three diagrams), the off-shell signal case with ISR, and the cross section containing all diagrams. Furthermore, we define $\sigma_{\rm AN}$, $\sigma_{\rm AN,ISR}$ and $\sigma_{\rm AN,all}$, which are (differential) anomalous cross sections calculated with the three signal diagrams, without or with ISR, and with all diagrams without ISR. The following ratios give an illustration of the effects of the finite width, the ISR, background diagrams and of non-standard couplings. The reaction which we take as an example is $e^-e^+\to e^-\bar\nu_e u\bar d$. Standard input parameters are used, and for the ISR the usual value is used. It should be noted that the experimental values for the total widths are incorporated in the propagators. In EXCALIBUR the decay widths of the W into a lepton pair or a quark pair are independent of the input total width. They follow from the other input parameters. Since one would like to have s-dependent widths in the s-channel, and because this would violate gauge invariance, the following practical procedure is used. The s-dependent widths can be transformed into a constant width [21]. When this constant width is used in both s- and t-channel, gauge invariance is ensured in a simple way, which numerically agrees well with theoretically more sound methods [20]. Thus the calculations are performed with propagators $(q^2 - \tilde M_V^2 + i\tilde M_V\tilde\Gamma_V)^{-1}$, where $\tilde M_V = M_V/\sqrt{1+\gamma_V^2}$, $\tilde\Gamma_V = \Gamma_V/\sqrt{1+\gamma_V^2}$ and $\gamma_V = \Gamma_V/M_V$. With these input values various differential cross sections have been evaluated. The SM and anomalous cross sections with all diagrams have to be calculated with cuts, thus avoiding the singularities due to the massless fermions. In order to make meaningful comparisons the cross section $\sigma_{\rm SM}$ in $R_3$ has the same cuts.
The imposed cuts are $E_{e^-,u,\bar d} > 20$ GeV, $|\cos\theta_{e^-,u,\bar d}| < 0.9$, $|\cos\angle(u,\bar d)| < 0.9$ and $m_{u\bar d} > 10$ GeV, where $\theta$ is the angle between the outgoing particles and the incoming beams. In Tables 4-7 the total cross sections $\sigma_{\rm AN}$, $\sigma_{\rm AN,ISR}$, $\sigma_{\rm AN}$(cuts) and $\sigma_{\rm AN,all}$ are listed for an energy of 190 GeV. The SM differential cross sections are given in figure 2. Differential cross section ratios are given in the form of histograms in figures 3-10. From $R_1$ it is seen that the inclusion of the finite width already changes the $\cos\theta$ distribution by a few percent. Comparing the on- and off-shell $\sigma_{\rm AN}$, a similar angular modification arises [22]. Similarly, the inclusion of ISR or background diagrams introduces even larger modifications of this angular distribution. In order to show the effects of the various anomalous couplings, histograms of $R_4$ and $R_5$ are presented with values of 0.5 for each coupling in turn, the others being zero at the same time. When the analysis is done with the three signal diagrams both for SM and non-standard couplings ($R_4$), the overall picture is roughly the same as for the case where both cross sections contain all diagrams ($R_5$). The effects of the anomalous couplings show up most clearly in the $\cos\theta$ distribution, as can be seen when comparing to the pictures of the $\cos\theta_1$ and $\phi_1$ distributions. Conclusions With the extended EXCALIBUR program it becomes possible to study effects of anomalous couplings in all four-fermion final states which receive contributions from non-abelian diagrams. In this way finite width effects of the vector bosons are incorporated, and studies of ISR and background diagrams can be made. Up to 2 TeV the program works efficiently. For studies at higher energies the present phase space treatment of the multiperipheral massive vector boson diagrams should be adjusted, which in principle does not pose any problem. For LEP2 this is not yet required. From the presented results it is clear that in particular the distribution in the W production angle is affected by the finite W width, ISR and background. Also here anomalous couplings show up most clearly. The results of this paper give a quantitative assessment of the above effects, which have hitherto not been considered in the literature. |
Obtaining functional dependence of friction coefficient of soil on steel The development of theoretical approaches to substantiating the water-physical properties of soils, which determine the favourable operating mode of tillage machines and units, and the improvement of methods for measuring hydrophysical parameters, is a pressing problem of agricultural science. The intensification of technological processes in some cases leads to irreversible changes in the soil profile. The results of the research can be substantially expanded by modelling the hydrophysical properties of the soil and studying the water and air regimes of soils. Accordingly, an important task is the theoretical justification of the functional dependence of the frictional force in the soil on its specific surface, porosity and moisture, in order to determine the moisture ranges in which meliorative measures are environmentally friendly and least energy intensive. The methodological basis was the fundamental and applied foundations of soil hydrophysics. The study of the operating modes of tillage machines (crumbling/speed) on various types of soils made it possible to determine the most effective moisture intervals at which, with an average fuel consumption of 4.1 to 17 l/ha, savings of 0.16 to 0.68 l/ha are achieved. This allows fuel consumption to be reduced by 5-7 % and provides the best mechanical effect on the soil. The application of these methods was based on the use of modern software packages for processing the results of experiments conducted in laboratory and field conditions. Introduction The use of a wide range of fundamental characteristics of the physical properties of soils for the scientific substantiation of, and technological decisions about, various meliorative measures related to the cultivation of land and the regulation of its water regime allows an increase in the quality of management decisions and the efficiency of economic activity. The developed methods make it possible to substantiate environmentally friendly meliorative measures and technologies that ensure the creation of a favourable soil water regime. The results of the research were used to evaluate meliorative measures in the territories of the Batyrevsky, Kanashsky and Cheboksary districts of the Chuvash Republic. Addressing the problem of reducing the energy intensity of agrotechnical operations associated with the impact on deep subsurface horizons (decompaction and recultivation) is an urgent issue, since the energy intensity of deep tillage is high. In addition, the intensity of water absorption during irrigation is strongly affected by the presence of a soil layer (plough pan) compacted at a certain depth. Therefore, research directions aimed at reducing the energy intensity of reclamation agrotechnical operations associated with the impact on deep subsurface horizons (planing, decompaction and recultivation) are among the most topical. Whatever material a plough is made of, it must constantly overcome the resistance of the cultivated soil. Moreover, the frictional force arising on the coulter, ploughshare, mouldboard and landside acts during the entire period of "effective" use of the plough and, accordingly, "depreciates" a substantial part of the costs associated with the ploughing process.
The share of the resistance attributable to the working bodies (share, disc, cutter, etc.) is 50-60 % in the total resistance of the plow. Approximately the same percentage of energy spent on plowing is spent on overcoming friction forces. It is well known that the friction force of a soil on a metal depends both on moisture and on the mechanical composition of the soil. Moreover, with an increase in moisture, after a continuous increase, it begins to decline quite quickly. Therefore, it is important to have a theoretically sound and dependent on the mechanical composition of the soil information on the critical soil moisture corresponding to the greatest friction force. With a certain moisture comes the physical ripeness of the soil. This moisture corresponds to the least traction resistance to plowing, as well as soil sticking and wear of the working parts of the plow. In connection with the above, we believe that the substantiation of soil moisture, at which it is necessary to carry out operations, and the speed of movement of the units can be carried out considering the dependencies of stickiness and friction force on moisture and the specific surface of the soil. The intensity of water absorption during irrigation is strongly influenced by the presence of a compacted soil layer (plough pan) at a certain depth. Therefore, among the topical are the directions of research to solve the problem of reducing energy efficiency of reclamation agrotechnical operations associated with the impact on deep subsurface horizons (planning, decompaction and recultivation) since the energy intensity during deep tillage of soils is quite high. To substantiate soil moisture it is necessary to carry out operations, and the speed of movement of the units can be carried out considering the dependencies of stickiness and friction force on moisture, porosity and specific surface of the soil. The friction forces in the soil appear when it slides relative to the body that is in contact with it (external friction), or the particles that make up the soil relative to each other (internal friction) slip. We carried out studies of the influence of moisture, porosity and specific surface of the soil on the processes of interaction of tools and units with the soil to establish the ranges of moisture at which carrying out meliorative activities is environmentally friendly and least energy intensive. When carrying out meliorative measures, such as milling, deep loosening and deep plowing during the primary soil treatment, from 30 to 50% of energy of machine-tractor units (MTU) is spent on work to overcome friction forces. Therefore, it is important to choose a moisture with which the soil crumbles well, minimally sticking to the processing tools. This will ensure not only a reduction in tractive effort, but also the best soil condition after the ameliorative event. Materials and methods The frictional force occurs when an active force tends to move one body relatively another under normal pressure. Friction force F is always located in the plane of interaction of bodies and directed in the opposite direction to the active force. We determined from the formulas that: where f is the coefficient of friction; N is the force of normal pressure; is the angle of friction. For fast and easy measurement of the coefficient of friction (rest / motion), you can use the device of Zeligovskiy or the dynamograph (a disk device measuring friction). 
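As a small worked illustration of the quantities defined above, and of the standard relation F = fN with f = tan(phi) that connects them (this closed form is the textbook friction law, consistent with the definitions in the text rather than a formula quoted from the paper), the following sketch recovers the friction coefficient and friction angle from a measured pair of tangential and normal forces; the numbers are hypothetical, not measurements from the paper.

import math

def friction_coefficient(tangential_force, normal_force):
    """f = F / N, and the friction angle phi = arctan(f), for soil sliding on steel."""
    f = tangential_force / normal_force
    return f, math.degrees(math.atan(f))

# Hypothetical dynamometer reading: 260 N needed to slide a sample pressed with 500 N.
f, phi_deg = friction_coefficient(260.0, 500.0)
print(f"f = {f:.2f}, friction angle = {phi_deg:.1f} degrees")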
The degree of interaction of soil particles with the surface of the working bodies of machines is mainly influenced by the ratio of forces in the systems "particle-particle" and "particle-surface." With increasing differences in this, the ratio increases the fixation and particles on the working surface. We determine the force of fixation through the difference between the resulting friction force among the contacting soil particles and the force of friction on the surface of the working body F = p z (fk f'). In this case p is the specific pressure, z is the number of contacting with the particle surface, f is the coefficient of friction between soil particles, f' is the coefficient of soil particles on the surface of the working bodies, and k is a constant dependent on the number of contacts. Adhesion between particles of soil, as a rule, exceeds the adhesion of particles to the surface of the working bodies. Therefore, in addition to reducing fuel costs, the selection of operating modes for which agromeliorative and cultural-technical measures (planing, deep loosening, milling) are most effective, allows increasing the durability of the working bodies. The coefficient of soil friction depends on many factors, the main of which are mechanical composition and humidity. Changing the ratio of solid, liquid and gaseous phases in the soil leads to a change in the forces acting in the system "soil tillage toolsoil". It is therefore important to investigate the dependence of the friction coefficient on the moisture content in the soil. An improved soil structure results in less friction. This is explained by the fact that with an increase in porosity, the area of actual soil contact with the surface of a foreign body decreases, i.e. in dense soil, friction is greater than in loose, structured. At low moisture, the soil moisture is of little concern to the body and has almost no effect on the friction force, i.e. dry friction occurs. In addition, at low moisture from 0 to 8... 10 % stickiness does not appear, and the soil moisture does not adhere to the metal (section ab), therefore only the influence of F for which the coefficient of friction does not depend on moisture. With an increase in soil moisture, the forces of molecular attraction between the soil moisture and the body begin to appear more noticeably; there is a phase of external friction sticking. Slip resistance depends on sticking: R=k0S+kSN, where k₀ is the coefficient of specific adhesion in the absence of normal pressure; k is the coefficient of specific adhesion caused by normal pressure, cm; S is the contact area, cm. The effect of moisture on the friction coefficient is shown in figure 1. The increase in f in the bc segment is explained by the appearance and increase of stickiness and the forces of molecular attraction of soil particles to the metal surface. When w ≈ 35 % (depending on the mechanical composition of the soil), the values of the friction coefficient reach a maximum. With a further increase in moisture (the cd segment) F decreases since the stickiness decreases, and, moreover, the soil moisture begins to play the role of lubricant. If F depends only on the magnitude of the normal pressure and the properties of materials in contact with the surfaces, then R influences even without external applied pressure and depends on the size of the touch. For some intervals of soil moisture, F and R act together; both values appear at the same time as the general resistance F * = F + R. 
If the sum of the forces F+R is more than the shear strength of the soil, the working bodies stick. When the sum of the sticking and friction forces of the soil on the soil becomes greater than the total resistance of adhered particles to icing, self-cleaning is observed. It is well known that the next factor after moisture, which has a significant effect on f, is the mechanical composition of the soil, or rather the particle content is less than 0.1 mm, i.e. physical clay. The smaller the size of the elementary particles of the soil, the greater the coefficient of friction. This fact is fully consistent with the proposed approach. Stickiness is directly proportional to the specific surface of the solid phase. Therefore, we can conclude that the coefficient of friction should be directly proportional to, i.e. content of physical clay. It is well known that with an increase in the percentage of physical clay, the coefficient of friction increases. Since dry friction occurs at low humidity and tackiness begins to appear with increasing humidity, the function f can be divided into two parts. One of them is proportional to the stickiness L, which in turn is related to the mechanical composition of the soil through the specific surface ₀ and the function associated with the particle size distribution D(w, ₀). The other is proportional to the fraction of the solid phase (1-₀), because the improvement of the soil structure leads to a decrease in the friction force, as well as from the surface of contact with the liquid w⅔ and (1-w). The proposed approach is fully consistent with the fact that the physical clay content (particles less than 0.01 mm) has a significant effect on the friction coefficient f. Friction bond but with stickiness, which is directly proportional to the specific surface of the solid phase. Consequently, the coefficient of friction should be directly proportional to ₀, i.e. the content of physical clay. After summarizing the above facts, you can use the phenomenological method to write the formula for the coefficient of soil friction: where f is the coefficient of friction; L is stickiness;,, are empirical coefficients. Results and discussion The obtained dependencies of stickiness and friction forces for the black soil of the IAPC "Trud" of the Batyrevsky district, light gray and gray forest soils OJSC "Sormovo" of the Kanashsky region and light gray soil of the CJSC «Progress of the Cheboksary district of the Chuvash Republic were analyzed together with dependencies for frictional forces. From the dependencies, the values of the "ripe" soil condition are determined from the moisture content of the initial sticking (i.e., the soil conditions are optimal for the mechanical impact on the soil). Moisture intervals corresponding to the soil conditions at which the mechanical impact on the soil is least effective are determined by the maximum adhesion moisture. The assessment of economic efficiency was carried out according to the methodological recommendations for assessing the economic efficiency of the introduction of new technologies and agricultural equipment. In addition, in some cases, for an objective comparative assessment of existing and proposed methods, the cumulative energy costs (direct and materialized indirect) are determined. 
Since the improvement and introduction of technologies are accompanied by additional capital investments, the introduction of new technologies and methods should ensure an improvement in quality and a reduction in production costs together with growth in labour productivity, that is, ensure an economic effect. The fuel consumption rate of a tractor is variable and depends on many factors, such as soil moisture, depth of processing, fuel system operability, the condition of the tool's working parts, etc. The obtained dependences of friction coefficients on moisture for the main soil types of the Chuvash Republic show that friction under optimal tillage conditions is 1.25-1.5 times lower than the maximum value. Thus, the choice of optimal moisture intervals for mechanical impact on the soil makes it possible, by reducing traction resistance, to save about 5-7 % of fuel. Experimental verification of the relation for the main soil types of the Chuvash Republic showed that the proposed dependencies describe about 86.6 % of the experimental data (Figure 2). The study of the working modes of tillage machines (crumbling/speed) on various types of soil made it possible to determine the most effective moisture intervals at which, with an average fuel consumption of 4.1 to 17 l/ha, between 0.16 and 0.68 l/ha is saved. This makes it possible to reduce fuel consumption by 5-7 % and ensure the best mechanical effect on the soil (Table 1). Conclusions The results of the research can be substantially expanded by modelling the hydrophysical properties of the soil and studying the water and air regimes of soils. In accordance with this, an important task is the theoretical justification of the functional dependence of the frictional force in the soil on its specific surface, porosity and moisture, in order to determine the moisture ranges at which meliorative measures are environmentally friendly and least energy intensive. The methodological basis was the fundamental and applied foundations of soil hydrophysics. The study of the operating modes of tillage machines (crumbling/speed) on various types of soils made it possible to determine the most effective moisture intervals at which, with an average fuel consumption of 4.1 to 17 l/ha, between 0.16 and 0.68 l/ha is saved. This allows fuel consumption to be reduced by 5-7 % and provides the best mechanical effect on the soil. The application of these methods was based on the use of modern software packages for processing the results of experiments conducted in laboratory and field conditions. |
The Testis-Specific Factor CTCFL Cooperates with the Protein Methyltransferase PRMT7 in H19 Imprinting Control Region Methylation Expression of imprinted genes is restricted to a single parental allele as a result of epigenetic regulation: DNA methylation and histone modifications. Igf2/H19 is a reciprocally imprinted locus exhibiting paternal Igf2 and maternal H19 expression. Their expression is regulated by a paternally methylated imprinting control region (ICR) located between the two genes. Although the de novo DNA methyltransferases have been shown to be necessary for the establishment of ICR methylation, the mechanism by which they are targeted to the region remains unknown. We demonstrate that CTCFL/BORIS, a paralog of CTCF, is an ICR-binding protein expressed during embryonic male germ cell development, coinciding with the timing of ICR methylation. PRMT7, a protein arginine methyltransferase with which CTCFL interacts, is also expressed during embryonic testis development. Symmetrical dimethyl arginine 3 of histone H4, a modification catalyzed by PRMT7, accumulates in germ cells during this developmental period. This modified histone is also found enriched in both H19 ICR and Gtl2 differentially methylated region (DMR) chromatin of testis by chromatin immunoprecipitation (ChIP) analysis. In vitro studies demonstrate that CTCFL stimulates the histone-methyltransferase activity of PRMT7 via interactions with both histones and PRMT7. Finally, H19 ICR methylation is demonstrated by nuclear co-injection of expression vectors encoding CTCFL, PRMT7, and the de novo DNA methyltransferases, Dnmt3a, -b and -L, in Xenopus oocytes. These results suggest that CTCFL and PRMT7 may play a role in male germline imprinted gene methylation. Introduction Genomic imprinting is an epigenetic mechanism of transcriptional regulation that ensures restriction of expression of a subset of mammalian genes to a single parental allele. The Igf2/H19 locus is the best studied example of imprinted gene regulation in which Igf2 (insulin-like growth factor 2) is expressed uniquely from the paternal allele. Control of Igf2 expression is achieved by monoallelic methylation of an imprinting control region (ICR) located between the Igf2 and H19 genes. The non-methylated ICR of the maternal allele functions as a chromatin insulator through interaction with the 11-zinc finger protein CTCF (CCCTC-binding factor). In contrast, CTCF cannot bind the methylated ICR of the paternal allele, and consequently, distally located enhancers can activate the Igf2 promoter. The CTCF protein is thus defined as a somatic regulator of imprinted gene expression. H19 ICR methylation is established during male germline development. At the outset of mouse testis development (12.5-13.5 days post coitum, dpc), male ICR methylation is absent and is re-established during subsequent developmental stages (14.5-17.5 dpc). The de novo DNA methyltransferases, Dnmt3a and -L, have been shown to play a key role in this initial ICR methylation, and their maximal expression coincides with these developmental stages. No specificity of DNA binding is exhibited by the Dnmt3 subunits, and therefore it is thought that the de novo methyltransferases are recruited to sites of DNA methylation through interaction with specific chromatin modifications or a bridging protein(s) recognizing specific chromatin modifications. A potential candidate for Dnmt3 recruitment could be a post-translationally modified histone(s).
Histones are known to be subject to a large variety of modifications including methylation, acetylation, ubiquitination, and phosphorylation, each of which can occur at numerous residues, thereby contributing to histone structural diversity. These modifications constitute the ''histone code,'' which can then be translated by interacting proteins into specific conformational alterations and/or DNA methylation. The best example of this mechanism is recognition of trimethylated K9 histone H3 present in heterochromatic regions and subsequent Dnmt3 recruitment by HP1 (heterochromatin protein 1). A model for the acquisition of CpG methylation in ICR has been recently proposed. The model invokes specific recognition of the ICR region that targets a histone modification, and subsequent recruitment directly or indirectly of the de novo DNA methyltransferases. The only protein characterized to date to exhibit specific ICR recognition and binding is the ubiquitously expressed CTCF protein. Recently, CTCFL/BORIS (CTCF like/brother of the regulator of imprinted sites; hereafter referred to as CTCFL), a testis-specific paralog of CTCF, has been characterized. CTCFL possesses an 11-zinc finger region that is highly homologous to that of CTCF (74% identity), suggesting similar DNA recognition. The latter notion is supported by the demonstration of CTCFL binding in vitro to the FII element within the b-globin gene cluster, a characterized CTCF binding site. The amino acid sequence flanking the zinc finger region of CTCFL exhibits no significant homology with CTCF, suggesting a function distinct from that of CTCF. These characteristics thus render CTCFL an interesting candidate to participate in ICR methylation re-establishment. In the present report, we have pursued the possible role of CTCFL in the methylation of the H19 ICR. Chromatin immunoprecipitation (ChIP) analysis indicates CTCFL association with two paternal ICRs, H19 and Gtl2. Furthermore, the embryonic expression of CTCFL in the developing testis coincides with imprint re-establishment. Interestingly, the Nterminal region of CTCFL interacts with a protein methyltransferase, PRMT7, which methylates histones H2A and H4. Co-expression of CTCFL and PRMT7 and the de novo methyltransferases in Xenopus oocytes by microinjection of expression plasmids results in significant methylation of a coinjected ICR plasmid. Taken together, these results suggest that CTCFL may play a role in male germline H19 ICR methylation. Expression of CTCFL during Testis Development Erasure and re-establishment of paternal ICR methylation takes place during embryonic testis development. At 12.5 dpc, CpG methylation within the H19 ICR has been erased in primordial germ cells (PGCs) and is subsequently re-established, such that by 17.5 dpc, significant H19 ICR methylation is present. To examine whether CTCFL expression coincides with the timing of re-establishment of the methylation, we performed immunohistochemical studies on embryonic and adult mouse testis using an a-CTCFL antibody ( Figure 1). CTCFL was not detected at 13.5 dpc, but expression was observed in mitotically arrested gonocytes of 14.5-dpc embryos ( Figure 1B and 1C). At 17.5 dpc, a few centrally located gonocytes and cells present at the periphery of the developing seminiferous tubules were positive ( Figure 1D). Similar CTCFL localization was observed in newborn testis ( Figure 1E). 
At 15 d after birth, nuclei of spermatogonia expressed CTCFL ( Figure 1F), as did their counterparts in adult testis ( Figure 1G). The developmental timing and celltype commitment of CTCFL expression thus coincide with ongoing ICR methylation and parallel that of the de novo DNA methyltransferases. Since CTCF and CTCFL were anticipated to exhibit similarity in DNA binding specificities, we also examined the cellular commitment to CTCF expression in adult testis ( Figure 1H). Cells with triangular nuclei located along the basal lamina, consistent with the Sertoli cell phenotype, displayed nuclear staining. Thus, cell-type expression of CTCF is distinct from CTCFL in testis. CTCFL Binds Both the H19 ICR and Gtl2 Differentially Methylated Region In Vivo Given the high homology of the zinc finger regions of CTCF and CTCFL (74% identity), it is anticipated that the two proteins would recognize similar DNA sequences. CTCF has been shown to bind in vitro to the H19 ICR in a methylation-sensitive fashion. CTCFL, on the other hand, has been demonstrated to bind in vitro to the FII b-globin insulator, a characterized CTCF target, but specific ICR recognition by CTCFL has not been reported. Thus, we undertook an in vivo analysis of CTCFL association with the H19 ICR by ChIP (Figure 2), using an a-CTCFL antibody and mouse testis chromatin (15.5-dpc and adult gonads). In addition, we analyzed CTCFL association with the differentially methylated region (DMR) of Gtl2, another characterized, paternally imprinted locus. The mouse Gtl2 DMR and the H19 ICR contain two (a and b) and four (m1-m4) CTCF consensus binding sites, respectively. Chromatin immunoprecipitation of 15.5-dpc testis by the a-CTCFL antibody yielded a significant enrichment of the m3 region of the H19 ICR ( Figure 2C). Similar enrichment of the m1, m3, and m4 regions of the H19 ICR was observed in adult testis samples ( Figure 2B). The analysis of the b region of the Gtl2 DMR in adult testis also appears enriched relative to normal rabbit serum controls ( Figure 2B). The Gtl2 locus ChIP was not evaluated by quantitative PCR. A control genomic region without CTCF consensus sites was negative for enhancement. This control sequence is a unique sequence containing three CpGs with a GC content of 49% located on mouse Chromosome 8. These results demonstrate that CTCFL specifically associates in vivo with both the H19 ICR and the Gtl2 DMR in testis. CTCFL Interacts with the Protein Methyltransferase PRMT7 The amino acid sequences N-terminal to the zinc finger regions of CTCFL and CTCF do not exhibit significant homology, suggesting different functions, possibly by association with distinct proteins. To identify potential interacting proteins, CTCFL was used as bait in a yeast two-hybrid screen with a mouse testis cDNA prey library. Thirteen different mRNAs were present in a total of 33 colonies. We retained a recently described protein arginine methyltransferase PRMT7 and a novel arginine-rich testis-specific histone H2A variant, observed two and seven times, respectively. The cDNA sequence within PRMT7 prey clones encodes the carboxy-terminal 87 amino acids (aa) of the 692-aa protein, downstream of two predicted methyltransferase domains ( Figure 3A). To validate this putative interaction, a CMV-GST (cytomegalovirus-glutathione S-transferase) PRMT7 (full length) was co-expressed in 293T cells with epitope-tagged CTCFL (N-terminal region and full length). Lysates were prepared and GST fusion proteins purified with glutathione beads. 
CTCFL was detected by Western blotting ( Figure 3B). A reciprocal assay with CMV-GST-CTCFL (Nterminal region and full length) and epitope-tagged PRMT7 was also performed. Both assays verified interaction between the N-terminal region of CTCFL and full-length PRMT7 in 293T cells ( Figure 3B). The N-terminal region of CTCFL used in these experiments corresponds to the aa sequence preceding the zinc finger region, which is approximately 255 aa in length. Further evidence of CTCFL-PRMT7 interaction is provided by the presence of CTCFL in the immunoprecipitate of co-expressed PRMT7 ( Figure 3C). An interaction assay with CTCF (CMV-GST-PRMT7 and epitopetagged CTCF) did not detect any association with PRMT7 ( Figure 3D). CTCFL Interacts with Histones H1, H2A, and H3 As mentioned in the previous section, a second CTCFL interaction candidate identified in the yeast two-hybrid screen was a novel testis-specific histone H2A variant. This result prompted us to examine the possibility that CTCFL also interacts with other canonical histones. CTCFL association with histones was initially examined by Farwestern analysis, which detected histones H1 and H3 ( Figure 4A). Recognition of histones H1 and H3, and H2A by CTCFL was further examined by GST pull-down assays. Histones H1, H3, and H2A interacted with both full-length and N-terminal CTCFL ( Figure 4B). No CTCFL was detected in control GST reactions. No detectable PRMT7 was observed in parallel interaction assays with histones ( Figure 4B). PRMT7 Is Expressed in Germ Cells during Testis Development Given the interaction of CTCFL and PRMT7, it was appropriate to evaluate the developmental expression of PRMT7 in the developing testis to determine whether or not it coincides with that of CTCFL. Immunohistochemistry with a-PRMT7 evidenced the expression of PRMT7 in all stages examined ( Figure 5A). All cells within the developing tubule are positive, including gonocytes and spermatogonia that were positive for CTCFL expression. CTCFL Stimulates PRMT7 Methylation of Histones H2A and H4 In Vitro PRMT7 has been recently shown to catalyze arginine methylation of histones H2A and H4. PRMT7 expressed in and purified from bacteria exhibited little to no activity on several protein substrates, including histones H2A and H4. However, PRMT7 immunopurified from transfected HeLa cells exhibited significant activity in methyltransferase reactions. A possible explanation for this increased activity is a post-translational modification of PRMT7 or the presence of an accessory protein(s) in HeLa cells. It has been shown that CTCFL is constitutively expressed in HeLa cells, raising the possibility that CTCFL may function as an accessory protein of PRMT7. To test this hypothesis, we used immunopurified CTCFL and PRMT7 for in vitro methyltransferase assays. Initially, either V5-immunopurified PRMT7 alone or V5 co-immunopurified PRMT7 and CTCFL-myc were used for methyltransferase assays with total histones ( Figure 5B). Methylation of histones H2A and/or H2B and H4 was observed. Enhancement of methylation was observed when CTCFL had been coimmunopurified with PRMT7. To distinguish between histones H2A and H2B, separate reactions were performed ( Figure 5C). For these reactions, CTCFL-Myc and PRMT7-V5 were independently immunopurified with a-Myc and a-V5 antibodies, respectively, and subsequently eluted with corresponding epitope-tagged peptides. Only histone H2A methylation was observed. 
Enhanced methylation of histones H2A and H4 by PRMT7, upon addition of immunopurified CTCFL, supports the idea that CTCFL functions as an accessory protein for PRMT7 activity. CTCFL Directs H19 ICR Methylation in Xenopus Oocyte Injection System Thus far we have shown that the developmental expression profile of CTCFL is coincident with re-establishment of imprints. In addition, CTCFL is bound to H19 ICR in vivo, interacts with PRMT7 and histones H1, H2A, and H3 and stimulates H2A and H4 methylation by PRMT7. The discovery of these molecular interactions prompted us to address the question as to whether or not CTCFL and PRMT7 could participate in specific ICR methylation with the de novo DNA methyltransferases, Dnmt3a, Dnmt3b, and Dnmt3L. To test this hypothesis, we co-injected cDNA expression vectors and ICR plasmids into the nucleus of stage VI oocytes of Xenopus. The advantages of this assay are numerous. There is an abundant reserve of histones, which are capable of packaging injected DNAs into chromatin. Oocytes lack male germline-specific factors. Imprinting does not occur in Xenopus. In addition, one can analyze individual oocytes injected with a small quantity of ICR (40 pg). Co-injection of a GFP (green fluorescent protein) expression plasmid facilitates the identification of successfully injected oocytes. Following DNA injection and oocyte incubation, GFPpositive oocytes were selected by fluorescence microscopy. DNA was then extracted and treated with sodium bisulfite, to determine CpG methylation. Subsequently, the m3 H19 ICR region was amplified by PCR with specifically designed primers, then cloned and sequenced. Controls for DNA contamination were carried out for each series of injected oocytes by analyzing non-injected oocytes in parallel. When CTCFL, PRMT7, Dnmt3a, Dnmt3b, and Dnmt3L expression plasmids were co-injected with H19 ICR, a significant fraction of the CpGs in the m3 region of the H19 ICR were methylated after 48 h of incubation ( Figure 6A). In contrast, no CpG methylation was observed when the H19 ICR alone was injected, indicating the absence of DNA methylation by endogenous factors in the Xenopus oocyte. Coinjection of CTCFL and PRMT7 expression vectors with ICR did not yield any CpG methylation, suggesting that endogenous Dnmt3 activity is lacking or insufficient. We then proceeded with injection of various combinations of the expression plasmids to assess the requirement of each corresponding protein in the observed ICR DNA methylation. Oocytes injected with Dnmt3a, Dnmt3b, and Dnmt3L, and H19 ICR yielded few methylated CpGs ( Figure 6A), demonstrating that co-expression with CTCFL and PRMT7 is crucial for efficient ICR methylation, and that over-expression of all three Dnmt3s alone gives rise to a low level of methylation. Low methylation levels were also observed when either CTCFL or PRMT7 were co-injected with Dnmt3s ( Figure 6A). This result further underscores the need for both CTCFL and PRMT7 in conjunction with the Dnmt3s to achieve significant ICR methylation. The sequence specificity of observed DNA methylation was assessed by co-injecting a control plasmid, without CTCF binding sites (ChIP control in Figure 2), along with CTCFL, PRMT7, and Dnmt3s expression plasmids ( Figure 6B). Only low levels of CpG methylation were observed when the control plasmid was injected, confirming the sequence specificity of CTCFL-mediated ICR methylation. 
Control experiments, replacing the CTCFL expression plasmid with one encoding CTCF, did not yield significant ICR methylation ( Figure 6A). Taken together, these results demonstrate that expression of CTCFL, PRMT7, and Dnmt3s in Xenopus oocytes results in specific and efficient methylation of co-injected H19 ICR. Having demonstrated the value of the Xenopus system to reflect the contribution of individual proteins on specific CpG methylation, we next evaluated the contribution of each Dnmt3 isoform in H19 ICR methylation ( Figure 6C). Omission of either Dnmt3a or Dnmt3b resulted in intermediate methylation levels after 72 h of incubation. This observation suggests that both Dnmt3a and Dnmt3b can participate in ICR methylation in the Xenopus system. When Dnmt3L was omitted from injected expression plasmids, no ICR CpG methylation was observed, highlighting its essential role. Replacing the Dnmt3 expression plasmids by one encoding Dnmt1 does not result in significant ICR methylation, consistent with a specific requirement for the de novo methyltransferases ( Figure 6A). Positive controls, with injection of all expression plasmids and ICR, yielded higher levels of CpG methylation after 72 h of incubation than that observed after 48 h. Expression of PRMT7 and CTCFL in Xenopus Oocytes Yields Detectable Arginine 3 Methylation of Histone H4 Having shown that PRMT7 methylates histones H2A and H4 in vitro, we wished to determine whether or not this modification occurs in oocytes expressing PRMT7 and CTCFL. Given the high copy number of ICR plasmid injected into oocytes during the above experiments (approximately 10 6 ), we reasoned that if PRMT7 in association with CTCFL methylates R3 of histone H4, it would be detectable by analysis of total oocyte histones. To test this idea, histones were extracted from oocytes with sulfuric acid and the resulting Western blots were reacted with a-SYM-R3H4 antibody (directed against a GR Me2(sym) GKGGKG peptide) for detection of SYM-R3H4 (symmetrical dimethyl arginine 3 histone H4; Figure 6D). When ICR was co-injected with expression plasmids for both PRMT7 and CTCFL, an increase in SYM-R3H4 was observed. However, if the CTCFL expression plasmid was replaced with a CTCF expression plasmid or if the PRMT7 expression plasmid was omitted, the level of detectable SYM-R3H4 is lower. An a-histone H3 was used as a control for histone loading. No signal was detected when an a-asymmetrical dimethyl arginine H4 R3 antibody was used to probe the same Western blot (unpublished data). The observed increase in SYM-R3H4 in total oocyte histones suggests that PRMT7, in collaboration with CTCFL, methylates at least histone H4, consistent with in vitro observations ( Figure 5). Developmental Accumulation of SYM-R3H4 Parallels ICR Methylation If SYM-R3H4 is functioning to signal recruitment of the de novo DNA methyltransferases, then one would anticipate developmental accumulation of this modification in testis, paralleling ICR methylation. Immunohistochemistry with a-SYM-R3H4 antibody yields detectable nuclear staining at 17.5 dpc ( Figure 7A). Subsequent developmental stages evidence a progressively stronger signal up to adult stages. The spermatogonia within the adult testis are positive for SYM-R3H4. There is a slight lag in the detection of SYM-R3H4 in testis, relative to the CTCFL expression (17.5 dpc vs. 14.5 dpc), consistent with a modification resulting from a process being initiated around 14.5 dpc. 
A ChIP analysis with a-SYM-R3H4 on adult testis chromatin evidenced a significant enrichment of this histone modification at the m1, m3, and m4 regions of the H19 ICR ( Figure 7B), relative to a control genomic region. Analysis of the b region of the Gtl2 DMR also evidenced the presence of this modification. Discussion The present characterization of CTCFL, a CTCF paralog, supports its involvement in male germline imprinting. ChIP analysis demonstrates that CTCFL is associated in vivo with the H19 ICR at 15.5 dpc. The binding of common regions by CTCF and CTCFL is consistent with the high homology between their zinc finger domains (74% identity, 84% positivity). The mutually exclusive expression in testis excludes competition for binding sites. CTCFL is detected in the male germline during testis development shortly after the erasure of methylation marks on ICR (14.5 dpc), and coincides with maximal expression of the de novo DNA methyltransferases, Dnmt3a and Dnmt3L. The expression of CTCFL during embryonic development and its association with the H19 ICR would suggest that it associates with non-methylated ICR. The observed interaction of CTCFL with PRMT7, and histones H1, H2A, and H3 provides additional insight into its role in imprinting. PRMT7 has been reported to methylate arginine residues of histones H2A and H4 in vitro, and more precise analysis indicated a symmetrical dimethylation of arginine residues. Non-exhaustive experiments with peptide substrates indicate that PRMT7 methylates a glycinearginine-glycine motif, which is present in the N-terminal region of histones H2A and H4. The absence of detectable interaction of PRMT7 and histones ( Figure 4B) and the stimulation of in vitro methylation by CTCFL ( Figure 5) suggests that PRMT7 may rely upon CTCFL for efficient association with histones. The latter could be accomplished in a specific chromatin region due to the DNA binding specificity of CTCFL. The expression of PRMT7 in the embryonic testis is consistent with a role in H19 ICR methylation. Interaction between CTCFL and histones is of relevance from several viewpoints. The reported nucleosome positioning at the H19 ICR places CTCF consensus sequences in the inter-nucleosome linker region. Histone H1 and the Nterminal tail of histone H3 have been shown to stabilize chromatin structure, suggesting that their interaction with CTCFL may stabilize CTCFL's association with ICR. CTCFL interaction with histone H2A, a PRMT7 substrate, has particularly interesting functional implications. Attempts to detect association of H2A with PRMT7 were unsuccessful ( Figure 4B), raising the possibility that CTCFL simultaneously interacts with histone H2A and PRMT7, thus presenting PRMT7 with its histone substrate for methylation. This notion is supported by the observed stimulation of PRMT7mediated methylation of histone H2A by CTCFL ( Figure 5B and 5C). The present results on PRMT7 and its association with CTCFL are analogous to observations made on other PRMTs. PRMT5 has been shown, for example, to form a complex with the hSWI/SNF chromatin remodelers BRG and BRM, and this association enhances PRMT5 methyltransferase activity. PRMT1 interacts with the transcription factor YY1 and is thereby targeted to specific promoters, where it catalyses adjacent histone H4 methylation in vivo. This arginine methylation has been shown to be essential for activation of promoters to which YY1 is bound. 
These observations open the possibility that the PRMT7-dependent increase in SYM-R3H4 observed in Xenopus oocytes is occurring on nucleosomal histones adjacent to the H19 ICR, when CTCFL is expressed. Results of SYM-R3H4 ChIP analysis with testis chromatin are supportive of this interpretation. A consideration of the experiments carried out in Xenopus oocytes should take into account the number of ICR plasmid copies present. Forty picograms of injected ICR plasmid corresponds to approximately 10 6 copies. This elevated copy number probably renders the system less sensitive to endogenous components. The ICR methylation observed in the oocyte experiments is occurring on a significant fraction of this large number of injected ICR plasmid molecules. Secondly, the dependence on the injected expression plasmids for methylation to occur is probably due to the large number of templates. This does not rule out that endogenous factors may participate, yet the primary players in the reaction are likely to be those encoded by the coinjected expression plasmids. Symmetrical dimethylation of R3 in histone H4 by PRMT7 was observed to increase when PRMT7 and CTCFL are coexpressed along with ICR in Xenopus oocytes. This augmentation is dependent on both proteins in that no increase in SYM-R3H4 methylation is observed when CTCF is expressed in place of CTCFL. Furthermore, omission of PRMT7 also evidences no increase in R3H4 methylation. In addition, the progressive accumulation of SYM-R3H4 in developing testis parallels expression of CTCFL and PRMT7. The dependence of ICR methylation in Xenopus oocyte experiments on coexpression of PRMT7 and CTCFL and the observed in vitro interaction of CTCFL with histones are consistent with such activity. The results of Xenopus oocyte microinjections indicate that expression of both proteins is necessary for efficient H19 ICR methylation. In addition, ChIP analysis of adult testis indicated that SYM-R3H4 is present within H19 ICR chromatin. The sequence specificity of the observed ICR methylation is supported by the lack of significant methylation when a control plasmid lacking CTCF consensus sequences was injected in place of H19 ICR ( Figure 6B). Knockout mouse models of Dnmt3a and Dnmt3L have shown that both are necessary for appropriate maternal and paternal imprinting. Results obtained with the heterologous Xenopus oocyte system are consistent with these observations. No CpG methylation was observed when Dnmt3L was omitted from the injected expression plasmids ( Figure 6C). In contrast, Dnmt3b knockout mice do not exhibit any deficiencies in ICR methylation in the male germline, suggesting that expression of Dnmt3b is not critical for this event. Nonetheless, examination of Dnmt3b RNA/protein expression during the developmental interval of ICR methylation indicates that there is significant expression of a splice variant, Dnmt3b2, in testis from 14.5dpc to 17.5dpc. Thus, Dnmt3b may participate in ICR methylation, but is not absolutely essential. Our results would be supportive of such a view. Dnmt1 expression is unable to compensate for the de novo methyltransferase expression, supportive of the specificity of ICR methylation in Xenopus oocyte experiments ( Figure 6 ). Several recent reports have indicated that the mutation of the four characterized CTCF binding sites, m1-m4, does not alter the establishment of H19 ICR imprints in the male germline or the non-methylated status of the ICR in the female germline. 
The created mutations were demonstrated in vitro to drastically alter CTCF binding. However, no in vivo confirmation of altered CTCF binding was presented, leaving open the possibility that alternative binding sites for CTCF exist within the 1.8-kilobase ICR sequence. In the present report, we demonstrate in vivo binding of CTCFL to the H19 ICR, however, we have not investigated the precise sequence to which it binds. In order to reconcile the CTCF binding site mutant studies with the present results, one must consider the possibility that the sequence recognition of CTCFL is similar to, but distinct, from that of CTCF. Closer inspection of the sequence homology of the individual zinc fingers is consistent with such an interpretation. Two zinc fingers (fingers 1 and 11) of CTCFL show no significant homology with CTCF (twosequence alignment), whereas the other fingers range from 54% to 95% amino acid homology (average, 67% identity). A question that arises from the present results concerns the nature of recruitment of the Dnmt3s to H19 ICR. Clearly the de novo DNA methyltransferases are efficiently recruited, given the level of specific methylation observed, yet the mechanism remains undefined. A possible explanation may come from recent studies on the PWWP domain. The observation that the PWWP domain shares similarities in both sequence and structure with the Tudor domain, which recognizes symmetrical dimethylated arginines, opens the possibility that dimethyl arginine-containing histones would be capable of recruiting the Dnmt3a and Dnmt3b subunits through interaction with their PWWP domains. Alternatively, there is a bridging protein(s) present in Xenopus oocytes that is responsible for the recognition of the dimethyl arginine histones, and interaction with this protein(s) recruits the Dnmt3s. A recent publication by Vatolin et al. describes partial demethylation of the MAGE-A1 promoter either following treatment of cells with 5-aza-29-deoxycytidine or upon overexpression of CTCFL. However, neither the mechanism of demethylation nor the role of CTCFL in the demethylation was examined. In an effort to integrate the individual observations of protein interactions and the observed ICR methylation in Xenopus injection experiments, we propose a model for ICR methylation (Figure 8). The initial event is recognition and binding of the ICR chromatin by the zinc finger region of CTCFL, which then recruits PRMT7 to the chromatin site by interaction via its N-terminal region. Subsequently, PRMT7 catalyses the methylation of arginine residues in the GRG motif present in adjacent histones H2A and H4. The latter event is stimulated by the ICR-bound CTCFL scaffold. Following disengagement of the CTCFL-PRMT7 complex, the de novo DNA methyltransferases are recruited to the ICR chromatin region, possibly through recognition of dimethylated arginine residues by the PWWP domains of Dnmt3a and Dnmt3b or by a bridging protein provided by the oocyte. Materials and Methods a-CTCFL antibody preparation and characterization. Immunization of rabbits with GST N-terminal CTCFL (1-229 aa) fusion protein was done in collaboration with Eurogentec (Seraing, Belgium). The fusion protein was expressed in Bl21 bacteria and purified using standard protocols. Use of the antibody in both immunoprecipitation and ChIP is described in the relevant sections. Characterization of the anti-sera is presented in Figure 1A. ChIP analysis. 
ChIP was performed according to the Upstate Biotechnology protocol (Upstate Biotechnology, Waltham, Massachusetts, United States) with minor changes. Adult mouse testes and embryonic gonads were homogenized in 5 ml and 1 ml of phosphate-buffered saline (PBS), respectively, and fixed with formaldehyde (1% v/v) for 10 min at room temperature. The reaction was stopped with 125 mM glycine. After washing twice with PBS, the cells were lysed with 1 ml SDS lysis buffer (1% SDS, 10 mM EDTA, 50 mM Tris-HCl) for 15 min on ice and subsequently sonicated 2 × 45 s at 30% power (SONOPLUS Ultrasonic homogenizers HD 2070; Bandelin Electronics, Berlin, Germany) to obtain an average DNA length of 500 nt. In the case of 15.5-dpc testis, chromatin was reduced in size by digestion with restriction enzymes, according to Murrell et al. The sample was centrifuged at 20,000 × g for 15 min, and the chromatin supernatant was diluted 10-fold with dilution buffer (0.01% SDS, 1.1% Triton X-100, 1.2 mM EDTA, 16.7 mM Tris, 167 mM NaCl, 11% glycerol) and stored at −80 °C. After preclearing, 1.5 ml of prepared chromatin (approximately one third of a testis) was incubated overnight at 4 °C with 40 µl of salmon sperm DNA/protein A agarose slurry and 10 µl of either α-CTCFL or pre-immune serum. The samples were washed with 1 ml of each buffer and eluted twice with 250 µl elution buffer. NaCl was added (final concentration 300 mM), and the samples were incubated at 65 °C for 5 h to reverse crosslinking, and subsequently treated with proteinase K mix (final concentration: 40 µg ml⁻¹ proteinase K, 40 mM Tris-HCl, 10 mM EDTA) at 45 °C for an additional 2 h. After extraction with phenol/chloroform, the samples were precipitated with ethanol plus glycogen carrier. DNA was recovered in 50 µl of H2O. Real-time PCR analyses were carried out on all H19 ICR ChIP samples. Optimization of reactions was carried out with all primer probe sets. Further, linearity of primer probe sets was verified by DNA dilution series over 3 logs. Standard curves of input DNA were carried out with all PCR analyses of ChIP. Primer and TaqMan probe sequences are available on request. The analysis of the real-time PCR of ChIPs was according to Litt et al. Yeast two-hybrid screen. A yeast two-hybrid screen was performed with the Matchmaker GAL4 Two-Hybrid System 3 (Clontech, Palo Alto, California, United States) according to the manufacturer's instructions. A mouse testis cDNA library was prepared using a library construction kit (Clontech). All screening was carried out with four drop-out selection media. Isolated candidates were further screened for β-galactosidase activity. DNA was prepared from prey candidates by standard procedures and amplified by PCR with primers flanking the cloning site. Each insert was sequenced from the 5′ end to allow identification by BLAST and to verify retention of the upstream open reading frame. All selected prey candidates were retransformed into bait strains to verify interactions. Vector construction. GST fusion clones of PRMT7, CTCFL, and N-terminal CTCFL (1-229 aa) were constructed in a CMV-GST eukaryotic expression vector. Myc-tagged CTCFL, and V5-tagged PRMT7, CTCF, N-terminal CTCFL, and Dnmt3L clones were ligated into the EF-1α eukaryotic expression vector (Invitrogen, Carlsbad, California, United States). Both CMV-GST fusion and Myc- or V5-tagged expression vectors were transfected into 293T cells with CaPO4. Dnmt3a and Dnmt3b expression constructs were a kind gift of F. Fuks and T. Kouzarides.
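The real-time PCR readout of a ChIP experiment is typically reduced to an enrichment value relative to input chromatin; the authors follow Litt et al. for this step. The snippet below is only a generic, hedged illustration of the common percent-of-input style of calculation (the Ct values, the input fraction and the pre-immune comparison are invented for the example) and does not reproduce their exact normalization.

```python
# Hedged sketch: percent-of-input calculation for ChIP-qPCR.
# Ct values and the input dilution fraction are hypothetical.
import math

def percent_of_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """Convert Ct values to percent of input, assuming ~100% primer efficiency.
    The input Ct is first adjusted for the fraction of chromatin it represents."""
    adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

# Hypothetical Ct values for an H19 ICR m3 amplicon.
ip_ct, input_ct, preimmune_ct = 27.1, 24.0, 30.5
m3_ip = percent_of_input(ip_ct, input_ct)
m3_bg = percent_of_input(preimmune_ct, input_ct)
print(f"alpha-CTCFL IP: {m3_ip:.3f}% of input; pre-immune serum: {m3_bg:.3f}% of input")
print(f"fold enrichment over background: {m3_ip / m3_bg:.1f}x")
```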
All expression constructs used in Xenopus oocyte injections were confirmed by Western blot analysis with lysates from transfected 293T cells. GST fusion histones H1, H2A, and H3 were constructed in pGEX vectors and expressed in the Escherichia coli BL21 strain. Details of vector construction are available on request. Protein extraction. Transfected 293T cells were washed twice in cold PBS and then resuspended in 1 ml (per 10-cm dish) of lysis buffer (150 mM NaCl, 50 mM Tris-HCl, 10% glycerol, protease inhibitors: PMSF and Complete from Roche). After sonication (4 × 5 s at 25% power), Triton X-100 was added to a final concentration of 0.5%, samples were centrifuged at 20,000 × g for 15 min, and the supernatant was transferred to separate tubes. Myc- and V5-tagged protein lysates were snap-frozen and stored at −80 °C until use. CMV-GST fusion bait proteins, after extraction, were further purified. A total of 50 µl of glutathione-sepharose beads (G-4510; Sigma, St. Louis, Missouri, United States) was added to GST fusion protein lysates and incubated for 2 h at 4 °C. The immobilized GST fusion proteins were washed four times in lysis buffer plus 0.5% Triton X-100 and stored at 4 °C until use. A similar protocol of extraction and purification was followed for GST fusion proteins expressed in bacteria, with different sonication conditions (5 × 10 s at 50% power). Histones were extracted from Xenopus oocytes by incubation on ice in 0.4 N H2SO4 for 1 h. Proteins in the resulting supernatant were precipitated with ten volumes of acetone overnight at −20 °C. Protein interaction assays. GST pull-downs with fusion proteins expressed in bacteria were performed according to standard procedures. Co-transfection of CMV-GST fusion (bait) and epitope-tagged (prey) constructs in 293T cells is referred to as an in situ GST interaction assay. In situ GST pull-down reactions were done overnight at 4 °C. In all assays, following the interactions, beads were washed 4 times in lysis buffer, boiled in 2× SDS loading buffer and analyzed by Western blotting. Histone methyltransferase reactions. Immunoprecipitation of V5-tagged PRMT7 and myc-tagged CTCFL was performed according to standard procedures using protein G beads. The reactions with separate histones H2A and H2B (Roche, Basel, Switzerland) were done as follows. V5- or myc-immunopurified PRMT7 and CTCFL were eluted using epitope peptides V5 and myc, respectively. Equivalent amounts of PRMT7 were used in histone methyltransferase reactions with or without CTCFL using ¹⁴C-methyl S-adenosyl methionine (Hartmann Analytic, Braunschweig, Germany). Reactions were incubated overnight at 30 °C. Following the reactions, proteins were separated on 15% PAGE, embedded with 18% sodium salicylate, dried, and then exposed for fluorography. PRMT7 and CTCFL were coimmunoprecipitated using α-V5 antibody for methyltransferase reactions with total histones (Roche). Nuclear microinjection into Xenopus oocytes. Nuclear microinjection into stage VI oocytes was done as previously described with minor modifications.
Injections into the nucleus were performed without prior centrifugation. The DNAs injected per oocyte were: 40 pg of target DNA (ICR or non-ICR), 200 pg of each expression plasmid with CMV or EF-1α promoters (CTCFL, PRMT7, Dnmt3a, Dnmt3b, and Dnmt3L) and 200 pg of pEGFP-C1 vector (Clontech). Injected oocytes were incubated at 19 °C for 48 or 72 h. The total amount of plasmid DNA injected thus depended upon the precise experiment carried out, yet gave a total amount of 1,240 pg maximally when six expression plasmids were injected. No carrier plasmid was used to normalize total injected plasmid DNA between experiments. Western analysis of Dnmt3a and Dnmt3b expression indicated no perturbation of expression of either protein when expressed alone or together ( Figure S1). DNA isolation, bisulfite modification, and sequencing. DNA was isolated from GFP-positive oocytes by incubation in STOP buffer (1% SDS, 30 mM EDTA, 20 mM Tris-HCl, 500 µg ml⁻¹ proteinase K) for 2 h at 37 °C. Subsequently the DNA was extracted twice with phenol/chloroform and ethanol precipitated. DNA was recovered in H2O, mixed with 2 µg salmon sperm DNA, and digested with the restriction enzyme ScaI (Roche) for 2 h at 37 °C, subsequently phenol/chloroform extracted, ethanol precipitated, and resuspended in 50 µl H2O. Bisulfite treatment of DNA was based on a previously described method. DNA was denatured by adding 5.5 µl of freshly prepared 3 M NaOH, incubated at 95 °C for 3 min, and immediately placed on ice. A total of 500 µl of freshly prepared Na-bisulfite mix (2.85 M Na-bisulfite, 2.7 mM hydroquinone, and 400 mM NaOH) was added and the sample placed in a PCR machine for bisulfite conversion (2 h at 55 °C, 1 min at 95 °C, 1 h at 55 °C, 1 min at 95 °C, and 1 h at 55 °C). The samples were desulfonated on columns from the QIAquick PCR purification kit (Qiagen, Valencia, California, United States) and purified with buffers from the same kit as follows. Five volumes of PB buffer were added to the samples, and the mixture was applied to columns and washed with PE buffer. Samples were desulfonated with NaOH/ethanol (150 mM/90%) at room temperature for 10 min, washed with PE, and eluted in 50 µl EB buffer (10 mM Tris-HCl) for 15 min at 70 °C. A total of 5 µl of bisulfite-converted DNA was amplified in 50-µl PCR reactions (40 cycles; annealing temperature 50 °C) using the corresponding meth-primers. (Primer sequences are available upon request.) PCR products were gel purified, cloned into the pGEM-T vector, and sequenced with the BigDye Terminator v1.1 Cycle Sequencing Kit (Applied Biosystems, Foster City, California, United States) in an ABI 3100 sequencer. Efficiency of bisulfite modification was determined by evaluating the conversion to T of non-CpG Cs. The maximum number of non-converted Cs observed was one out of 51. Figure S1. Western Blot Analysis of Dnmt3a and Dnmt3b Expressed in Nuclear-Injected Xenopus Oocytes Left and right panels show analysis of Dnmt3a and Dnmt3b, respectively. Nuclear injections into Xenopus oocytes: 3a (GFP and Dnmt3a), 3aT (GFP, Dnmt3a, Dnmt3L, ICR, and CTCFL and PRMT7), 3a/b (GFP and Dnmt3a and Dnmt3b), 3a/bT (GFP, Dnmt3a, Dnmt3b, Dnmt3L, ICR, and CTCFL and PRMT7), 3b (GFP and Dnmt3b), and 3bT (GFP, Dnmt3b, Dnmt3L, ICR, and CTCFL and PRMT7). There is no change in expression of either protein when expressed alone or together. Note, the panels showing GFP expression in the samples 3a/b and 3a/bT are identical. Found at DOI: 10.1371/journal.pbio.0040355.sg001 (2.6 MB DOC).
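Scoring the bisulfite data amounts to comparing each sequenced clone with the untreated reference sequence: cytosines in a CpG context that remain C are read as methylated, while conversion of non-CpG cytosines to T gauges bisulfite efficiency, as described above. The short sketch below illustrates that logic on made-up toy sequences; it is not the analysis pipeline used in the study.

```python
# Hedged sketch: scoring CpG methylation and bisulfite conversion efficiency
# from a bisulfite-sequenced clone aligned to the untreated reference.
# The reference and clone sequences are invented toy examples.

def score_clone(reference: str, clone: str):
    """Return (methylated CpGs, total CpGs, converted non-CpG Cs, total non-CpG Cs)."""
    meth = total_cpg = converted = total_non_cpg = 0
    for i, base in enumerate(reference):
        if base != "C":
            continue
        in_cpg = i + 1 < len(reference) and reference[i + 1] == "G"
        if in_cpg:
            total_cpg += 1
            if clone[i] == "C":          # unconverted C in a CpG -> methylated
                meth += 1
        else:
            total_non_cpg += 1
            if clone[i] == "T":          # converted non-CpG C -> efficient bisulfite reaction
                converted += 1
    return meth, total_cpg, converted, total_non_cpg

reference = "ACGTTCGATCCATCGAACCT"
clone     = "ACGTTTGATTTATCGAATTT"  # first and third CpGs unconverted; all non-CpG Cs converted

m, n, conv, non = score_clone(reference, clone)
print(f"CpG methylation: {m}/{n} ({100 * m / n:.0f}%)")
print(f"bisulfite conversion of non-CpG Cs: {conv}/{non} ({100 * conv / non:.0f}%)")
```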
Pd-Catalyzed C-H Activation: Expanding the Portfolio of Metal-Catalyzed Functionalization of Unreactive C-H Bonds by Arene-Chromium Complexation Palladium-catalyzed C-H activation reactions have revolutionized modern strategies for the rapid generation of complex molecules. Principally, C-H activation has been accomplished by the use of directing groups that form transient chelates with the transition metal and direct it for selective C-H cleavage. In contrast, the selective functionalization of non- or weakly acidic sp3 and sp2 C-H bonds (pKa > 35) that, in addition, are distal to a directing functionality remains a challenge. η6-Arene-transition-metal complexes, and η6-arene-chromium tricarbonyls in particular, are important intermediates in organic chemistry (Figure 1A). Coordination of a transition
OPTICAL FLOW DETECTION USING AN IMPROVED BLOCK MATCHING ALGORITHM In the last few years, researchers have shown a growing interest in the computer-based detection and tracking of moving objects. The Block Matching Algorithm (BMA) approach is employed in the MPEG standard, as well as in a large variety of optical flow detection techniques. The BMA divides the current frame into a number of blocks and searches for a match in the next frame, in order to estimate the displacement of blocks between two successive frames. Due to the large search space, the classical BMA is computationally greedy, and a number of versions were developed to reduce the computational load. We present a new BMA based on a number of observations about the classical BMA implementation. We have improved the computation time by reducing the number of operations and loops required. We have also found an intrinsic optical flow regularization method and therefore obtain smoothed motion vectors.
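For reference, the core of the classical full-search BMA can be written in a few lines: for each block of the current frame, a matching cost is evaluated over all candidate displacements within a search window in the next frame. The sketch below uses the sum of absolute differences (SAD) and conventional example values for block size and search range; these are generic choices for illustration, not necessarily those of the paper.

```python
# Hedged sketch: classical full-search block matching with a SAD criterion.
# Block size and search range are conventional example values.
import numpy as np

def block_matching(curr: np.ndarray, nxt: np.ndarray, block: int = 8, search: int = 7):
    """Return per-block motion vectors (dy, dx) estimated between two grayscale frames."""
    h, w = curr.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = curr[by:by + block, bx:bx + block].astype(np.int32)
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = nxt[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(ref - cand).sum()
                    if best is None or sad < best:
                        best, best_v = sad, (dy, dx)
            vectors[by // block, bx // block] = best_v
    return vectors

# Toy usage: a bright patch shifted by (2, 3) pixels between frames.
f0 = np.zeros((32, 32), dtype=np.uint8); f0[8:16, 8:16] = 200
f1 = np.zeros((32, 32), dtype=np.uint8); f1[10:18, 11:19] = 200
print(block_matching(f0, f1)[1, 1])   # expected displacement close to [2 3]
```

The four nested loops make the cost of this exhaustive search obvious, which is precisely the computational load the improved versions of the algorithm aim to reduce.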
Situational Awareness for Smart Home IoT Security via Finite State Automata Based Attack Modeling Smart Home Internet of Things (SHIoT) provides a rich compendium of innovative, ubiquitous, and interactive services to users using a variety of smart sensors, devices and applications. However, owing to the strongly internet-facing, dynamic, heterogeneous, and low-capability nature of these devices, and the existence of vulnerabilities in them, in their controlling applications and in their configurations, there are security threats in SHIoT that affect the safe and secure functioning of these systems. Moreover, owing to the rich interactions with human users, these systems are more vulnerable to security attacks. On the other hand, because of the complexity of the SHIoT system, it is difficult to effectively determine its security posture. What is lacking is a comprehensive model that would allow security analysts to capture and analyze the nature of the interactions between the different devices, applications and human users, and the vulnerabilities and misconfigurations in them, in order to understand the weak spots in the SHIoT system and prepare for potential security attacks. Towards this end, we propose a finite state automata (FSA) based framework to build attack models of SHIoT. We present a formalism for such a model and show through several scenarios how the model enables one to obtain a better understanding of the security posture of the system. Furthermore, an FSA-based attack model offers more opportunities for tool support for automated analysis using techniques such as model checking.
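To make the general idea concrete, a minimal automaton of this kind can be encoded as a set of states (device and application conditions) and labeled transitions (exploits of vulnerabilities or misconfigurations), after which reachability of an unsafe state can be checked mechanically. The sketch below is illustrative only: the state names, transition labels and the simple reachability check are assumptions for the example and are not the formalism proposed in the paper.

```python
# Hedged sketch: a toy finite state automaton for a smart-home attack scenario.
# States, transition labels and the unsafe target state are illustrative assumptions.
from collections import deque

# transitions[state] -> list of (action, next_state)
transitions = {
    "secure":               [("phish_user_credentials", "app_account_hijacked"),
                             ("exploit_camera_firmware", "camera_compromised")],
    "app_account_hijacked": [("push_malicious_automation", "hub_rules_tampered")],
    "camera_compromised":   [("pivot_over_local_network", "hub_rules_tampered")],
    "hub_rules_tampered":   [("unlock_smart_lock", "physical_access_gained")],
}

def reachable(start: str, target: str):
    """Breadth-first search for an attack path from start to target state."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        for action, nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None

path = reachable("secure", "physical_access_gained")
print("attack path:", " -> ".join(path) if path else "target state unreachable")
```

Reachability of the unsafe state, and the sequence of transitions that leads there, is the kind of question a model checker can answer automatically once the full SHIoT model is expressed in this form.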
Plunging Ranula as Initial Manifestation of HIV-AIDS: A Case Report and Literature Review Introduction: Ranulas are defined as the extravasation of mucus into an intraoral cystic cavity produced by an injury to the excretory ducts or the acini of the sublingual gland. Plunging ranulas are generated when the salivary collection penetrates the mylohyoid through a dehiscence of its fibers, invading the submandibular space. The close relationship between these lesions and HIV-AIDS infection has been reported since 2004, with a 75.88% rate of positive cases. The objective of this article is to present the management of a patient with a plunging ranula, which made it possible to reach the diagnosis of HIV-AIDS. Case Report: A 28-year-old male patient presented with a painless and fluctuating swelling in the laterocervical space and the floor of the mouth, which produced dysphagia and dyslalia. Serological tests for HIV were carried out and had a reactive result. Surgical treatment was performed via an intraoral approach, with a favorable evolution at 12 months of follow-up. Conclusion: Ranulas, and particularly plunging ranulas, should be considered in the group of oral lesions associated with HIV-AIDS infection, as they may even be the initial manifestation of the disease.
Waste Profiling and Analysis using Machine Learning A large amount of solid waste is generated in urban areas, and these wastes contain different types of substances like organic waste, paper, plastic, metal, glass, etc., which need to be treated separately for efficient waste management. This means that the wastes, which are dumped all together, need to be separated into categories before treatment. To support this process, the Government introduced the concept of wet and dry wastes so that civilians can dump wastes accordingly. If these norms are followed strictly, a huge amount of the budget for waste segregation will be saved, which can be used for further waste treatment. Keeping this in view, this paper proposes a system that can classify waste as dry or wet based solely on an image of the waste. Focusing on simplicity, the intention is to propose an application that only requires civic bodies to upload captured images of garbage bins to the system, which then analyzes whether the garbage is wet, dry or mixed. The detection of the contents of the garbage is the crucial aspect, and it is done using machine learning; a minimal baseline for this step is sketched below. This idea can contribute in the near future to analyzing the waste disposal habits of people in different locations. This analysis can then be used to create awareness in the required locations and help improve waste disposal habits.
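The classification step can be prototyped with very little code once labeled images are available. The sketch that follows is a deliberately simple baseline (downscaled grayscale pixels fed to a logistic regression classifier); the dataset folder layout, image size and choice of classifier are assumptions made here for illustration and are not the system described in the paper.

```python
# Hedged sketch: a minimal wet/dry waste image classifier baseline.
# Folder layout, image size and the choice of classifier are assumptions.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def load_dataset(root: str, size=(64, 64)):
    """Expects root/dry/*.jpg and root/wet/*.jpg; returns flattened grayscale features."""
    X, y = [], []
    for label, cls in enumerate(["dry", "wet"]):
        for img_path in Path(root, cls).glob("*.jpg"):
            img = Image.open(img_path).convert("L").resize(size)
            X.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
            y.append(label)
    return np.array(X), np.array(y)

X, y = load_dataset("garbage_bins")          # hypothetical dataset directory
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```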
Empirical pension indicators: Cross-country comparisons and methodology for Russia The article proposes a system of empirical indicators for Russia, taking into account the analysis of foreign and Russian approaches to assessing the adequacy of the level of pension provision. One group of indicators, designed for cross-country comparisons, is based on the methodology of the European Commission. The results of calculations of the proposed indicators on Russian data are presented, which made it possible to compare the level of pensions in Russia and European Union countries. The article defines the limitations of indicators for cross-country comparisons in terms of assessing the level of pension payments within the Russian system of compulsory pension insurance. For a more adequate assessment of the adequacy of payments, a second group of indicators was developed that takes into account the particularities of the Russian pension system. A distinctive feature of the proposed approach is that the empirical indicators are focused primarily on assessing the adequacy of actual pension payments in terms of fulfilling the functions assigned to them: protection from poverty, compensation (replacement) of wages, and ensuring the balance of income. The authors propose to evaluate these indicators not only on the data of population surveys, as is most common in foreign practice, but also on the administrative data of the Pension Fund of the Russian Federation.
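One commonly used cross-country indicator in the European Commission family of measures, the aggregate replacement ratio, is simple enough to state in code: the median individual pension income of people aged roughly 65-74 relative to the median individual earnings of people aged roughly 50-59. The sketch below is a generic illustration computed on invented records; it does not reproduce the authors' calculations on survey or Pension Fund administrative data, and the age bands and units are assumptions for the example.

```python
# Hedged sketch: an aggregate replacement ratio from individual-level records.
# The sample records are invented; real estimates would use survey or
# administrative microdata with appropriate weights.
import statistics

records = [
    # (age, pension income, labour earnings) in arbitrary monthly units
    (68, 19000, 0), (71, 15500, 0), (66, 21000, 0), (73, 14000, 0),
    (52, 0, 42000), (55, 0, 38000), (58, 0, 47000), (51, 0, 35000),
]

pensions = [p for age, p, _ in records if 65 <= age <= 74 and p > 0]
earnings = [e for age, _, e in records if 50 <= age <= 59 and e > 0]

arr = statistics.median(pensions) / statistics.median(earnings)
print(f"aggregate replacement ratio: {arr:.2f}")
```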
Optimal design of inner evaporative cooling permanent magnet wind power generator Combining the fractional-slot concentrated winding (FSCW) with an inner evaporative cooling system (IECS) provides a new approach to cooling permanent magnet (PM) wind turbines. A wind generator with FSCW and IECS has some special characteristics compared with a traditional wind generator with distributed winding (DW) and an air cooling system, and these must be considered during the design work. In this paper, we study this situation and propose an optimal design method. A 5 MW PM generator designed by the optimal design method is analyzed in no-load and rated-load electromagnetic field simulations by the finite element method (FEM) to prove the validity of the design model and method.
Silencing allatostatin expression using double-stranded RNA targeted to preproallatostatin mRNA in the German cockroach. YXFGL-NH family allatostatins (ASTs) were isolated from cockroach brain extracts based on their capacity to inhibit juvenile hormone (JH) biosynthesis in corpora allata (CA) incubated in vitro. Subsequently, the inhibitory activity of synthetic ASTs was demonstrated experimentally, although these peptides were shown to be active as JH inhibitors only in cockroaches, crickets, and termites. Here, we sought to examine whether ASTs are true physiological regulators of JH synthesis. To this end, we used RNA interference methodologies and the cockroach Blattella germanica as a model. Treatments with double-stranded RNA targeting the allatostatin gene in females of B. germanica produced a rapid and long-lasting reduction in mRNA and peptide levels in both brain and midgut during the reproductive cycle. Nevertheless, while brain AST levels were reduced approximately 70-80%, JH synthesis did not increase in any of the age groups tested. |
A DESIGN RATIONALE ANALYSIS METHOD TOWARDS ROBUST ARTIFACT DESIGN Abstract To design a more robust artifact, an artifact design based on a design rationale analysis is pivotal. Errors in previous design rationales that led to the degradation of artifact robustness in the past provide valuable knowledge towards improving the robust design. However, methods for exposing and analysing errors in design rationale remain unclear. This paper proposes a structured method for a design rationale analysis based on logical structuring. This method provides a well-constructed means of identifying and analysing errors in design rationale from the perspective of knowledge operation. |
Awareness of Skin Self-Assessment as an Early Detection Tool for Skin Cancer ABSTRACT: The incidence of skin cancers is increasing at an alarming rate. If recognized and treated in early stages, skin cancer is nearly 100% curable. Precancerous lesions can be eliminated before becoming malignant. It is therefore extremely important to assess and screen for changes in the skin. Utilizing the health belief model as the conceptual framework, this study sought to determine the public's awareness of (a) the importance of skin self-assessment in the early detection of skin cancer, (b) the proper technique for self-assessment, and (c) factors associated with performance or nonperformance of self-assessment. A scripted interview was used with participants to determine their attitudes toward skin self-assessment with regard to susceptibility, seriousness, perceived benefit, perceived barriers, health motivation, and confidence related to skin self-assessment. The findings of this study indicated that a majority of the respondents believed that skin cancer is a serious condition but was not viewed as a concern unless it had personally affected them. |
Mucormycosis of middle ear in a diabetic patient Mucormycosis is an infection caused by fungi belonging to the class Zygomycetes, with high mortality and morbidity rates. Mucormycosis is acquired by inhalation of spores or via the cutaneous route. The common risk factors for invasive mucormycosis include diabetes mellitus, high-dose glucocorticoid therapy, and neutropenia. The most common clinical manifestation of mucormycosis is rhinocerebral lesions. Other manifestations are pulmonary, cutaneous, disseminated, and gastrointestinal. Ear involvement is extremely rare. The authors describe a case of mucormycosis involving a cholesteatoma, with a concomitant central nervous system lesion, in a patient with diabetes mellitus that responded to therapy.
Primitive defense mechanisms in schizophrenics and borderline patients. In this study, patients with neurotic disorders, borderline patients, acute schizophrenics, and chronic schizophrenics were studied with regard to primitive defense mechanisms. Primitive defense mechanisms were assessed by means of the Lerner Defense Scale (LDS). In this study, the LDS was applied to the Holtzman Inkblot Technique. With the exception of primitive idealization, borderline patients used all primitive defense mechanisms significantly more frequently than patients with neurotic disorders, that is, splitting, projective identification, primitive denial, and primitive devaluation. Compared with both acute and chronic schizophrenics, borderline patients used primitive devaluation significantly more frequently. Both acute and chronic schizophrenics differed from patients with neurotic disorders by using splitting and projective identification significantly more frequently. However, there were differences concerning primitive devaluation and idealization. The defense structure of chronic schizophrenics was heterogeneous. Except for primitive idealization, all primitive defense mechanisms correlated significantly with self-report measures of identity diffusion and impaired reality testing, which is consistent with theoretical assumptions. By discriminant analysis, 90% of the borderline patients, 80% of the patients with neurotic disorders, 76% of the acute schizophrenics, and 92% of the chronic schizophrenics were classified correctly.
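A classification result of the kind reported here (the percentage of each group correctly assigned from its defense-scale profile) is typically obtained with a linear discriminant analysis of the scale scores. The snippet below shows that generic procedure on simulated scores; the feature names, group means and data are invented for illustration and do not reproduce the study's analysis.

```python
# Hedged sketch: discriminant classification of diagnostic groups from
# defense-mechanism scores. The simulated scores are purely illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
groups = ["neurotic", "borderline", "acute_schizophrenic", "chronic_schizophrenic"]
# Five assumed features: splitting, projective identification, denial,
# devaluation, idealization (arbitrary group means).
means = np.array([[1, 1, 1, 1, 2],
                  [4, 4, 3, 4, 2],
                  [3, 3, 2, 2, 1],
                  [3, 3, 2, 1, 1]], dtype=float)

X = np.vstack([rng.normal(m, 0.8, size=(30, 5)) for m in means])
y = np.repeat(np.arange(4), 30)

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=5)
cm = confusion_matrix(y, pred)
for i, g in enumerate(groups):
    print(f"{g:>22}: {100 * cm[i, i] / cm[i].sum():.0f}% correctly classified")
```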
Practical power consumption analysis with current smartphones In this paper, we analyzed the power consumption of all Samsung Galaxy smartphones to explore modern smartphones' power consumption characteristics. With dedicated measurement and analysis, we found that some previously emphasized power-hungry consumers, like Wi-Fi and the multimedia codec, consume very little power in modern smartphones, and video adaptation no longer achieves a significant power-saving impact. Meanwhile, other hardware components such as the cellular network module, GPU and camera emerge as considerable power consumers, and these might be the most effective optimization targets for designing future power-efficient smartphones.
A Current Appraisal of Problems with Gangrenous Bowel Gangrenous bowel most often results from hernia, adhesions and mesenteric insufficiency. The overall mortality rate for 151 cases was 37%. This figure was 20% for hernia, 23% for adhesions and 74% for mesenteric insufficiency. In the latter category, where bowel resection was feasible, the mortality rate was 40%. Other causes of bowel gangrene had a mortality rate of 28%. In many instances the pathophysiologic processes were of such a nature that current medical expertise has not reached a level of development to effectively cope with the situation. There were, however, a significant number of cases where survival might have been achieved had it not been for deficiencies on the part of the patient, the primary health care personnel or those in attendance at the referral center. The basic keystone for a successful outcome in the management of patients with gangrenous bowel is early surgical intervention. All will be lost if the patient's exposure to this source of lethal toxins is allowed to proceed to an irreversible stage. Liberal antibiotic administration probably postpones the arrival of intractable hypotension. Other factors which can be expected to improve the survival rate include minimization of technical errors, repair of incidental hernias, elimination of dependence upon nasogastric tubes for the definitive management of patients with complete bowel obstruction (with one or two exceptions), and a firm commitment to the diligent pursuit and early definitive management of postoperative complications.
A simple noise generation method for millimeter-wave therapy apparatus Millimeter wave therapy (MMWT) is now one of the recognized electric physical therapy methods. The many techniques available are distinguished from each other by complex radiation parameters (frequency range, intensity, available frequency and/or peak modulation) and by the method of application to the patient (the duration and number of exposures in a course of treatment, the particular irradiated zones of the patient's body, and the effect on acupuncture loci). That is why the spectral structure of the radiation is considered an important parameter. The radiation spectra used in practice range from highly coherent oscillations to quasi-noise and broadband (ten percent) noise signals. In recent years, good therapeutic results from noise exposure in all parts of the electromagnetic spectrum have been reported by investigators in experimental and clinical medicine, and the efficiency of such treatment has been shown to be higher than that of quasi-harmonic exposure. |
A-TIP: Attribute-aware Text Infilling via Pre-trained Language Model Text infilling aims to restore incomplete texts by filling in blanks, and it has attracted more attention recently because of its wide application in ancient text restoration and text rewriting. However, attribute-aware text infilling is yet to be explored, and existing methods seldom focus on the infilling length of each blank or the number/location of blanks. In this paper, we propose an Attribute-aware Text Infilling method via a Pre-trained language model (A-TIP), which contains a text infilling component and a plug-and-play discriminator. Specifically, we first design a unified text infilling component with modified attention mechanisms and intra- and inter-blank positional encoding to better perceive the number of blanks and the infilling length for each blank. Then, we propose a plug-and-play discriminator to guide generation towards the direction of improving attribute relevance without decreasing text fluency. Finally, automatic and human evaluations on three open-source datasets indicate that A-TIP achieves state-of-the-art performance compared with all baselines. Introduction Originating from Cloze tests, text infilling aims to fill in missing blanks in a sentence or paragraph by making use of the preceding and subsequent texts. For example, given two infilling tasks E1 and E2 in Fig.1, text infilling models are supposed to provide fine-grained control over the location of any number of blanks and infill a variable number of missing tokens for each blank. Text infilling has been gaining increasing attention in a number of prevailing research fields, including ancient text restoration (), text editing and rewriting (), and conversation generation (). However, current text infilling methods are based only on bidirectional semantic constraints (), and other abundant attribute-based constraints, e.g., sentiment and topics, remain to be studied. In reality, infilling attribute-aware content can better satisfy human needs and introduce more diversity. For instance, as shown in Fig.1, A-TIP can fill in blanks under the guidance of an attribute to satisfy sentiment or expert-knowledge infilling, while current text infilling models mainly focus on fluency, which leads to meaningless and monotonous infilling content (). Designing a simple but efficient attribute-aware text infilling model is a challenging task. First, to achieve attribute awareness, simply modifying a text infilling model architecture or fine-tuning with attribute-specific data will destroy the model's ability to infill blanks or require a significant cost for re-training. Second, if the model infills blanks towards the direction of improving text attributes, avoiding ill-formedness between the infilling content and its bidirectional context becomes a challenge. For instance, "The movie interesting and perfect us" with _ as blanks. Finally, current methods lack fine-grained control over automatic determination of the number/location of blanks or the infilling length for each blank. For example, Markov assumption-based models () hardly adapt to variable infilling lengths, while masked language model (MLM)-based methods () are incapable of generating more than one word per blank, and generative LM-based methods () cannot guarantee the output will match the number of missing blanks in the input. 
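As a quick orientation, the following is a toy sketch (not taken from the paper) of the input/output contract that attribute-aware text infilling implies: text with blanks of unknown length plus an attribute constraint goes in, and text with each blank filled by a variable number of tokens comes out. The `InfillingExample` class, the `[blank]` marker, and the example strings are purely illustrative.

```python
# Toy illustration of the attribute-aware infilling task; all names and strings
# here are hypothetical and only mirror the task description above.
from dataclasses import dataclass
from typing import Optional

BLANK = "[blank]"  # placeholder for a span of one or more missing tokens

@dataclass
class InfillingExample:
    masked_text: str              # text containing [blank] placeholders
    attribute: str                # guiding attribute, e.g. a sentiment or topic label
    filled_text: Optional[str] = None

example = InfillingExample(
    masked_text=f"The movie was {BLANK} and the ending {BLANK}.",
    attribute="positive",
)

# An attribute-aware infilling model is expected to return something like:
example.filled_text = "The movie was genuinely moving and the ending felt earned."
print(example)
```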
To circumvent the above dilemma, in this paper, we propose an Attribute-aware Text Infilling method based on a Pre-trained LM (A-TIP), in which a plug-and-play discriminator provides finegrained control over bidirectional well-formed fluency and attribute relevance. 1 Specifically, 1) we first propose a general text filling framework that fine-tunes a standard LM with many artificiallymasked examples in an auto-regressive manner. Moreover, to ensure that the number of infilling contents equals the number of blanks, we design a new attention mechanism, where unmasked tokens can attend to each other but masked tokens can attend only to the preceding context ( Fig.2 (A)). We also adopt two-level positional encoding to combine inter-and intra-blank positional information to automatically learn the length of blanks. 2) To achieve attribute-aware generation without modifying LM's architecture or re-training, we propose a plug-and-play discriminator that shifts the output distribution of the text infilling model towards the semantic space of given guide attributes. We also design two additional strategies to ensure the infilling content is well-formed with its bidirectional context without decreasing attribute relevance. The main contributions are summarized as follows: We propose a unified text infilling model that adopts a new attention mechanism and two-level positional encoding to enable our model to learn the number/location of blanks and infilling length for each blank automatically. To the best of our knowledge, A-TIP is the first attribute-aware text infilling model that does not require any modification of the language model's architecture or re-training on specific attributed datasets. Further, our plug-and-play discriminator can provide fine-grained control over fluency and attribute relevance, and can be applied to any transformer decoder-based text infilling model. The experimental results on three open datasets show that A-TIP achieves state-of-the-art performance compared with all baselines. Related Work In this section, we briefly review the most relevant studies to our work on pre-trained LMs, text infilling, and constrained text generation. Pre-trained Language Models Pre-trained LMs have made significant improvements in many natural language processing tasks by adopting self-supervised learning with abundant web texts (;). They can be classified into three types. The first uses an auto-encoding model. For example, BERT () and its variations are pre-trained as masked LMs to obtain bidirectional contextualized word representations. The second adopts an encoder-decoder architecture, which is pre-trained for seq2seq tasks, such as MASS () and T5 (). The third adopts an auto-regressive model, which follows a left-to-right manner for text generation, such as GPT-2 () and XLNet (). While we adopt GPT-2 as the LM in this paper, our method can be easily migrated to any type of pre-trained LMs. Text Infilling Approaches Current text infilling algorithms can be classified into four categories. Generative adversarial networks (GAN)-based methods train GANs to ensure that the generator can generate highly reliable infilling content to fool the discriminator (;). Intricate inference-based methods adopt dynamic programming or gradient search to find infilling content that has a high likelihood within its surrounding context ). Masked LM-based methods generate infilling content on the basis of its bidirectional contextual word embedding (;). 
LM-based methods fine-tune off-the-shelf LMs in an auto-regressive manner, and a number of methods change the input format by putting an infilling answer after the masked input (), while others do not change the input format (). Unlike the aforementioned methods, we solve a more complex task: attribute-aware text infilling. Constrained Text Generation Traditional controlled generation models involve either fine-tuning existing models or training conditional generative models (). proposed a plugand-play controlled generation model (PPLM), which does not modify or re-train the parameters of the original LM but can achieve comparable performance to fine-tuning methods. For example, PPCM updates the hidden state towards the direction of attribute enhancement to generate attribute-aware conversations. Pascual et al. designed a complex plug-and-play architecture to ensure that the generated content contains specific keywords. While GeDi () and its extension (Lin and Riedl, 2021) can accelerate the decoding process of PPLM, they assume the model is trained by large-scale labeled datasets, which is unrealizable for text infilling. Unlike the previous work, we should also consider the generated infilling content is well-formed with its corresponding bidirectional context, ensuring PPLM is suitable for text infilling. Preliminaries To clarify our method, we first introduce some essential background knowledge and then define the task of attribute-aware text infilling. Language Models reveal the degree of how much a sentence (a sequence of words) is likely to be a realistic sequence of a human language. Formally, let W be the vocabulary set and w 1:n = {w 1,..., w n } is a sentence with n words, where w i ∈ W. An LM measures the joint probability by decomposing the sequence one by one: where w <i = {w 1,..., w i−1 }. Constrained Text Generation: Given k explicit constraints c = {c 1,..., c k }, our goal is to generate a sentence w that maximizes the conditional probability p(w|c): Task Definition: Attribute-aware text infilling is to take incomplete text w, containing one or more missing blanks, and return completed text w under the constraints of c. As in Fig.1, several attributes are listed in c. Specifically, let be a placeholder for a contiguous sequence of one or more missing tokens. Then, w is a sequence of tokens in which a number of them are . To map w to w, constrained with attribute c, an infilling strategy must specify both how many and which tokens to generate for each . Note that there may be many logical w for a given w. Hence, we are interested in learning a distribution p(w| w, c). Specifically, in accordance with Bayes' theorem, we formulate the probability of predicting the token w i for its corresponding as: where p(w i |w <i, c) can be decomposed into two parts that deal with the LM for p(w i |w <i ) and the discriminator for p(c|w 1:i ). In Section 4, we introduce these two parts in detail. We assume that any two constraints are independent: p(c|w 1:i ) = k j=1 p(c j |w 1:i ). Methodology The overall framework of A-TIP is shown in Fig.2. A-TIP contains two components: a text infilling model and a plug-and-play attribute-aware controller. Text Infilling Model Given a corpus consisting of complete text examples, we first create infilling examples and then train the GPT-2 with these examples. Specifically, given an input example w 1:n with n tokens, we first randomly replace m non-overlapping word spans S = {s 1,..., s m } in w with tokens to form a corrupted text w. 
We also assume each span s i contains n i consecutive tokens . Then, we concatenate the spans S separated by tokens to form a training target S = {, s,..., s (1,n 1 ), ,..., , s (m,1),..., s (m,nm) }. Finally, we construct a complete infilling example by concatenating w and S (see Token Embedding in Fig.2). There are two advantages of designing such an input format. First, we add only 2m additional tokens (one and one per blank as shown in Fig.2 "Token Embedding" add 4 tokens for two spans). Although memory usage for GPT-2 grows quadratically with sequence length, as m is small, additional training time complexity will be minimal. Second, we can apply two different attention strategies for the corrupted text w and training target text S. As shown in Fig.2 (A) (B) = = Figure 2: Model overview. We first fine-tune an off-the-shelf GPT-2 by adopting a new attention mechanism and two-level positional encoding to infill blanks. Then, we design a plug-and-play discriminator to guide generation in the direction of improving attribute relevance. We also adopt KL divergence and a threshold-based strategy to provide fine-grained control over fluency and attribute relevance. adopting such an attention mechanism, when A-TIP infills the i-th blank s i, it will focus on the bidirectional context of the i-th blank, which can ensure the well-formedness and rationality of the whole sentence. Current methods hardly perceive the number/location and infilling length for each blank. We design two-level positional encoding, which can provide fine-grained control over them. Specifically, each token is encoded with two position IDs. The first position ID represents the inter-position in the corrupted text w and the second position ID represents the intra-position in each span. Finally, A-TIP trains the GPT-2 with the infilling examples in an auto-regressive manner. When predicting missing tokens in each blank, A-TIP has access to the corrupted text w and the previously predicted blanks. Formally, the probability of generating the i-th blank s i is where are parameters for the GPT-2, n i represents the number of tokens in s i, s i,j represents the jth token in the span s i, s <i represents previously predicted blanks, and s i,<j = {s i,1,, s i,j−1 }. Plug-and-play Attribute-aware Controller To clarify our approach, we follow the notation of and define the GPT-2 decoding process (Eq.) in a recursive manner. Specifically, we first define H t, that contains all historical key-value pairs, i.e., t ) stores all key-value pairs of t tokens in the l-th layer. Then, we formally define the recurrent decoding process to generate the i-th token as: where o i is the hidden state of the input at i-th time-step. Then, we sample the i-th generated token from the following distribution by beam search (Hokamp and Liu, 2017): where W is a parameter matrix that maps the hidden state o i to a vector of the vocabulary size. In accordance with Bayes' theorem in Eq., we have p(w i |w <i, c) ∝ p(w i |w <i ) p(c|w 1:i ). To achieve attribute-aware text infilling, when we infill the i-th blank, we shift history matrix H i−1 towards the direction of the sum of two gradients: 1) To maximize the log-likelihood of the attribute c under the conditional attribute model p(c|w 1:i ) and 2) To ensure high fluency of text infilling p(w t |w <i ). We update only H i−1 and fix other model parameters unchanged since next-token prediction depends only on the past key-value pairs via H i−1. 
Thus, we propose to gradually update H i−1 to guide future generation in the desired direction. Let ∆H i−1 be the update to H i−1 to shift the generation infilling content towards the desired attribute direction c. At the beginning of the generation, ∆H i−1 is initialized to zero, and we can obtain the unmodified distribution as p i. Then, we update ∆H i−1 with gradients from the attribute model that measures the extent to which the generated text possesses the desired attribute. Following, we rewrite p(c|w 1:i ) as Pb = p(c|H i−1 + ∆H i−1 ) and define the gradient up-date for ∆H i−1 as where is the learning rate and is the scaling coefficient for the normalization term to control the relevance of the attribute. We repeat Eq. less than 10 times to generate attribute-aware tokens. Subsequently, the new H i−1 = H i−1 + ∆H i−1 is computed, and a new token is generated using o i, H i = GPT-2(w <i, H i−1 ). The described optimization process is repeated for every token in the generated sequence. Compared with the unconditional LM-based text generation task, this process will not take much time (see detail in experiments). Although we can generate attribute-aware infilling content, we can easily generate low-quality, repetitive, and low-fluency text. Thus, we add two additional components to ensure the fluency and quality of generated infilling content with its bidirectional context. First, we minimize the KL divergence between the unmodified distribution p i and modified distribution p i for the i-th token: Our objective function can be reformulated as where is a parameter to balance the fluency and attribute relevance. Then, we update ∆H i−1 as: Intuitively, we can generally find many words that have different levels of correlations with the specific attribute. For example, {perfect, good, bad, like} can mainly determine the sentiment of a sentence. Thus, we define Gain from the attribute to determine whether to change a generated word. As shown in Fig.2, two candidate words are sampled from the unmodified distribution (before back propagation) and modified distribution (after back propagation), respectively. Gain between two candidate words in the conditional model can be formulated as where w i and w i are samples from the modified and unmodified distributions, respectively. To better control the relevance of the attribute, we define a threshold to determine whether to generate a word from the modified distribution. Specifically, Gain > represents that the word generated from the modified distribution can have a relatively remarkable effect on attributes. Otherwise, if the discriminator does not guide well at certain steps (Gain <), we select the word generated from the unmodified distribution to maintain the fluency to be the same as the original unconditional text infilling model to the greatest extent. Discriminator Construction: As shown in Fig.2 (B), for simplicity, we train a linear classifier f as a discriminator with annotated datasets, indicating a sentence and label pair as (w, y). Specifically, for each sentence w of length t, we compute the set of hidden states o = {o 1,..., o t } from the GPT-2. Then, we compute the mean of o as and train f using the cross-entropy between the true label distribution y and predicted label distribution f (). The number of parameters in this layer is (embedding dimension number of attributes + number of attributes), which is negligible compared with the number of parameters in the text infilling model itself. 
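To make the controller concrete, here is a heavily simplified sketch of the plug-and-play idea described above: the past key-value cache of GPT-2 is perturbed by gradients from a small attribute classifier, with a KL term pulling the modified distribution back towards the unmodified one. This is an illustration rather than the authors' implementation; it assumes a Hugging Face `transformers` version that still accepts the legacy tuple format for `past_key_values`, and the names `step_size`, `kl_scale`, and `n_steps` are placeholders for the learning rate, balancing weight, and number of update steps mentioned in the text. The Gain-threshold choice between the two returned distributions is left to the caller.

```python
# Simplified PPLM-style perturbation of GPT-2's past key-values (illustration only).
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()
attr_head = torch.nn.Linear(lm.config.n_embd, 2)  # toy linear attribute discriminator


def next_token_dists(input_ids, attr_id=1, step_size=0.01, kl_scale=0.4, n_steps=3):
    """Return (unmodified, attribute-shifted) next-token distributions.

    input_ids must contain at least two tokens so a past cache exists.
    """
    with torch.no_grad():
        past = lm(input_ids[:, :-1], use_cache=True).past_key_values  # legacy tuple format
        last_tok = input_ids[:, -1:]
        p_unmod = F.softmax(lm(last_tok, past_key_values=past).logits[:, -1, :], dim=-1)

    # One trainable perturbation per cached key/value tensor (per layer).
    deltas = [tuple(torch.zeros_like(t, requires_grad=True) for t in layer)
              for layer in past]

    for _ in range(n_steps):
        shifted = tuple(tuple(t + d for t, d in zip(layer, dl))
                        for layer, dl in zip(past, deltas))
        step = lm(last_tok, past_key_values=shifted, output_hidden_states=True)
        # Simplified attribute signal: linear classifier on the current hidden state.
        h_mean = step.hidden_states[-1].mean(dim=1)
        attr_loss = F.cross_entropy(attr_head(h_mean), torch.tensor([attr_id]))
        # KL term keeps the shifted distribution close to the unmodified one.
        p_mod = F.softmax(step.logits[:, -1, :], dim=-1)
        kl = F.kl_div(p_mod.log(), p_unmod, reduction="batchmean")
        (attr_loss + kl_scale * kl).backward()
        with torch.no_grad():
            for dl in deltas:
                for d in dl:
                    d -= step_size * d.grad / (d.grad.norm() + 1e-12)
                    d.grad.zero_()

    with torch.no_grad():
        shifted = tuple(tuple(t + d for t, d in zip(layer, dl))
                        for layer, dl in zip(past, deltas))
        p_mod = F.softmax(lm(last_tok, past_key_values=shifted).logits[:, -1, :], dim=-1)
    return p_unmod, p_mod  # the Gain threshold then chooses between the two


ids = tok("The movie was", return_tensors="pt").input_ids
p_base, p_shifted = next_token_dists(ids)
```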
Experimentation As shown in Table 1, we evaluated the proposed methods on three tasks to demonstrate that our framework is not custom tailored to a single domain: sentiment-aware, domain knowledge-aware, and topic-aware text infilling. We also show a case study for these tasks. We determined whether A-TIP can generate infilling text that satisfies the desired attribute and whether it can infill high-quality text in blanks by using both automated methods and human annotators. Experimental Settings Datasets In addition to using the datasets in Table 1 to train our text infilling model, we also adopted sentiment labels in SST-5 (Pang and Lee, 2005) area labels in Abstracts () for domain knowledge-aware text infilling, and topic labels in ROCStories () for topic-aware text infilling. For the datasets with attribute labels like SST-5 and Abstracts, we can directly use their labels to train our plug-and-play discriminator. However, considering that most datasets do not have attribute labels, we adopted COMBINETM () to detect attributes for them (details in Appendix A). For example, for ROCStories, we can detect thirteen attributes and prove that A-TIP can generate a relevant topic in human evaluation (Table 3). We split the datasets into 80%/10%/10% as training/validation/test data, respectively. Following TIGS and BLM (), we randomly masked r% tokens in each document. To ensure that all experiments are performed on the same data, we removed infilling examples that exceed our training sequence length of 256 tokens. Evaluation Metrics In automated evaluation, perplexity is a measure for fluency in open-domain text generation. 2 We measured it using GPT-2. The diversity of text was measured using the number 2 Overlap-based metrics such as BLEU scores () are not appropriate for evaluating infilling as there are many realistic infills that have no word-level overlap with the original. of distinct n-grams (normalized by text length) as in Li et al.. We reported Dist1, Dist2, and Dist3 scores for the distinct 1, 2, 3-grams. Following, we used an external classifier to evaluate Accuracy (macro-average Fscore) for sentence attribute labels. We evaluated the attribute control for sentiment (SST-5) with an external sentiment classifier with XLNet (), which was trained with the IMDB dataset. We chose a BERT-based classifier () for the Abstracts dataset. The t-test was used to evaluate the significant performance difference between two approaches (Yang and Liu, 1999) for both automated and human evaluations. Baselines We compared A-TIP with six baselines that can be classified in four classes (Section 2.2): 1) Inference-based: We trained TIGS, an RNN-based seq2seq model. At inference time, we iteratively searched tokens in continuous space and projected their vectors to real words. 2) GAN-based: We trained the generator of MaskGan () on PLM with a seq2seq architecture. The discriminator can make word distributions of the generator closer to those of the real word distribution. 3) Masked LM-based: We used representations of blanks as seeds to fine-tune BERT () and Roberta. At inference time, blanks are infilled one after another and are conditioned on the previous generation. We trained BLM () with a seq2seq architecture, where the encoder module is a transformer (base) and the decoder process adopts beam search. 4) LM-based: We trained ILM () by fine-tuning GPT-2 to output a full document from a masked input. Note that it may have invalid outputs that do not match the input format. 
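For reference, the distinct n-gram diversity metric mentioned above can be computed along the following lines. This is a generic sketch rather than the authors' evaluation script; it normalises the count of unique n-grams by the total number of generated tokens, which is one common reading of "normalized by text length".

```python
# Generic sketch of the Dist-1/2/3 diversity metric (unique n-grams / total tokens).
from collections import Counter

def dist_n(token_lists, n):
    total_tokens, ngrams = 0, Counter()
    for toks in token_lists:
        total_tokens += len(toks)
        ngrams.update(zip(*[toks[i:] for i in range(n)]))
    return len(ngrams) / max(total_tokens, 1)

generations = [
    ["the", "movie", "was", "great"],
    ["the", "movie", "was", "boring"],
]
print(dist_n(generations, 1), dist_n(generations, 2), dist_n(generations, 3))
```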
Implementation Details In our experiments, we set the learning rate = 1e − 4 and the scaling coefficient = 0.5 for Eq.. Sequence representations were obtained by the GPT-2 module (12 layers, 12 heads, n embd = 768, n ctx = 1024, batch size = 24). We applied the Adam (Kingma and Ba, 2015) optimizer with an initial learning rate of 1e-4, and the weight decay and dropout were turned based on the loss on the validation data. Our discriminator has a linear layer on the head of GPT-2. For a fair comparison, we followed the default parameter settings of the baselines and repeated all experiments 10 times to report the average accuracy. The unpaired t-test was used to evaluate the significant difference between any two approaches as multiple comparisons (details in Appendix B) for both automated and human evaluations. We trained models with early stopping. Following, we evaluated the attribute control for sentiment with an external sentiment classifier. Parameter Sensitivity A-TIP uses two hyperparameters. dominates the attribute relevance of generated text and can control the fluency of infilling content. We analyzed the parameter sensitivity on all three validation data and selected the validation data of SST-5 as an example to determine the parameter sensitivity of A-TIP. As shown in Figs.3 (A-C), we observed how and affect the performance of A-TIP by varying from 0.2 to 0.6 in 0.1 intervals and from 0.008 to 0.012 in 0.001 intervals. The results indicated that A-TIP obtain the best performance when ∈ and ∈ . The reason why these parameters can affect the results is that when < 0.4, the attribute relevance becomes stronger and the fluency gets destroyed. > 0.5 weakens both the attribute relevance and text diversity. When < 0.01, A-TIP tends to preserve modified words, which leads to low fluency. When > 0.012, A-TIP preserves the original unmodified words, which causes low attribute relevance and diversity of text. To achieve a balanced performance, we set =0.4 and =0.01 on all datasets in our experiments. Considering that the mask rate r is also a hyperparameter, we analyzed its effect on the results by varying it from 10% to 70%. We found the same trend on all datasets and took SST-5 as an example. As shown in Fig.3 (D), the fluency decreased when r varies from 10% to 40% because infilling content may be well-formed with its bidirectional context. As r increased from 40% to 70%, the fluency of text mainly depends on the baselines' original generation ability, which is stable. Fig.3 (E) shows that when r increases, the baselines cannot recover the attributes of infilling content well. However, A-TIP can generate attribute-aware text to improve the classification accuracy. All baselines can obtain stable fluency and classification accuracy when r = 50%, we fixed r= 50% to show numerical experimental results in the later experiments. Automated Evaluation We evaluated the performance of A-TIP on attribute-aware text infilling by measuring PPL, Dist1, Dist2, Dist3, and ACC on the test data. Table 2 shows, A-TIP outperformed other baselines, indicating that our proposed framework can take advantage of the bidirectional context and attribute information. Additionally, ILM can achieve good results on PPL because it also adopts GPT-2 for text infilling. 
However, compared to the one-layer positional encoding and auto-regressive attention mechanism in ILM, A-TIP/Dis (A-TIP without the discriminator) achieves better fluency (PPL) because it adopts the modified attention mechanism (Fig.2 (A)) to effectively learn the length of each blank and focuses on the number/location of blanks through two-level positional encoding (intra- and inter-blank). A-TIP obtained more accurate sentence attributes than the other baselines, which demonstrates that A-TIP can generate text that satisfies the desired attribute. While accuracy improved by 8% compared with the baselines, we observed that ILM and BERT also yield high classification accuracy. This is because we randomly masked 50% of tokens in the original input without considering whether a token carries a specific attribute. We did not deliberately mask attribute-relevant tokens, which helps the sentence maintain its original attribute. If all attribute-relevant tokens were masked, we could obtain better results. For a fair comparison, we randomly masked tokens instead of masking specific tokens. Ablation Study To verify the effect of each component in A-TIP, we conducted an ablation study. Specifically, A-TIP/Dis does not include the plug-and-play discriminator, and the text infilling part remains unchanged; A-TIP/KL does not include the KL loss and threshold-based strategy. Table 2 shows that A-TIP/Dis improves text fluency while reducing attribute relevance, and A-TIP/KL increases attribute relevance while decreasing text fluency. This is because the discriminator guides generation towards the attribute-aware direction while sacrificing fluency to a certain extent. By incorporating KL and a threshold, A-TIP achieves a better-balanced performance. Human Evaluation We considered two types of human annotation: fluency and attribute relevance (Attri-Rele). Annotators were asked to evaluate the fluency/attribute relevance of each individual sample on a scale of 1∼5, with 1 being Not fluent/Not relevant at all and 5 being Very fluent/Very relevant, as in (). We randomly selected 100 samples for each baseline from each test set and asked ten people on Amazon Mechanical Turk to rate the fluency and attribute relevance of each sample. We then used the average scores of the ten annotations as final scores (see more detail in Appendix C). As shown in Table 3, A-TIP achieved the highest score compared with the baselines, indicating that sentences infilled by A-TIP are not only more fluent but also more attribute relevant. Somewhat surprisingly, we observed that BERT, TIGS, and MaskGan yield the worst performance. BERT performed poorly due to the intrinsic difficulty of finding convincing infilling content of a suitable length. TIGS and MaskGan may have performed poorly because, unlike ILM and A-TIP, they were not initialized from a large-scale pre-trained LM. Running Time Comparison To generate attribute-aware tokens, we apply the update in Eq. fewer than 10 times for each token. As shown in Fig.5, we compare the running time between A-TIP/Dis and A-TIP to confirm that the additional time cost is small. Specifically, we randomly selected 30 samples from the SST-5 and ROCStories datasets, where SST-5 contains short sentences and ROCStories contains mostly long sentences. Then, we varied the mask rate from 30% to 70% for each selected sample to make our results more reliable. As shown in Fig.5, compared with the unconditional LM-based text generation task, updating the hidden state towards the attribute-relevant direction adds only a small amount of additional time. 
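Since several of the comparisons above hinge on perplexity under GPT-2, here is one standard way to compute it with the Hugging Face `transformers` library. The paper does not spell out its exact tokenisation or batching choices, so treat this as an approximation of the metric rather than the authors' script.

```python
# Fluency as GPT-2 perplexity: exponentiated mean per-token negative log-likelihood.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean per-token negative log-likelihood
    return math.exp(loss.item())

print(perplexity("The movie was surprisingly good and the acting felt natural."))
```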
Case Study We conducted a case study to show the infilling ability of A-TIP. Specifically, as shown in Fig.4, we first infill the blanks with sentiment words. We chose Roberta and BLM for comparison because these two methods obtain the best results in this case. Roberta infills the blanks with two contradictory words (funny and heartbreaking), whereas humans do not produce such contradictory and complex emotional expressions. BLM can unify the expression of emotion, but it cannot ensure the fluency of the generated sentence. In contrast, we can control A-TIP to generate positive or negative infilling content with high fluency. For the second case, we explore whether A-TIP can generate domain knowledge for a specific area. We chose BERT and TIGS for comparison since these two methods obtain the best results in domain-knowledge infilling. We find that they cannot generate expert-knowledge infilling content; they tend to generate correct but high-frequency infilling content, which is generally meaningless and monotonous (;;). However, we can control A-TIP to generate both CS-related and Math-related infilling content by constraining the attribute as CS or Math. Conclusion In this paper, we presented A-TIP, a simple strategy for text infilling that leverages an LM by proposing new attention mechanisms and two-level positional encoding to effectively improve the quality of generation in limited data settings. Furthermore, our plug-and-play discriminator can guide generation towards the direction of improving text attribute relevance. In future work, we plan to incorporate the plug-and-play discriminator into more systems that assist humans in the writing process, and we hope that our work encourages more investigation of text infilling. A Detailed Information for Datasets As shown in Table 4, we give the number of examples, the total number of words and the detailed attribute labels for three widely used datasets: SST-5, ROCStories and Abstracts. We selected these three datasets since we would like to check whether A-TIP can infill the blanks with sentiment words, domain knowledge and topics. For datasets with attribute labels, like SST-5 (sentiment labels) and Abstracts (domain knowledge labels), we can directly use their labels to train our plug-and-play discriminator. However, considering that most datasets, like ROCStories, have no labels, we extend our method to deal with this situation. Intuitively, we could construct a general attribute-based plug-and-play discriminator to guide different datasets to generate different infilling content. However, in practical operation, it is unrealistic to build such a general attribute-based discriminator to guide infilling generation because downstream datasets have a variety of different attribute requirements. Therefore, we need to generate specific category labels for different downstream datasets to satisfy their specific attribute-related needs and use them to guide infilling generation. Specifically, we extend our model to more applications by combining it with any topic exploration algorithm to mine topic labels on unlabeled datasets. For instance, we adopt COMBINETM () to detect topic attributes for the ROCStories dataset using two methods, Contextual and Combined. As shown in Table 5, we adopt three metrics to evaluate the quality of the attributes of the ROCStories dataset: Topic Coherence, Inverted RBO and NPMI. 
We chose 13 topics as our final labels since this setting has the best average performance across all metrics. As shown in Fig.6, we draw a topic similarity graph among the thirteen topics. We find that the similarity within topics is high and the similarity between topics is low, demonstrating that the detected topics have high quality and low redundancy. We adopt these 13 topic labels to train the discriminator for the ROCStories dataset, and we achieve the best topic-relevance performance in the human evaluation. B Benjamini-Hochberg procedure The Benjamini-Hochberg (B-H) procedure is a powerful tool that decreases the false discovery rate (Benjamini and Hochberg, 1995). Considering the reproducibility of multiple significance tests, we describe how we apply the B-H procedure and give the hyper-parameter values that we used. Specifically, we first adopt the t-test (Yang and Liu, 1999) with default parameters 3 to calculate the p-value between each compared algorithm and A-TIP. Then, we put the individual p-values in ascending order as input to calculate the p-values corrected by B-H. We directly use the "multipletests(*args)" function from a Python package 4 and set the false discovery rate hyperparameter Q = 0.05, which is the widely used default value (). Finally, we obtain the cut-off value as the output of the "multipletests(*args)" function, where the cut-off is the dividing line that determines whether two groups of data differ significantly. Specifically, if the p-value is smaller than the cut-off value, we conclude that the two groups of data are significantly different. C Detailed Information for Human Evaluation We show the human evaluation in Fig.7. We adopt fluency and attribute relevance as our evaluation metrics. For the labelled datasets SST-5 and Abstracts, we use their labels as attributes. For unlabeled datasets like ROCStories, we construct labels as their attributes. We list detailed scores from 1 to 5 for each metric. |
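Appendix B's significance-testing pipeline (unpaired t-tests of each baseline against A-TIP followed by Benjamini-Hochberg correction via `multipletests` with Q = 0.05) can be reproduced along these lines; the score arrays below are invented placeholders, not the paper's reported results.

```python
# Sketch of the Appendix B pipeline: t-tests per baseline, then B-H correction.
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

a_tip_scores = [0.71, 0.69, 0.73, 0.70, 0.72]          # hypothetical repeated runs
baseline_scores = {
    "ILM":  [0.64, 0.66, 0.63, 0.65, 0.64],
    "BLM":  [0.61, 0.60, 0.62, 0.59, 0.61],
    "BERT": [0.58, 0.57, 0.59, 0.60, 0.58],
}

pvals = [ttest_ind(a_tip_scores, s).pvalue for s in baseline_scores.values()]
reject, p_corrected, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for name, p, sig in zip(baseline_scores, p_corrected, reject):
    print(f"A-TIP vs {name}: corrected p = {p:.4f}, significant = {sig}")
```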
Habermas on Heidegger and Bataille: Positing the Postmetaphysical Experience This article critically examines Habermas' discussion of Martin Heidegger's philosophy of Being and George Bataille's heterology as a way of identifying the postmetaphysical stance as the guiding spirit of Habermas' modernity. In his work The Philosophical Discourse of Modernity, Habermas argues that whereas Heidegger's Being sacrifices actuality in the name of interpretation, Bataille's heterology sets up the unlimited experience which fails to provide an impetus for societal critique. Here a postmetaphysical approach is envisaged by Habermas as a way of going beyond the confines of the metaphysical tradition, although it also needs to pay attention to charges of misreading in its attempt to deconstruct the discourse of the modern. Introduction In a world of increasing skepticism towards metaphysical systems and the attempt to subsume particularity and difference into a general system, Habermas' critical theory explicitly advocates postmetaphysical thinking. In such an attempt, the epistemological problem of certainty is replaced by linguistic commerce, and the isolated I by the intersubjective realm. In his engagement with Martin Heidegger and George Bataille in The Philosophical Discourse of Modernity, Habermas resists the philosophy of Being and the heterogeneous that try to dislocate the boundaries of communicative action. In criticizing Heidegger's philosophy of Being, Habermas argues that Being posited as the ultimate point of analysis undermines actuality and also negates potential critique from individual actors. As such, Heidegger's Being reincarnates Nietzsche's Dionysus as the enigmatic other of reason. In turn, Bataille's heterology, in its attempt to destroy conventional boundaries that limit the unique human experience, only ends up positing a nonrational experience that negates all possible communal validation. Section one starts by analyzing Habermas' critique of Heidegger's philosophy of Being. Then in section two, Bataille's heterology is critiqued in light of the communicative paradigm and intersubjective recognition. This is followed by section three, which presents postmetaphysical thinking as the alternative approach that Habermas uses to critique dominant philosophical traditions. Finally, in section four, Habermas' discourse of modernity is scrutinized in terms of charges of misreading that question its interpretation of the various thinkers within the discourse. Habermas on Heidegger's philosophy of Being For Habermas, the fact that Heidegger's philosophy arrives at an analysis of beings through time that ends up in tracing everything to Being can easily be seen in the "four operations that Heidegger undertakes in his confrontation with Nietzsche" (Habermas, 1987: 131). First, Heidegger restored to philosophy its traditional status of being the highest authority on truth. As Habermas sees it, what the young Hegelians had effected was the primacy of the particular and the material over the ideal, concrete relations against thought, sensibility over reason and the immediate over the conceptual. The result of this was that philosophy lost its status as the judge of all truth claims. Heidegger again empowered philosophy by calling for an ontological analysis, analyzing things in their wholeness through a horizon and interpreting them, in trying to contemplate how Being manifests itself in beings and how the existential structure of these beings as a whole could be laid out. 
Traditionally metaphysics has taken over the task of interpreting "beings". So, Heidegger tried to destruct the "history of metaphysics", by reminding it of its "forgetfulness" of Being, and philosophy was given the task of unraveling this forgetful metaphysical tradition. Still, how does this affect Heidegger's critique of modernity? Secondly, at the same time, Horkheimer and Adorno were writing the Dialectic of Enlightenment which criticizes Western rationality for being immersed in an instrumental rationality, Heidegger was also depicting "European modern dominance of the world" (Ibid., 132), as being the result of the "will to power" and its excessive manifestations. The 'overman' as expressing the truth of "will to power" and "eternal recurrence of existence" was manifesting itself in the truth of the European dominance of the world. This dominance resulted in a fierce struggle to manipulate the materials of the world. Hence, "Heidegger sees the totalitarian essence of his epoch characterized by the global techniques for mastering nature, waging war, and racial breeding" (Ibid., 133) Heidegger locates the modern "European Man" and his dominance of the world as a logical result of the modern conception of man as it is developed from Descartes to Nietzsche. In modern philosophy, man became the center and measure of things. This was pioneered by Descartes' thinking "I", and culminated in Nietzsche, as man becomes the one who expresses the truth as "will to power" and existence as "eternal recurrence". The "overman" becomes the one who actualizes the "will to power" in its fullest sense, by exploiting others, including other humans As William J. Richardson sees it, Heidegger conceived man for Nietzsche in the sense that; If the Being of beings is will unto power, what must be said about the nature of man? His task is to assume his proper place among the ensemble of beings according to the nature of Being which permeates them all. More precisely this means to endorse with his own will, this dominion over the earth of universal will by assuming the responsibility of achieving to the limit of his possibility the global certification in which the truth and value of all constants consist. (Richardson, 1967, 373) For Habermas, this understanding of Heidegger of the interpretation of Being in a being that actualizes itself, and is at the center of all things, made Heidegger unable to differentiate between the positive and negative sides of the modern project. Thirdly, it's Heidegger's analysis and the fact that he is trying to go beyond modern period that leads to his destruction of the metaphysical tradition. The philosophy of modern period started with Descartes' cogito and culminated in Nietzsche's attempt to think of Being, as a universal desire for power that's best expressed in man's urges to actualize its inclinations. Further, Heidegger saw the present as chaotic and questioned whether it heralds the beginning of another period or consummation of the historical process. So, Heidegger conceptualized the need to decipher the nature of the present, and in his philosophy of Being tried to salvage the present. Still, Nietzsche's desire to revive the present through the revival of a past ideal like 'Greek tragedy' is replaced by Heidegger's vision of how the future comes out of a proper relation with the past and the present. 
This idea of the coming future and the reformulation of the metaphysical tradition in its forgetfulness, as Habermas claims were influenced by "romantic models, especially Holderlin, the thought figure of the absent God, so as to be able to conceive of the end of metaphysics as a 'completion', and hence as the unmistakable sign of another beginning" (Habermas, 1987, 135). Further, Nietzsche's Dionysus is taken over by Heidegger's Being, specifically in the "ontological difference". Heidegger differentiated between the ontological or the concern with Being as such, and the ontic or contemplation of beings in their particularity. In this scheme, both "Being" and "Dionysus" are what are absent, and manifest themselves in the particular. In Heidegger's case, being manifests itself in things and entities, while in Nietzsche, Dionysus shows itself in the passionate, emotional and non-rational. Hence, "only Being, as distinguished from beings by way of hypostatization, can take over the role of Dionysus" (Ibid.). Finally, for Habermas, Heidegger does not escape the philosophy of the subject in his attempt to destruct the metaphysical tradition, since he is still, trying to employ Husserl's phenomenology as the method that excavates the existential structures within which Dasein is said to dwell daily. Heidegger criticized the traditional approach to knowledge, the subject/object distinction and presupposed an idea that Being comes before knowing and that knowing is just one form of being. For Heidegger, to be in the world doesn't mean, I am here and they are there, we are not here dwelling in an attempt of grasping other dwellers, we are dwelling with others. In other words, to know that one is there in the world, Dasein doesn't need to objectify the others, it's there and this implies being there with others. According to Habermas Heidegger takes on Husserl's phenomenological method of investigation with an aim of unconcealing the truth of Being. This entails trying to phenomenologically dig out or lay out an experience, the existential structures through which being manifests itself. Here for Richard Polt, there are two major notions that Heidegger appropriated form Husserl's phenomenology. These are, first, "Evidence", or that there are conditions in which phenomena manifests itself and that the task of the phenomenologist is to make the hidden truth manifest itself, second, "categorical intuition", or that through beings we can have an insight into what underlines and can never be grasped in itself i.e. Being (Polt, 1999: 14-15). The difference between Heidegger and Husserl is the distinction between Being as such and beings, and then trying to apply the phenomenological method to Being itself. Still, Heidegger for Habermas "does not free himself from the traditional granting of a distinctive status to theoretical activity, to the constative use of language, and to the validity claim of propositional truth" (Habermas, 1987: 138). Habermas on the limitations of Heidegger's philosophy of Being Thus, what are some of the consequences of Heiddegerian philosophy for the critique of modernity and the modern project in general? First of all, as Habermas sees it, the need for a unifying force other than religion which was supplanted by artistic, mythological and rational ideals, was replaced by Heidegger's critique of the metaphysical tradition and its forgetfulness of Being. Heidegger emphasized on the difference between the ontic and the ontological. 
For Habermas, this results in an inability to address problems that arise in everyday world, and possibilities of unifying, emancipatory, ideal being generated. Secondly, Heidegger's conception of modernity is divorced from specific practical, concrete questions that are addressed by the various sciences which are oriented towards specific validity claims. Hence, "the critique of modernity is made independent of scientific analysis'' (Ibid., 139). Finally, Heidegger arrives at a kind of an acceptance of current realities in conceiving Being itself as beyond what can be described and conceptualized, and could only be deciphered indirectly. Accordingly, "he propositionally contentless speech about Being has, nevertheless, the illocutionary sense of demanding resignation to fate" (Ibid., 140). Habermas goes on to specifically look at Heidegger's earlier position as developed in his Being and Time. Heidegger in his Being and Time claimed that his interrogation of Dasein was aimed at revealing the truth of Being as such, and to this extent criticized the metaphysical tradition for focusing on beings and not Being. The Dasein Analytic was supposed to be the foundation. For Habermas, this gives one an inadequate background of the context in which Heidegger developed his ideas in Being and Time. This context is for Habermas; "the post idealism of the nineteenth century," and specifically "neo-ontological wave that captured German philosophy after the first world war, from Rickert through Scheler down to Hartmann" (Ibid., 141). It was a scene in which, Kantianism and pre-Kantian forms of philosophizing were being abandoned in favor of forms of thought that emphasized the concrete and the particular. The paradigm of the subject that is at the center of thought transcends itself, reflects on itself and the world, and was starting to dissolve. Even though "the idea of a subjectivity that externalizes itself, in order to melt down these objectifications into experience, remained standard" (Ibid., 142). Heidegger's approach is seen as one of exposing how the metaphysical tradition has been focused on things and entities rather than Being as such. Still he tried to preserve some aspects of the tradition like the analysis of phenomena, from Kant's critique of reason to Husserl's phenomenology. In Being and Time, Heidegger explicitly states that the various sciences like anthropology, psychology and biology aren't adequate enough to carry out the Dasein Analytic. The only focus is on the ontic, and not the ontological, by treating humans as a "thing, substance or subject" (Heidegger, 1985: 78). But for Heidegger Dasein is unique in the fact that it is to be situated in the ways it tries to realize itself in the future, or the fact of it's a possibility. Still, Heidegger according to Habermas, when trying to explicate the nature of Dasein as being-in-theworld, resorts to the strategy of analyzing the subject, by going beyond it and looking at what is it that makes its existence possible (Habermas, 1987: 143). By being-in-the-world, Heidegger stressed that, Dasein's being in the world doesn't entail being inside the other or in something. "We are inclined to understand this being in as being in something" (Heidegger, 1985: 79). Dasein's world is of being there with others and dwelling with them, being found alongside them. Habermas goes on the look at how Heidegger establishes the primacy of Dasein and what makes Dasein the center of analysis. 
First of all, Heidegger distinguishes between the ontic and the ontological, and bestows Dasein an ontological priority. For Heidegger, Dasein like other beings occurs as an entity, but it doesn't just occur, since it is oriented towards that understanding of Being itself. Dasein is the only being whose being is at issue and it inquires into Being by inquiring the Being of one's being. So, while ontically it is a being that's concerned with its being, ontologically it's concerned with Being as such this is to be situated in the context of its ontical questioning and uniqueness, leading to an ontological insight. As, Heidegger puts it in Being and Time, "Dasein is an entity which does not just occur among other entities rather it is ontically distinguished by the fact that in its very being, that Being is an issue for it" (Ibid., 32). Secondly, Heidegger expounds his idea of phenomenology as the method that it is to be used in the Dasein project. For him phenomenology is not submitting a thing to theory or a philosophical doctrine from which truths are extracted, but a way or method of approaching things. Only as phenomenology is ontology possible for Heidegger. The phenomenological method is to unmask the various ways in which phenomena are concealed. Some of the ways through which phenomena could be concealed include "undiscoveredness", "being buried" and "disguised". In "undiscoveredness", a phenomenon has always been concealed and is in need of a revelation. In "being buried", a phenomenon has been discovered but is again concealed. Finally, in "disguising" the phenomenon has been represented as something which is not really its nature and when one tries to identify with the things it's disguised to (Ibid., 60). Phenomenology and the analysis of Being In the final analysis, phenomenology is a way of carrying out a hermeneutics or an interpretation of Dasein in its dwellings. The theme is Dasein, and it will be interpreted as it dwells in the world with other entities. It is to be interpreted in its dwellings alongside other entities in the world. Finally, in an "existentialist" tone, Heidegger interprets Dasein in terms of its choice to actualize itself or not for Heidegger, Dasein dwells with a potential of "authentic" or taking up its existence consciously towards Being. In contrast in "inauthenticity" Dasein forgets its ontological significance in its tendency to identify itself in terms of other things it encounters in the world. Dasein is the one through which the meaning of Being is to be interrogated since it turns out to be the one that raised the issue of Being. For Heidegger, whenever we ask or pose a question, it is about something and not nothing, and this in turn implies that we have to examine something for an answer, and in the final analysis we have some objective of asking. To put it differently, there is some purpose behind questioning and this can lead to a thing questioned and also a questioner further, one is to interrogate beings in their Being, to arrive at the truth of Being. But which being? It's Dasein since; it's the only being whose Being is its issue. Hence, for Heidegger, as Habermas puts it; "he human being is an entity with an ontological nature for whom the Being question is an inbuilt existential necessity" (Habermas, 1987: 145). 
As Habermas summarizes it, by bestowing on Dasein an ontico-ontological significance, reducing all possible ontology to phenomenology, and interpreting Dasein in terms of its "authenticity" or "inauthenticity", Heidegger established his Dasein Analytic. Heidegger also established the primacy of existence over knowing and of interpretation over reflection. There was also a focus on how the subject reflects upon itself and transcends its own self. Dasein has a special insight into Being in trying to contemplate its own existence. Heidegger tried to lay out the meaningful structures within which Dasein is said to dwell. Finally, Heidegger also tries to solve the problem of existence through his notions of "authenticity" and "inauthenticity". One of the crucial moves that Heidegger makes in Being and Time, for Habermas, is from conceiving Dasein as basically different from things "present-at-hand" to Dasein as being thrown into the world of others. Earlier, in his discussion of being-in-the-world, Heidegger claims that the nature of Dasein lies in its "to be" or "mineness", or the fact that it inquires into Being as such and is also characterized by a choice that makes it authentic or inauthentic. This makes Dasein different from things "present-at-hand", which are only tools and entities and hence have no ontological significance. Later on, Heidegger comes to see how the question of the "who" in the existential character indicates the presence of others. Thus, we do encounter other beings in the environment that we live in. Habermas sees this as how "Heidegger extends his analysis of the tool-world, as it was presented from the perspective of the actor operating alone as a context of involvements, to the world of social relationships among several actors" (Habermas, 1987: 148-149). Heidegger tries to show how being-in-the-world implies being constrained by others in his discussion of "oneself" and the "they". He shows how the ways in which we act in the world are shaped by others. Thus, the way in which we behave is constructed by the one (das Man), not by each Dasein privately for itself. We are thrown into the world, and the inherited horizons necessarily constrain us. The "das Man" is what provides possibilities for the individual in the socialization process. As Habermas sees it, the notion of a shared lifeworld in which communicative rationality could be built is not developed in Heidegger. This is because the context into which one is thrown is seen as a conservative state that constrains oneself in one's inclination to make oneself authentic and establish a unique relation with Being. Hence: "Heidegger does not take the path to a response in terms of a theory of communication because from the start he degrades the background structures of the lifeworld that reach beyond the isolated Dasein as structures of an average everyday existence, that is, of inauthentic Dasein" (Ibid., 149). In emphasizing being over knowing, the focus in Heidegger falls on Dasein. In turn, Dasein returns to the subject, as in the philosophy of the subject, as a point of analysis. 
For Habermas, Heidegger's ontology, in trying to sketch Dasein's dwellings, and philosophy of the subject, in focusing in how the subject knows the world, managed to negate the accumulated meanings that give background and contexts for discussing issues for individuals and also the everyday communicative processes. In the final analysis, Heidegger failed to see that truth and meaning are not something that passes through, but, is produced. Hence, "He fails to see that the horizon of the understanding of meaning brought to bear on beings is not prior to, but rather subordinate to, the question of truth" (Ibid., 154). One of the controversial issues surrounding Heidegger's philosophy is its political implications, and specifically how it justified the Nazi rule. For Habermas one can witness in both Heidegger's lectures and addresses during the Nazis period, and the implicit ideas developed in Being and Time, how there is an intrinsic connection between Heidegger and the Nazi rule. In relation to the Dasein Analytic, in Being and Time, Heidegger applied it to show how the individual stands in a world in relation to other individuals and entities, and how one's existence could be deciphered in time, i.e. in its 'thrownness', dwelling and projection of a future. But, during the Nazi period Heidegger interprets Dasein as a collective group or society as a whole, and how this collectivity is moving in time, into the future. Further, after being elected as the "rector of Freiburg", in his inaugural address to students and professors, Heidegger stressed how the Germans as a whole are called on by their leaders, to actualize their collective potentials, to take their proper place in history, to become authentic and consolidate their unique place in history. As Habermas sees it; "Whereas earlier the ontology was rooted ontically in the existence of the individual in the lifeworld, now Heidegger singles out the historical existence of a nation yoked together by the Fuhrer into a collective will as the locale in which Dasein's authentic capacity to be whole is to be decided" (Ibid., 157). Habermas also locates Heidegger's affiliation with Nazism in the latter's views towards technology. During the Nazi period, Heidegger called on Germans to employ technology, to further the national socialist movement and Germany's Greatness, but later, Heidegger came to view technology as a will to power manifesting itself in domination and exploitation of the planet and hence leading humanity into destruction (Ibid., 159-160). Here Richard Polt expresses how Heidegger already begun to doubt the national socialist revolution in his "private notes" in 1939. Heidegger wonders, where the nationalist movement is going, its place in history, from where it obtains its standards for collective movement and so on. Hence, for Polt, "Heidegger's frustration is obvious. A revolution that had appeared to promise a rebirth of the German spirit has turned out to be dogmatic and totalitarian" (Polt, 1999: 158). Bataille and bursting of the conventional What underlies most of Bataille's undertaking was getting beyond the conventional, the given standards and the normal. To this extent, Bataille conceives of the real human as the one that is willing to go beyond the limits, or the one that pushes the extreme to go beyond the conventional. Habermas categorizes, Bataille not under the reformers but radical critics of modernity. 
Habermas thinks that this radical critique of Bataille's specifically focuses on the ethical side of life rather than on a general critique of reason. Habermas traces the origin of Bataille's critique of modernity to the development of the latter's concept of "the heterogeneous" at "the end of the 1920's" (Habermas, 1987: 212). Here, Bataille launched his attacks on capitalist society, ordinary day-to-day life, and the sciences in favor of a kind of experience that goes beyond the standards set by all these authorities, standards which limit human experience. For Habermas, Bataille here is echoing the surrealist notion of experience, which tries to go beyond an interested, instrumental, exploitative relation to the world, abolishes given standards of right and wrong, and brings a new kind of aesthetic dimension into focus. As Habermas sees it, Bataille, with "the heterogeneous", focused on those categories that do not fit into our day-to-day lives; these are elements that are excluded from normal life: the taboo, sinners, "outcasts and the marginalized... pariahs and the untouchables, the prostitutes or the lumpen proletariat, the crazies, the rioters, and revolutionaries, the poets or the bohemians" (Ibid.). Habermas thinks that Bataille's category of "the heterogeneous", as those excluded from the ordinary bounds of our lives, also includes the "fascist leaders' heterogeneous existence" (Ibid., 213). According to Habermas, going beyond things like political affiliations, methods of interrogation and styles of writing, one can establish certain similarities between Heidegger and Bataille. Accordingly, both conceived modern society as based in a decadent form of rationality that had turned the world of their time "into a totality of technically manipulable and economically realizable goods" (Ibid.). Still, Bataille's critique of modernity, like that of Heidegger, is not aimed at a critique of the epistemology that yields an exploitation of the world. Rather, it is a specific kind of "ethics" behind capitalism that is at the center of Bataille's analysis. Bataille's focus is geared towards liberating the subject from the routines of daily life and the rationality of capitalism, into a context in which the destruction of conventionality will lead one into a genuine moment. This is a moment and an experience that has been suppressed and excluded from our networks of truth and rightness. Unlike Heidegger, whose ontological difference between Being and beings focuses the whole analysis on a remembrance of Being, Bataille aims at setting the subject free and asserts that going beyond the limits set for the subject is the essence of "liberation to true sovereignty" (Ibid., 214). Seen from this angle, Bataille was able to utilize Nietzsche's ideals of how the aesthetic frees and how the "overman" leads to a new "transvaluation of values". Heidegger was unable to appropriate these Nietzschean insights, for Habermas, since his focus was geared toward how Being is to be grasped through a specific comportment of the ontical, i.e. Dasein. Accordingly: "For Bataille, as for Nietzsche there is a convergence between the self-aggrandizing and meaning-creating will to power and a cosmically moored fatalism of the eternal return of the same" (Ibid.). Also, it was Bataille and not Heidegger who was able to appropriate Nietzsche's dissolution of and defiance toward all authority in the aftermath of the downfall of all ascetic values, in his attempt to liberate the subject from conventional standards. 
Heidegger was not able to utilize Nietzsche's destruction of the metaphysical system in his attempts to trace everything to the forgetfulness of Being. Here Habermas thinks that Foucault is justified in claiming that Bataille operates in a world where all the metaphysical, religious truths have lost their vitality, and that, to this extent, Bataille directs his attention towards the annihilation of conventional standards that are products of human beings themselves, like capitalism. Instead of trying to expose the great philosophical and religious traditions, Bataille focuses on how erotic, sensual experience sets the subject free in a post-metaphysical world where man is chained not by other-worldly philosophies but by exploitative, manipulative relations to the world that essentially limit the bounds of the subject's experience. Thus, "Bataille does not delude himself about the fact that there is nothing left to profane in modernity" (Ibid., 215). Habermas, now that he has established Bataille's project of emancipating the subject in a world where the great metaphysical systems have been destroyed, would like to show how Bataille analyzes fascism and modernity. To this extent, "Bataille sees modernity embedded in a history of reason in which the forces of sovereignty and labor are in conflict with one another" (Ibid.). Bataille sketches the development of complex societies in humanity's history as the further degradation of freedom and sovereignty. So how does Bataille try to sketch the move from a "reified society to a renewal of sovereignty" (Ibid.)? According to Habermas, the rise of fascism and the national socialist movement in Europe was seen by some as positive and by others as negative. It also served as the catalytic force for the theories of Heidegger, Bataille, and Horkheimer. In this context, in his work The Psychological Structure of Fascism, Bataille tries to go beyond Marxist categories of thought and to analyze the new movements in Italy and Germany not as based on class struggle but on the psychological forces found behind such movements in history, especially the unique relation that exists "between the masses mobilized by plebiscites and their charismatic or Fuhrer figures" (Ibid., 216). Bataille and the heterogeneous In a Marxist tone, Bataille asserted that before a movement like fascism could revolutionize the modes of production and societal organization, capitalism needs to "collapse because of internal contradictions" (Ibid., 217). Bataille was interested in studying the extra elements, elements outside the bounds of bourgeois society, that fascism would bring onto the scene in such a revolution. Bataille tried to analyze how violence introduces a different, strange element by destroying boundaries. Generally, Bataille analyzed modernity in terms of how a one-sided focus on reason led to conventional norms, values and standards. Rather than trying to modify the modern project by criticizing its reason, Bataille focused on going beyond the ethics of modernity by a violent force that goes beyond fixed boundaries. Hence: "Bataille seeks an economics of the total social ecology of drives; this theory is supposed to explain why modernity continues its life-endangering exclusions without alternatives, and why hope in a dialectic of enlightenment, which has accompanied the modern project right down to Western Marxism, is in vain" (Ibid.). According to Habermas, Bataille works with Durkheim's distinction between the "profane" and the "sacred". 
The "sacred" represents the tendency to go beyond the convention and regularity and the "profane", as the uniform aspects of day to day life. In capitalist society, labor (the creative power) becomes homogeneous by being measured in terms of "time" and "money". The uniformity of labor is further established by "science" and "technology" that create a world where identical, similar things are produced based on the demands of the capitalist and the fixing of the process of production of an object by standards given by the bourgeoisie. What the unique leaders and followers of fascism introduce is a negation of this uniformity and regularity Hence, "against the background of interest-oriented mass democracy, Hitler and Mussolini appear to be the totally other" (Ibid., 218). Habermas thinks that Bataille is especially interested in how the appropriation of the violent, the spontaneous, and the negated experience by fascism disrupts capitalist modes of organization. Bataille is also fascinated by how elements of order and chaos uniformity and disruption, are found alongside one another in fascism. On the one hand, sacrifice for the totality, performance of duties, and on the other, collective upheavals, festivities and absolute rule of the "fuhrer" are found along one another expressing the spirit "of true sovereignty" (Ibid., 219). Habermas goes on to make a contrast between Bataille's and Horkheimer and Adorno's views on fascism. One thing common to both Bataille and Horkheimer and Adorno, is the focus on studying the psychological dimension of fascist rule as it is manifested in its arousal of the masses and the collective force. For Horkheimer and Adorno, Fascism arouses the suppressed urges and passions of the subjects in modern society, under a collective ideal and vision of a common destiny. So, first Bataille, Horkheimer, Adorno, focused on suppression, and later, the strategic arousal of suppressed urges. The difference is that, for Horkheimer and Adorno the result of such an arousal is delusion or false happiness, whereas for Bataille the arousal is a moment of empowering the subject to go beyond the conventional and thereby a freeing. Hence, "in the erotic and in the sacred, Bataille celebrates an "elemental violence"" (Ibid., 220). As Habermas sees it, such position of Bataille runs into the difficulty of failing to distinguish between an emancipatory ideal that utilizes the passions of the masses for revolutionizing current states of affairs versus the subsuming of such a revolutionary undertaking in the final analysis under a dictatorial, totalitarian rule. Habermas goes on to look at how Bataille tried to subsequently come up with a critique of modernity that bridges the gap of the "transition from reification to sovereignty" (Ibid., 221). According to Habermas, in his 1933 Treatise on the Concept of Waste alongside Marxist forms of analysis, Bataille conceives labor as the way through which humans make themselves by making products. But rather than focusing on how humans have been deprived of their labor in capitalism, Bataille focuses on the difference between merely producing for survival and a "luxurious" way of laboring where one goes beyond the basic necessities and produces surplus, and locates "sovereignty" and "authenticity" of the subject on the latter. For Bataille, and not Marx, producing beyond necessity is an expression of freedom and a sign of going beyond the conventions. 
Hence, according to Habermas, Bataille "fears that true sovereignty would also be suppressed in a world of material abundance as long as the rational (according to the principle of balancing payments) use of material and spiritual goods did not leave room for a radically different form of consumption, namely, of wasteful expenditure in which the consuming subject expresses himself" (Ibid., 222). Based on this analysis, one of the defects of modern capitalist society is its tendency to subsume everything into the production process, which has the subversive effect of destroying entertainment and the pursuit of luxury as expressions of freedom. Along these lines, for Bataille, "the essence of sovereignty consists of useless consumption of 'whatever pleases me'" (Ibid., 224). Habermas thinks that using the thesis of the intrinsic relation between "sovereignty and power" to explain how capitalist relations of production for profit emerge is not sufficient to show how, throughout human history, the "sacred" has been excluded in favor of the "profane". Bataille also cannot appeal to Marxian categories of thought, since his analysis already deviates from that of Marx in the attempt to go beyond reason, the conventional, and fixed standards. It also deviates in emphasizing that the problem of labor is not that of being subsumed under capitalism, moving from expressing oneself to surplus production, but rather the move from entertaining the luxurious, as an expression of one's superiority, to an endless pursuit of surplus production under capitalism. Instead, Bataille appeals to Weber's thesis of the Protestant ethic and the rise of capitalism and applies it to how the ethical determines the negation of the sacred. Habermas thinks that this can be broken down into three points: humans are different from other animals not just in the fact that they create themselves through labor, but also in the fact that their actions, desires, and wishes are constrained by the standards and conventional ways of being found in the world they inhabit (Ibid., 230). In this context, Bataille's "excess" is a going beyond the forces and standards that limit the freedom of the individual. One should conceive the conventional rules and standards beyond their role in keeping the societal order intact. Instead, the focus should be on how their transgression leads to new ways of experiencing the world and new ways of being. One can also sketch the development of a practical reason and of moral rules from ancient times to the present, which succeeded in making individuals conform to different ethical ideals and hence bound them to conventionality. In the final analysis, Bataille, like Nietzsche, is faced with the problem of trying to go beyond reason and the limits set by norms while still not being able to come up with a theory that can comprehend this. What kind of theory can go beyond discourse if all discourse is repression? And if there is a need to burst out of the boundaries of language, what kind of theory could account for this? 7. Communicative rationality vs. the philosophy of consciousness These days, there is a general awareness amongst philosophers that a philosophy for contemporary society should go beyond metaphysical thinking. The metaphysical tradition failed to provide a viable alternative. In some cases, it held absolutistic claims that fail to recognize particularity. In others, it propagated relativistic assumptions that fail to recognize the universal dimensions of humanity. 
The tradition also focused on isolated individuals, subjected the individual to oppressive relations, and so on. Habermas claims that the kind of rationality he identifies and tries to develop in modern societies is postmetaphysical, in that it is situated in daily uses of language and has both particular and universalistic dimensions. Here, I will employ James Gordon Finlayson's discussion of what Habermas generally means by the philosophy of consciousness, to show that Habermas does in fact go beyond this orientation. Habermas' analysis of speech acts is part of the 'linguistic turn' in twentieth-century philosophy, which abandons the search for absolute truth and certainty within a subject. Instead, it focuses on analyzing the language we speak and employ. It inquires into what this language tells us about the basic questions of reality, knowledge, values and so on. Finlayson identifies, under the term philosophy of consciousness, seven major orientations in Western philosophy to which Habermas' communicative paradigm supposedly stands in opposition. 1. In "Cartesian subjectivity" (Finlayson, 2005: 29), it is assumed that there is a clear essence that we can ascribe to the individual, and that this is thinking, or thought generally. We can say that Habermas goes beyond this orientation since, in his approach, the 'I' cannot be separated from others. Rather, there is a world of claims through which one affirms unique individuality by raising distinct claims in relation to others. 2. In "metaphysical dualism" (Ibid.), it is assumed that there are two major kinds of substance in the world, one reflective and the other corporeal. Habermas does not assume that one can distinguish between thinking and the body, either within the individual or between the individual as thinking and the world as body. He stresses that modern individuals have the space in which they raise their claims to one another, thereby coordinating their actions. In escaping such metaphysical trappings and orientations, under which modern societies are disempowered, Habermas' communicative paradigm is unique in the following terms. First of all, in principle it is a rationality in which everybody can participate. It is procedural in that it depends on following certain rules and raising certain claims. Secondly, it is a kind of rationality which invites continuous revision, since every time validity claims are raised the truth is in question and open to revision. Thirdly, it is a rationality that encompasses the objective, social and subjective dimensions, going beyond a strict insistence on isolated subjectivity, the objective world, or macro social reality, and incorporating all of them in the modern individual's claims towards the world. Habermas' communicative paradigm does go beyond the metaphysical trappings in Western thought. Still, beyond the postmetaphysical nature of Habermas' communicative paradigm, the question should be how this secular discourse stands in relation to non-secular communities, whether it addresses concrete asymmetrical relations, and whether it sufficiently deconstructs the negative, other side of modernity. Habermas' discourse of modernity and charges of misreading In this section, I will briefly present charges of misreading levelled against Habermas' discussions of Weber, Hegel, Derrida, Nietzsche, Foucault, Heidegger and Bataille in The Philosophical Discourse of Modernity. Austin Harrington looks at Habermas' appropriation of a universal process of rationalization from Weber's sociology of religion. 
Harrington examines the extent to which Habermas' attempt to extract the intersubjective communicative process of modern societies from Weber's "theory of social evolution" remains faithful to Weber's original ideas (Harrington, 2000: 84). Harrington asks: did Weber really regard the process of rationalization which took root in the West as representing the highest stage in the rationalization of humanity in general, or was he trying to point out the unique aspects of the rationalization of the Occident? Harrington admits that Habermas certainly developed the optimistic aspects of Weber's work when he tried to develop the value spheres as each hosting a distinct rationality, against Weber's celebrated thesis that modern society is being trapped in an instrumental rationality. Habermas also diverged from Weber's intention when he focused his emancipatory ideal not on the courageous individuals who "devote themselves to their chosen value axioms", but on the everyday communicative action of modern societies, which hosts critical and emancipatory claims towards the objective, social and subjective dimensions of reality (Ibid., 87). As Harrington sees it, the Habermasian analogy between Kant's three critiques (i.e. of pure reason, practical reason and judgment) and the three value spheres in Weber (i.e. the theoretical, practical and aesthetic ones) is flawed. This is because Weber enumerated "five spheres; the economic, the political, the aesthetic, the erotic and the intellectual" (Ibid., 88). Also, Habermas' attempt to extract universal moral principles from Weber's empirical observations on the development of a Protestant ethic is questionable, since Weber by no means took these principles as being universal or as laying objective grounds for the discussion of moral issues. Harrington adds that even though Weber's empirical inquiries into the rationalization of the Occident were aimed at grasping the extent to which this process managed to implant a universal structure, it should still be noted that Weber called for further empirical inquiry and held that the universal significance of the West is debatable. Also, Habermas' insistence on the creation of a ground where a single value sphere addresses a specific validity claim is questionable, since "it is possible to challenge one sphere from the standpoint of another sphere in a way that is not a priori refuted by the terms of the first sphere" (Ibid., 95). In our day-to-day lives we usually make aesthetic judgments about the moral, moral judgments about the scientific, and so on. Thus, the idea of a single validity claim addressed in a distinct realm is questionable. Weber's ideas on the universal significance of rationalization in the West could be interpreted as instances of a civilization that strives for universality, and not necessarily of a civilization that implanted its lasting influences on humanity in general. At the heart of Habermas' ideas on the inauguration and development of modernity is the role given to Hegel as the one who pioneered the attempt to grasp what modernity is, by looking at the historical process through which modernity concretely established its own status and by inquiring into the issue of normativity in the modern project. Fred Dallmayr expresses some of his reservations towards Habermas' appropriation of Hegel. 
As Dallmayr sees it, there is no clear distinction between the 'young' Hegel, who expounded his views on religion, aesthetics and mythology, and the 'mature' Hegel, who tried to accommodate everything into the "spirit". In other words, Hegel remained faithful to his earlier ideas, even though he developed them in a larger context (Dallmayr, 1987: 699). Dallmayr remarks that "Hegel never abandoned his early views on 'ethical totality', nor did he dismiss the notions of public religiosity, the 'nexus of guilt' or the function of art as emblems of an ethical social bond. He simply proceeded to reformulate these notions in accordance with the needs of this overall system" (Ibid.). Also, for Dallmayr, Habermas' interpretation of Hegel failed to fully capture the progress of reason in history and thought, and instead focused on the right Hegelian interpretation, which points out the universal significance of Hegelianism, or the left Hegelian attempt to put reason in contact with the concrete. In his discourse of modernity, Habermas accuses Derrida of trying to destroy the distinction between philosophy and literature. Habermas also sees Derrida's threat as that of emphasizing the aesthetic aspects of language and interpretation in general. Sandler wonders whether there is such a distinction, given the usual employment of metaphor and non-literal forms in most philosophical texts. Simply invoking the instance of Plato's Dialogues will show that these works are in equal measure artistic forms and philosophical conversations. So, "how are we to decide which function of language is the dominant in Plato's dialogs?" (Sandler, 2007: 3). Hence the view that philosophy is rational argumentation and literature "fictitious" needs to be questioned. Sandler situates Derrida's project as one of introducing a moderate approach that emphasizes both literal and non-literal elements and gives a voice to the various contexts in which meaning is formed. The tendency of the Western philosophical tradition to insist strictly on the argumentative nature of philosophy, excluding other elements, dates all the way back to the Platonic dialogues, where "the wonderfully comical and humorous nature of most dialogs is also discarded and only reappears as Socratic irony when the argument derived from the text clearly contradicts the line of argument traditionally viewed as Platonic" (Ibid., 5). Sandler adds that Habermas is right in pointing out that in Derrida "literature and literary criticism" are conflated. But this is just Derrida's way of finding a form of writing that gives sufficient space to diverse aspects of meaning formation while simultaneously being critical of other forms of writing. This was not clearly addressed by the existing metaphysical tradition, which operated, amongst other things, in a theoretical, binary form. For Sandler, Derrida's insistence on not being authoritative or fixing meaning, and hence on keeping texts open, can easily be demonstrated by looking at the terms he employs, like "différance", which are "neither words nor concepts" (Ibid., 8). Habermas' analysis of Derrida also errs in interrogating not Derrida's works directly but secondary interpretations and the application of deconstruction in American universities. 
Further, trying to come up with an inclusive form of writing that goes beyond argumentative and non-argumentative forms is essential as a critique of restrictive theories of meaning and of the binary operations of the Western metaphysical tradition, and Habermas did not give this a sufficient voice. For Thomas Blebricher, most of the defenses of Foucault against Habermas' severe attacks in The Philosophical Discourse of Modernity are focused on showing that the former has been misread and that he can be defended against the charges of "presentism", "relativism" and "cryptonormativism". Still, what is lacking in such defenses is a broader understanding of what caused Habermas' misreading in the first place. Blebricher identifies two major causes for such grave misunderstandings: first, reading Foucault's later works through his archaeological method and, second, misreading Nietzsche's genealogy and then reading Foucault through Nietzsche's genealogy. Further, Habermas attacks the objectivist tendencies of Foucault's genealogy when in fact the latter sees genealogy as forwarding "very modest truth claims of a peculiar character" (Blebricher, 2005: 1-2). As Blebricher sees it, Habermas' readings of Nietzsche can be traced to the former's Knowledge and Human Interests, where Nietzsche is credited with developing a this-worldly, practical approach to knowledge and truth. Nietzsche is interpreted as criticizing metaphysical conceptions of truth and putting knowledge in touch with practical interests. Still, Habermas was also critical towards Nietzsche, since he interpreted the latter as advocating "a perspectivism of values" where all we have are different interpretations, different ways of cognizing and bringing reality into our control, and where there is no good and bad, right and wrong. Blebricher maintains that "While both philosophers agree in their critique of the positivist sciences that deny the link between knowledge and interests, Habermas treats the 'illusions' of mankind [differently, distinguishing] between the useful illusions of causality and other rather dream-like illusions, the implementation of which necessarily fails in the face of the materiality of nature" (Ibid., 5). Both Habermas and Nietzsche recognize the this-worldliness of all values and claims to truth. Still, Habermas interprets Nietzsche as blurring the distinction between perspectives that enhance life and those that devalue it. Habermas interprets Nietzsche as bestowing an equal value on all perspectives. Later, Habermas' views towards Nietzsche became harsher. In The Philosophical Discourse of Modernity, Nietzsche's views are reduced to introducing a destructive reading of modernity and anticipating the postmodernist movement, which stood against the values of reason and Enlightenment. For Blebricher, what especially worries Habermas is the destruction of the clear-cut distinction between the theoretical, practical and aesthetic spheres, the emphasis on a heterogeneous meaning formation, and the reduction of all statements to "artistic preferences" (Ibid., 6). Habermas locates two paths out of Nietzsche's critique of modern reason: a critique of reason in terms of a will to power, and seeking an alternative in reason's other. For Blebricher, "it is this clear cut distinction between two strategies and two respective 'paths' into postmodernity that lies at the bottom of Habermas' mistaken or at least impoverished account of Foucault" (Ibid.). 
Blebricher adds that both Nietzsche and Foucault focused on the emergence of diverse conceptions of the moral, the aesthetic and, more generally, truth in human history, without assuming objectivity or continuity between the various conceptions. Further, Foucault's genealogy did not claim to have an objective standard by which the various discursive formations could be viewed. Foucault himself was aware that his own method was a particular power/knowledge formation and that a science having "an outside perspective" was not realizable (Ibid., 11). Still, Foucault was also looking for a way through which he could go beyond a description of such formations and offer an emancipatory critique. Blebricher argues that Nietzsche's critique of modernity should not be reduced to "aestheticizing" or to reducing all questions to questions of taste. Nietzsche's project also contains diverse insights from scientific, artistic and biological backgrounds. Further, Habermas' reading of Foucault, like that of Nietzsche, tries to reduce Foucault's project to labels such as the power/knowledge nexus. Hence, both Nietzsche and Foucault tried to introduce a new form of critique that offers a "hybrid" approach (Ibid., 15). In both Nietzsche's and Foucault's genealogy there is an attempt to combine different forms of interrogation and insight, and Habermas' critique misses this point. Habermas short-sightedly assumed that Foucault was trying to extract the scientific element of Nietzsche's works, that is, the attempt to identify various formations and apply it to a genealogy that sees itself as an objective science gazing at power/knowledge formations. For Blebricher, Habermas did not deliberately distort the ideas of Foucault to consolidate his communicative paradigm. Rather, Habermas erred in reading Foucault through his interpretation of Nietzsche. Thus, "Habermas' misunderstanding of Foucault does not have to be seen as an intentional misreading, neither are we dealing with a strategic deformation of the Foucaultian oeuvre, the creation of a straw man" (Ibid., 17). Habermas charges Heidegger's philosophy of Being with being unable to address problems that arise in the everyday world, with not establishing a place for scientific analysis, and with being fatalistic. For David Kolb, Habermas needs to address the difference between his theory of communicative action, where one finds oneself in an intersubjective communicative arena open to argumentation, and Heidegger's "temporality", where Dasein is thrown into a horizon that necessarily determines its destiny by providing the existential structures through which one dwells. Further, in Heidegger there is a place for the individual, in the sense that the individual reaffirms himself by creating meanings out of the inherited horizon. Also, the claims raised in a particular horizon can have a universal significance, even though they are necessarily measured against their respective horizons (Kolb, 1992: 689). Heidegger, as Kolb sees it, should not be interpreted as conceiving Being as restrictively supplying the frameworks through which we lead our lives, but rather as the space into which one is thrown and in which one gains "authenticity" or "inauthenticity" in the attempt to actualize one's unique ontico-ontological significance. Put in simple terms, there is enough space for one to define oneself within a horizon. 
Hence "Heidegger's destructive point is not that validity claims are world bound but that the limited revelation of Being within a world is what makes possible any cognitive or practical claims at all" (Ibid., 690). Andrew Stein raises doubts about Habermas' reading of Bataille, and tries to offer an interpretation that situates Bataille's philosophy in the historical context in which it developed. As Stein sees it, Habermas' analysis is flawed in trying contextualize Bataille in terms of German history "rather than French intellectual history" (Stein, 1993: 21). Bataille's attempt to go beyond the conventional boundaries of Western thought is interpreted as representing a Nietzschean fascism and irrationalism, and Stein tries to defend Bataille from such charges. For Stein, what motivates Habermas' reading of Bataille amongst other factors is the revival of thinking in Germany which tries to evade responsibly over crimes against the Jews. Habermas advocates a responsibility for history and proposed a tradition which will take foot in a transparent and accountable public sphere. Along these lines, Habermas sees in Bataille the rebirth of an authoritarian, Fascist German philosophy developed before the Second World War by appealing to the ideas of Nietzsche. According to Stein, for Bataille, Nazis and Nietzscheanism were not equivalent since in Nazism there is a homogeneous population led by the head of the system, while "Nietzscheanism exploded the authoritarian will that leads to fascism and all ideal metaphysics" (Ibid., 42). Bataille was interested in studying the psychological aspects of fascism and wondered how fascism was able to "mobilize the aggressive instincts of the masses'' but this admiration did not lead Bataille into advocating fascism. (ibid) Stein also questions the degree to which Bataille is an irrationalist, and stood against science and rationality. Stein argues, Bataille was trying to develop a science of heterogeneous states and how these states are being expressed in various practices and institutions. Bataille's science was "heterology". He studied excess and deviation "not as pathology" but as ways of going beyond repressive religious, capitalistic, conventional boundaries. (Ibid, 49) Further, in "heterology", reason was seen as the opposite of unlimited experience, a way of analyzing how this excess is being manifested in various institutions and practices, and also a boundary and "limit" to excessive experience (Ibid., 50). Generally, against Habermas' charges, Stein argued that Bataille clearly denounced fascism, and that there is also a place for science and rationality in Bataille's heterology which criticizes excessive rationality and tries to awaken suppressed energies, as a viable alternative. Conclusion In positing the postmetaphysical stance against the absolute, Habermas tried to critique Heidegger's philosophy of Being and Bataille's heterogeneous. As such, Habermas charged Heidegger with neglecting everyday problems and actuality, the concrete and situated dynamism of experience and undermining critique by resigning to fate in his restricted conception of the life world. Again, Bataille's heterology advocating the unconventional, excessive and heterogeneous against the normal and three-dimensional process of validating envisions an idea which going beyond reason fails to realize emancipation. Finally, sensitivity must be developed towards charges that Habermas misinterprets authors in his discourse of modernity. |
Diffraction compensation of slope errors on strongly curved grating substrates In 2019, the Institut für angewandte Photonik (IAP) e.V., in cooperation with Nano Optics Berlin (NOB) GmbH and SIOS Meßtechnik GmbH, made important progress in the technology for precision soft X-ray optics: the development of three-dimensional (3-D) reflection zone plates (RZPs) with diffractive compensation of slope errors. 2-D mapping of spherical and toroidal grating substrates was used for the metrology of their individual profiles. Based on these data, the inscribed grating structure, which corrects the slope error distribution, was computed. The correction algorithm has been implemented as a Python script, and the first pilot samples of slope-error-compensated RZPs are in the fabrication process. The 3-D device can replace two or three components in an optical scheme and, therefore, reduce absorption losses by several orders of magnitude. Beyond that, the fabrication of customized 3-D Fresnel structures on curved substrates promises considerable improvements in efficiency, resolution and energy range for wavelength-dispersive applications. As an example, we present simulations for a compact instrument within (150–250) eV. Further development of this approach toward commercial availability will enable the design and construction of compact soft X-ray monochromators and spectrometers with unique parameters. |
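The abstract states that the correction algorithm is implemented as a Python script but does not describe it. The snippet below is only a minimal sketch of the underlying idea, under simplifying assumptions (a 1-D grating-equation model, small slope errors, a particular sign convention for the substrate tilt); the actual IAP/NOB algorithm for full 3-D RZP structures is necessarily more involved, and all function names and parameter values here are hypothetical.

```python
import numpy as np

# Minimal illustrative sketch (not the IAP/NOB production algorithm):
# adjust the local line density of a grating so that, despite a measured
# slope error delta(x), the diffracted ray keeps its designed lab-frame
# exit angle. Assumptions: 1-D geometry, angles measured from the local
# surface normal, a positive delta rotates the normal toward the incident
# ray, first diffraction order, grating equation
#     sin(beta) = sin(alpha) + m * lam * n,   n = lines per metre.

def corrected_line_density(alpha, beta, delta, lam, m=1):
    """Slope-error-compensated local line density (lines/m).

    alpha, beta : designed incidence / exit angles (rad)
    delta       : measured local slope error (rad), from 2-D metrology
    lam         : wavelength (m)
    """
    # In the tilted local frame the incidence angle grows by delta, and the
    # local exit angle must shrink by delta so that the lab-frame exit
    # angle stays at the design value beta.
    return (np.sin(beta - delta) - np.sin(alpha + delta)) / (m * lam)

# Example: ~200 eV photon (lam ~ 6.2 nm), grazing geometry, 0.5 mrad error.
lam = 6.2e-9
alpha, beta = np.deg2rad(86.0), np.deg2rad(88.0)   # hypothetical design angles
n_nom = corrected_line_density(alpha, beta, 0.0, lam)
n_cor = corrected_line_density(alpha, beta, 0.5e-3, lam)
print(f"nominal: {n_nom/1e3:.1f} lines/mm, corrected: {n_cor/1e3:.1f} lines/mm")
```

In a 2-D implementation, the same correction would simply be evaluated at every point of the measured slope-error map before the line pattern is written.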
The adequacy of the international cooperation means for combating cybercrime and ways to modernize it The era of scientific and technological development has witnessed an extensive use of the Internet and electronic devices in various aspects of life. This widespread use has increased security and privacy risks and cyberattacks that threaten both individuals and States. This kind of crime is difficult to prevent as a result of constant digital technological advances and globalization. There is a growing concern among States and government agencies that such intrusions could critically affect the security and the economy of any State. Combating this kind of crime requires international cooperation. Therefore, many States have called for the need to define cybercrime and to hold conventions to adopt an effective legal framework to combat and restrict the progress of cybercrime worldwide. This study concluded that cooperative mechanisms are needed to coordinate and unify joint efforts and to modernize the means of combating cybercrime using the latest techniques. In addition, it is necessary to upgrade existing mechanisms and develop other methods to achieve various aspects of cooperation. |
Increased Blood Pressure Variability Contributes to Worse Outcome After Intracerebral Hemorrhage: An Analysis of ATACH-2 Background and Purpose Increased systolic blood pressure variability (BPV) is associated with worse outcome after acute ischemic stroke and may also have a negative impact after intracerebral hemorrhage. We sought to determine whether increased BPV was detrimental in the ATACH-2 (Antihypertensive Treatment of Acute Cerebral Hemorrhage II) trial. Methods The primary outcome of our study was a 3-month follow-up modified Rankin Scale of 3 to 6, and the secondary outcome was a utility-weighted modified Rankin Scale. We calculated blood pressure mean and variability using systolic blood pressure from the acute period (2–24 hours post-randomization) and the subacute period (days 2, 3, and 7). Results The acute period included 913 patients and the subacute period included 877. For 5 different statistical measures of systolic BPV, there was a consistent association between increased BPV and worse neurological outcome in both the acute and subacute periods. This association was not found for systolic blood pressure mean. Conclusions In this secondary analysis of ATACH-2, we show that increased systolic BPV is associated with worse long-term neurological outcome. Additional research is needed to find techniques that allow early identification of patients with an expected elevation of BPV and to study pharmacological or protocol-based approaches to minimize BPV. |
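The abstract mentions five statistical measures of systolic BPV without listing them. As a purely illustrative sketch, the snippet below computes several indices commonly used in the BPV literature (SD, coefficient of variation, successive variation, average real variability, range) from one patient's serial readings; there is no claim that these are the exact five measures used in the ATACH-2 analysis.

```python
import numpy as np

def bpv_indices(sbp):
    """Common systolic blood pressure variability (BPV) indices.

    `sbp` is a chronological sequence of systolic readings (mmHg) for one
    patient. These are generic indices from the BPV literature, offered
    only as an illustration of how such measures are computed.
    """
    sbp = np.asarray(sbp, dtype=float)
    diffs = np.diff(sbp)
    return {
        "mean": sbp.mean(),
        "sd": sbp.std(ddof=1),                       # standard deviation
        "cv": 100.0 * sbp.std(ddof=1) / sbp.mean(),  # coefficient of variation (%)
        "sv": np.sqrt(np.mean(diffs ** 2)),          # successive variation
        "arv": np.abs(diffs).mean(),                 # average real variability
        "range": sbp.max() - sbp.min(),
    }

# Example: hypothetical hourly readings during the acute (2-24 h) period
print(bpv_indices([182, 160, 148, 155, 141, 150, 138, 146]))
```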
Conceptualising Male Vulnerability in a Ghanaian Context: Implications for Adult Education and Counselling Gender advocates have bemoaned the diatribe about women's inequality at the neglect of males' vulnerability in abstract narratives. We propose that the achievement of female empowerment will be complemented by empirically exploring men's vulnerability themes wrapped in masculinity with cultural differences. This study documented views on male vulnerability in the Ghanaian environment using a mixed-method design with 189 conveniently sampled respondents. A chi-square goodness-of-fit test and thick descriptions were applied to the open-ended questionnaire items. Indeed, 74% of the participants agreed that Ghanaian males were vulnerable, with 26% expressing contrary views. With nine overarching themes generated, gender was not a significant factor in categorising male vulnerability (χ² = 10.836, p > .05). We concluded that both sexes appear to have shared views on Ghanaian males' vulnerability issues and recommended that gender advocates expand the equality discourse to cover males' vulnerability. Implications for adult education and guidance and counselling practices are indicated. |
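For readers unfamiliar with the reported statistic, the following is a minimal sketch of how a chi-square goodness-of-fit test of this kind can be run; the category counts are hypothetical, since the abstract reports only the test statistic (χ² = 10.836, p > .05), not the underlying frequencies.

```python
from scipy.stats import chisquare

# Hypothetical counts of responses across the nine overarching themes;
# illustrative only, since the paper reports chi2 = 10.836, p > .05,
# but not the observed frequencies themselves.
observed = [32, 25, 21, 18, 17, 16, 15, 13, 12]
stat, p = chisquare(observed)   # default expectation: equal counts per theme
print(f"chi2 = {stat:.3f}, p = {p:.3f}")
```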
Faecal mycobacteria and their relationship to HIV-related enteritis in Lusaka, Zambia. The prevalence of infection with mycobacteria, both typical and atypical, is increasing along with prevalence of infection with HIV. Patients with pulmonary tuberculosis (PTB) and patients with chronic diarrhoea are forming a growing proportion of the patient population in hospitals in central Africa. To investigate the possibility that mycobacteria may be responsible for some of the HIV-related enteropathy seen in Lusaka, we studied 89 patients in four different diagnostic groups, clinically, by Mantoux test and by microscopy and culture of stool specimens for mycobacteria. In the HIV-positive group with chronic diarrhoea (n = 31), two patients were found to have mycobacteria on faecal smear and three were culture positive while of the 15 HIV-negative controls, three were smear positive and three were culture positive. Of the 15 patients with proven PTB, three had positive faecal smears but none were culture positive. In the fourth group of 24 patients with suspected PTB, seven were smear positive and five, culture positive. Only in this last group was there some correlation between smear results and culture results. Although this last finding is difficult to explain, it appears that there is no correlation between the symptom of chronic diarrhoea and the presence of mycobacteria in the stool. We conclude that mycobacteria do not play a significant role in the pathogenesis of HIV-related enteropathy in Lusaka. |
The xanthones gentiacaulein and gentiakochianin are responsible for the vasodilator action of the roots of Gentiana kochiana. Gentiana kochiana Perr. et Song. (Gentianaceae), a plant used in the traditional medicine of Tuscany (Italy) as antihypertensive remedy, exerts a vasodilator action on in vitro aortic rings that is probably linked to the blocking of the ryanodine-sensitive Ca++ channels. In the present study, three known xanthones were isolated from the crude methanolic extract of the roots: gentiacaulein, gentiakochianin, and swertiaperennin. The first two showed a vasorelaxing activity in rat aortic preparations, pre-contracted by 3 microM norepinephrine (pIC50 = 5.00 +/- 0.032 for gentiacaulein, pIC50 = 4.95 +/- 0.068 for gentiakochianin), 20 mM KCl (pIC50 = 4.90 +/- 0.15 for gentiacaulein; 4.59 +/- 0.069 for gentiakochianin), or 5 mM caffeine; on the contrary, in the same conditions, swertiaperennin did not show any vasodilator effect. In conclusion, gentiacaulein and gentiakochianin seem to be the compounds responsible for the vasorelaxing properties of the crude extract of Gentiana kochiana roots. |
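As a small aside on the reported potencies: pIC50 is the negative base-10 logarithm of the IC50 expressed in mol/L, so the values above can be converted back to concentrations. The snippet below is just that arithmetic, applied to the figures quoted in the abstract.

```python
def pic50_to_ic50_uM(pic50):
    """Convert pIC50 (= -log10(IC50 in mol/L)) to an IC50 in micromolar."""
    return 10 ** (-pic50) * 1e6

for name, pic50 in [("gentiacaulein vs norepinephrine", 5.00),
                    ("gentiakochianin vs norepinephrine", 4.95),
                    ("gentiacaulein vs KCl", 4.90),
                    ("gentiakochianin vs KCl", 4.59)]:
    print(f"{name}: IC50 ~ {pic50_to_ic50_uM(pic50):.1f} uM")
```

So a pIC50 of 5.00 corresponds to an IC50 of about 10 µM, and 4.59 to roughly 26 µM.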
Hybrid Stabilization of Thoracic Spine Fractures with Sublaminar Bands and Transpedicular Screws: Description of a Surgical Alternative and Review of the Literature Stabilization of unstable thoracic fractures with transpedicular screws is widely accepted. However, placement of transpedicular screws can cause complications, particularly in the thoracic spine with its physiologically small pedicles. Hybrid stabilization, a combination of sublaminar bands and pedicle screws, might reduce the rate of misplaced screws and can be helpful in special anatomic circumstances, such as preexisting scoliosis and osteoporosis. We report on two patients suffering from unstable thoracic fractures, of T5 in one case and T3, T4, and T5 in the other case, with preexisting scoliosis and extremely small pedicles. Additionally, one patient had osteoporosis. Patients received hybrid stabilization with pedicle screws adjacent to the fractured vertebral bodies and sublaminar bands at the levels above and below the pedicle screws. No complications occurred. Follow-up was 12 months with clinically uneventful postoperative courses. No signs of implant failure or loss of reduction could be detected. In patients with very small thoracic pedicles, scoliosis, and/or osteoporosis, hybrid stabilization with sublaminar bands and pedicle screws can be a viable alternative to long pedicle screw constructs. Introduction Indications for surgical intervention in thoracic fractures are neurological symptoms, feared neurological deterioration, unstable fractures, or unbearable pain despite immobilization. Fractures of thoracic vertebrae are usually stabilized by an internal fixateur with transpedicular screws and rods. To achieve adequate biomechanical stability, long posterior constructs are recommended. Our treatment protocol includes stabilization of two vertebrae above the fractured one and two below, with eight screws in total. The screws are linked by two vertical rods and, if necessary, one cross link. The anatomic specifics of the thoracic vertebrae often lead to problems in placing the pedicle screws. The pedicles are smaller and formed differently. Misplaced screws can lead to severe complications. The Universal Clamp System (Zimmer, Warsaw, USA) was developed as an advancement of Luque wiring, in which sublaminar wires were placed around the lamina. The sharp wires could easily cause injuries to the spinal cord, resulting in neurological deficits. The Universal Clamp (UC) System is based on the same idea but uses flexible bands made from polyethylene. After the positioning of the band, it is fixed in a clamp which is fastened to the vertical rod. The system is typically used in deformity surgery and has occasionally been used in fractures. In comparison to multisegmental transpedicular stabilization, a so-called "hybrid stabilization" with sublaminar clamps and only four transpedicular monoaxial pedicle screws has the advantage that fewer screws have to be placed in the thoracic spine. With the hybrid stabilization, the two vertebrae adjacent to the fractured one, the one above and the one below, are treated with pedicle screws, and the two vertebrae next to them, again one cranial and one caudal, are fixed by sublaminar clamps. We report for the first time two cases of thoracic fractures treated with the above-described hybrid stabilization. Surgical Technique. 
The sublaminar bands were used in combination with monoaxial (Medtronic, Minneapolis, USA) or polyaxial (Stryker, Kalamazoo, USA) pedicle screws of 5.5 mm diameter. The vertebrae next to the fractured one were supplied with monoaxial or polyaxial transpedicular screws, whereas the vertebrae next to these were fixed with sublaminar clamps. First, the four necessary pedicle screws are set in a standard open technique under image intensifier control. Then the vertebrae are prepared to pass the sublaminar bands. To this end, the ligamentum flavum is partially removed. An arcuated dissector can help in checking the free space between the lamina and the dura. If the space is adequate, the sublaminar bands can be passed. First, the stiff part of the band is run in the clamp and then moved under the lamina from caudal to cranial. The band should always stay in contact with the lamina so the dura is not endangered. The surgeon has to verify that the band is flat against the lamina and that it is not twisted. Then the band is preassembled with the clamp and is pushed into the rod. The rods have to be prepared and should be long enough so they can hold the clamps and the screws. Anatomical bending of the rods has to be done before fixing them to the implants. After preparing the rods, the clamps can be connected to them. The rods should now connect the screws and all the clamps. The pedicle screws are fixed to the rods before fixing the sublaminar bands. With this maneuver the first step of reduction of the fracture can be ensured. The final reduction is achieved by using the reposition tool. The bands are strained until the clamps are properly fixed to the laminae. When all implants are set in the desired position, the clamp screws of the UC System are fixed and the bands are shortened. Case Reports She did not suffer from any neurological deficit. Because of the instability and severe pain we decided to treat the patient surgically. We stabilized the thoracic spine from the 3rd to the 7th thoracic vertebra from posterior with pedicle screws and the sublaminar bands in the above-described hybrid technique. Since the patient had very small thoracic pedicles (Figure 1(c); 2.5 mm on the left side of the vertebra above the fractured one; on the right side the pedicles were blind, 1.6 mm in the vertebra below the fractured one), the only way to place the pedicle screws was parapedicularly. The sublaminar bands were placed at the 3rd and 7th thoracic vertebrae. Two rods were placed and fixed to the screws and the sublaminar clamps. A posterior fusion was added by using local bone graft together with demineralized bone matrix (DBM Pasty, Synthes, West Chester, USA) and tricalcium phosphate (chronOS, Synthes, West Chester, USA). No intraoperative complication occurred. The blood loss was 500 mL and the total operating time was 180 minutes. The postoperative course was uneventful. The patient could be mobilized without orthosis. She could be discharged on the 10th postoperative day. At final follow-up after 12 months (Figures 2(a) and 2(b)) the patient did not suffer from any back pain. The patient did not have any neurological deficit. The injury of the second patient was treated surgically because of the instability and the deformation. Reduction and stabilization were performed from the 1st to the 7th thoracic vertebra. The surgery was carried out one day after the accident. The 1st and 7th thoracic vertebrae were supplied with sublaminar bands, whereas the 2nd and 6th thoracic vertebrae were supplied with polyaxial pedicle screws. 
A posterior fusion from T1 to T7 was initiated using demineralized bone matrix (DBM Pasty) and tricalcium phosphate (chronOS). No complication occurred. The blood loss was 550 mL and the total operation time was 280 minutes. The postoperative course did not show any complicating events. The patient was mobilized without orthosis. She could be discharged to a geriatric rehabilitation clinic on the 12th postoperative day. When leaving the clinic she could walk with a walking frame by herself. At final follow-up at 12 months (Figures 4(a) and 4(b)) the patient was ambulatory and complained about moderate back pain (VAS 4). Discussion The two cases illustrate the feasibility of the hybrid technique in thoracic fractures with difficult anatomical conditions. To the best of our knowledge, this hybrid technique has not been published before. Gazzeri et al. conducted a study with a different hybrid construct and stabilized thoracolumbar vertebrae with pedicle screws and the UC System. Some patients suffered from a vertebral fracture, and the surgeons implanted screws in two to three segments underneath the fracture and sublaminar bands above the fractured vertebra. Our construct differs in the arrangement of the sublaminar bands and pedicle screws. The biomechanical characteristics of the implants are well distributed, so the stronger pedicle screws, with a biomechanical failure strength of 1000 N, are adjacent to the fracture, potentially resulting in less loss of reduction. Long constructs are widely recommended in thoracic fractures and have advantages over short-segment stabilization concerning biomechanical stability and loss of correction. McLain claimed that long-segment stabilization has several advantages when used in thoracic fractures. Among other things, the advantages of the long constructs are the multiple fixation points, which distribute the forces necessary for the correction over a greater number of segments. So the force on every point is reduced and the risk of pullout failure is minimized. Disch et al. studied the biomechanical stability of different types of stabilization after spondylectomy and showed that, in the thoracolumbar spine, long-segment stabilization has a higher stiffness in all motion planes. Placing screws in the thoracic vertebrae is often difficult because of the special anatomic features of the thoracic pedicles. The pedicles have smaller diameters compared to the rest of the thoracolumbar spine. Typically, the smallest ones can be found in the 4th thoracic vertebra, with an average of 4.5 ± 1.2 mm. If the screw diameter exceeds 80% of the pedicle diameter, it can cause morphological changes of the pedicle. This can result in pedicle fractures, breakout of the screws, and expansion of the pedicle. A burst fracture of the pedicle facilitates screw breakout. In weak bone especially, as in patients suffering from osteoporosis, the risk of a breakout is higher than in healthy bone. The consequences of the above-mentioned anatomical situation are frequently misplaced pedicle screws, mostly in the lateral direction because of the thinner pedicle wall on the lateral side. With the proposed hybrid stabilization, the risk of screw misplacement is lowered due to the simple fact that fewer screws have to be placed. 
In special cases, the anatomy of the thoracic vertebrae can be even more challenging, for instance in deformed spines (scoliosis), in osteoporosis, and in deformed vertebrae, such as wedge-shaped vertebrae after a previous fracture. Patients who are suffering from scoliosis have small pedicles on the concave side of the spine, especially at the apex and along the main curve. Therefore, some authors suggest that screw insertion at the apex of the curve should be avoided. Pedicle diameters are reported to be between 2.5 and 4.0 mm. Kotani et al. reported pedicle perforation in 11% of the observed patients with scoliosis. In our case the patient had a preexisting scoliosis with pedicle diameters of less than 3 mm. Placing the sublaminar bands was therefore much easier and safer than inserting the pedicle screws. Hybrid stabilization with pedicle screws and sublaminar bands might also be beneficial in patients with osteoporosis. Chao et al. showed in a biomechanical study that in patients with a T-score of −5.2 the pull-out force amounts to 144.3 ± 92.1 N. In comparison, in healthy bone, the pull-out force amounts to around 1000 N. In osteoporotic spines, the posterior part with the lamina is stronger than the anterior parts. So the lamina should be an ideal site for fixing an implant. The Universal Clamp System showed high failure loads of 401 ± 120 N in fresh frozen human thoracic spines. In our case number two, no postoperative implant failure occurred despite severe osteoporosis and a long construct. No cement augmentation of the screws was necessary. Another problem for the surgeon is the fact that it can be difficult to visualize bony thoracic structures with the image intensifier intraoperatively. When using sublaminar bands, no radiological control is necessary. The use of sublaminar bands also has some limitations. First, a decompression is necessary, which carries the risk of dural tears or even damage to the cord. If a laminectomy is performed, no clamps can be fixed at this level. Finally, inserting the sublaminar band takes more time than placing a pedicle screw, at least in our hands. A slightly prolonged surgical time should be considered. Limitations are the short follow-up time of one year and the missing CT scan after one year. So the fusion rate cannot be determined with certainty. Conclusion In patients with a combination of an unstable fracture and difficult anatomic conditions, hybrid stabilization with sublaminar bands and pedicle screws is a reliable technique. Scoliosis, osteoporosis, or small pedicles are risk factors for pedicle screw failure. The use of sublaminar bands can help to avoid such complications. |
Results of G004, a phase IIb study in recurrent glioblastoma patients with the TGF-β2 targeted compound AP 12009. 1553 Background: In high-grade glioma (HGG), TGF-β2 expression strongly correlates with tumor grade and is highly predictive of disease outcome. The compound AP 12009 inhibits TGF-β2 expression. Preclinical results revealed strong multimodal activity, including reversal of TGF-β-induced immunosuppression and inhibition of tumor cell migration and proliferation. In 3 preceding phase I/II dose-escalation studies, 24 HGG patients had been treated with AP 12009. METHODS G004 is an international open-label, actively controlled, dose-finding phase IIb study. The objective is a comparison of two doses of AP 12009 and standard chemotherapy for efficacy and safety. 145 patients with histopathologically confirmed recurrent anaplastic astrocytoma (AA, WHO grade III) or glioblastoma (GBM, WHO grade IV) were randomized into one of 3 treatment arms. 134 patients received treatment with AP 12009 10 µM, AP 12009 80 µM, or standard chemotherapy (TMZ or PCV). AP 12009 was applied locoregionally by convection-enhanced delivery during a 6-month active treatment period with 7-day-on, 7-day-off cycles. The primary endpoint is tumor response by local and central MRI reading. All patients have completed active treatment. Follow-up for survival and tumor response assessed by local and central MRI reading is ongoing. RESULTS Here we report on patients with recurrent GBM (for AA see separate abstract). 96 GBM patients (37% female, 63% male; median age 51 years, range 20-74; median Karnofsky performance status 90, range 70-100) have been treated. 63 GBM patients received AP 12009 (28 patients 10 µM, 35 patients 80 µM), and 33 patients received standard chemotherapy. Data were evaluated by an independent Data and Safety Monitoring Board. Up to now, in 89 patients treated with AP 12009 (AA and GBM patients), 6 SAEs possibly related to the study drug and 37 procedure-related SAEs (92% mild or moderate) have been documented. Several long-term tumor responses were observed by local MRI reading. Exact response rates are being determined by central reading. CONCLUSIONS Responses in patients treated with AP 12009, in both AA and GBM patients, are long lasting with a good quality of life. Phase III clinical trials in AA and GBM patients are currently in preparation. |
Study on Dynamical Behaviors Mechanism of Sine Voltage Compensation in Buck Converters This paper analyzes how the amplitude and phase of a sine voltage compensation signal added to the system affect its stable behavior. By analyzing the change of the characteristic multipliers of the monodromy matrix and the conditions for period-doubling bifurcation, it reveals that the mechanism by which the sine voltage compensation signal alters the dynamics of the voltage-mode controlled buck converter lies in changing the duty cycle without affecting the steady-state error of the system, thereby achieving stabilization control of bifurcation and chaotic behaviors. Simulation and experimental results prove the correctness of the theoretical analysis. |
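A rough feel for the mechanism described above can be obtained from a minimal time-domain simulation. The Python sketch below uses an assumed benchmark parameter set for a voltage-mode controlled buck converter, not values from the paper, and simply adds a sine compensation term at the switching frequency to the control voltage before it is compared with the ramp; sweeping the assumed amplitude A_c and phase phi illustrates how the compensation shifts the duty cycle while the regulated output stays near its steady-state value.

```python
import numpy as np

# Minimal sketch (assumed parameters, forward-Euler integration): voltage-mode
# controlled buck converter with a sinusoidal compensation signal on the control voltage.
Vin, R, L, C = 24.0, 22.0, 20e-3, 47e-6       # source voltage, load, inductance, capacitance (assumed)
Vref, a = 11.3, 8.4                            # reference voltage and error-amplifier gain (assumed)
VL, VU, fs = 3.8, 8.2, 2500.0                  # ramp lower/upper levels and switching frequency (assumed)
A_c, phi = 0.5, 0.0                            # amplitude and phase of the sine compensation (assumed)

dt = 1.0 / (fs * 400)                          # 400 integration steps per switching period
iL, vC = 0.6, 11.0                             # initial inductor current and capacitor voltage

for i in range(int(0.2 / dt)):                 # simulate 0.2 s
    t = i * dt
    ramp = VL + (VU - VL) * ((t * fs) % 1.0)   # sawtooth carrier
    vcon = a * (vC - Vref) + A_c * np.sin(2 * np.pi * fs * t + phi)  # control voltage + compensation
    s = 1.0 if vcon < ramp else 0.0            # switch conducts while control voltage is below the ramp
    iL += dt * (s * Vin - vC) / L              # inductor current update
    iL = max(iL, 0.0)                          # crude diode model: current cannot reverse
    vC += dt * (iL - vC / R) / C               # capacitor voltage update

print(f"output voltage after 0.2 s: {vC:.2f} V")
```

With A_c = 0 this kind of model is known to show period-doubling and chaotic switching patterns for some source voltages, so the same script can be rerun with different compensation amplitudes to check whether period-1 operation is recovered, which is the stabilizing effect the paper analyzes.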
Effects of Ursolic Acid Derivatives on Caco-2 Cells and Their Alleviating Role in Streptozocin-Induced Type 2 Diabetic Rats In this study, the effect and mechanism of a series of ursolic acid (UA) derivatives on glucose uptake were investigated in a Caco-2 cell model. Their effects on hyperglycemia, hyperlipidemia and oxidative stress were also demonstrated in streptozocin (STZ)-induced diabetic rats. 2-[N-(7-Nitrobenz-2-oxa-1,3-diazol-4-yl)amino]-2-deoxy-D-glucose (2-NBDG) was used as a fluorescent probe in the Caco-2 cell model to screen UA derivatives by glucose uptake and by expression of the glucose transporter proteins SGLT-1 and GLUT-2. Moreover, STZ-induced diabetic rats were administered these derivatives over 4 weeks of treatment. The fasting blood glucose (FBG), insulin levels, biochemical parameters, lipid levels, and oxidative stress markers were then evaluated. The results indicated that compounds 10 and 11 significantly inhibited 2-NBDG uptake under both Na⁺-dependent and Na⁺-independent conditions by decreasing SGLT-1 and GLUT-2 expression in the Caco-2 cell model. Further in vivo studies revealed that compound 10 significantly reduced hyperglycemia by increasing levels of serum insulin, total protein, and albumin, while fasting blood glucose, body weight and food intake were restored much closer to those of normal rats. Compounds 10 and 11 showed hypolipidemic activity by decreasing total cholesterol (TC) and triglycerides (TG). Furthermore, compound 10 showed antioxidant potential, confirmed by elevation of glutathione (GSH) and superoxide dismutase (SOD) and reduction of malondialdehyde (MDA) levels in the liver and kidney of diabetic rats. It was concluded that compound 10 caused an apparent inhibition of intestinal glucose uptake in Caco-2 cells and alleviated hyperglycemia, hyperlipidemia and oxidative stress in STZ-induced diabetic rats. Thus, compound 10 could be developed as a potentially complementary therapeutic or prophylactic agent for diabetes mellitus and its complications. Patients with diabetes have lower levels of glutathione (GSH) and superoxide dismutase (SOD), the primary endogenous antioxidants. In contrast, malondialdehyde (MDA), a highly toxic byproduct generated partially by lipid oxidation and ROS, is increased in patients with diabetes. Thus, enhancing the antioxidant capacity is one method to relieve diabetes and its complications. In the present study, an investigation was undertaken to evaluate the effects of UA and its derivatives shown in Figure 1, which were synthesized in our laboratory, on glucose uptake and on the expression of the glucose transporter proteins SGLT-1 and GLUT-2 in a Caco-2 cell model. Furthermore, the anti-hyperglycemic, anti-hyperlipidemic, and antioxidant effects of these derivatives in STZ-induced DM rats were also investigated. Cell Toxicity As shown in Table 1, the cytotoxicity of UA and its derivatives against Caco-2 cells was studied. The results indicate that the viability of Caco-2 cells was greater than 90%, with the exception of cells treated with compound 6, for which the cell viability was 86.12%; thus, it is not suitable for in vivo studies. Values denote mean ± SD, n = 4, for cells treated with each compound at a concentration of 100 µM for 24 h. Figure 2. Effects of selected compounds on glucose uptake in Caco-2 cells under (A) Na⁺-dependent and (B) Na⁺-independent conditions. Cells were treated with vehicle, positive control, and the indicated compounds (100 µM). The data reported represent the means (n = 3) ± SD.
* p < 0.05, ** p < 0.01. The sodium-independent glucose uptake study was performed in the same way as the sodium-dependent one, but NaCl and Na₂HPO₄ were replaced by KCl and K₂HPO₄. As shown in Figure 2B, only phlorizin and compounds 10 and 11 had a significant reducing effect on glucose uptake (* p < 0.05), which was decreased to 75%, 70% and 74% of the control value, respectively, whereas the other compounds had no detectable effect under these conditions. Effect of UA and Its Derivatives on Glucose Transporter Protein Expression SGLT-1 and GLUT-2 are the most important transporters involved in glucose transport in the intestine. Caco-2 cells were treated with 0.1% DMSO, 100 µM phlorizin, 100 µM phloretin, or 100 µM compounds 10-12 for 6 h; the cells were then lysed and analysed by western blotting. The effect of compounds 10-12 on glucose uptake suggested that these compounds may exert their effects through modulation of glucose transporter expression. As shown in Figure 3, compounds 10 and 12 significantly reduced the protein expression of SGLT-1, particularly compound 12, whereas compounds 10 and 11 had significant effects on reducing the expression of GLUT-2. Protein expression levels of SGLT-1 and GLUT-2 were quantified with respect to the level of β-actin and expressed as relative changes in comparison to the vehicle control. The data reported represent the means (n = 3) ± SD. * p < 0.05, ** p < 0.01. Acute Oral Toxicity Study of UA and Its Derivatives The acute oral toxicity study revealed the non-toxic nature of the tested compounds; no toxic reactions or lethality were observed at doses of 100, 200 and 500 mg/kg. Based on this result, 100 mg/kg was selected as the maximum dose for oral administration. Effects of UA and Its Derivatives on Fasting Blood Glucose Levels During the 4-week treatment, the blood glucose levels changed in normal, diabetic control, and diabetic treated rats. The results are presented in Table 2. The fasting blood glucose levels of normal rats did not change until the end of the period, while blood glucose levels were significantly increased in untreated diabetic rats as compared with the normal control. However, the high blood glucose levels decreased in diabetic rats treated with compound 10 and glibenclamide. At the end of the 4-week treatment, 100 mg/kg of compound 10 decreased blood glucose levels by 49.6% compared with the diabetic control, and the glibenclamide group achieved a 54.4% drop in blood glucose levels. In addition, compounds 1, 11 and 12 had no significant effect on lowering fasting blood glucose levels. Table 2. Effects of glibenclamide and the selected compounds on fasting blood glucose levels of normal, diabetic control, and diabetic treated rats. The data reported represent the means (n = 6) ± SD; * mean values that are significantly different from the diabetic control group (p < 0.05). Effect of UA and Its Derivatives on Serum Biochemical Parameters After 4 weeks of treatment, the serum insulin, total protein and albumin levels in untreated diabetic rats were significantly reduced compared to the normal control group. Figure 4 shows that after 4 weeks of administration of glibenclamide (10 mg/kg) and compound 10 (100 mg/kg), the serum insulin, total protein and albumin levels in diabetic rats were increased significantly as compared with the diabetic control. Compounds 1, 11 and 12 had a modest but non-significant effect on increasing insulin levels.
In addition, compounds 1, 11 and 12 caused no appreciable improvement in total protein and albumin levels in diabetic rats. Figure 4. Effects of compounds 1, 10, 11 and 12 on serum biochemical parameters of STZ-induced diabetic rats in comparison with normal and diabetic control rats after 4 weeks of treatment. At the end of the treatment period, rats were fasted for 12 h and blood was drawn to collect the serum. Panels denote (A) serum insulin; (B) albumin and total protein levels. The data reported represent the means (n = 6) ± SD. * Mean values that are significantly different from the diabetic control group (p < 0.05). Effect of UA and Its Derivatives on Body Weight and Food Intake The effects of tested compounds 1, 10, 11 and 12 on changes in body weight and daily food intake are presented in Figure 5. As shown in Figure 5A, there was a significant increase in body weight for compound 10-treated diabetic rats when compared with the diabetic control and the glibenclamide treatment group by the end of the treatment period, while the other compounds had no such appreciable effect. Moreover, as shown in Figure 5B, there was a significant increase in food intake in diabetic rats. After oral administration of glibenclamide and compounds 10 and 11, these three treatments reduced daily food intake significantly compared with the untreated diabetic rats. In contrast, compounds 1 and 12 produced no obvious decrease. The data reported represent the means (n = 6) ± SD. * Significant difference compared to diabetic control (p < 0.05). Effect of UA and Its Derivatives on Hyperlipidemia Figure 6 shows the effect of glibenclamide and UA derivatives on serum TG, TC, LDL-C and HDL-C levels in diabetic rats and the normal control. Serum TG, TC and LDL-C levels were significantly elevated in the diabetic control when compared with the normal control, while HDL-C levels in the diabetic control were significantly decreased when compared with those in the normal control. However, serum TC and TG levels were significantly decreased after treatment with glibenclamide and compounds 10 and 11. Compared with the diabetic control, serum LDL-C levels were also lowered and HDL-C levels were higher with the tested compound treatments, but none of these changes was significant (p < 0.05). Effect of UA and Its Derivatives on Oxidative Stress Figure 7 shows that the levels of GSH and SOD in the diabetic control were reduced while the levels of MDA were significantly increased when compared with those in the normal control. All tested compounds produced an increase in GSH levels in liver and kidney. In particular, glibenclamide and compound 10 demonstrated significant effects on the liver, while compounds 1, 11 and 12 showed a great effect on the kidney. There was a great improvement in SOD levels in the liver and kidney of diabetic rats after treatment with glibenclamide and compounds 10 and 11. Compared with the diabetic control, the MDA levels in the liver of diabetic rats were clearly down-regulated by daily administration of glibenclamide and compounds 1 and 10, with the reductions for glibenclamide and compound 10 reaching significance (p < 0.05). Figure 6. Effects of compounds 1, 10, 11 and 12 on serum lipid profiles of STZ-induced diabetic rats in comparison with normal and diabetic control rats after 4 weeks of treatment. At the end of the treatment period, rats were fasted for 12 h and blood was drawn to collect the serum. The data reported represent the means (n = 6) ± SD.
* Significant difference compared to diabetic control (p < 0.05). The data reported represent the means (n = 6) ± SD. * Significant difference compared to diabetic control (p < 0.05), ** highly significant difference compared to diabetic control (p < 0.01). Discussion UA has been reported to possess a wide range of bioactivities, including positive effects in treating various complications of diabetes and lowering blood glucose levels. However, the anti-diabetic potential of UA derivatives and their mechanism(s) of action have not been thoroughly investigated. In the present study, the activities of UA and some of its derivatives were assayed in a Caco-2 cell model, and the potential of these derivatives in an STZ-induced diabetic rat model was also investigated. The results of MTT assays indicated that at a concentration of 100 µM none of the derivatives showed toxicity to Caco-2 cells, except for compound 6, which is in line with the results of the acute oral toxicity studies in vivo. The effects of these derivatives on glucose uptake under Na⁺-dependent and Na⁺-independent conditions in Caco-2 cells were assessed. The fluorescent glucose analog probe 2-NBDG was used to measure glucose uptake rates, and the specificity of the glucose transporters for the analog probe was confirmed by inhibition of 2-NBDG uptake by D-glucose and phlorizin. Before a meal, the glucose concentration in plasma is much higher than that in the lumen. Any luminal glucose can be quickly captured by SGLT-1, because SGLT-1 is a low-capacity, high-affinity transporter and the only transporter capable of moving glucose against a concentration gradient; GLUT-2 is a high-capacity, low-affinity facilitative transporter that equilibrates glucose between plasma and enterocytes. To gain insight into the effects of UA and its derivatives on glucose uptake, we studied these most important glucose transporter proteins, as they are the major transporters responsible for the absorption of glucose by Caco-2 cells. Under sodium-dependent conditions, both SGLT-1 and GLUT-2 are expected to operate at the apical surface to absorb glucose, while under sodium-free conditions only GLUT-2 mediates glucose uptake. As shown in Figures 2 and 3, compound 10 reduced glucose uptake through the reduction of both SGLT-1 and GLUT-2 expression under Na⁺-dependent conditions. The effect of compound 11 relied on the decrease of GLUT-2, while that of compound 12 relied on the inhibition of SGLT-1 expression. The results also suggested that the inhibition of GLUT-2 by the tested compounds was greater than that of SGLT-1. Phlorizin and compounds 10 and 11 all reduced glucose uptake under both Na⁺-dependent and Na⁺-free conditions, which indicates that the inhibition of GLUT-2 by these three compounds was greater than that of SGLT-1; this was consistent with the results of the protein expression study. Building on the in vitro results, the anti-diabetic potential of UA and its derivatives was investigated in STZ-induced DM rats. After daily administration of the tested compounds, blood glucose levels were significantly reduced by compound 10, whereas insulin levels were increased, and compound 10 (100 mg/kg) had a similar effect to glibenclamide (10 mg/kg) (Figure 5). Elevated insulin levels in diabetics usually normalize serum and tissue proteins by improving protein synthesis and reducing protein degradation or protein glycosylation.
The characteristic loss of body weight associated with STZ-induced DM rats can be attributed to dehydration and catabolism of fat or breakdown of tissue proteins, which leads to muscle wasting. The recovery of body weight observed in the compound 10-treated DM rats could be the result of increased glucose uptake, insulin secretion and decreased fasting blood glucose levels, indicating improved glycemic control in the rats. The dyslipidemia associated with diabetes, typically a combination of hyperlipidemia with insulin resistance and even modest abdominal obesity, involves an elevation of LDL cholesterol, an increase of TG and a decrease of HDL cholesterol. The DM rats treated with compounds 10 and 11 showed that the two compounds have a significant effect in reducing serum TC and TG levels (Figure 6), demonstrating the effectiveness of these compounds against experimental STZ-induced DM in rats. Streptozotocin has been widely used to induce DM, which may result from an increase of reactive oxygen species (ROS) and inhibition of the free radical defense system. ROS may cause peroxidation by reacting with lipids, resulting in elevated lipid peroxidation. The increase in lipid peroxidation might be an indication of a decrease in enzymatic and non-enzymatic antioxidants in the defense mechanisms. It is well known that glutathione (GSH) is a major endogenous antioxidant, one of its major functions being that its sulfhydryl (SH) group is a strong nucleophile that confers antioxidant protection, so the depletion of liver and kidney GSH levels reflects augmented oxidative stress. Compound 10 showed a significant restoration of GSH content in the liver of DM rats (Figure 7A). The natural cellular antioxidant enzyme superoxide dismutase (SOD) plays a pivotal role in oxygen defense metabolism by dismutating superoxide to hydrogen peroxide and molecular oxygen. Compounds 10 and 11 showed a significant increase in SOD levels in the liver and kidney in the DM rat model (Figure 7B). Malondialdehyde (MDA), as the final product of lipid peroxidation, is an indicator of peroxidation level: the higher the concentration of MDA, the more severe the peroxidation in the body. Compound 10 produced a significant reduction of MDA levels in the liver and kidney of DM rats (Figure 7C). After treatment with compound 10, there was an increase in the activities of SOD and GSH and a decrease in MDA levels. These results indicated that compound 10 could effectively protect cells against oxidative stress by scavenging free radicals. Hyperlipidemia in the metabolic syndrome is characterized by a disproportionate elevation of apo B levels. The measurement of fasting glucose and apo B in addition to the fasting lipid profile can help to estimate the risk of coronary artery disease and to guide treatment decisions in patients with the metabolic syndrome. It is known that there are a number of physiological antioxidants in the type 2 DM model, and the depletion of antioxidants in the diabetic condition is a main cause of diabetic pathogenesis. Present therapeutic strategies typically attempt to relieve the clinical manifestations of diabetes and its complications, which justifies the therapeutic use of anti-diabetic agents coupled with antioxidants when diabetes emerges rapidly. In this study, the bioactivity of UA and its derivatives in a Caco-2 cell model and in STZ-induced diabetic rats was studied.
Regarding the mechanism of action of compound 10, it can be concluded that it acts under both Na⁺-dependent and Na⁺-independent conditions by inhibiting glucose transporter protein expression, and that compound 10 has a significant capability of reducing hyperglycemia, hyperlipidemia and oxidative stress in STZ-induced DM rats. Thus, compound 10 not only reduces the intestinal absorption of glucose in STZ-induced DM rats, but also alleviates diabetic complications, and may therefore contribute to effective diabetes management in the future. Preparation of UA and Its Derivatives UA was purchased from Nanjing Zelang Medical Technology Co., Ltd. (Nanjing, China), with over 98% purity. UA derivatives (see Figure 1) were obtained from our research team. These compounds were purified by column chromatography and their structures were all confirmed. In Vitro Cell Culture Human intestinal Caco-2 cells (ATCC, Rockville, MD, USA) were incubated at 37 °C in a humidified atmosphere of 5% CO₂ in air. The cells were cultured in Dulbecco's modified Eagle's medium (DMEM) with 10% heat-inactivated fetal bovine serum, 1% non-essential amino acids, 1% L-glutamine, and 1% penicillin/streptomycin (Gibco Life Technologies, Grand Island, NY, USA). The cells were sub-cultured at confluence by 0.05% trypsin-0.5 mM EDTA treatment before they were used in the experiments. Cytotoxicity Assay An MTT test was conducted to determine the possible toxicity of UA and its derivatives to Caco-2 cells. Briefly, cells were seeded at a density of 1 × 10⁴ cells/mL in a 96-well plate and incubated for 24 h. The next day, cells were incubated with vehicle (0.1% DMSO) or the compounds at 100, 50, 25 and 12.5 µmol/L for 24 h, and then 5 g/L MTT solution (20 µL) was added for 4 h. Absorbance at 570 nm was measured using a multimode plate reader (Infinite 200, Tecan, Mannedorf, Switzerland). The results were expressed as the percentage of control cells, demonstrating cell viability after treatment with the test compounds. Glucose Uptake by Caco-2 Cell Monolayers Caco-2 cells, which were derived from a human colon adenocarcinoma, were selected for intestinal absorption studies because these cells express the morphological characteristics and most of the functional properties of differentiated small-intestinal enterocytes. Caco-2 cells were seeded on 24-well plates at a density of 2 × 10⁵ cells/well. The medium was changed every 1-2 days, and the culture was carried out for 13 days. Caco-2 cells were then placed in serum-free medium for 24 h and washed twice with Hanks' balanced salt solution (HBSS, pH 7.5, 140 mM NaCl, 5 mM KCl, 1.2 mM Na₂HPO₄, 2 mM CaCl₂, 1.2 mM MgSO₄, 20 mM HEPES, 0.2% bovine serum albumin) before the uptake studies. When a sodium-free buffer was required, NaCl and Na₂HPO₄ were replaced with equal amounts of KCl and K₂HPO₄, respectively. After washing, the cells were incubated for 15 min at room temperature in HBSS before the commencement of the experiment. The uptake studies were initiated by adding HBSS containing either control or test solution (100 µM) and 2-NBDG (100 µM) for 30 min at 37 °C. Glucose uptake was stopped by adding a twofold volume of ice-cold PBS, and the wells were washed with ice-cold PBS three times. Fluorescence intensity was measured with a multimode plate reader (Tecan Infinite 200) with a 485 nm excitation and 535 nm emission filter set before and after adding 2-NBDG.
The cells were then lysed in 200 µL of lysis buffer (10 mM Tris-HCl pH 7.4, 150 mM NaCl, 1% Triton X-100, 1 mM EDTA, 0.1% SDS) supplemented immediately before use with 10 µg/mL PMSF. Cells were allowed to lyse on ice, and then vortexed and sonicated. The lysate was used for protein determination; protein concentrations were determined with a BCA kit (Boster, Wuhan, China) with bovine serum albumin as the standard. Western Blot Analysis Caco-2 cells were cultured on 6-well plates at a density of 1 × 10⁶ cells/well for 13 days. The cells were placed in serum-free medium for 24 h prior to the treatment. UA and its derivatives (100 µM) were then added; after 6 h of incubation, the cells were immediately washed three times with ice-cold PBS and lysed in 200 µL of lysis buffer. Cells were allowed to lyse on ice, and then scraped, vortexed, sonicated and stored at −80 °C for further analysis. Protein concentrations were determined as described above and the cell lysate was subjected to western blot analysis. Briefly, equal amounts of protein were denatured in sample buffer, separated on an 8% SDS-PAGE gel, and transferred onto a polyvinylidene difluoride membrane. The membranes were blocked for 1 h at 37 °C with 10% defatted milk in Tris-buffered saline (10 mM Tris-HCl, pH 7.5 and 137 mM NaCl) containing 0.05% Tween 20 (TBST). Membranes were then incubated overnight at 4 °C in blocking buffer with antibodies against GLUT-2, SGLT-1 and β-actin (EMD Millipore, Billerica, MA, USA). Membranes were washed with TBST and incubated for 1 h with the appropriate secondary antibodies. Detection was performed using enhanced chemiluminescence reagent and blue-light-sensitive film. Densitometric analysis of the films was performed with a Hewlett-Packard scanner equipped with a transparency adaptor and UN-SCAN-IT software. In Vivo Experimental Animals Adult male Sprague-Dawley albino rats (180-220 g) were obtained from the Guangdong Medical Lab Animal Center and maintained under standard laboratory conditions in a temperature- (25-30 °C) and light-controlled (12-h light/dark cycle) room with 35%-60% humidity. The animals were acclimatized for 10 days before the experiments and provided with rodent chow and water ad libitum. The investigation was conducted in accordance with the Guide for the Care and Use of Laboratory Animals and approved by the institutional Animal Ethics Committee. Oral Acute Toxicity Studies Healthy adult SD rats were used for this test. The rats were fasted overnight and divided into 13 groups (n = 4); the rats were orally fed compounds 1, 10, 11 and 12 at increasing doses of 100, 200 and 500 mg/kg body weight. The control group was given vehicle solution (0.5% CMC-Na). These rats were continuously observed for 2 h for behavioral, neurological, and autonomic symptoms and then observed again after a period of 24 to 72 h for any sign of lethality or death. Induction of Diabetes Diabetes was induced by a single intraperitoneal injection of streptozotocin (Sigma-Aldrich, St. Louis, MO, USA) at a dose of 65 mg/kg body weight. STZ was dissolved in 0.1 M cold citrate buffer, pH 4.5. The control group was injected with citrate buffer alone. Seven days were allowed for the development of diabetes, after which the plasma glucose level of each rat was determined. Rats with a fasting blood glucose (FBG) above 16.7 mmol/L were considered DM rats and were used for further study.
Experimental Procedure According to body weight and plasma glucose levels, the rats were divided randomly into seven groups of six rats each and treated as follows: Group I: normal rats treated with 0.5% CMC-Na (2 mL/200 g body weight); Group II: diabetic rats treated with 0.5% CMC-Na; Group III: diabetic rats treated with 10 mg/kg of glibenclamide; Group IV: diabetic rats treated with 100 mg/kg of compound 1 (UA); Group V: diabetic rats treated with 100 mg/kg of compound 10; Group VI: diabetic rats treated with 100 mg/kg of compound 11; Group VII: diabetic rats treated with 100 mg/kg of compound 12. The treatment was continued daily for 28 days. Blood glucose levels and body weight were measured every week to ascertain the status of diabetes. After 4 weeks of treatment, the animals were fasted for 12 h and sacrificed by cervical dislocation under mild anaesthesia; blood was collected into heparinized tubes and centrifuged at 2,000 rpm for 10 min. Livers and kidneys were removed, washed and homogenized in ice-cold normal saline. The homogenates were centrifuged at 3,000 rpm for 10 min at 4 °C, and the supernatants were collected for the estimation of antioxidant markers. Biochemical Analysis Fasting blood glucose levels were estimated on days 0, 7, 14 and 28 with a single-touch glucometer (LifeScan, Johnson & Johnson Company, New Brunswick, NJ, USA). Serum insulin level was measured with a commercial enzyme-linked immunosorbent assay kit (Uscn Life Science Inc., Wuhan, China). Serum total protein, albumin, triglycerides (TG), total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), and low-density lipoprotein cholesterol (LDL-C) were measured with an automatic biochemical analyzer (Sinnowa D240, Nanjing, China). Assessment of Oxidative Stress Markers The tissue homogenates of liver and kidney were used for the estimation of antioxidant markers. Glutathione (GSH) level was measured with 5,5'-dithiobis(2-nitrobenzoic acid) (DTNB) as described by Sedlak and Lindsay. Superoxide dismutase (SOD) activity was assessed according to the method of Sinha. Malondialdehyde (MDA) content was estimated with the thiobarbituric acid method. Statistical Analysis All experimental results were expressed as mean ± standard deviation (SD). The significance of differences between the means of the experimental groups was determined by analysis of variance (ANOVA), followed by a Tukey-Kramer multiple comparisons test (GraphPad version 5.0; GraphPad Software Inc., San Diego, CA, USA). Values were considered significant when p < 0.05. Conclusions In summary, our study suggests that compound 10 displays an inhibitory effect on 2-NBDG uptake by inhibiting SGLT-1 and GLUT-2 transporter protein expression in Caco-2 cells. This observation was corroborated by its benefits in attenuating hyperglycemia, hyperlipidemia and oxidative stress in a DM rat model. Moreover, compound 10 is easily obtained from UA by a one-step structural modification. These findings support the application of compound 10 as a potential source for the discovery of new and active oral medicines for the future therapy of diabetes and its complications. |
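The statistical analysis described above (one-way ANOVA followed by a Tukey-type multiple-comparison test on groups of six rats) can be reproduced with standard Python libraries. The sketch below uses invented fasting-blood-glucose numbers purely for illustration; only the group sizes (n = 6) mirror the study design, and statsmodels' Tukey HSD is used in place of GraphPad's Tukey-Kramer procedure.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative fasting blood glucose values (mg/dL), n = 6 per group -- not the study data.
normal   = np.array([92, 88, 95, 90, 87, 93], dtype=float)
diabetic = np.array([420, 455, 398, 462, 430, 441], dtype=float)
cmpd10   = np.array([210, 235, 198, 247, 222, 215], dtype=float)

# One-way ANOVA across the three groups
F, p = f_oneway(normal, diabetic, cmpd10)
print(f"ANOVA: F = {F:.1f}, p = {p:.3g}")

# Pairwise post-hoc comparisons at alpha = 0.05
values = np.concatenate([normal, diabetic, cmpd10])
groups = ["normal"] * 6 + ["diabetic control"] * 6 + ["compound 10"] * 6
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```

With balanced groups of equal size, Tukey HSD and Tukey-Kramer coincide, so the substitution does not change the interpretation of which group means differ at p < 0.05.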
Sound and Vibration Damping Properties of Flax Fiber Reinforced Composites In recent years, the automobile and construction industries have focused on lightweight, environmentally friendly materials with good mechanical properties. Glass fiber reinforced composites have excellent specific properties and are widely used because of their reduced mass. However, the manufacturing of glass fibers and their end-of-life disposal are major problems for the environment. To overcome these problems, natural fibers are used to manufacture composites. Flax is one of the naturally available fibers and has better mechanical properties than most other natural fibers. It needs to be pointed out that most of the research effort on flax fiber reinforced composites focuses on manufacturing techniques and primary mechanical properties, not on secondary properties like sound absorption and vibration damping. In this paper, the sound absorption and vibration damping properties of flax fiber reinforced composites were characterized and compared with those of glass fiber reinforced composites. It was experimentally observed that the sound absorption coefficient of flax fiber reinforced composites is 21.42% and 25% higher than that of glass fiber reinforced composites at the higher frequency level (2000 Hz) and the lower frequency level (100 Hz), respectively. From the vibration study it was observed that the flax fiber reinforced composites have 51.03% higher vibration damping than the glass fiber reinforced composites. The specific flexural strength and specific flexural modulus of flax fiber reinforced composites are also good. These results suggest that flax fiber reinforced composites could be a viable candidate for applications which need good sound and vibration properties. Introduction Traditional fiber reinforced composite materials consist of either glass or carbon fibers coupled with a resin. These materials are strong, stiff and lightweight, often providing superior mechanical performance at a reduced weight compared to their metallic counterparts. Utilizing these materials provides an even greater improvement in mechanical performance and is preferred in many applications with weight constraints. Unfortunately, the lightweight and stiff properties of traditional materials (glass, carbon, Kevlar) as structures make them efficient noise radiators compared to metallic structures. Noise refers to irregular and chaotic sound which disturbs people's work and impairs people's health. In recent years, with the rapid development of modern industry and transportation, noise pollution has become increasingly prominent and has become a major cause of environmental pollution and ill health. There are two main methods to control noise pollution. One is control at the noise source, that is, to make loud devices or equipment quieter by design; the other is to use a variety of noise reduction materials with special structures along the transmission path. In the last few years, acoustics and materials science experts have introduced new noise reduction materials, and there are more and more applications of fibers in the control of noise pollution. Noise reduction materials are divided into two broad categories: one is sound-absorbing material and the other is acoustic (sound-insulating) material. Many fiber reinforced composites are used as sound-absorbing materials and only a small number of them are also used as acoustic materials.
Many engineering structures made from composites, including military equipment, automobiles, aircraft, boat hulls, wind turbine blades and spacecraft, often suffer from vibrations during their normal operation. Micro-cracks present in these structures propagate rapidly due to fatigue caused by vibrations, resulting in premature failure. Sound absorbing properties and vibration damping properties are often secondary design criteria in composite structures, whereas mechanical performance and weight are the primary concerns. Thus, rather than sacrificing mechanical performance, common state-of-the-art methods involve the addition of sound absorbing and vibration damping material, which is expensive, labor-intensive, and adds more weight to the structure; this often raises issues of structural integrity. Over the last couple of decades there has been an increasing demand for materials that are more environmentally friendly. Many studies have been performed on natural-material-based composites, which take advantage of natural materials while retaining superior mechanical performance over metallic structures, and which also offer good sound absorbing and vibration damping properties. These natural materials could essentially be grown for the purpose of manufacturing composites, in turn providing benefits such as being both biodegradable and recyclable. Moreover, replacing synthetic materials with natural materials results in a reduction in carbon emissions, since oil and other carbon products are needed for the fabrication of synthetic structures. The goal of this study is to explore the sound absorbing and damping properties of composites made from natural materials and compare them with commonly used traditional composites, such as glass reinforced composites. By utilizing such materials in the fabrication of structures, significant reductions in emissions could be achieved, along with the ability to have materials which are renewable, recyclable and biodegradable. Natural-material-based composites with improved acoustic performance and damping properties will be an environmentally friendly solution to the structural noise radiation challenge. Among natural fibers, flax has good mechanical properties that are nearly equal to those of E-glass fiber. In the present study, three different types of composite materials (glass, flax, and a combination of glass and flax) were fabricated using Vacuum Assisted Resin Transfer Molding (VARTM), and sound and vibration tests were carried out in order to investigate the sound absorption and vibration damping properties. Materials for fabrication of composite laminates The utilization of lightweight and low-cost natural fibers offers the potential to replace a large segment of glass and other synthetic fibers in numerous automotive and construction applications. Natural fibers such as kenaf, hemp, flax, jute, and sisal provide reinforcement with reductions in weight and cost, less CO₂ consumption during manufacturing, recyclability, and the added benefit that these fiber sources are "green" or eco-friendly. Among all natural fibers, flax has good mechanical properties, nearly equal to those of E-glass fiber. This study uses flax as a natural reinforcement, E-glass as a synthetic reinforcement, and E-glass plus flax as a hybrid reinforcement for manufacturing composite laminates for comparison purposes.
Flax was purchased from Lineo, Belgium, as woven roving (FlaxPly BL 300 GSM) and E-glass was purchased from Covai Seenu & Company Pvt. Ltd., India, as woven roving (E-glass Ply BL 300 GSM). Epoxy was used as the matrix for manufacturing composite laminates with both flax and E-glass fibers. The epoxy (Araldite LY 556) was purchased from Covai Seenu & Company Pvt. Ltd., India. Fabrication of composite laminates The composite laminates were manufactured by Vacuum Assisted Resin Transfer Moulding (VARTM). The mold surface used for the fabrication of the composite laminates was cleaned using acetone (cleaning agent), and a coating of wax was applied on the mold surface for easy removal of the composite laminates. The fiber materials (woven roving) were cut to the required shape and size and placed on the mold surface. The number of layers of reinforcement was chosen based on the required thickness. Table 1 shows the details of the fiber materials used for fabrication. The peel ply was placed over the fibers and a distribution medium was laid on top of the peel ply. The entire layup was then bagged and placed under vacuum. The resin used in this fabrication process, Araldite LY 556, was mixed with hardener (1:10 ratio) and impregnated throughout the reinforcement under vacuum. After about 12 h, the bag was removed, and the laminate was left exposed to the atmosphere to finish venting the styrene gas produced from the curing resin. The laminates were then cut to the required shape and dimensions. Sound absorption coefficient measurement The sound absorption coefficient was measured with the help of an impedance tube tester as per ASTM standard E1050. In the impedance tube testing method, a plane wave is generated in a tube by a sound source and the sound pressures are then measured at microphone positions in close proximity to the sample. Fig. 2 shows the schematic diagram of the impedance tube tester. An impedance tube was used with a sound source (loudspeaker) connected to one end, and the test sample shown in fig. 3 was mounted at the other end. The dimensions and number of layers of reinforcement for the different specimens are listed in table 2. The loudspeaker generates broadband random sound waves; the sound waves propagating as plane waves in the tube hit the sample, are partially absorbed, and are subsequently reflected. The acoustical properties of the test samples were measured in the frequency range of 100-2000 Hz. This system tests a sound absorptive material, processes the results, and reports them as a graph of the absorption coefficient at various frequencies. Thus, the absorption coefficient of each sample was obtained. The ability of the composite material to absorb unwanted noise is based on dissipation of the sound wave energy as it passes through the material and is redirected by the fibers, and also on conversion of some of the energy into heat. The original energy less the remaining unabsorbed energy, expressed relative to the original energy, gives the measurement referred to as the absorption coefficient (α). Vibration damping factor measurement The damping factor was determined by using the free vibration method as per ASTM standard E756. Specimens were cut to the specified dimensions from the laminates. The dimensions and number of layers of reinforcement for the different specimens are listed in table 3. The specimens were clamped as cantilever beams using a fixture. Figure 5 shows the different vibration test specimens.
An accelerometer supplied by Dytran, with a sensitivity of 96.72 mV/g, was placed at the tip of the specimen at the free end. The output signals were acquired by connecting the accelerometer to a Data Acquisition (DAQ) card 9234, which was connected to a PC and interfaced with Laboratory Virtual Instrument Engineering Workbench (LabVIEW) software supplied by National Instruments. Fig. 6 shows the free vibration response of the composite laminates obtained from the LabVIEW software while the specimen was made to oscillate. A number of trials were performed to obtain accurate readings for the calculation of the damping factor (ζ). The damping factor was calculated by taking the values of successive peaks from the free vibration response graph (waveform graph) obtained from the LabVIEW software and substituting them into Equation 1 (an illustrative calculation of this step is sketched after this study). Flexural strength measurement: The flexural test was performed as per ASTM D790. It is used to determine the flexural strength and modulus of the composites. Specimens were tested in a 3-point bending apparatus with a span-to-thickness ratio of 8. The dimensions and number of layers of reinforcement for the different specimens are listed in table 4. The specimens were tested using a Zwick/Roell universal test machine at a crosshead speed of 1.2 mm/min. Figure 6 shows the different flexural test specimens. During the test, load vs. extension of the beam was recorded. From fig. 8 it is evident that the sound absorption coefficient of FE is greater than that of GE and GFE at all frequency levels. The maximum sound absorption is observed at 1000 Hz for all the reinforcements. The sound absorption of FE is 21.42% higher and that of GFE 14.28% higher than that of GE at the higher frequency of 2000 Hz. At the lower frequency level (100 Hz), FE has 25% higher sound absorption than GE, while GFE has similar sound absorption to GE. Out of the three developed fiber reinforced composites, FE, with the highest sound absorption coefficient, can be suitably used in applications where sound absorption is considered an important design criterion. From fig. 9 it is inferred that, compared to GE, FE has 51.03% higher and GFE has 10.73% higher vibration damping, which means that the utilisation of natural fiber reinforced composites gives better vibration damping properties. From fig. 10 it is evident that FE has 36.07% lower flexural strength compared to GE and GFE has 29.93% lower flexural strength compared to GE. Fig. 11 shows the flexural modulus of FE, GFE, and GE. The loading in the bending test consists of tension, compression and shear forces. All specimens were tested along the fiber direction and they generally experienced brittle failure in the outer ply as delamination on the tensile surface. The delamination starts at the middle of the specimen because of the maximum bending moment; in the middle section of the tensile surface, fiber rupture occurred. Delamination was observed on both tensile and compressive surfaces of the specimens. Large deflection was achieved before fiber failure. Conclusions By using natural fiber based composite materials, it is possible to create composite laminates with superior acoustic and vibration damping performance without sacrifices in stiffness-to-weight ratios. The sound absorption coefficient of flax fiber reinforced composites is 21.42% higher than that of glass fiber reinforced composites at the higher frequency level (2000 Hz).
At the lower frequency level (100 Hz), the flax fiber reinforced composites have 25% higher sound absorption than the glass fiber reinforced composites. From the vibration study it is observed that, compared to the glass fiber reinforced composites, the flax fiber reinforced composites have 51.03% higher vibration damping, which means that the utilization of natural fiber reinforced composites gives better vibration damping properties. The weight of the flax fiber reinforced composites is also 33.33% less than that of the glass fiber reinforced composites. The specific flexural strength and specific flexural modulus of flax fiber reinforced composites are also on par with those of glass fiber reinforced composites. These results suggest that flax fiber reinforced composites could be a viable candidate for applications which need good sound and vibration properties. |
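The damping factor calculation referred to in the vibration section (successive peak amplitudes from the free-vibration response substituted into Equation 1) is, in the usual free-decay approach, the logarithmic-decrement method. The Python sketch below assumes that relation, since Equation 1 itself is not reproduced here, and the peak amplitudes are illustrative numbers rather than measured values.

```python
import numpy as np

def damping_factor(peaks):
    """Estimate the damping ratio from successive free-vibration peak amplitudes
    using the logarithmic decrement (assumed form of the paper's Equation 1)."""
    peaks = np.asarray(peaks, dtype=float)
    n = len(peaks) - 1
    delta = np.log(peaks[0] / peaks[-1]) / n          # logarithmic decrement per cycle
    return delta / np.sqrt(4 * np.pi**2 + delta**2)   # damping ratio

# Illustrative peak amplitudes read off a free-vibration waveform -- not measured data.
flax_peaks  = [1.00, 0.72, 0.52, 0.37, 0.27]
glass_peaks = [1.00, 0.81, 0.66, 0.53, 0.43]
print(f"flax  damping ratio ~ {damping_factor(flax_peaks):.4f}")
print(f"glass damping ratio ~ {damping_factor(glass_peaks):.4f}")
```

Averaging the estimate over several pairs of peaks, as done over the repeated trials in the study, reduces the sensitivity of the result to noise in any single peak reading.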
Studies of the Toxicological Potential of Tripeptides (L-Valyl-L-prolyl-L-proline and L-Isoleucyl-L-prolyl-L-proline): II. Introduction The consumption of fermented milk to maintain good health, including the maintenance of normal blood pressure, is an ancient tradition in a number of areas of the world (e.g., East Asia, France). Recent studies have suggested that fermented milk has a normotensive effect in hypertensive rats and humans, but no effect on blood pressure in normotensive rats and humans. Two tripeptides, L-valyl-L-prolyl-L-proline (VPP) and L-isoleucyl-L-prolyl-L-proline (IPP), have been identified as possessing significant angiotensin-converting enzyme inhibitory activity and are therefore believed to be the source of the normotensive effects. This document, the second of nine chapters, provides information on these two tripeptides, including physical/chemical properties, molecular weights, chemical structures, normal consumption in the diet, manufacturing information, regulatory approval in Japan, and Japanese consumption of food containing enhanced levels of VPP plus IPP. In addition, the results of studies in rats and humans conducted to evaluate the effect of these substances on blood pressure are presented. The research suggests that in adult normotensive volunteers, consumption of up to 7.92 mg of VPP and 4.52 mg of IPP daily for 2 weeks causes neither clinical signs nor biologically meaningful effects on systolic or diastolic blood pressure, pulse rate, or clinical pathology (serum chemistry or hematology). However, when a similar study was performed using mildly and moderately hypertensive adults as subjects who consumed 2.52 mg of VPP and 1.64 mg of IPP per day, a significant drop in systolic blood pressure was detected over a prolonged time interval. This chapter also introduces the issue of safety testing for these substances and describes the information to be found in the subsequent seven chapters. |
The business case for a healthy office; a holistic overview of relations between office workspace design and mental health. The role of the physical workspace in employee mental health is often overlooked. As a (mentally) healthy workforce is vital for an organization's success, it is important to optimize office workspace conditions. Previous studies on the effects of the physical workspace on mental health have tended to focus on the effects of a specific element of the physical workspace on one or only a few mental health indicators. This study takes a more holistic approach by addressing the relationship of physical workspace characteristics with ten broad indicators of work-related mental health. Results of a systematic review of empirical evidence show that many aspects of (day)light, office layout/design, and temperature and thermal comfort have been proven to be related to many mental health indicators. Less tacit workspace characteristics (e.g., noise, use of colors) have been explored too, but so far have been related to only a few mental health indicators. |
Predictors of Radiation Field Failure After Definitive Chemoradiation in Patients With Locally Advanced Cervical Cancer Purpose We identified the predictive factors for locoregional failure after definitive chemoradiation in patients with locally advanced cervical cancer. Methods Altogether, 397 patients with locally advanced cervical cancer (stage IB2-IVA) were treated with definitive chemoradiation between June 2001 and February 2010. Platinum-based concurrent chemotherapy was given to all patients, with a median external beam radiotherapy dose of 50.4 Gy in 28 fractions and intracavitary radiotherapy of 30 Gy in 6 fractions. Competing risk regression analysis was used to reveal the predictive factors for locoregional failure. Results During the median follow-up of 7.2 years, locoregional failure occurred in 51 (12.9%) patients. The estimated 3-year rate of locoregional control was 89%, whereas the overall survival rate was 82%. After univariate and multivariate analyses, large tumor size (>5 cm), young age (≤40 years), nonsquamous histology, positive lymph node on magnetic resonance imaging, and advanced stage (III-IV) were identified as risk factors for locoregional failure (P = 0.003, P = 0.075, P = 0.005, P = 0.055, and P < 0.001, respectively). After risk grouping according to the coefficients from the multivariate model, we identified a high-risk group for locoregional failure after treatment with definitive chemoradiation as follows: tumor size larger than 5 cm and at least 1 other risk factor, or tumor size 5 cm or less and at least 3 other risk factors. The estimated cumulative 3-year rate of locoregional failure in the high-risk group was 26%, which was significantly higher than that of the low-risk group (7%, P < 0.001). The 3-year overall survival rates of the 2 groups were also significantly different (57% vs 86%, P < 0.001). Conclusions Large tumor size (>5 cm), young age (≤40 years), nonsquamous histology, positive lymph node on magnetic resonance imaging, and advanced stage are all risk factors for locoregional failure after definitive platinum-based chemoradiation in patients with locally advanced cervical cancer. In the high-risk group, further clinical trials are warranted to improve the locoregional control rate. |
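The cumulative failure rates quoted above come from a competing risk framework, in which death without locoregional failure is treated as a competing event rather than as censoring. The Python sketch below computes a nonparametric (Aalen-Johansen type) cumulative incidence on toy follow-up data; it is illustrative only and does not reproduce the study's competing-risk regression, which additionally models covariates (typically via a Fine-Gray model).

```python
import numpy as np

def cumulative_incidence(time, event, cause=1):
    """Cumulative incidence of one failure cause in the presence of competing risks.
    event coding (assumed): 0 = censored, 1 = locoregional failure, 2 = competing event."""
    time = np.asarray(time, float)
    event = np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    n = len(time)
    surv, cif = 1.0, 0.0          # overall event-free survival just before t, cumulative incidence
    out_t, out_cif = [0.0], [0.0]
    i = 0
    while i < n:
        t = time[i]
        at_risk = n - i
        d_cause = np.sum((time == t) & (event == cause))   # failures of the cause of interest at t
        d_any = np.sum((time == t) & (event > 0))          # any event at t
        cif += surv * d_cause / at_risk
        surv *= 1.0 - d_any / at_risk
        out_t.append(t)
        out_cif.append(cif)
        i += int(np.sum(time == t))
    return np.array(out_t), np.array(out_cif)

# Toy follow-up times in years with invented event codes -- not data from the study.
t = [0.8, 1.2, 1.5, 2.0, 2.4, 3.0, 3.1, 4.0, 5.2, 6.0]
e = [1,   0,   2,   1,   0,   1,   0,   2,   0,   0]
times, cif = cumulative_incidence(t, e, cause=1)
print(dict(zip(np.round(times, 1), np.round(cif, 3))))
```

Reading the returned curve at 3 years gives the kind of "estimated 3-year rate of locoregional failure" reported for the high- and low-risk groups.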
Stylometry for E-mail Author Identification and Authentication The identification of the authorship of e-mail messages is of increasing importance due to an increase in the use of e-mail for criminal purposes. An author's unique writing style can be reduced to a pattern by making measurements of various stylometric features from the written text. This paper reports on work to optimize and extend an existing C#-based stylometry system that identifies the author of an arbitrary e-mail by using fifty-five writing style features. The program has been extended to provide feature vector data in a format appropriate for distribution to other project teams for subsequent data mining and classification experiments. |
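The core idea, reducing writing style to a measurable pattern, can be sketched with a few simple lexical features. The Python example below is illustrative only: it computes a handful of generic stylometric measures and packs them into a feature vector, not the fifty-five features of the C# system described in the paper.

```python
import re
from collections import Counter

def stylometric_features(text):
    """A minimal sketch of stylometric feature extraction for authorship analysis.
    The chosen features are generic examples, not the paper's feature set."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    chars = len(text)
    features = {
        "avg_word_length": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
        "digit_ratio": sum(c.isdigit() for c in text) / max(chars, 1),
        "punctuation_ratio": sum(c in ",;:!?-" for c in text) / max(chars, 1),
    }
    # Relative frequency of a few common function words, often strong authorship markers.
    counts = Counter(words)
    for fw in ("the", "of", "and", "to", "in"):
        features[f"freq_{fw}"] = counts[fw] / max(len(words), 1)
    return features

print(stylometric_features("Please find the attached report. Let me know if anything is missing."))
```

Feature dictionaries like this can be flattened into numeric vectors and exported (for example as CSV) for downstream data mining and classification experiments, mirroring the export capability the paper adds to its system.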
Investigating the Influence of Anaesthesiology for Cancer Resection Surgery on Oncologic Outcomes: The Role of Experimental In Vivo Models: The incidence and societal burden of cancer is increasing globally. Surgery is indicated in the majority of solid tumours, and recent research in the emerging field of onco-anaesthesiology suggests that anaesthetic-analgesic interventions in the perioperative period could potentially influence long-term oncologic outcomes. While prospective, randomised controlled clinical trials are the only research method that can conclusively prove a causal relationship between anaesthetic technique and cancer recurrence, live animal (in vivo) experimental models may test the biological plausibility of these hypotheses, and the mechanisms underpinning them, more realistically than limited in vitro modelling. This review outlines the advantages and limitations of available animal models of cancer and how they might be used in perioperative cancer metastasis modelling, including spontaneous or induced tumours and allograft, xenograft, and transgenic tumour models. Introduction In 2020, an estimated 18 million cancer cases were newly diagnosed (excluding nonmelanoma skin cancer), and there were approximately 10 million cancer-related deaths. Surgical resection of solid tumours remains a mainstay of management for more than 60% of tumours because it offers the best chance of cure. Cancer-related mortality is rarely caused by the primary tumour itself, but instead results from the metastatic process and consequent organ dysfunction, which accounts for up to 90% of cancer-related deaths. The original hypothesis that the anaesthetic technique during primary cancer resection surgery of curative intent might influence the risk of cancer recurrence or later metastasis was first proposed over a decade ago. This included debate around a potential pro-tumorigenic effect of opioids. Subsequently, the question arose whether opioid-sparing anaesthesia-analgesia techniques (e.g., regional anaesthesia) and/or Total Intravenous Anaesthesia (TIVA) techniques can reduce the risk of cancer recurrence and improve survival outcomes after primary cancer surgery. To ultimately prove these hypotheses, large prospective randomised controlled clinical trials are required to establish whether a causal relationship exists between anaesthetic techniques and the risk of cancer recurrence following primary cancer surgery. However, preclinical laboratory models, primarily in vivo animal models, retain an important role in translational cancer research for several reasons. Firstly, when designing a robust, prospective, randomised clinical trial it is recommended that the hypothesis be underpinned by quality laboratory evidence to support the trial's rationale. Secondly, animal models allow researchers with limited resources to test numerous hypotheses within a more realistic time frame than may be expected in the human clinical setting. Thirdly, in vivo models allow investigators to study the pharmacodynamic effect of various anaesthetic, analgesic, and perioperative interventions on a whole-organism model of cancer biology, which may in turn generate new hypotheses. Lastly, generating evidence from clinical trials is a slow process and is dependent on several external factors, including the ability to recruit trial participants; the availability of personnel, environmental and equipment resources; and large-scale funding.
For example, the emergence of the COVID-19 pandemic had a profound negative impact on ongoing clinical trials other than COVID-19-associated trials. Therefore, experimental evidence from in vivo models will continue to play an important role in supporting or refuting cancer treatment hypotheses. The emergence of onco-anaesthesiology as a distinct clinical subspecialty has driven the exploration of translational research utilising animal models of cancer, traditionally undertaken by oncology researchers. Therefore, we aimed to summarise the animal models of cancer commonly encountered within in vivo translational cancer research and how they may be applied to ongoing research in onco-anaesthesiology. Animal models of cancer may be classified in a variety of ways. Most simply, they are either spontaneous or induced, and mammalian or non-mammalian. Alternatively, they may be categorized by the method of inducing cancer occurrence. However, spontaneously occurring cancers may occur in genetically engineered animal strains, or such genetic engineering may be induced following exposure to various carcinogens, so there is some cross-over between these descriptions. Non-mammalian animals such as zebrafish benefit from being high-throughput and low-cost, and are ideal for molecular investigation and chemical screening studies; however, significant phenotypical differences limit their usefulness for translational research, so they will not be discussed further. Table 1 summarizes the commonly utilized animal models of cancer, each of which is described below. Xenograft Model A xenograft model (Figure 1) involves the transplantation of cancer cells from one species (e.g., human) into a host animal of a different species (e.g., mouse). This model is immediately constrained because it requires an immunocompromised host animal to prevent immunological rejection of the non-species cancer cells. Transplantation may be ectopic (deposition of cancer cells beneath the skin) or orthotopic (deposition of cancer cells targeted at the organ of interest). Alternatively, cancer cells may be administered intravenously to mimic metastatic spread or the seeding that is thought to occur during solid tumour cancer surgery. Patient-derived xenografts represent an evolution of this approach, utilizing transplantation of fresh tumour biopsies obtained directly from patients into immunocompromised mice to create so-called tumour grafts or avatar mice, enabling the testing and identification of individualized therapies. The advantages of xenograft models are that they are relatively inexpensive when using commercially available cancer cell lines and attractive for translational research due to the ability to mimic cancer cell biological traits and to evaluate therapeutic targets directly in human-derived cancer tissue. However, several disadvantages arise from the requirement for immunocompromised animals, such as a lack of a representative immune response or inflammation, superficial vascularization of the grafts, and limited stroma-tumour interactions. Tumour growth rate is variable and often slow, tumour cell composition may not represent the heterogeneity present in the parent cancer, and the model may not result in metastatic spread, all of which limit the detection of clinically significant metastatic outcomes or falsely increase the perceived efficacy of experimental therapeutic interventions.
The xenograft model also requires quality control, because many cell lines have unknown sources or poorly documented receptor expression, and regulatory safeguards are needed to protect researchers from the high communicability risk of handling human cancer tissue. Despite these shortcomings, xenograft models have been used extensively to identify potential molecular mechanisms underlying the observed anti-tumour effects of propofol and local anaesthetics, as well as pro- and anti-tumour effects following exposure to inhalational anaesthetics. Allograft Model An allograft model (Figure 1) involves the transplantation of cancer cells between animals of the same species (e.g., mouse cancer cells into another mouse), whereas a syngeneic allograft specifically refers to transplantation of cancer cells between genetically identical animals, which effectively eliminates confounding from inter-species interactions. The main advantage this has over xenograft models is the ability to evaluate the host animal's cancer-related immune response when assessing the effect of potential therapeutics. Compared to xenografts, allografts produce larger tumours that metastasize more quickly and reliably, enabling consistency when assessing clinically relevant endpoints. They do, however, present their own limitations. Firstly, allograft models are often artificial, because mice may not form these cancers spontaneously. They also differ from human cancers in a variety of ways, including mice having more resilient immune systems than humans, evident in both innate and adaptive immune responses, and variability in the observed stromal interactions. The preservation of the host immune response underpins why these models are often favoured for investigation in onco-anaesthesiology, where immunomodulation by anaesthetic drugs is one of the most frequently proposed mechanisms to explain how differences in perioperative pharmacotherapy may influence cancer outcomes. A common example from the onco-anaesthesiology literature is the 4T1 syngeneic allogenic mouse model of breast cancer, which has been used to demonstrate the anti-metastatic effects of systemic lidocaine in combination with propofol or sevoflurane anaesthesia, as well as the pro-metastatic effects of methylprednisolone on cancer progression. Companion Animals Testing potential therapeutics on spontaneous cancers that develop in household pets (Figure 1) is an often-underutilized approach that can address some of the limitations encountered in the traditionally favoured mouse models. Cancer occurs in dogs at twice the frequency seen in humans, at an average age of 8.4 years, and dogs' shorter lifespan makes it easier to collect and analyse survival data. Commonly occurring cancers in dogs include lymphoma, melanoma, and mammary carcinoma, and the resulting tumours bear the closest clinical and histopathological resemblance to human cancers of any animal model. This includes some identical tumour oncogenes and tumour suppressor genes involved in promoting cancer development and progression. To our knowledge, companion animals have not been studied in onco-anaesthesiology research; however, veterinary anaesthesiology could potentially be an avenue for further translational research in this field. Transgenic Models Transgenic models (Figure 2) of cancer require genetic engineering that produces genome mutations, via either environmental exposure to carcinogens or genetic editing of fertilized embryos, to increase the likelihood of cancer occurrence.
These are predominantly performed in mice. Chemical carcinogens include N-butyl-N-(4-hydroxybutyl) nitrosamine and asbestos, but induced mutations occur at random and require high-throughput genome sequencing to identify them and extensive validation to identify the specific role of each mutation. Knock-in or knock-out mice can be generated that, respectively, promote the expression of various oncogenes such as HRAS in breast cancer or silence the effect of tumour-suppressor genes such as Brca1 in breast cancer. Genetic editing may be performed in a variety of ways, including retroviral infection, microinjection of DNA (the standard transgene approach) or the 'gene-targeted transgene' approach. The 'gene-targeted transgene' approach involves targeted manipulation of embryonic stem cells, where the desired mutation is identified and expanded before reinjection into mouse blastocysts, whereas it is not possible to control the location and pattern of gene expression using traditional methods, which may lead to unexpected results due to effects on neighbouring genes. Novel alternatives include transposon-based insertional mutagenesis or the clustered regularly interspaced short palindromic repeats (CRISPR)/associated (Cas9) engineered nuclease system, which further enhance and/or simplify the ability to target desired genetic mutations. However, generating new genetically engineered mouse models remains costly, labour-intensive, and time-consuming; it often requires multiple generations of mice to achieve the desired pattern of gene expression, and some mutations may be lethal to the embryos or cause developmental abnormalities or sterility. This issue may be partially addressed by the generation of conditional knock-in or knock-out mice, where further genetic engineering renders the mutation conditional on certain environmental conditions such as exposure to tetracycline or tamoxifen. Despite these limitations, several colonies of live mouse models of cancer are commercially available and are frequently utilised for cancer research either alone or as donors for transplantation in allogenic models. Transgenic cancer models may play a significant role in onco-anaesthesiology research as they allow for testing of drug effects on the onset and progression of an expected cancer in immunocompetent animals, as has already been performed with morphine, which had no effect on the onset of cancer development but did hasten cancer progression. However, choosing a favourable strain can be one of the most challenging parts of experimental design, so a number of electronic databases, such as Cancer Models (CaMOD), have been developed to assist in the selection. In conclusion, there are several in vivo animal models of cancer that may be utilized for conducting translational research in onco-anaesthesiology. Whilst xenograft models are attractive for the ability to tailor the cancer cell biology around the human cancer under investigation, a lack of any representative immune response severely hinders their suitability for onco-anaesthesiology research. Transgenic models demonstrate significant promise in providing representative animal cancers and may be used alone or as donors for allogenic models. Multiple transgenic mouse colonies and cancer cell lines are commercially available to enable rapid integration of in vivo mouse models into novel research; however, due care must be taken to ensure the model chosen is most appropriate for the hypothesis under investigation. |
Beyond the Dusty Shelf: Shifting Paradigms and Effecting Change Abstract: This paper addresses how to make happen the improvements in the quality of health care that have been identified from significant investments in patient safety research by the Agency for Healthcare Research and Quality (AHRQ). We make the case that the usual supply-side research model is inefficient for producing the health care changes expected from AHRQ. We propose a shift to a demand-side paradigm that engages users throughout the research process, and two models to guide the management of "action production." The first model is based on Rogers' model of diffusion of innovations, which indicates that users must absorb a great deal of information in a variety of staged and specific ways in order to make a successful passage from knowledge to action through tactics including awareness, persuasion, adoption, implementation, and confirmation. The second is a decision model, termed distillation, which provides a framework for determining the potential utility and priority of an innovation based on the strength of the science, potential impact, adoptability, and readiness. We address lessons learned from the application of these models to the early implementation experiences of five early outputs from the AHRQ patient safety portfolio. We find that the implementation of the early findings places a strong reliance on information dissemination, mostly at the awareness and persuasion stages; efforts directed at the later stages of decision, implementation, and confirmation have been modest. Ongoing evaluation of the impact of these approaches on patient safety practices and quality of care will indicate if the models provide useful guidance in making changes happen. |
Implementing Community Resource Referral Technology: Facilitators And Barriers Described By Early Adopters. Health care organizations are increasingly implementing programs to address patients' social conditions. To support these efforts, new technology platforms have emerged to facilitate referrals to community social services organizations. To understand the functionalities of these platforms and identify the lessons learned by their early adopters in health care, we reviewed nine platforms that were on the market in 2018 and interviewed representatives from thirty-five early-adopter health care organizations. We identified key informants through solicited expert recommendations and web searches. With minor variations, all platforms in the sample provided similar core functionalities: screening for social risks, a resource directory, referral management, care coordination, privacy protection, systems integration, and reporting and analytics. Early adopters reported three key implementation challenges: engaging community partners, managing internal change processes, and ensuring compliance with privacy regulations. We conclude that early engagement with social services partners, funding models that support both direct and indirect costs, and stronger evidence of effectiveness together could help advance platform adoption. |
A new development of triterpene acid-containing extracts from Viscum album L. displays synergistic induction of apoptosis in acute lymphoblastic leukaemia Objectives: Aqueous Viscum album L. extracts are widely used for anticancer therapies. Due to their low solubility, triterpenes, which are known to act on cancers, do not occur in aqueous extracts in significant amounts. Using cyclodextrins, we have found it possible to solubilize mistletoe triterpene acids and to determine their effects on acute lymphoblastic leukaemia (ALL) in vitro and in vivo. |
Closed-loop performance diagnosis using prediction error identification This paper presents a methodology to detect the origin of closed-loop performance degradation of model-based control systems. The approach exploits the statistical hypothesis testing framework. The decision rule consists of examining if an identified model of the true system lies in a set containing all models that fulfill the closed-loop performance requirements. This allows us to determine whether performance degradation arises from changes in system dynamics or from variations in disturbance characteristics. The probability of making an erroneous decision is estimated a posteriori using the known distribution of the identified model with respect to the unknown true system. |
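As a rough illustration of the decision rule sketched in this abstract, the following minimal example (an assumption-laden simplification, not the authors' actual method) identifies an ARX model from input/output data by least squares and then uses a chi-square test to check whether a nominal "healthy" parameter vector still lies inside the confidence region of the identified model; the signal generation, model orders, and threshold are all hypothetical.

import numpy as np
from scipy import stats

def identify_arx(u, y, na=2, nb=2):
    # Least-squares ARX fit: y[t] = a1*y[t-1] + ... + b1*u[t-1] + ... + e[t]
    n = max(na, nb)
    rows, targets = [], []
    for t in range(n, len(y)):
        rows.append([y[t - i] for i in range(1, na + 1)] + [u[t - i] for i in range(1, nb + 1)])
        targets.append(y[t])
    phi, yv = np.array(rows), np.array(targets)
    theta, *_ = np.linalg.lstsq(phi, yv, rcond=None)
    resid = yv - phi @ theta
    sigma2 = resid @ resid / (len(yv) - len(theta))
    cov = sigma2 * np.linalg.inv(phi.T @ phi)  # asymptotic parameter covariance
    return theta, cov

def performance_degradation_test(theta_hat, cov, theta_nominal, alpha=0.05):
    # Chi-square test: does the nominal (performance-compliant) model lie in the confidence region?
    d = theta_hat - theta_nominal
    stat = float(d @ np.linalg.solve(cov, d))
    threshold = stats.chi2.ppf(1 - alpha, df=len(d))
    return stat > threshold, stat, threshold  # True suggests the system dynamics have changed

# Hypothetical usage with synthetic data from a known second-order system
rng = np.random.default_rng(0)
u = rng.normal(size=2000)
y = np.zeros_like(u)
for t in range(2, len(u)):
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + 0.4 * u[t - 1] + 0.1 * u[t - 2] + 0.05 * rng.normal()
theta_hat, cov = identify_arx(u, y)
changed, stat, thr = performance_degradation_test(theta_hat, cov, np.array([1.2, -0.5, 0.4, 0.1]))
print(changed, round(stat, 2), round(thr, 2))

In this toy run the nominal parameters match the simulated system, so the test should usually not flag degradation; substituting a nominal vector with different dynamics would illustrate the opposite decision.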
The Impact of COVID-19 on Salafi-Jihadi Terrorism The purpose of this article is to evaluate how COVID-19 might impact the future threat posed by Salafi-Jihadi groups and to explain how the current crisis might re-shape the Salafi-Jihadi central message and strategy and in turn impact recruitment, tactics, capability, leadership, and even doctrine. Salafi-Jihadi groups have found themselves in a dilemma as they have to reckon with the fact that Muslims are not spared from infection despite fervent prayer. If the Coronavirus is the wrath of God against the infidels, why is it also killing the Mujahedeen, and how do you explain it while still maintaining credibility to potential recruits? How do you maintain the Jihad during a global lockdown, where movement is curtailed and resources dry up? To better understand what we should expect from Salafi-Jihadist groups in the future, the analysis explores three challenges that Jihadi groups will most likely have to overcome as a result of the current crisis: First, the challenge to their strategic mission and capabilities, especially relating to the operationalization of motivations for martyrdom and revenge. Second, the challenge to their ideology, faith, and religious interpretation of scriptures, with impacts on the consistency of their doctrine and brand. And third, the challenge to their unity and ability to provide members with a shared group identity, which may influence recruitment. How Jihadi groups and their leaders address these multi-level challenges will impact their cohesion and effectiveness, and the credibility of their message. It may also have repercussions on leadership and control, which could determine the relevance of the group as a future global threat. The analysis suggests that Salafi-Jihadi terrorism remains a threat in both the short and the long term. |
A novel strategy to access high resolution DICOM medical images based on JPEG2000 interactive protocol The demand for sharing medical information has kept rising. However, the transmission and display of high-resolution medical images are limited if the network has a low transmission speed or the terminal devices have limited resources. In this paper, we present an approach based on the JPEG2000 Interactive Protocol (JPIP) to browse high-resolution medical images in an efficient way. We designed and implemented an interactive image communication system with a client/server architecture and integrated it with the Picture Archiving and Communication System (PACS). In our interactive image communication system, the JPIP server works as the middleware between clients and PACS servers. Both desktop clients and wireless mobile clients can browse high-resolution images stored in PACS servers via the JPIP server. The client only needs to make simple requests that identify the resolution, quality, and region of interest, and it downloads selected portions of the JPEG2000 code-stream instead of downloading and decoding the entire code-stream. After receiving a request from a client, the JPIP server downloads the requested image from the PACS server and then responds to the client by sending the appropriate code-stream. We also tested the performance of the JPIP server. The JPIP server runs stably and reliably under heavy load. |
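To make the request flow concrete, here is a minimal client-side sketch of how a region of interest might be fetched from a JPIP-style server over HTTP; the server URL, image target, and byte budget are illustrative assumptions, and the query fields follow common JPIP request syntax (fsiz for frame size, roff/rsiz for the region, len for the response length limit) rather than the exact interface of the system described here.

from urllib.parse import urlencode
from urllib.request import urlopen

JPIP_SERVER = "http://jpip.example.org/jpip"  # hypothetical middleware endpoint in front of PACS

def fetch_region(target, frame_w, frame_h, off_x, off_y, reg_w, reg_h, max_bytes=65536):
    # Ask for only the code-stream portions covering one region at a reduced resolution
    params = {
        "target": target,                 # image identifier on the JPIP server
        "fsiz": f"{frame_w},{frame_h}",   # requested frame (resolution) size
        "roff": f"{off_x},{off_y}",       # region offset within that frame
        "rsiz": f"{reg_w},{reg_h}",       # region size
        "len": max_bytes,                 # byte budget for this response
    }
    with urlopen(f"{JPIP_SERVER}?{urlencode(params)}") as resp:
        return resp.read()                # partial JPEG2000 code-stream to be decoded incrementally

# Example (not executed here): a 512x512 window of a slice at quarter resolution
# chunk = fetch_region("ct_chest_001.jp2", 1024, 1024, 256, 256, 512, 512)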
Cofilin phosphorylation by protein kinase testicular protein kinase 1 and its role in integrin-mediated actin reorganization and focal adhesion formation. Testicular protein kinase 1 (TESK1) is a serine/threonine kinase with a structure composed of a kinase domain related to those of LIM-kinases and a unique C-terminal proline-rich domain. Like LIM-kinases, TESK1 phosphorylated cofilin specifically at Ser-3, both in vitro and in vivo. When expressed in HeLa cells, TESK1 stimulated the formation of actin stress fibers and focal adhesions. In contrast to LIM-kinases, the kinase activity of TESK1 was not enhanced by Rho-associated kinase (ROCK) or p21-activated kinase, indicating that TESK1 is not their downstream effector. Both the kinase activity of TESK1 and the level of cofilin phosphorylation increased by plating cells on fibronectin. Y-27632, a specific inhibitor of ROCK, inhibited LIM-kinase-induced cofilin phosphorylation but did not affect fibronectin-induced or TESK1-induced cofilin phosphorylation in HeLa cells. Expression of a kinase-negative TESK1 suppressed cofilin phosphorylation and formation of stress fibers and focal adhesions induced in cells plated on fibronectin. These results suggest that TESK1 functions downstream of integrins and plays a key role in integrin-mediated actin reorganization, presumably through phosphorylating and inactivating cofilin. We propose that TESK1 and LIM-kinases commonly phosphorylate cofilin but are regulated in different ways and play distinct roles in actin reorganization in living cells. |
Colorizing Educational Research Although previous authors have offered persuasive arguments about the salience of race in the scholastic enterprise, colorism remains a relatively underexplored concept. This article augments considerations of social forces by exploring how color classifications within racial arrangements frame pathways for communities of color and, therefore, must inform educational inquiries. Consistent with the rich tradition of ethnic studies, I draw on sources in the humanities, legal profession, and social sciences to demonstrate how colorism surfaces in lived experiences. The African American community is used as an exemplar for illustrating historical foundations of color bias, discussing implications of complexion difference, and offering suggestions for scholarship that advances educational research agendas. |
OBJECTIVE To observe the efficacy of the Myrtol Standardized Enteric Coated Soft Capsules in preventing secretory otitis media after radiation therapy. METHOD Sixty patients with nasopharyngeal carcinoma who had no secretory otitis media diagnosed before radiation therapy were divided into an experimental group and a control group, with 30 cases in each group. After the start of radiation therapy, patients in the experimental group took the Myrtol Standardized Enteric Coated Soft Capsules orally, 0.3 g per dose, 3 times a day, with 7 days constituting one course of treatment, for three months; the patients in the control group received no treatment. At 3 months and 6 months after the end of radiation therapy, the experimental and control groups were compared for differences in symptoms, signs, pure tone audiometry, and tympanogram changes. RESULT Seventeen patients (18 ears) (56.67%, 17/30) in the control group and 7 patients (7 ears) (23.33%, 7/30) in the experimental group suffered from secretory otitis media; the difference between the two groups was statistically significant (P < 0.01). Seventeen patients (17 ears) in the control group and 7 patients (7 ears) in the experimental group suffered from tinnitus; 20 patients (20 ears) in the control group and 9 patients (10 ears) in the experimental group had a sensation of ear fullness. The differences between the two groups were statistically significant (P < 0.01). Before radiation therapy, the air conduction hearing threshold was (7.5 +/- 2.0) dB HL in the experimental group and (8.3 +/- 4.0) dB HL in the control group; the difference between the two groups was not statistically significant (P > 0.05). Three months after radiation therapy, the air conduction hearing threshold was (25.6 +/- 3.0) dB HL in the experimental group but (40.7 +/- 5.0) dB HL in the control group; the difference between the two groups was statistically significant (P < 0.01). CONCLUSION Oral administration of the Myrtol Standardized Enteric Coated Soft Capsules in patients with nasopharyngeal carcinoma receiving radiation therapy can effectively reduce the incidence of secretory otitis media after radiotherapy and help prevent its occurrence. |
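For readers who want to sanity-check the reported group comparisons, the short sketch below recomputes a chi-square test on the otitis media incidence (17/30 versus 7/30) and a two-sample t-test on the 3-month hearing thresholds from the published summary statistics; this is a generic reanalysis under the stated group sizes, not the authors' original analysis, and the exact p-values may differ from those reported.

from scipy import stats

# Incidence of secretory otitis media: control 17/30 affected, experimental 7/30 affected
table = [[17, 13],
         [7, 23]]
chi2, p_incidence, dof, expected = stats.chi2_contingency(table)

# 3-month air conduction thresholds, from mean +/- SD (dB HL), n = 30 per group
t_stat, p_threshold = stats.ttest_ind_from_stats(
    mean1=25.6, std1=3.0, nobs1=30,
    mean2=40.7, std2=5.0, nobs2=30,
    equal_var=False,
)

print(f"incidence: chi2={chi2:.2f}, p={p_incidence:.4f}")
print(f"thresholds: t={t_stat:.2f}, p={p_threshold:.2e}")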
A Body of Knowledge: Somatic and Environmental Impacts in the Educational Encounter Abstract The author explores the somatic experience in educational encounters and considers several associated themes, including the importance of space, landscape, and environment. Examples are provided to illustrate the impact of bodily experience on the learning process in addition to material featuring educational work outdoors. Further consideration is given to ecopsychological theory and specifically the integration of human and nonhuman dynamics. Final thoughts are offered regarding implications for the training and certification of transactional analysts. |
Dreaming a cinematic dream: Jean Cayrol's writings on film Abstract Dreams, whose nature is cinematic, provide a fundamental link between Jean Cayrol's work in literature and his work in cinema, making his novels, scripts and films, as well as his film and literary criticism, into an almost organic totality in which the same aesthetic ideas circulate. Cayrol's reading of his experiences in concentration camps, his dreams in particular, sets his writing apart from that of other Holocaust writers and reinforces his membership in the nouveau roman group marked by its active engagement with cinema. His conception of dreams as cinematic also points to surrealism as one of the most important elements to shape his creative imagination. Cayrol is unique among the nouveaux romanciers in extending his work in cinema beyond scriptwriting and directing into film criticism. The literary underpinnings of his conception of cinema outlined in Le Droit du regard challenge the phenomenological dogma of the fifties Cahiers du cinéma and foreshadow the linguistic turn in the mid-sixties, making Cayrol's writing on cinema a perfect reflection of the contemporary trends in film criticism. |
Downlink Femtocell Interference Mitigation and Achievable Data Rate Maximization: Using FBS Association and Transmit Power-Control Schemes How to associate a femto user (FU) to an appropriate femto base station (FBS) to avoid interference and maximize the achievable data rate (ADR) of FUs has attracted a lot of attention in femtocell networks. The evidence provided in this paper proves it to be an NP-complete problem. To maximize the ADR of the FUs, the methods for assigning FUs to neighboring FBSs are examined to determine the required transmit power level (TPL) adjustments for the FBSs to mitigate the downlink (DL) interference among the FBSs and the macro base station (MBS). Traditionally, the linear programming (LP) approach is adopted to obtain the optimal solution for this problem. However, the LP approach requires substantially more time to obtain the solution. To lessen this drawback, we propose an easy but smarter scheme, i.e., the smart FBS selection algorithm (SFSA), to distribute FUs to the noninterfered FBSs and to obtain the maximal ADR for each FU. If the SFSA fails to solve this problem, a DL power-control algorithm (DPCA) is employed to find a solution that causes the least amount of interference. The simulation results show that the SFSA and DPCA require only a minor amount of computation time to obtain a feasible solution. The results show that the maximal data rates of the FUs obtained using the SFSA and DPCA are comparable to that of the LP approach. |
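The abstract does not spell out the SFSA, so the following is only a generic greedy assignment heuristic in the same spirit: each femto user is attached to the feasible FBS offering the highest downlink SINR, and its achievable rate is scored with the Shannon formula; the channel gains, power levels, noise figure, and per-FBS capacity are made-up illustrative values.

import math

def greedy_fbs_assignment(gains, tx_power, noise, max_users_per_fbs):
    # gains[u][b]: channel gain from FBS b to femto user u; tx_power[b]: power of FBS b (linear scale)
    n_users, n_fbs = len(gains), len(tx_power)
    load = [0] * n_fbs
    assignment, rates = {}, []
    for u in range(n_users):
        best_b, best_sinr = None, -1.0
        for b in range(n_fbs):
            if load[b] >= max_users_per_fbs:
                continue  # this FBS is already full
            signal = tx_power[b] * gains[u][b]
            interference = sum(tx_power[j] * gains[u][j] for j in range(n_fbs) if j != b)
            sinr = signal / (noise + interference)
            if sinr > best_sinr:
                best_b, best_sinr = b, sinr
        if best_b is not None:
            assignment[u] = best_b
            load[best_b] += 1
            rates.append(math.log2(1.0 + best_sinr))  # achievable rate per unit bandwidth
    return assignment, rates

# Hypothetical toy scenario: 4 femto users, 2 femto base stations
gains = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.5], [0.1, 0.9]]
assignment, rates = greedy_fbs_assignment(gains, tx_power=[0.1, 0.1], noise=1e-3, max_users_per_fbs=2)
print(assignment, [round(r, 2) for r in rates])

A power-control step such as the DPCA described in the abstract would then lower the transmit power of the most interfering FBSs when no interference-free attachment exists, which this sketch does not attempt.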
Anti-inflammatory effects of angiotensin II AT1 receptor antagonism prevent stress-induced gastric injury. Stress reduces gastric blood flow and produces acute gastric mucosal lesions. We studied the role of angiotensin II in gastric blood flow and gastric ulceration during stress. Spontaneously hypertensive rats were pretreated for 14 days with the AT1 receptor antagonist candesartan before cold-restraint stress. AT1 receptors were localized in the endothelium of arteries in the gastric mucosa and in all gastric layers. AT1 blockade increased gastric blood flow by 40-50%, prevented gastric ulcer formation by 70-80% after cold-restraint stress, reduced the increase in adrenomedullary epinephrine and tyrosine hydroxylase mRNA without preventing the stress-induced increase in adrenal corticosterone, decreased the stress-induced expression of TNF-alpha and that of the adhesion protein ICAM-1 in arterial endothelium, decreased the neutrophil infiltration in the gastric mucosa, and decreased the gastric content of PGE2. AT1 receptor blockers prevent stress-induced ulcerations by a combination of gastric blood flow protection, decreased sympathoadrenal activation, and anti-inflammatory effects (with reduction in TNF-alpha and ICAM-1 expression leading to reduced neutrophil infiltration) while maintaining the protective glucocorticoid effects and PGE2 release. Angiotensin II has a crucial role, through stimulation of AT1 receptors, in the production and progression of stress-induced gastric injury, and AT1 receptor antagonists could be of therapeutic benefit. |
A Simulated Annealing Algorithm with a Dynamic Temperature Schedule for the Cyclic Facility Layout Problem In this paper, an unequal area Cyclic Facility Layout Problem (CFLP) is studied. Dynamic and seasonal nature of the product demands results in the necessity for considering the CFLP where product demands as well as the departmental area requirements are changing from one period to the next one. Since the CFLP is NP-hard, we propose a Simulated Annealing (SA) metaheuristic with a dynamic temperature schedule to solve the CFLP. In the SA algorithm, both relative department locations and dimensions of departments are simultaneously determined. We benchmark the performance of the proposed SA algorithm with earlier approaches on different test problems from the literature and find out that the SA algorithm is promising. |
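The abstract names a simulated annealing metaheuristic with a dynamic temperature schedule without detailing it; the generic skeleton below shows one common way such a scheme can be organised, with the cooling rate adapted to the recent acceptance ratio, while the layout-specific cost function and neighbourhood move are left as user-supplied placeholders (the toy usage minimises a one-dimensional function and is purely illustrative).

import math
import random

def simulated_annealing(initial, cost, neighbour, t_start=100.0, t_min=1e-3, steps_per_temp=50):
    # Generic SA loop; the temperature schedule adapts to how many moves were accepted
    current, best = initial, initial
    current_cost = best_cost = cost(initial)
    t = t_start
    while t > t_min:
        accepted = 0
        for _ in range(steps_per_temp):
            candidate = neighbour(current)
            delta = cost(candidate) - current_cost
            if delta < 0 or random.random() < math.exp(-delta / t):
                current, current_cost = candidate, current_cost + delta
                accepted += 1
                if current_cost < best_cost:
                    best, best_cost = current, current_cost
        ratio = accepted / steps_per_temp
        t *= 0.80 if ratio > 0.5 else 0.95  # dynamic cooling: cool faster while acceptance is high
    return best

# Toy usage: minimise (x - 3)^2; a CFLP cost would instead score department locations and sizes per period
result = simulated_annealing(
    initial=10.0,
    cost=lambda x: (x - 3.0) ** 2,
    neighbour=lambda x: x + random.uniform(-1.0, 1.0),
)
print(round(result, 2))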
The New Senior Volunteer: A Bold Initiative to Expand the Supply of Independent Living Services to Older Adults ABSTRACT This article presents findings of the evaluation of the Experience Corps for Independent Living (ECIL) initiative. The ECIL initiative was a two-year demonstration program designed to test innovative ways to use the experience, time, and resources of volunteers over 55 to significantly expand the size and scope of volunteer efforts on behalf of independent living services for frail elders and their caregivers in specific communities. Six demonstration sites were selected to participate in this initiative. The intensive volunteers, the critical component of the program, were more highly skilled than typical volunteers from existing senior volunteer programs. ECIL volunteers collaborated with agency partners to develop new programs, supervise direct service activities, and enhance the performance of the agencies being served. The ECIL initiative was particularly successful in meeting its goals of expanding the supply of independent living services to frail elders and their families in the communities served. |
Comparative Studies on Crystal Structures of Four Members of Cytochrome P450 Superfamily. Secondary and steric structures and hydropathy plots of the 4 crystals of P450cam, P450terp, P450eryF and P450BM3 were compared to illustrate the structural conservation of cytochrome P450 superfamily proteins. Although sequence identities of the four P450s are generally low (19%-26%), their topology is quite similar. All four structures have 13 alpha-helices and beta1-beta4 sheets in common. The four crystal structures were superimposed by a root-mean-square (RMS) fit of the porphyrin ring carbon atoms of the prosthetic group heme to obtain the structure-based sequence alignment of the four proteins. The RMS deviations of the Calpha distances of each motif were analyzed by hierarchical cluster analysis. The structural subsets were divided into four categories of structural conservation: the most conserved region, the less conserved region, the less variable region and the variable region. The first two groups (56.9% of the aligned positions that have no gaps) include all the interior structures and active site residues. Hydropathy plot analyses show that all four P450 proteins share common hydrophobic and hydrophilic segments. All the comparison results of the P450 protein crystal structures provided the basis for structure-based sequence alignment of cytochrome P450 proteins. |
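To illustrate the superposition step, the snippet below performs a standard Kabsch least-squares fit and computes the RMS deviation between two matched coordinate sets; the random coordinates are stand-ins rather than the actual porphyrin-ring atoms of the four P450 structures.

import numpy as np

def kabsch_rmsd(p, q):
    # RMSD between two (N, 3) coordinate sets after optimal superposition (Kabsch algorithm)
    p_c = p - p.mean(axis=0)  # centre both point sets on their centroids
    q_c = q - q.mean(axis=0)
    h = p_c.T @ q_c           # covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against an improper rotation (reflection)
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T  # optimal rotation mapping p onto q
    diff = p_c @ rot.T - q_c
    return float(np.sqrt((diff ** 2).sum() / len(p)))

# Hypothetical check: a rigidly rotated copy of a point set should superimpose with ~0 RMSD
rng = np.random.default_rng(1)
coords = rng.normal(size=(20, 3))
angle = np.deg2rad(30.0)
rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle), np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
print(round(kabsch_rmsd(coords @ rz.T, coords), 6))  # expected to be close to 0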
Toward developing a sustainability index for the Islamic Social Finance program: An empirical investigation Several previous studies state that the Islamic Social Finance program has not fully succeeded in creating prosperity, and there are no definite measurements to show the sustainability impact of the program. Thus, a measurement is needed to analyze various aspects in achieving the success and sustainability of Islamic social finance programs. This study developed an index for performance evaluation with an emphasis on the success and sustainability of the Islamic Social Finance program. The study used the Analytical Network Process to determine and analyze priority components. Furthermore, the Multistage Weighted Index method was used to calculate the final index score. The index was built by taking into consideration various factors, stakeholders, aspects, and indicators. This study indicates that the aspects of funding contribution from donors (0.22), involvement of donors in giving advice (0.99), and controlling by the supervisor (0.08) are priority aspects in the success and sustainability of the program. An empirical investigation was performed on three different programs in Indonesia: A, B, and C. Programs A (0.81) and C (0.80) have succeeded in improving the beneficiaries' quality of life to the level of economic resilience, although at a low level of sustainability (7684.33). On the other hand, program B (0.73) is at the economic reinforcement level and has not yet achieved sustainability. This index can be seen as a comprehensive tool for measuring the success and sustainability of the program at several levels. Introduction The management of Islamic Social Finance continues to grow, including the distribution of funds. Initially, social finance management channeled Islamic Social Funds in consumptive forms. Also, the waqf fund is still generally distributed in non-productive forms. However, in line with the goal of Islamic Social Finance in achieving prosperity, Islamic social funds began to be channeled into more productive forms. One of the effective forms of distribution is through business capital provision, training, and mentoring. Effective forms of distributed funds develop knowledge and skills and increase beneficiaries' income and independence. As a Muslim-majority country, Indonesia has a great potential for Islamic Social Finance instruments. According to recently reported data, the potential of Islamic Social Finance instruments has reached USD 22.8 billion for zakat, USD 137.9 million for land waqf, and USD 13 billion for cash waqf. However, Islamic social funds channeled to official institutions only reached USD 31 million. This figure is considered insignificant because the distribution of Islamic social funds outside institutions, non-administratively, reached IDR 30 trillion. Given this fact, the Islamic social funds that are professionally managed and distributed through effective empowerment programs remain limited. For this reason, Islamic Social Finance must prepare and implement the empowerment program mindfully to have a positive and significant impact on the quality of life of the beneficiaries. Indonesia is the most generous country globally. In contrast, society does not yet fully have the literacy and belief that Islamic social funds must be managed professionally through institutions. 
This condition is not ideal because funds channeled directly to the community are generally consumptive distributions that do not have a long-term effect on improving the economic, social, and spiritual status and abilities of beneficiaries. The urgency of professional management of Islamic social funds is reflected through empowerment programs. First, distributing Islamic social funds can achieve welfare by maintaining the five aspects of maqashid sharia so that the community can sustain the quality of human life. Second, Indonesia's poverty rate ranks 144th out of 172 countries in the world and 6th out of 10 countries in Southeast Asia. This ranking is higher than that of Malaysia, a zakat-collecting and Muslim-majority country, which ranks 8th. Poverty in Indonesia reached 10.4%, exacerbated by the poverty depth index, which is 1.71 points. The higher the poverty depth index value, the farther the average public expenditure is from the poverty line. Poverty is also a severe problem in other Muslim-majority countries, such as Pakistan, Afghanistan, and Bangladesh. These countries are also concerned about managing Islamic social funds. Third, the Gini ratio problem is enormous. Based on data from Statistics Indonesia, Indonesia's inequality level in March 2021 reached 0.384, an increase of 0.003 from March 2020. Fourth, Indonesia's Human Development Index (HDI) grew by only 0.30% in 2020. The inequality level needs to be of concern, as the HDI reflects that the population has difficulty obtaining income, education, health, and other development outcomes. Fifth, several prior studies show that Islamic social funds have not fully impacted welfare. For these reasons, the community must manage Islamic Social Finance professionally through potential empowerment programs. The management of the Islamic Social Finance program is closely related to sustainability, where Islamic Social Finance plays a role in ensuring the sustainability of the beneficiaries' lives. Therefore, the Islamic Social Finance program is not only directed at the success of improving the quality of life of beneficiaries so that they can get out of poverty but also at sustaining that quality of life. Conceptually, following Brau et al., there are two paradigms of sustainability, namely the institutionalist and welfarist paradigms. The institutionalist paradigm considers that sustainability is closely related to the ability of institutions to obtain funds so that they can be independent and not dependent on others. In contrast, the welfarist paradigm considers the main point to be that the benefits of sustainability can actually be felt. In the context of the Islamic Social Finance program, by referring to the abovementioned definition of sustainability, it can be concluded that sustainability occurs when the program can encourage the beneficiaries to be independent (able to meet their own needs) and when the program can bring prosperity to the beneficiaries. Thus, various inputs and appropriate indicators are needed to guide the empowerment programs and measure their success and sustainability. Moreover, empowerment programs in Indonesia are diverse because of the variety of Islamic Social Finance institutions in the country. Several studies have developed Islamic social fund indexes/indicators based on literature reviews, focusing on institutional and program indexes. Amin developed an index for the impact of zakat on Muslimpreneurs. On the other hand, Abdullah et al. developed the Zakat Index (ZEIN) to measure the effectiveness of the Zakat Institution. 
Further, Wahab et al. built a service quality index for Zakat Institutions. BAZNAS Center of Strategic Studies, a government institution that focuses on zakat research, has developed several indices, including the Zakat Utilization Index, Zakat Village Index, Zakat Transparency Index, National Zakat Index, BAZNAS Welfare Index, Zakat Coordination Index, Zakat Literacy Index, and Zakatnomics Development Index. In addition, the National Waqf Board developed the National Waqf Index. Previous research has focused on measuring institutions and programs from just one aspect, such as service quality or accountability. The index in this study focuses on the concept of sustainability and maqashid sharia, based on the role of all stakeholders in the management of Islamic Social Finance. In contrast to previous research, this study developed a new performance measurement model for the Islamic Social Finance program. This research has several novelties. First, this study develops previous research by establishing an index to see the success and sustainability of Islamic Social Finance programs. Second, the researchers conducted empirical investigations on three Islamic Social Finance programs to measure their success and sustainability. Third, the main criterion of the empowerment program is to achieve prosperity for beneficiaries; hence, this research also emphasizes the importance of integrating Islamic social funds in achieving prosperity. Finally, this research built a comprehensive measurement, in which researchers can assess a program for its level of success in transforming the welfare of beneficiaries, represented through the 4 ER (Economic Rescue, Economic Recovery, Economic Reinforcement, Economic Resilience). This study indicates that the funding contribution from donors (0.22), advice from donors (0.9), and control by the supervisor (0.8) are priority aspects in achieving the success and sustainability of the program. An empirical investigation conducted on three programs, A, B, and C, shows that programs A (0.81) and C (0.80) are at the level of economic resilience but with low sustainability. Meanwhile, program B (0.73) has not yet achieved sustainability and is still at the level of economic reinforcement. This research has implications in several ways. First, this study emphasizes the importance of the involvement of donors in paying Islamic social funds to institutions to be managed productively. Islamic Social Finance institutions must formulate strategies to increase donor engagement, such as providing various payment platforms that make it easy or creating potential programs that have long-term impacts. The government, as a regulator, can issue a series of policies and regulations that advise the public to pay Islamic social funds through institutions. Second, increasing public literacy related to Islamic Social Finance is essential to increase donor involvement. Third, an empirical investigation of the Islamic Social Finance Program at the level of economic reinforcement and economic resilience with low sustainability shows that several aspects need to be improved. As program managers, Islamic Social Finance Institutions must formulate various program optimization strategies and coordinate with multiple stakeholders, such as regulators, academics, associations, and supervisors. As for the structure of this paper, this article is divided into five main sections. The first section discusses the background of the writing. The second section presents the theoretical basis of index development. 
The third section describes the methods used. The fourth section identifies the results and analysis. Finally, the conclusions are shown in the fifth section. Islamic social finance concept Islamic Social Finance is an integrated socio-economic empowerment system divided into traditional and contemporary financial systems. Traditional Islamic Social Finance instruments are classified into two types: philanthropy-based instruments, such as zakat, infaq, alms, and waqf, and cooperation-based instruments, such as qardh and kafalah. Meanwhile, contemporary Islamic Social Finance instruments are in the form of Islamic microfinance. Philanthropy has different shapes and definitions. This concept involves four main parties: social welfare initiators, social finance providers, social ecosystem coordinators, and beneficiaries. Each of these parties plays a clear role in the conception, implementation, coordination, financing, and improvement. Some of the expected goals of Islamic social finance are alleviating unemployment and poverty, considering the ethical orientation created between awareness and knowledge of the programs of Islamic Social Finance. The previous research shows differences in understanding Islamic Social Finance (ISF) instruments (see Table 1). Zakat has several characteristics that distinguish it from other types of levies: it is a religious obligation whose subjects include Muslims or business entities (corporations or companies) owned by Muslims; zakat collection is organized and managed by the government; the object of zakat is imposed broadly on business activities ranging from livestock, agriculture, commercial activities, and mining to immovable assets; it is imposed on individuals who reach the nishab (limit) or minimum wealth; and the recipients of zakat (mustahiq) have been determined in the Qur'an, in the form of eight groups. On the other hand, waqf is a voluntary donation generally also known as part of sadaqah (good deeds) and infaq (expenditures to please Allah). Islam makes waqf one of the divine gifts for society's prosperity (religion binds the community). The primary difference of the waqf instrument is that the donation should not reduce the principal value of the waqf property. Consequently, waqf property management must prioritize Sharia compliance. Kayikci interprets infaq as a gift to Allah and an investment of eternal goodness. Infaq in the Islamic system conceptually implies giving, including for the giver and the relatives, for a better society and its people. In infaq, there are no provisions regarding the form, implementation time, or amount. Also, sadaqah is a type of good deed broader than zakat and infaq. Qard Hasan is one of the tools to achieve economic and social justice as aspired to by Islamic economics. Together with zakat, Qard Hasan can be used for financial inclusion because the poor can have easier access to financial services at lower costs if Islamic Financial Institutions can channel them effectively. Referring to the explanation above, what is meant by ISF is zakat, infaq, alms, and waqf. Furthermore, this study will examine the integration of zakat, infaq, alms, and waqf funds and analyze how they influence the welfare of mustahiq. Concept of sustainability in Islamic Social Finance programs In the context of Islamic Social Finance, several studies have discussed sustainability and Islamic Social Finance. Khalfan et al. stated that there are three systems in waqf sustainability: beneficiaries, property, and maintenance. 
Robani and Kamal researched the potential of Islamic Social Finance for sustainable development, focusing on social justice, balance, and how it produces maslahah for the people. Sulaiman and Alhaji analyzed the financial sustainability of waqf institutions. They used the Tuckman and Chang model to measure sustainability from four aspects: operational margin ratio, equity balance, revenue concentration, and administrative cost. Jouti et al. built a sustainable Islamic Social Finance ecosystem with an integrated approach, which indicates the need for cooperation of all stakeholders (zakat institutions, waqf institutions, banking, Islamic microfinance institutions, Islamic capital markets, etc.) in creating a sustainable ecosystem. Hassan et al. explained that sustainability in waqf includes three elements: maintenance of the waqf property, increasing the productivity of waqf, and increasing the chance of providing benefits to the community. Sapuan and Zeni described efficient management as the key to achieving the sustainability of Islamic Social Finance. These studies examined the sustainability of Islamic Social Finance in terms of its institutions and instruments. The program and how it can create sustainability for the beneficiaries also mirror the sustainability of Islamic Social Finance. One of the critical goals of Islamic Social Finance programs is to ensure the sustainability of the empowerment program. However, research on empowerment programs still focuses on impact measurement [6,14]. Others indicate that the empowerment program had not yet fully impacted beneficiaries. On the other hand, research related to Islamic Social Finance and sustainability programs is still limited. Hence, this study attempted to build a comprehensive index of the success and sustainability of Islamic Social Finance Programs. Based on Brau et al., there are two paradigms in sustainability: institutionalist and welfarist. The institutionalist paradigm refers to financial self-sufficiency, where the provision of services for the poor depends on the institution's sustainability, which is closely related to the institution's ability to obtain funds continuously. The welfarist paradigm emphasizes that sustainability is closely related to poverty alleviation, where sustainability is assessed from social motives, not profit motives. Pischke analyzed the sustainability of microenterprise lenders and stated three sustainability levels. Level 1 of sustainability is a healthy portfolio flow, where there must be a balance between outflows, inflows, and generated surplus. In achieving this level, the enterprise needs excellent risk management. At level 2, the enterprise is considered sustainable if the surplus generated at level 1 can cover operational costs. The third level of sustainability concerns how the generated surplus can cover other expenses after being used to cover operating costs. According to Morduch, who analyzed the microfinance schism, sustainability refers to recovering the employed capital and obtaining sufficient profits to increase the invested capital. The existence of products, services, management practices, risk management, targeting, supporting policies, regulations, and impact assessments is essential to achieving sustainability [26]. 
Based on the research above, it can be concluded that the sustainability parameters of the Islamic Social Finance program are: (a) it creates financial self-sufficiency for the beneficiaries, where they become financially independent, (b) it supports the recipients to generate income on an ongoing basis, (c) it provides the beneficiaries with a balanced financial flow, in which the income can meet daily basic needs and can be further invested, and (d) it provides beneficiaries with the necessary skills to maintain or improve their quality of life through increased productivity. Integration of various stakeholder roles is also needed to achieve sustainability. Concept of program performance A program is built from a need, must be relevant to the program's purpose, and its results should meet that need. A program can actively achieve its objectives by designing a logic model. The logic model is a program implementation structure/scheme that requires a connection between the purposes and the program implementation process, which includes inputs (resources), activities, outputs, and actual outcomes in the short and long term. An assessment of program performance needs to be done for evaluation, ex-ante (before implementation) and ex-post (after implementation). This assessment involves internal institutions, experts, respondents, beneficiaries, and those who do not receive the program. The measurement uses nominal, ordinal, interval, and ratio scales. The ANP and MWI methods are used in this study to evaluate the performance of the Islamic Social Finance integration program. In this study, performance analysis of Islamic Social Finance integration is critical, with the following considerations: First, no research has been conducted to assess Islamic Social Finance integration programs with a focus on sustainability; Second, performance analysis is critical in determining the extent of the impact of a given program as well as providing decision-making material for program improvement. Integrated Islamic Social Finance Integration is merging several parts into a unified whole. Integration is also a collaboration between parties to achieve a common goal. Several previous researchers have analyzed the integrated Islamic Social Finance instruments. Jouti developed a theoretical scheme by combining Islamic Social Finance instruments, such as zakat, waqf, sharia microfinance, and sukuk. Sulistyowati et al. developed an integrated Islamic Social Finance model to address disaster-related problems. Haneef et al. utilized SEM analysis on integrated ISF (waqf and microfinance) and examined the relationship between takaful, waqf, and human resource development. Hassanain developed three integration models combining zakat, waqf, and sharia microfinance. Previous research has remained focused on conceptual models and failed to demonstrate integration's efficacy in improving well-being. Previous research has also proven that ISF integration positively impacts ISF management. Amuda investigated the role and effectiveness of cash waqf, zakat, alms, and public funding in alleviating poverty in Muslim communities in Nigeria. The study showed that, if managed efficiently, the integration of cash waqf, zakat, alms, and public funds can significantly contribute to community empowerment. Haneef et al. developed a waqf-based Islamic microfinance (IsMF) system to address poverty in Bangladesh. The study found that the integrated waqf and Islamic microfinance model could help reduce poverty in the community. 
Furthermore, according to Jouti, integrating several ISF instruments is one of the prominent factors in developing the ISF ecosystem. Widiastuti et al. analyzed Islamic social finance integration solutions and strategies, showing that data integration is the top-priority solution. The Islamic Social Finance ecosystem can foster integration among Islamic Social Funds. Several Islamic social fund integration programs have been implemented in Indonesia. For example, the Ahmad Wardi Hospital program was built on waqf land and funded by zakat and infaq funds. As a result, mustahiq can freely use the health program. Furthermore, several Zakat Institutions in Indonesia run agricultural programs in which the land is provided as waqf land, such as the programs run by the Zakat Institutions Al-Azhar and the Infaq Management Board. During the farming season, operational needs are met using infaq funds through the qardhul hasan scheme. This integration program can significantly accelerate community welfare and maximize the potential of Islamic social funds. Previous studies on the Islamic Social Finance index Several previous studies have built Islamic Social Finance indexes, as shown in Table 2. Previous research has focused on measuring an institution's performance through a single aspect of measurement, such as service quality, accountability, or management efficiency. Furthermore, most previous studies used qualitative techniques, such as the Analytical Network Process (ANP) technique, without quantifying the indicators. As a result, this study built on previous research by developing a novel measurement that focuses on sustainability factors to assess the sustainability of Islamic Social Finance programs. Furthermore, this study uses the developed measure to determine the success of three empowerment programs implemented by various Islamic Social Finance Institutions. Method and technique analysis The research method is separated into two phases. The first stage is a qualitative approach using the Analytical Network Process (ANP) to formulate the index components of Islamic Social Finance programs' success and sustainability. The Analytical Network Process is a development of the Analytical Hierarchy Process, built by Saaty. The researchers chose the ANP method for several reasons [38]: (a) ANP is a multi-criteria decision-making method that can handle complex network structures, (b) it is excellent at evaluating all relationships between clusters and elements, and (c) it is able to determine the priorities of all proposed indicators. There are three stages in the ANP method. The first is the decomposition stage, which aims to build and validate the framework of the ANP model by determining the objectives, factors, aspects, and indicators that influence the success and sustainability of Islamic Social Finance programs (see Fig 1). The ANP framework was developed from the results of Focus Group Discussions (FGDs) and in-depth interviews with Islamic Social Finance experts representing academics, associations, regulators, and practitioners. The purpose is to obtain information on program management from various perspectives. The study conducted three FGDs and two in-depth interviews. From the first and second FGDs, the researchers obtained an overview of successful and sustainable program management. The results of FGDs 1 and 2 built the framework of the ANP model. 
The study also drew on previous research from reputable international journals (Scopus Q1-Q3 and top tier) to construct the model framework. Then, the third FGD and in-depth interviews validated the built framework. The second ANP stage is a pairwise comparison, carried out through the questionnaire designed from the ANP model. The questionnaire includes information on factors, stakeholders, aspects, and indicators that affect the success and sustainability of the program. The questionnaire was compiled based on the general ANP format using a scale of 1-9 for each compared point. A score of 1 indicates that an item is very unimportant; conversely, a score of 9 means very important/relevant/influential. The questionnaires were distributed to experts to assess each ANP network. The third stage is the analysis. It includes the priority weight of each index. The priority weight is determined following agreement results from the experts. The calculation of priority weights uses Microsoft Excel and the Super Decisions software. Based on Saaty, the measure of each index is derived from the calculation of the Geometric Mean and Rater Agreement. The Geometric Mean is a calculation of opinion in a cluster from expert assessments. On the other hand, the Rater Agreement (W) is the level of agreement of the experts in assessing the weight, where a value of W = 1 indicates perfect agreement. The formulas for calculating the Geometric Mean and Rater Agreement are given below, where R is a respondent's judgement and n is the number of respondents; meanwhile, T is the transpose matrix from the results of data processing, p is the number of nodes, n is the number of respondents, and S is the sum of squared deviations between the calculation results of T and the average value of the absolute priority. In detail, Fig 2 shows the stages of the ANP and MWI methods. After all the phases in the ANP method have been carried out, the next step is to estimate the index value using the Multistage Weighted Index method (phase 2). Like the ANP method, the MWI method has three stages. The first stage is to develop criteria for each indicator built in the ANP model. The standards were compiled based on the results of the FGDs, in-depth interviews, and literature reviews. The assessment of criteria is described using a Likert scale of 1-5, where 1 indicates the least ideal condition, while 5 shows the most optimal condition. The second stage is determining the weight for each factor and aspect to formulate the index calculation. In this phase, there is a relationship between ANP and MWI, where the weights are taken from the ANP data processing results. The third stage is to estimate the index value with the formula given below, where MWI_IND is the average MWI score from each stakeholder, NVA is the Normalized Value of Aspect, NVS is the Normalized Value of Stakeholder, and NVF is the Normalized Value of Factor. This study built an index of the success and sustainability of Islamic Social Finance programs using a score of 0 to 100. In addition, the index is able to formulate the possibility of sustainability, as shown in Table 3. Data and samples Primary data sources are focus group discussions, in-depth interviews, and field studies. The results of focus group discussions and in-depth interviews formulated the ANP model and criteria assessment for the MWI method. 
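The aggregation formulas referred to in this section appear to have been lost in extraction; the forms below are a reconstruction based on the variable definitions given in the text, using the standard geometric mean of expert judgements and Kendall's coefficient of concordance for rater agreement, plus one plausible reading of the multistage weighted index, so they should be treated as assumptions rather than the authors' exact notation.

% Geometric mean of n expert judgements R_1, ..., R_n for one pairwise comparison
GM = \left( \prod_{i=1}^{n} R_i \right)^{1/n}

% Kendall's coefficient of concordance for rater agreement, with n respondents, p nodes,
% and S the sum of squared deviations described in the text; W = 1 indicates perfect agreement
W = \frac{12\,S}{n^{2}\left(p^{3} - p\right)}

% One plausible form of the multistage weighted index (an assumption, not the published formula):
% each stakeholder's average criterion score is scaled by its normalized aspect, stakeholder and factor weights
\text{Index} = \sum \mathit{MWI}_{IND} \times NVA \times NVS \times NVF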
This study involved a total of 15 respondents consisting of four representatives of academics, four representatives of practitioners, three representatives of associations, and four representatives of regulators. In the ANP method, the number of respondents is not the main criterion; expertise and experience are. The respondents filled out the questionnaire through several procedures. First, the research team sent a letter requesting that the questionnaire be filled out. After receiving a response, the research team scheduled the time with the respondents. Second, the respondents were given several choices in filling out the questionnaire: (a) with assistance from the research team through Zoom meetings, (b) offline, and (c) independently. The research team structured the questionnaire in easy-to-understand sentences to avoid misinformation in the distributed questionnaires and provided a glossary for specific words/sentences. Full access to the questionnaire can be obtained by contacting the corresponding author. The data that can be accessed are the research questionnaires using the ANP method and the respondents' answers to the questionnaire. The selection of respondents used the purposive sampling technique. For the academic group, the respondents must have had more than two publications in reputable international journals related to Islamic Social Finance. In the group of practitioners, respondents were in managerial positions as chairpersons or program managers. For the association group, the respondents were leaders in the Islamic economic community association and the Islamic Social Finance management forum. For the regulator group, the respondents were from the Indonesian Waqf Board, the National Amil Zakat Agency, the Ministry of Religion, and KNEKS, and held managerial positions. On the other hand, field studies present the empirical results of the constructed index of success and sustainability. The study analyzed three Islamic Social Finance institutions with the following criteria: (a) they are official institutions granted a license to operate by the government, (b) they have the status of a national or provincial level institution, and (c) they conduct empowerment programs. Furthermore, the empowerment program must meet the following criteria. First, it is an Islamic Social Finance integration program, in which the funds disbursed are not only from one type of source. Second, the program has run for at least one year. Third, the program provides benefits to mustahiq who deserve to be empowered. Similar to the procedure for collecting questionnaire data from experts, the researchers also confirmed the willingness of the Islamic Social Finance institutions to become research respondents. Data entry and interviews with representatives from ISF institutions were managed offline (2 respondents) and online (1 respondent). Before taking the questionnaire, experts had to fill out a consent form. Each expert completed the questionnaire consciously and in a normal mental state. Further, the experts were not subjected to any pressure from any party. This research follows the rules and ethics of research conduct and writing of the Institute for Research and Community Service (LPPM) of Universitas Airlangga. This research also refers to Airlangga Chancellor's Regulation Number 34 of 2019 regarding the rules of behavior in article 16 (b): researchers must be honest, objective, and pay attention to all aspects of the research process, and are not allowed to manipulate data and research results. 
Article 16 (f) states that researchers must respect and appreciate the object of research, whether the object is a human or an animal, living or dead. Further, this research is supervised by the Center of Research and Publication (3P) in the Faculty of Economics and Business, Universitas Airlangga, which functions as an ethics committee according to Dean Decree Number 88/UN3.1.4/2020. This research is also supervised by the Institute for Research and Community Service (LPPM) Universitas Airlangga as the ethics committee at the university level and as a funder. Concept of 4 ER (economic rescue, recovery, reinforcement, resilience) The 4 ER concept shows the extent to which the Islamic Social Finance program can bring prosperity to the beneficiaries. The researchers of this study created the concept based on literature reviews of prior studies and previously conducted focus group discussions. The 4 ER concept was developed based on maqashid sharia. According to Al-Ghazali, maqashid sharia (the objectives of Islamic Law) aims to achieve maslahah (meaning welfare/advantage/merit) by maintaining five aspects: Faith, Soul, Mind, Lineage, and Prosperity. This study used the research of Kusuma and Ryandono as the basis for constructing the 4 ER. Kusuma and Ryandono determined the economic categories of society, as shown in Fig 3. In Fig 3, the economy of society is divided into three categories, namely underprivileged and entitled to receive zakat (0-Y_KOMZ = C_0), pre-prosperous and neither entitled to receive zakat nor obliged to pay zakat (Y_KOMZ = C_0), and prosperous and obliged to pay zakat (Y_ON = C_ON). Thus, the researchers developed indicators in these five aspects and determined the levels of the 4 ER concept, as shown in Table 3. The first is Economic Rescue, the lowest level, where the Islamic Social Finance program cannot improve the beneficiaries' economy (Y < S, no job). This level means that the program has not been able to encourage recipients to meet their needs in the five aspects of maqashid sharia. The score for Economic Rescue is 1-25%. Secondly, Economic Recovery is defined as the level where the program can provide economic change and encourage beneficiaries to meet their needs, but the change is not significant. In Economic Recovery, program recipients have not been able to save their income (Y < S), but the most basic needs have been fulfilled, although unstably. According to Chang and Rose, economic recovery is an activity to recover economic conditions from a disaster. The score for Economic Recovery is 26-50%. Next, Economic Reinforcement is the third level, showing that the Islamic Social Finance program can increase the beneficiaries' economy, and income exceeds needs (Y > S). At this stage, the program's recipients have been able to save, invest, and donate their income. The score for Economic Reinforcement is 51-75%. Lastly, Economic Resilience is the top level, where the program has created prosperity for its beneficiaries. According to Shima et al., economic resilience can be defined as transforming and improving an organization's performance. This stage is considered economic resilience if the program's recipients can save income, grow business assets, and pay zakat. The score for Economic Resilience is 76-100%. This study developed the concept of 4 ER by specifying a more detailed level: low, middle, high, and sustainable. The program is sustainable if it has achieved Economic Resilience. 
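Because the 4 ER levels are defined by fixed score bands, a small helper like the one below can map an index score onto its level; the band boundaries are taken from the text, while the function name is a hypothetical illustration and the finer low/middle/high sustainability cut-offs within the resilience band (given in Table 3, not reproduced here) are omitted.

def classify_4er(score):
    # Map a success/sustainability index score (0-100) onto its 4 ER level using the bands in the text
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 25:
        return "Economic Rescue"
    if score <= 50:
        return "Economic Recovery"
    if score <= 75:
        return "Economic Reinforcement"
    return "Economic Resilience"  # the low/middle/high sustainability grading sits inside this band

# Scores reported for the three programs in the empirical study
for name, score in [("A", 81), ("B", 73), ("C", 80)]:
    print(name, classify_4er(score))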
Within Economic Resilience, the researchers further divided the sustainability level into low, middle, and high.

Index of success and sustainability of the Islamic Social Finance program
This study built an index of the success and sustainability of Islamic Social Finance programs. The index was constructed by analyzing factors, stakeholders, aspects, and indicators compiled through focus group discussions, in-depth interviews, and literature reviews. Table 4 shows the weights of each index component processed using the ANP method.

Factors and involved stakeholders
Internal and external factors influencing the ISF program's success carry equal weight (0.5 each). Four stakeholders play a role in internal factors: institutions (0.38), donors (0.24), beneficiaries (0.24), and supervisors (0.14) (see Table 4). Islamic Social Finance institutions (0.38) are the most crucial stakeholder in the program's success. They must create and develop programs professionally, supported by competent human resources, proper program planning, the capacity to collect, manage, and empower, systematic monitoring and evaluation, and program transparency. Donors, beneficiaries, and supervisors also play critical roles in achieving the program's success and sustainability: donors channel funds to institutions; beneficiaries are essential because Islamic Social Finance programs aim to improve their quality of life; and supervisors ensure that the program does not violate applicable laws and regulations. The external stakeholders are academicians (0.29), associations (0.28), and the government (0.43). The government is the most influential stakeholder in the success and sustainability of the program: it permits the establishment of institutions and the implementation of Islamic Social Finance programs, and it decides on the regulations and policies that support the program's success and sustainability [5]. Academics and associations also play essential roles. Academics are responsible for developing competent human resources and increasing literacy in Islamic Social Finance. Associations collaborate and coordinate with all parties to create an ecosystem that supports the success and sustainability of the program. Table 4 also shows the processed results for each aspect of each stakeholder that supports the program's success and sustainability. Overall, the funding contribution from donors (0.22) is the priority aspect. This result accords with Pischke's concept of sustainability in microenterprise, namely balanced incoming and outgoing funds. In Islamic Social Finance, incoming funds come from donors/muzakki who pay zakat, infaq, alms, or waqf to the institution, which then manages and distributes them to beneficiaries. The result is reinforced by a prior study's finding that limited funds distributed to recipients caused program failure. The more funds are raised, the more can be allocated to the welfare of the beneficiaries. This effort serves the welfarist paradigm of sustainability, evidenced by an increasing number of beneficiaries.
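Before turning to the remaining aspect weights, a minimal sketch of how ANP weights such as those reported above might be rolled up into a composite score is shown below. The factor and stakeholder weights are those given in the text; the per-stakeholder scores are hypothetical placeholders, not values from the study, and the real index also weights aspects and indicators one level deeper.

```python
# Illustrative multistage weighted roll-up using the stakeholder weights
# reported above (internal/external factors 0.5 each; institutions 0.38,
# donors 0.24, beneficiaries 0.24, supervisors 0.14; academicians 0.29,
# associations 0.28, government 0.43). The per-stakeholder scores below are
# hypothetical placeholders, not values from the study.

FACTOR_WEIGHTS = {"internal": 0.5, "external": 0.5}
STAKEHOLDER_WEIGHTS = {
    "internal": {"institutions": 0.38, "donors": 0.24,
                 "beneficiaries": 0.24, "supervisors": 0.14},
    "external": {"academicians": 0.29, "associations": 0.28, "government": 0.43},
}

def composite_index(stakeholder_scores: dict) -> float:
    """Weight per-stakeholder scores (0-1) up through stakeholders and factors."""
    total = 0.0
    for factor, f_weight in FACTOR_WEIGHTS.items():
        factor_score = sum(
            s_weight * stakeholder_scores[factor][name]
            for name, s_weight in STAKEHOLDER_WEIGHTS[factor].items()
        )
        total += f_weight * factor_score
    return total

# Hypothetical scores purely for illustration
example_scores = {
    "internal": {"institutions": 0.85, "donors": 0.70,
                 "beneficiaries": 0.80, "supervisors": 0.75},
    "external": {"academicians": 0.65, "associations": 0.70, "government": 0.90},
}
print(round(composite_index(example_scores), 3))
```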
Islamic Social Finance institutions also play an essential role in encouraging donors to channel their funds. They therefore need to design strategies that increase donors' literacy and their interest in donating through the institution; if donors prefer to pay outside the institution, the funds may not be managed professionally, productively, or sustainably. The second priority is the involvement of donors (0.9) in providing input and advice to Islamic Social Finance institutions. This involvement includes the ability to capture donors' aspirations, trust, and loyalty in paying Islamic social funds (zakat, waqf, infaq, alms, and so on) to the institution. Controlling (0.08) is the third priority aspect supporting the program's success and sustainability, as noted by Ghosh et al. The responsibility for distributing Islamic social funds includes legal and sharia duties, so control from the supervisor is necessary to ensure that the program violates neither legal nor sharia rules. The controlling aspect can also increase program transparency.

Empirical study
This study measured the index of success and sustainability for three programs from different institutions, labelled A, B, and C. Table 5 describes the types and objectives of the empowerment programs analyzed. The programs are integrated: the funds distributed to recipients come from more than one Islamic social fund source. Programs A and C share the same empowerment focus, rice farming, while program B focuses on mushroom farming. After the analysis, Table 6 was generated to show the programs' success and sustainability index scores. Based on Table 6, program A's success in transforming mustahiq into muzakki is at the level of Economic Resilience, with a score of 81% (see Table 3). Program A has the potential to create sustainability for its beneficiaries, although it is still at a low level of sustainability (see Table 3). Program B has an index score of 73%, placing its success at the level of Economic Reinforcement. In this study, Economic Reinforcement means the program has improved the beneficiaries' economy but has not yet reached prosperity; at this level, the mustahiq's income and expenses are still only balanced. Program B has therefore not yet achieved sustainability, although it has reached high Economic Reinforcement (see Table 3). Program C scored 80% and reached the level of Economic Resilience with low sustainability. Fig 4 shows the index scores for each stakeholder in the three programs. For the Islamic Social Finance institution stakeholder, the planning and human resources aspects score highest in programs A and B. The planning aspect concerns how institutions plan programs carefully, developing short-term and long-term strategies and formulating risk mitigation. The institutions are also responsible for planning an exit strategy that creates financial self-sufficiency for beneficiaries and eliminates dependence on constant assistance. Proper program planning must be supported by the availability of competent human resources.
Human resources play an essential role in creating innovative and strategic programs. Program implementation faces various problems, such as differences in beneficiaries' environments and in their motivation to develop themselves, which require capable human resources to devise strategies so that the program runs optimally and yields maximum sustainable benefits. The management and collection aspects score highest in achieving success and sustainability in program C. Program implementation requires good governance [99]. For the collection aspect, Islamic Social Finance institutions need an innovative collection strategy, an accurate database, and excellent service to sustain donors' loyalty. For the donor stakeholder, program C scores higher than the other programs: donors play an active role in program C's success and sustainability in transforming beneficiaries' welfare. The involvement of donors, primarily in channelling Islamic social funds to institutions, is crucial in achieving the program's success and sustainability. This finding is in line with prior studies on the institutionalist and welfarist paradigms in sustainability. In the context of Islamic Social Finance, the institutionalist paradigm concerns how Islamic social funds continue to be collected, managed, and distributed to recipients in empowerment programs; the welfarist paradigm concerns the expansion and addition of communities that can improve their welfare through empowerment. Thus, the involvement and loyalty of donors in channelling Islamic social funds to Islamic Social Finance institutions is crucial. (Text displaced from Table 5, types and objectives of the programs: one program empowers farmer groups through business capital, training, and mentoring; farmers were trained in Healthy Crop Management (HCM), in which rice plants receive natural fertilizers and chemical fertilizers are reduced; besides producing high-quality rice, HCM aims to increase the number of harvests per year, with beneficiaries also receiving training and assistance in green farming, so that instead of the usual two harvests per year farmers can harvest three times and their income can rise. Another program empowers the poor (dhuafa); a group of dhuafa was given business capital in the form of a Mushroom House for farming. The third program focuses on agriculture, with beneficiaries receiving business capital assistance and subsidies.) From the beneficiaries' perspective, commitment and motivation to change score highest for the success and sustainability of all programs, a result consistent with previous research: beneficiaries must be committed to improving their welfare for empowerment programs to be productive. Scores for the spirituality and quality-of-life aspects are still low, which helps explain why programs A, B, and C remain at low sustainability, even though their beneficiaries show a high commitment to change. For the supervisor stakeholder, the controlling aspect scores higher than the evaluation aspect in all programs, meaning that control has been exercised but evaluation is not yet thorough.
From the association stakeholder's perspective, the coordination aspect scores higher than cooperation in all programs. Coordination covers how associations coordinate with all stakeholders and build an ecosystem that supports program implementation. Program A scores higher than programs B and C; in other words, program A receives more support from the association in achieving success and sustainability. From the academician perspective, programs B and C have identical scores, higher than program A, meaning they receive stronger support from academics in program implementation. Academic support here takes the form of involvement in providing competent human resources and in socializing the importance of Islamic Social Finance, thereby increasing public literacy. For the government stakeholder, the policy and regulation aspect is valued more highly than infrastructure support in all programs. Program A has the highest score in policy and regulation compared to programs B and C. Interviews with program A's managers indicate that the government collaborated with the Islamic Social Finance institution in implementing the program, providing funds and input to optimize its execution, and invited other stakeholders to discuss the direction of the program's development. Policy and regulatory approval from the government is fundamental to achieving program success and sustainability, and the high score may explain why program A is more successful and sustainable than programs B and C. From the histograms shown in Figs 4-10, program A has the highest index value (81%) among all programs. It excels in several aspects, such as planning, human resources, commitment and involvement of beneficiaries, beneficiaries' quality of life, controlling, coordination, cooperation, policy and regulation, and infrastructure support. Program A thus fulfills many of the indicators important for achieving success and sustainability. However, it must improve coordination with academicians to raise public awareness of Islamic social funds and strengthen its governance, and it must increase donor loyalty to raise the amount of Islamic social funds distributed to beneficiaries. Program B has the lowest score and is at the Economic Reinforcement level. Compared to programs A and C, program B must optimize coordination and teamwork with all stakeholders, particularly beneficiaries and the government. It must also increase beneficiary commitment and responsibility and develop strategies to improve beneficiaries' quality of life. Furthermore, program B must strengthen synergy and coordination with the government, especially by assisting the government in program implementation. Program C has achieved the same level of success and sustainability as program A; in general, it must optimize aspects involving internal ISF stakeholders. This study analyzed three utilization programs so that they can be benchmarked against each other. The index helps Islamic Social Finance institutions manage empowerment programs by tracking their level of success and sustainability.
The purpose is to understand the extent of the empowerment programs' impact on the welfare transformation of program recipients.

Theoretical implications
The study adds to the literature by developing an index for measuring the performance of ISF institutions, emphasizing the sustainability of empowerment programs in transforming the welfare of mustahiq. The index provides an explicit measure of success and sustainability through the 4 ER concept (Economic Rescue, Economic Recovery, Economic Reinforcement, Economic Resilience). The government and Islamic Social Finance institutions can therefore use the index to measure the success and sustainability of a program. The study also adds to the literature on the integration of Islamic Social Finance in empowerment programs by applying the index to three integrated programs in Indonesia.

Practical implications
The study makes managerial contributions in several ways. First, it emphasizes the importance of donor involvement in channelling Islamic social funds to institutions so that the funds can be managed productively. Islamic Social Finance institutions must formulate strategies to increase donor engagement. One approach is to provide payment platforms such as e-commerce, crowdfunding, and mobile banking, or to build promising programs with long-term impact. Another is to use digital technology, including social media, to educate the public about Islamic Social Finance through creative content. Collaboration with influential figures from each generation (baby boomers, millennials, Gen Z, and so on) is critical because each generation has its own characteristics and requires its own approach; in this way, the purpose of socialization can be communicated effectively and fundraising can increase. More than 30 universities in Indonesia offer lectures on Islamic Social Finance, Islamic Economics, or Sharia Economics, so academics play an essential role at the university level in socializing the importance of donating Islamic social funds. According to the findings of this study, the government is the most influential external stakeholder. As regulator and leader of the country's economy, the government issues policies and regulations encouraging the public to pay Islamic social funds through institutions. Some local governments, such as Aceh through Qanun Aceh No. 10 of 2018, have obliged civil servants to pay zakat, although such obligations still depend on each local government's policy. The central government should therefore reconsider the urgency of requiring Muslims to pay Islamic social funds through ISF institutions nationally. Second, the empirical investigation shows that the programs are at the Economic Reinforcement and Economic Resilience levels with low sustainability. Islamic Social Finance institutions must develop short-, medium-, and long-term strategies to ensure mustahiq welfare. In the short term, institutions must focus on strategies enabling mustahiq to meet their basic needs; this approach aims to alleviate distress and ensure the beneficiaries' safety. In the medium term, institutions must develop a strategy to increase mustahiq's income and enable them to meet their basic needs independently through business management.
This approach provides mustahiq with the empowerment programs needed to improve their entrepreneurial skills. Finally, in the long run, the Islamic Social Finance institution must devise a strategy to break mustahiq's reliance on the institution, allowing them to become self-sufficient. Training and mentoring are one way to increase mustahiq's skills and help them reach prosperity. The government can help by providing beneficiaries with training and mentoring; such support relieves Islamic Social Finance institutions that have difficulty providing training to their recipients. Furthermore, one cause of the programs' low sustainability is limited funds, so the government must support Islamic Social Finance institutions in running and implementing empowerment programs. To create welfare for mustahiq, the institutions must also optimize teamwork and coordination with various stakeholders, such as associations and private corporations.

Conclusion
This study developed a success and sustainability index for Islamic Social Finance programs using the Analytical Network Process (ANP) and a Multistage Weighted Index (MWI). The study analyzed the aspects and indicators needed to achieve program success and sustainability. The results show that donors' funding contribution is the priority aspect, indicating that more funds need to be paid by donors through institutions; the funds are then managed professionally in the form of empowerment programs, which must achieve success and sustainability to have a broader impact on recipients. Because realized collections through institutions are still low, the government must consider mandatory policies on paying Islamic social funds through institutions. Islamic Social Finance institutions, for their part, must manage programs professionally to transform recipients' welfare with limited funds, for example by transferring knowledge and skills to beneficiaries through training and mentoring. The study also highlights donors' involvement and supervisors' role in program monitoring as conditions for program success and sustainability. The empirical analysis covered three empowerment programs in three different institutions. The findings show that programs A (0.81) and C (0.80) have encouraged beneficiaries to improve their quality of life even though they are not yet economically stable, while program B (0.73) is at the Economic Reinforcement level, meaning it has not yet achieved sustainability. The study has limitations. First, the index is based on specific conditions in Indonesia and cannot be generalized to countries whose Islamic Social Finance programs have different characteristics; cross-country research is therefore needed to apply the index in other settings. Second, the empirical investigation includes only integrated programs, so future studies could examine both integrated and non-integrated programs to determine the urgency of Islamic Social Finance integration. |
Psychiatric illness and family stigma. Considerable research has documented the stigmatization of people with mental illnesses and its negative consequences. Recently it has been shown that stigma may also seriously affect families of psychiatric patients, but little empirical research has addressed this problem. We examine perceptions of and reactions to stigma among 156 parents and spouses of a population-based sample of first-admission psychiatric patients. While most family members did not perceive themselves as being avoided by others because of their relative's hospitalization, half reported concealing the hospitalization at least to some degree. Both the characteristics of the mental illness (the stigmatizing mark) and the social characteristics of the family were significantly related to levels of family stigma. Family members were more likely to conceal the mental illness if they did not live with their ill relative, if the relative was female, and if the relative had less severe positive symptoms. Family members with more education and whose relative had experienced an episode of illness within the past 6 months reported greater avoidance by others. |
Impact of business model innovations on SMEs innovativeness and performance Purpose Business model innovations (BMIs), their drivers and outcomes are attracting increasing attention in academic literature. However, previous studies have mainly focused on large companies, while knowledge of BMI in small-to-medium-sized enterprises (SMEs) is limited. Therefore, the purpose of this paper is to add new insights into how BMI drivers, practices and outcomes are related in SMEs. Design/methodology/approach An extensive review of the existing literature was performed. Consequently, the relationships between BMI drivers, BMI practices and outcomes of BMI were developed as a conceptual framework. An empirical study was carried out. A structural equation modeling (SEM) procedure was used to empirically test the model using a quantitative data set of Lithuanian SMEs (n=73). Findings The study provides insights into the relations between BMI drivers, BMI practices and outcomes of BMI in SMEs. According to the SEM findings, four drivers (innovation activities, strategic orientation, market turbulence and technology turbulence) contribute to BMI in SMEs. In addition, the results showed that the implementation of BMI practices leads to strategic and architectural changes in firms and has a positive impact on SMEs' performance and innovativeness. Research limitations/implications Empirical research is focused on a limited number of internal and external BMI drivers, which have an influence on BMI in SMEs from one geographical region. Consequently, there are many other external and internal BMI drivers which may also influence BMI in SMEs, such as industry life cycle, organizational inertia and leadership. Meanwhile, SMEs possess multiple characteristics, i.e. phase of maturity, gender of CEO, firm size and industry; therefore, the aforesaid aspects are considered significant limitations. In addition, the importance of SME characteristics as mediators of the effects on a firm's performance and innovativeness should be considered in future research. Practical implications Findings of this research can be used by SME managers to better understand how firms might actively engage in BMI practices, what drivers lead to BMI and, in turn, affect their firm's performance and innovativeness. SME managers should be encouraged to pay attention to strategic and architectural changes of BM that can contribute to enterprise performance and innovativeness. Originality/value This paper adds to the stream of BMI research by empirically exploring drivers and outcomes of BMI in SMEs. In addition, this paper addresses research gaps identified by Bouwman et al., Foss and Saebi, Heikkilä, Bouwman and Heikkilä, and Lambert and Davidson, and enhances the current overall understanding of BMIs. |
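For readers unfamiliar with the approach, a minimal structural equation modeling sketch of the hypothesized drivers -> BMI practices -> outcomes chain might look as follows. It uses the open-source semopy package and simulated composite scores; the variable names and data are hypothetical stand-ins for the study's constructs, not its actual measurement model.

```python
# Minimal SEM sketch of a drivers -> BMI practices -> outcomes chain,
# loosely mirroring the relationships described in the abstract above.
# Uses simulated composite scores; NOT the study's data or measurement model.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 73  # sample size reported in the abstract
innovation = rng.normal(size=n)
strategic = rng.normal(size=n)
market_turb = rng.normal(size=n)
tech_turb = rng.normal(size=n)
bmi = (0.4 * innovation + 0.3 * strategic + 0.2 * market_turb
       + 0.2 * tech_turb + rng.normal(scale=0.5, size=n))
performance = 0.5 * bmi + rng.normal(scale=0.5, size=n)
innovativeness = 0.6 * bmi + rng.normal(scale=0.5, size=n)

data = pd.DataFrame({
    "innovation": innovation, "strategic": strategic,
    "market_turb": market_turb, "tech_turb": tech_turb,
    "bmi": bmi, "performance": performance, "innovativeness": innovativeness,
})

# Path model: drivers predict BMI practices, which predict the two outcomes
desc = """
bmi ~ innovation + strategic + market_turb + tech_turb
performance ~ bmi
innovativeness ~ bmi
"""
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```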
Maximum altitude of Late Devensian glaciation on the Isle of Mull and Isle of Jura Synopsis Evidence provided by striae, ice-moulded rock, erratics and perched boulders indicates that the last (Late Devensian) ice-sheet reached an altitude exceeding 760 m on Mull and 660 m on Jura. The highest summits on both islands support periglacial blockfields, suggesting that they remained as nunataks above the ice surface. This interpretation is supported by analyses of clay-fraction mineralogy, which shows that gibbsite (a pre-Late Devensian weathering product) is widespread in blockfield samples but rare in samples below the inferred limit of glaciation, implying removal by Late Devensian glacial erosion. Maximum ice-sheet altitudes of 760-840 m and 660-700 m are inferred for the Ben More massif on Mull and the Paps of Jura respectively. Reconstruction of ice-sheet configuration in the Inner Hebrides area suggests that the 900 m ice-surface contour followed the west coast of the mainland, but the altitude evidence is insufficient to constrain the westwards extent of the ice sheet. Inferred ice-surface altitudes and directions of ice movement are incompatible with most theoretical models, and even the best-fit model of ice dimensions in this area underestimates maximum ice thickness by ≥ 60 m. |
Increased nitrite and nitrate concentrations in sera and urine of patients with cholera or shigellosis OBJECTIVES:Nitric oxide (NO) is an important regulator of cell function. In the intestine, NO regulates blood flow, peristalsis, secretion, and is associated with inflammation and tissue injury. The objectives of this study were to assess and compare the role of NO in cholera, a noninflammatory enteric infection, and in shigellosis, a bacterial inflammation of the colon.METHODS:We determined serum and urinary concentrations of nitrite and nitrate during acute illness and early convalescence in 45 hospitalized children: 24 with cholera and 21 with shigellosis; 18 healthy children served as controls. Nitrite and nitrate concentrations were determined spectrophotometrically using a Griess reaction-dependent enzyme assay.RESULTS:Serum nitrite and nitrate concentrations were significantly (p < 0.05) increased during acute illness compared to early convalescence in both cholera and shigellosis. Urinary nitrite and nitrate excretions were significantly (p < 0.01) increased during acute disease in shigellosis, but not in cholera. Nitrite concentrations correlated with stool volume (r2 = 0.851) in cholera and with leukocytosis (r2 = 0.923) in shigellosis.CONCLUSIONS:Both cholera and shigellosis are associated with increased production of NO, suggesting its pathophysiologic roles in these diseases. |
Endoscopic assessment of primary sclerosing cholangitis. Primary sclerosing cholangitis (PSC) is a rare chronic liver disease of unknown etiology for which the only known curative treatment is liver transplantation. The disease is defined by progressive inflammation and fibrosis of the bile ducts, causing biliary strictures and cholestasis. Common complications of the disease are the presence of biliary lithiasis requiring stone extraction, and the development of dominant bile duct strictures requiring balloon dilatation and stent placement through endoscopic retrograde cholangiopancreatography. The development of cholangiocarcinoma is a dreaded complication of PSC, as it is often detected at an advanced stage and is associated with a poor prognosis. Several endoscopic techniques, including endoscopic ultrasound, confocal laser endomicroscopy and peroral cholangioscopy, are applied in the management of PSC and the detection of cholangiocarcinoma. Tissue sampling through different types of biopsies and biliary brushing, combined with fluorescence in situ hybridization, is used to differentiate benign dominant strictures from biliary neoplasia. Nonetheless, early detection of cholangiocarcinoma in PSC remains a clinical challenge requiring a specialized diagnostic workup. The aim of this review is to discuss the role of diagnostic and therapeutic endoscopy in the management of PSC, providing an overview of the current literature. |
Exploring the Notion of the Family Friendly City There is a common perception that downtown areas will never attract families and that big cities are not the best place to raise children. In particular, the downtown areas of cities have been depicted as places where criminals, prostitutes, drug-sellers, and other dangerous strangers live. People with children are more likely to look to the suburbs to find bigger housing at more affordable prices, cleaner air, richer nature, a slower lifestyle, and a safer environment. However, living in the modern suburb is not always easy and cheap, especially for those who need to commute to the central city. Dealing with long commutes can be stressful, and it affects the health, happiness, and well-being of family members. As the number of modern families with both parents in the workforce rises, the demand to live closer to the workplace is growing stronger. In some parts of the world, more families increasingly want to live in cities. This trend can be seen in the United States, Japan, Korea, and Canada. Being family-friendly has become increasingly important for modern cities as more of the millennial generation show a tendency to raise their families in urban areas. Moreover, it is predicted that two-thirds of the world's population will live in cities by 2030. To accommodate the growing population, in particular those with children, modern cities should be developed to suit urban families. But what criteria and qualities make one city more family-friendly than another? What would a family-friendly city look like? To date, the number of studies exploring the notion of the family-friendly city has been very limited. Most studies have focused on the notion of family-friendly dwellings, family-friendly workplaces, or child-friendly cities. This paper brings together and examines the dominant and recurring ideas about the family-friendly city represented in the relevant literature and current urban practices. It also questions whether the concept of the child-friendly city is adequate to create a better environment for raising families in the city.

Introduction
By 2030, it is predicted that two-thirds of the world's population will live in cities. This prediction signifies the potential increase in urban upbringing in the coming years and highlights the necessity for modern cities to provide more conducive environments for raising families. Since 1925, cities, particularly their downtown areas, have been perceived as places where hobos, criminals, and poor people gather; thus, people believe that downtown will never attract families. The hustle and bustle of cities has been viewed as an inappropriate environment in which to raise a child. Once people start a family and have children, they are most likely to move to the suburbs so they can get bigger housing and a safer environment for their child. Those who are left in the cities are mostly families that have no choice about where to live and are dependent on government subsidy for their well-being. Elsewhere, downtown neighbourhoods are mostly occupied by singles, gays, and empty-nesters. Many scholars believe that this influences the demographic balance, socio-economic conditions, and the sustainability of the city itself. However, recently there has been a surge of young people flocking back into the city. Millennials might not always stay in the urban core, but increasingly young parents choose to raise their families in urban settings.
Compared to the earlier generation, these young urban professional parents (YUPPs) are less interested in having big houses and living far away from the city centre. They prefer to live closer to urban amenities and adjust their lifestyle by living in a smaller space and using the whole city as their backyard. Middle-class families have started to reclaim the city by constructing their own spaces and actively engaging in shaping the public domain, for example by creating informal playgrounds on the sidewalks or, on a larger scale, through their consumption patterns and lifestyle choices. Some cities such as Vancouver, Tokyo, Amsterdam, and Toronto have recognized these growing demands and started to transform urban spaces to meet the needs of families, for instance by providing family apartment units, building public playgrounds and plazas, and providing more day-care, after-school hours, and various family-related activities in the central area. The astonishing rise of millennial urban families signifies the necessity for urban planners to design cities that are more family-friendly. But what makes a city good for raising a millennial family? How could we define a family-friendly city? This paper brings together and examines the dominant and recurring ideas about the family-friendly city represented in the relevant literature and current urban practices. The paper begins with a review of the ills of urban living and the reasons why many families chose to settle in suburban areas. The next section discusses the suburban decline and the tendency of young professional families to reside closer to the urban core. Following this, the paper examines the term family-friendly as used in society and questions how we should define a city that suits family life. The last part of this paper summarizes the definition of the family-friendly city and the challenges that need to be addressed to create a better city for families.

Finding the best place to raise a family: the city vs suburban areas
2.1 The ills of urban living
Since the industrial revolution, cities have been portrayed as places that are unsuitable for raising a family. The revolution transformed the urban landscape into a harsher environment for many people to live in. During this time, cities became the centre of industrial growth and attracted many migrants to work and live in their core areas. The expansion happened very quickly and gave the cities very little chance to accommodate the growing population. The newly arrived migrants often settled in the poorest urban neighbourhoods and occupied the slum areas. Families were huddled in small, dirty and dilapidated tenement rooms. There were no proper waste disposal systems, and the newly arrived residents often did not know how to manage their environment. Cities were very dirty, polluted, and crowded, and people no longer knew their neighbours well. The growth of cities led to unexpected living conditions in which epidemics frequently broke out and crime rose sharply. In addition, urban growth divided cities into several types of settlements and land use. Central cities were mostly occupied by newly arrived immigrants, the poor working class, and ghetto residents. The immigrants settled in these areas because housing values were relatively low and the areas were closer to the factories. The urban core was portrayed by Burgess, a famous Chicago School theorist, as the zones of slums and 'bad lands' with their poverty, degradation, disease, and underworlds of crime.
Those who accumulated more economic resources moved to the zone of better residences. This settlement offered more spacious areas for middle-class families. During this time, the affluent families were more likely to reside on the urban outskirts. The influx of immigrants and farmers to the central cities had driven the richer communities to escape the city and seek refuge in the suburban area. This practice continues in numerous countries. Even though some urban cores have become much better in terms of physical design, hygiene, and environment, they still end up being designed as places where residents tend to give up on having children and raising a family. The ultra-dense cities of eastern Asia and some American big cities are still childless. Big cities are still seen as inappropriate places for raising a family due to shifting sexual demographics and economic and moral disasters. This signifies how core cities attract only the young and fail to keep residents as they age and start families. It further signifies how cities, even with their impressive achievements and outstanding architecture, cannot accommodate the whole lifecycle of many urbanites.

The suburban dream
The deterioration of the urban core generated a massive exodus to the suburbs. People who wanted to escape the crowds, noise, pollution, and fear of immigrants moved to the hinterlands to seek a better quality of life. Feller described one suburban housing-commercial brochure published during the industrial revolution as a tempting ad. The suburbs were illustrated as areas that offered freedom from dust and smoke and where one could have desirable neighbours. During this time, suburban areas were portrayed as the urbanist paradise offering a greener, cleaner, and safer place to raise a family. Suburbs in America were labelled as a growing-up place where the communities consisted only of affluent, single-family homeowners and people of one race. Meanwhile, in other countries with more homogeneous populations, such as Indonesia, Japan, and China, suburban areas became an escape destination not only for richer families but also for middle-to-low-income families looking for more affordable housing in a healthier environment. The suburb appears as a settlement that can guarantee a family's happiness and meet their expectations, whether in neighbourhood quality or good school systems. Compared to the city core, the urban hinterland offers lower density, which is often perceived as safer, more satisfying, and associated with a lower risk of depression. Safety is one of the features offered by suburban housing developers, hence many housing complexes were developed with great attention to security. Security fortresses have become a common feature of suburban housing complexes. All buildings and transport systems are inter-connected and all points of entry are protected by private security forces. While suburbs seem to offer an ideal environment for raising children, some suburban residents expressed concerns about the time and energy they lose commuting. Christian described how, for each one-hour increase in daily commuting, the male respondents in his research spent 21 to 21.8 fewer minutes with their spouse and 18.6 fewer minutes with their children. For the female respondents, a one-hour increase in daily commuting corresponded to an 11.9-minute decrease in time with friends.
However, Christian's study showed that females must reallocate time from other activities at greater rates; for example, they need to reduce exercise or skip preparing a healthy meal. Long commutes reduce the opportunity for one of the parents to engage in full-time work, especially when they lack child-rearing support from day-care or family members. This situation forces one of the parents to work from home or to sacrifice their career for the sake of their children. In addition, the cost of living in suburban areas is not as cheap as expected. Commuting costs, including highway tolls and gasoline, can be significant, particularly if the living area is not connected by integrated public transportation. Brown and Roberts argued that long commutes also sacrifice other activities that are beneficial to well-being and participation in family life. Parents' work stress will influence their children, and it can also decrease parents' productivity in the workplace. A study by Li underlined how fathers' commuting influences children's social relationships: the farther the father's commuting distance, the less likely children are to have good peer relationships. Long commutes prevent fathers from participating in child-rearing, and they often report physical fatigue and strain when they come home. Li argues that parents' mental and physical health is a very important resource for promoting healthy child development. Longer commuting trips are also associated with fewer socially oriented trips and less access to social capital. Delmelle also highlighted how commutes of 30 minutes or more have a negative effect on social satisfaction. In addition, suburban living poses a risk of social isolation. Miller argued that the social and spatial structure of suburbia can promote familial isolation through a lack of public space and an emphasis on home maintenance and home-centred entertainment. In recent years, there have been some changes in the behaviour of young professional parents in countries such as the Netherlands, the US, and Canada. Compared to previous generations, today's parents, particularly young urban professional parents, show more interest in staying closer to the central area. Urbanized suburbs and downtown areas have become some of the favourite places for these young parents to live. The urbanization of edge cities has transformed the traditional residential community into a more complex mix of domiciliary and economic functions. These shifting suburban areas are often landlocked and therefore cannot expand their territory outward. The only choice left is to expand upward, and with this vertical urbanism the distinction between cities and suburbs has become blurrier, particularly in large contemporary metro areas. Some young urbanites have come to feel more comfortable raising their kids in high-rise buildings. In many core cities, the downtown areas are no longer seen as dangerous and inappropriate areas for raising children. For these young urbanites, the whole city is becoming their living room, and hinterland areas have become destinations for weekend getaways. Even though they probably live in smaller places than before, they have more access and freedom to explore the surrounding urban parks, museums, and community centres.
Living with kids in a smaller space can be stressful, but millennial parents make efforts to stay closer to the city by adjusting their lifestyle, such as recycling and reducing the amount of furniture in their house.

Living closer to the core
Today's families, also known as millennial families, are debunking the stereotype of the traditional family. The lifestyle and values they hold are quite different from those of the previous generation. Futurecast has divided millennial families into a number of categories, from those with a more stable income to families on a tighter budget. For families with a more stable income and higher education level, children come first, and they are least likely to sacrifice time with family. Even though they have enough money to spend, they shop wisely and teach their kids to value money. Parents in this category are also health conscious and child-oriented, and they actively seek parenting-related information from online resources. In line with this, McCulloch and McCann also argued that many millennial parents greatly value parenting, about a 10 percent increase over the previous generation. This may explain why young urban professional parents (YUPPs) tend to avoid long commutes and live closer to the core so that they can have more time with their children. Vancouver and Toronto have started to respond to the rising demands of these YUPPs. These cities have started to accommodate the need for bigger rooms in downtown condominiums after discovering that residents have difficulty storing children's belongings in small apartments. From the very beginning, the condominiums were not built and designed with family needs in mind; thus, there is not enough space for a baby stroller or children's bicycles, leading many families with small children to use the bathtub to store strollers and bicycles. To accommodate the growing needs of YUPP families, the government of Toronto established guidelines for vertical neighbourhoods and buildings fit for families with children. Vancouver echoes this movement, as the number of Vancouverites moving into multi-family units such as apartments and condominiums is rising significantly. The tendency of young families to live closer to the core area is also seen in other big cities such as Amsterdam and Tokyo. The rise of YUPPs living in downtowns and urbanized city edges has transformed the urban landscape and given rise to family-oriented consumption spaces. This can also be seen in Tokyo, which faces a significant increase in families living near the urban core. The ultra-convenience of the Greater Tokyo Region has increasingly attracted young families to leave the countryside and reside nearer the capital area. To respond to their needs, numerous housing units and condominium complexes for families have been built intensively within the Greater Tokyo Region. Even though the four cities above have started to recognize the necessity of creating a family-friendly environment for their young professional inhabitants, such initiatives are still quite rare. Other core cities have not fully considered the needs of families or transformed their physical and social environments to suit family life. The ultra-dense cities of Eastern Asia such as Hong Kong, Singapore, and Seoul exhibit the lowest fertility rates on the planet.
Kotkin further argues that to flourish, these core cities need to change and be responsive to different human needs, from birth to the end of life. Otherwise, cities will be childless and filled only with singles who will leave the area or emigrate when they grow old and start families.

The notion of family-friendly and the family-friendly city
The term family-friendly was introduced mainly in the field of personnel management. It is used to describe the opportunity, for women in particular, to juggle career and family life. The term family-friendly is also used to describe the type of community-based service given to children and families. However, there is very limited academic research investigating the term family-friendly from the perspective of urban physical and social planning. Recent studies have focused on the development of the child-friendly city instead of the family-friendly city. Compared to the term 'family-friendly city', a 'child-friendly city' has a more lucid definition, and the term has been officially acknowledged by many governments and institutions as the platform to ensure the rights of every child in the city. The child-friendly city initiatives promoted by UNICEF have guided many cities in including children's rights in their goals, programmes, components, and structures. Table 1. Child's rights in the city: (1) influence decisions about their city; (2) express their opinion on the city they want; (3) participate in family, community, and social life; (4) receive basic services such as healthcare and education; (5) drink safe water and have access to proper sanitation; (6) be protected from exploitation, violence, and abuse; (7) walk safely in the streets on their own; (8) meet friends and play; (9) have green spaces for plants and animals; (10) live in an unpolluted environment; (11) participate in cultural and social events; (12) be an equal citizen of their city with access to every service, regardless of ethnic origin, religion, income, gender, or disability. There have been outstanding movements to make today's cities friendlier for children, such as providing more spaces where children can play in their neighbourhood or even developing official children's forums in the city. However, this paper argues that the concept of the child-friendly city (CFC) cannot by itself support young urbanites in raising a family in the city. The concept does not fully represent the needs, or consider the characteristics, of the other family members who act as children's guardians. In addition, the CFC concept does not discuss a family's access to economic capital. The costs of raising a child in the city from birth to maturity are considerable. Apart from the food budget, young parents need to rent or buy property that suits family activities and is close enough to their workplace, groceries, and childcare facilities. The provision of housing that suits families and of spaces and places for children in family-friendly planning should be accompanied by efforts to increase affordable and decent housing for middle-class families and to improve public transportation or other mechanisms that support parents' trip chains. Supporting parents' trip chains is very important to ensure that parents can balance their work and family life. In households with children, women create substantially more complex trip chains than women in households without children.
This situation leads women with children to travel in private vehicles to juggle multiple activities within a limited time. As women have always taken greater responsibility for maintaining the household, the lack of workplaces that offer family-friendly policies and benefits can also affect their choices to join the workforce or even to have a child. Attitudes toward women with children are still discriminatory in some parts of the world. It is not always easy for women with children to find or change jobs. Inflexible working hours and long commutes often prevent women from taking on full-time employment. If cities continue to be designed, or end up being designed, as places where women cannot balance work and family life, then women with higher education might prefer a childless lifestyle, as has happened in many countries. As important as mothers are, today's fathers also play very important roles in raising their families. Unlike the previous generation, millennial fathers are more likely to participate in child-raising. The change in fathers' attitudes signifies the necessity of improving child-rearing facilities within cities. The increased role of fathers in family-raising should also be considered in the design of the family-friendly city, for example by installing diaper-changing stations in men's toilets or introducing more incentives for fathers who participate in child-raising activities. Millennial fathers also show a tendency to participate in local movements and civic life, and they know that their voices can influence how the city works. Based on the phenomena discussed above, this paper argues that several matters need to be considered in the development of the family-friendly city. Both physical and non-physical factors contributing to families' well-being in the city should be addressed carefully, such as families' access to economic capital and family-friendly transportation, while considering current trends such as the shifting roles of modern parents and the different characteristics of millennial families. Table 2. Key features of family-friendly planning: (1) housing; (2) spaces and places for children; (3) inclusive, high-quality public space; (4) greening the city; (5) knowledge-based urban planning; (6) children as stakeholders.

Cities that suit families
The above discussion showed how the use of the term family-friendly is not yet uniform and, similarly, that there is still very limited discussion about what a 'family-friendly city' is. Very few studies have investigated the notion of the family-friendly city and what should be developed to support families' well-being and sustainability in modern cities. Even so, for a long time people have believed that families could not thrive in the big city, particularly in the core areas. To some extent, the industrial revolution influenced people's perception of the ideal place to raise a family. The following section discusses insights from some cases of the American ideal 'family-friendly city'. Smaller towns, suburban cities, and less dense cities are considered by most American families to be good places to start a family. The latest surveys by Forbes, Niche, and Kiernan revealed the criteria used to define a family-friendly city from the perspective of American families (see Table 3). Table 3 indicates that American families define some criteria of the family-friendly city in the same way.
Some common criteria included the availability of affordable housing, urban safety, and education quality. Despite some similarities, there are also some subtle but important differences that help to understand and enrich our perspectives. For instance, Forbes mentioned commuting as one of its family-friendly city criteria. As described in the previous section, millennial parents are less likely to sacrifice their time with family; thus, commuting time has become one of their considerations in choosing a home location. However, commuting may not be the only factor contributing to the friendliness of a city for family life. A survey by Niche revealed different factors, such as the importance of outdoor activities, the percentage of children under 17 in the area, and diversity, in making a city friendlier for families. These criteria are aligned with the latest research by Futurecast, which described how millennial families value life experience and have changed the way people look at diversity. For these families, it is important to ensure that their kids have access to great outdoor experiences and encounter different people. The survey showed that millennials may not be very rich but will spend their money on giving their children more experiences in life. Table 3. Criteria of the family-friendly city from the Forbes, Niche, and Kiernan surveys (recovered from the flattened table): cost of living; crime and safety grade; health and safety; housing affordability; higher education rate; education and childcare; commuting; cost of living grade; affordability; owning homes; family amenities grade; socioeconomics; crime; housing grade; education; outdoor activities grade; percentage of residents between the ages of 0 and 17; diversity grade. In a similar but slightly different vein, Kiernan also mentioned a criterion related to the family's opportunity to engage in fun activities in the city. However, unlike the other surveys, Kiernan clearly specified childcare facilities as one of the family-friendly city criteria. As there has been a rise in dual-career couples in Western countries, millennial families face big challenges in balancing the demands of two working partners with children at home. For these young professional parents, a family-friendly working environment and access to affordable, good childcare services are very important. According to the above surveys, the best cities in the US to raise a family are Overland Park, Madison, Plano, Grand Rapids, Idaho, Provo, Naperville, The Woodlands, and Columbia. Nevertheless, none of the cities mentioned above appear among the ten best cities in the world to raise a family according to Homeday's best cities for families index 2017 (see Table 4). Interestingly, cities that rank higher in the Homeday index are mostly big cities with large populations. This finding refutes the idea that large cities are not appropriate for families. The criteria set by Homeday to measure the level of family-friendliness of a city are described in Table 5. Table 5. Criteria of the family-friendly city (Homeday version): (1) housing; (2) education system; (3) safety; (4) unemployment; (5) pollution; (6) transportation; (7) maternity and paternity law; (8) healthcare; (9) happiness; (10) kid-friendly airport; (11) activities for kids; (12) green spaces; (13) parents' perceptions; (14) professionals' perceptions. Compared to the surveys carried out by their American counterparts, Homeday's survey covers different but interesting matters such as transportation, maternity and paternity law, happiness, and kid-friendly travel and activities.
The differences in the family-friendly criteria are probably influenced by European culture, which values work-family balance and independent mobility. Parents' right to take leave after childbirth is carefully considered in many European countries, and fathers are also expected to take part in child-rearing activities; this explains why paternity law is also considered one of the criteria of a family-friendly city. Transportation is also an important factor because European families tend to use public transport and/or bicycles in their trip chains. Cities without proper, integrated public transportation pose more obstacles for parents juggling career and family life. Happiness is also used as a keyword in Homeday's criteria for the best city to raise a family. The ability of the city to promote residents' happiness has been considered one of the important features of future urban design and planning. Leyden et al. underlined how cities that provide easy access to convenient public transportation and to cultural and leisure amenities promote happiness. In line with this, the research indicated that cities that are affordable and support urban upbringing affect residents' happiness. Surprisingly, a kid-friendly airport has also been included among Homeday's criteria of a family-friendly city. This supports another finding that millennials would rather travel than buy a home. The availability of activities for kids in the city during vacations, weekends, and bank holidays appears to be very important for parents. It is obvious that for millennial parents, travel is seen not as a luxury but as a necessity.

Conclusion
This paper attempts to provide a review of millions of people's efforts to find the best place to raise a family. The industrial revolution made many families think twice about raising a child in the city, particularly in the downtown area. Families who could not endure city life would escape to the suburbs once they had enough money to buy or rent a bigger house. The suburbs are filled with families who are trying to find a better quality of life. However, suburbs also have several problems that force some families to flock back into the city. To sum up, people seek a better place for their families because they are looking for the following: (i) a healthier environment; (ii) a safer environment; (iii) a less dense settlement; (iv) a better social life; (v) better access to jobs and other economic opportunities; (vi) more time to spend with their family; (vii) a better education system and quality; and (viii) work-family balance. The qualities that people seek in a new place signify the ideal urban characteristics for modern families. As more people will live in urbanized areas, it is very important to understand the kind of cities that can support the lifestyle of modern families. However, the term 'family-friendly' itself is used frequently in personnel management, community-based services, parenting, and real estate discussions. Compared to the child-friendly city, the concept of and movement toward building a friendlier city for families still lacks official recognition. A shared vision of what kind of city should be developed to accommodate the growing urban population is needed. The trends show that today's families prefer to live closer to urban cores and urbanized suburbs.
Young families have gradually left the traditional suburbs and moved closer to the cities to provide better experiences for their children and to spend more quality time with family members. These families are willing to put in some effort to adjust to vertical living and its different lifestyle, including dealing with the constant demands of recycling and space management. Therefore, this paper strongly argues that cities need to pay careful attention to the growing and changing demands of families that live in high-rise settlements. The paper underlines the need to develop a more solid definition and a clearer scope for 'family-friendly city' development. The current concept of the CFC does not cover a number of matters related to parents' capacity and the things that parents need to raise their children in urban areas. The CFC concept is inadequate for ensuring a family's well-being and sustainability. The well-being and sustainability of a family involve more complex issues, ranging from the stability of parents' jobs and incomes to their work-family balancing skills. Transportation that supports parents', and particularly mothers', trip chains should also be listed among the features of the family-friendly city. However, the current family-friendly urban planning concept has not touched upon this issue and to some extent is still largely focused on fulfilling children's needs. Given women's issues in work-family balance and the shifting roles of fathers in modern society, this paper suggests the need to establish a new vision of a 'family-friendly city' that will help urban planners and society to plan friendlier cities for millennial families. The surveys carried out by private institutions and real estate companies have given us some clues about the characteristics of cities that are suitable for raising a family. A city that is affordable, safe, and offers good education quality for children can be taken as the simplest version of what a family-friendly city should be, regardless of its size and density. Even though there are some differences in families' perceptions of cities that are friendly to families, there are some common things that cities need to provide to support urban upbringing, such as housing that suits families and environments that are conducive for families to carry out their daily activities. The author of this paper believes that the more amenities and accessibility a city can provide to families, the friendlier it is for raising a family. Therefore, all the criteria mentioned in the above surveys should be considered carefully. Finally, this paper suggests the need for more community-level research to understand the growing demands of millennial families and the kind of cities that should be provided to support young urbanites in raising a family. Further, larger-scale research needs to be carried out to investigate more of the literature and current practices of family-friendly city development in various countries and continents around the world. |
Sensitivity analyses for unmeasured confounding assuming a marginal structural model for repeated measures Robins introduced marginal structural models (MSMs) and inverse probability of treatment weighted (IPTW) estimators for the causal effect of a time-varying treatment on the mean of repeated measures. We investigate the sensitivity of IPTW estimators to unmeasured confounding. We examine a new framework for sensitivity analyses based on a nonidentifiable model that quantifies unmeasured confounding in terms of a sensitivity parameter and a user-specified function. We present augmented IPTW estimators of MSM parameters and prove their consistency for the causal parameters of an MSM, assuming a correct confounding bias function for unmeasured confounding. We apply the methods to assess the sensitivity of the analysis of Hernán et al., who used an MSM to estimate the causal effect of zidovudine therapy on repeated CD4 counts among HIV-infected men in the Multicenter AIDS Cohort Study. Under the assumption of no unmeasured confounders, a 95 per cent confidence interval for the treatment effect includes zero. We show that under the assumption of a moderate amount of unmeasured confounding, a 95 per cent confidence interval for the treatment effect no longer includes zero. Thus, the analysis of Hernán et al. is somewhat sensitive to unmeasured confounding. We hope that our research will encourage and facilitate analyses of sensitivity to unmeasured confounding in other applications. Copyright © 2004 John Wiley & Sons, Ltd. |
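As a concrete illustration of the weighting machinery behind such MSM analyses, the minimal Python sketch below computes stabilized inverse-probability-of-treatment weights for a time-varying binary treatment using pooled logistic regressions. The data layout, column names, and covariate sets are hypothetical assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): stabilized IPT weights for a
# time-varying binary treatment, estimated with pooled logistic regressions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def stabilized_iptw(df, id_col, treat_col, baseline_covs, timevar_covs):
    """df: one row per subject-visit, sorted by (id, visit).
    Returns the data frame with per-visit and cumulative stabilized weights."""
    df = df.copy()
    # Denominator model: treatment given baseline AND time-varying confounders
    denom = LogisticRegression(max_iter=1000)
    denom.fit(df[baseline_covs + timevar_covs], df[treat_col])
    p_denom = denom.predict_proba(df[baseline_covs + timevar_covs])[:, 1]
    # Numerator model: treatment given baseline covariates only (stabilization)
    numer = LogisticRegression(max_iter=1000)
    numer.fit(df[baseline_covs], df[treat_col])
    p_numer = numer.predict_proba(df[baseline_covs])[:, 1]
    a = df[treat_col].to_numpy()
    sw_visit = np.where(a == 1, p_numer / p_denom, (1 - p_numer) / (1 - p_denom))
    df["sw_visit"] = sw_visit
    # Cumulative product over each subject's visit history
    df["sw"] = df.groupby(id_col)["sw_visit"].cumprod()
    return df

# Hypothetical usage: the MSM for repeated CD4 counts would then be fit by a
# weighted regression (for example a GEE) using the "sw" column as weights.
# weighted = stabilized_iptw(panel, "id", "treated",
#                            ["age", "baseline_cd4"], ["cd4", "symptoms"])
```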
Sodium visibility and quantitation in intact bovine articular cartilage using high field Na MRI and MRS. Noninvasive methods of detecting cartilage degeneration can have an impact on identifying the early stages of osteoarthritis. Accurate measurement of sodium concentrations within the cartilage matrix provides a means for analyzing tissue integrity. Here a method is described for quantitating sodium concentration and visibility in cartilage, with general applications to all tissue types. The sodium concentration in bovine patellar cartilage plugs was determined by three different methods: NMR spectroscopy of whole cartilage plugs, NMR spectroscopy of liquefied cartilage in concentrated HCl, and inductively coupled plasma emission spectroscopy. Whole bovine patellae were imaged with relaxation normalized calibration phantoms to ascertain sodium concentrations inside the articular cartilage. Sodium concentrations in intact articular cartilage were found to range from approximately 200 mM on the edges to approximately 390 mM in the center, with an average of approximately 320 mM in five separate bovine patellae studied. In essence, we have created sodium distribution maps of the cartilage, showing for the first time, spatial variations of sodium concentration in intact cartilage. This average concentration measurement correlates very well with the values obtained from the spectroscopic methods. Furthermore, sodium was found to be 100% NMR visible in cartilage plugs. Applications of this method in diagnosing and monitoring treatment of osteoarthritis are discussed. |
Development of an Insert Co-culture System of Two Cellular Types in the Absence of Cell-Cell Contact. The role of secreted soluble factors in the modification of cellular responses is a recurrent theme in the study of all tissues and systems. In an attempt to simplify the very complex relationships between the several cellular subtypes that compose multicellular organisms, in vitro techniques have been developed to help researchers acquire a detailed understanding of single cell populations. One of these techniques uses inserts with a permeable membrane that allows secreted soluble factors to diffuse. Thus, a population of cells grown in inserts can be co-cultured in a well or dish containing a different cell type to evaluate cellular changes following paracrine signaling in the absence of cell-cell contact. Such insert co-culture systems offer various advantages over other co-culture techniques, namely bidirectional signaling, conserved cell polarity and population-specific detection of cellular changes. In addition to being utilized in the fields of inflammation, cancer, angiogenesis and differentiation, these co-culture systems are of prime importance in the study of the intricate relationships that exist between the different cellular subtypes present in the central nervous system, particularly in the context of neuroinflammation. This article offers general methodological guidelines for setting up an experiment to evaluate cellular changes mediated by secreted soluble factors using an insert co-culture system. Moreover, a specific protocol to measure the neuroinflammatory effects of cytokines secreted by lipopolysaccharide-activated N9 microglia on neuronal PC12 cells will be detailed, offering a concrete understanding of insert co-culture methodology. |
A review of SNP heritability estimation methods Over the past decade, statistical methods have been developed to estimate single nucleotide polymorphism (SNP) heritability, which measures the proportion of phenotypic variance explained by all measured SNPs in the data. Estimates of SNP heritability measure the degree to which the available genetic variants influence phenotypes and improve our understanding of the genetic architecture of complex phenotypes. In this article, we review the recently developed and commonly used SNP heritability estimation methods for continuous and binary phenotypes from the perspective of model assumptions and parameter optimization. We primarily focus on their capacity to handle multiple phenotypes and longitudinal measurements, their ability for SNP heritability partition and their use of individual-level data versus summary statistics. State-of-the-art statistical methods that are scalable to the UK Biobank dataset are also elucidated in detail. |
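As a minimal, concrete illustration of one simple member of this family of estimators, the sketch below implements a Haseman-Elston-style method-of-moments estimate of SNP heritability from a genetic relationship matrix. It is not the implementation of any specific tool reviewed here; the simulated data and function names are assumptions for illustration only.

```python
# Minimal sketch of a Haseman-Elston-style moment estimator of SNP heritability.
# Assumes a standardized continuous phenotype with no covariates; illustrative only.
import numpy as np

def snp_heritability_he(genotypes, phenotype):
    """genotypes: (n_individuals, n_snps) array of 0/1/2 counts; phenotype: (n_individuals,)."""
    X = (genotypes - genotypes.mean(0)) / genotypes.std(0)   # standardize SNPs
    y = (phenotype - phenotype.mean()) / phenotype.std()     # standardize trait
    n, m = X.shape
    grm = X @ X.T / m                                        # genetic relationship matrix
    iu = np.triu_indices(n, k=1)                             # off-diagonal pairs
    a_ij = grm[iu]
    yy_ij = np.outer(y, y)[iu]                               # phenotypic cross-products
    # Regress cross-products on relatedness; the slope estimates h2_SNP
    slope = np.sum(a_ij * yy_ij) / np.sum(a_ij ** 2)
    return float(np.clip(slope, 0.0, 1.0))

# Toy example with simulated data (true SNP heritability of about 0.5):
rng = np.random.default_rng(0)
G = rng.binomial(2, 0.3, size=(500, 1000)).astype(float)
beta = rng.normal(0, np.sqrt(0.5 / 1000), size=1000)
y = (G - G.mean(0)) / G.std(0) @ beta + rng.normal(0, np.sqrt(0.5), size=500)
print(snp_heritability_he(G, y))
```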
A study to translate and validate the Thai version of the Victoria Respiratory Congestion Scale Purpose Few clinical tools are available to objectively evaluate death rattles in palliative care. The Victoria Respiratory Congestion Scale (VRCS) was adapted from the Back's scale, which has been widely utilized in research and clinical practice. We aimed to translate the VRCS into Thai and to determine its validity and reliability in assessing death rattles in palliative care. Methods Two qualified language specialists translated the original tool into Thai and then back into English. Between September 2021 and January 2022, a cross-sectional study was undertaken at a palliative care unit at Ramathibodi Hospital to determine the Thai VRCS's validity and reliability. Two evaluators independently assessed the volume of secretion noises using the Thai VRCS. The criterion-related validity of the VRCS was determined by calculating the correlation between the sound level obtained with a standard sound meter and the VRCS scores using Spearman's correlation coefficient. To assess inter-rater reliability and agreement on ratings, we used a two-way random-effects model with Cohen's weighted kappa. Results Forty patients were enrolled in this study, with a mean age of 75.3 years. Fifty-five percent had a cancer diagnosis. Spearman's rho correlation coefficient was 0.8822, p<0.05, indicating a very strong correlation. The interrater reliability analysis revealed that interrater agreement was 95% and Cohen's weighted kappa was 0.92, indicating near-perfect agreement. Conclusions The Thai VRCS demonstrated excellent criterion-related validity and interrater reliability. Use of the Thai VRCS to assess adult palliative care patients' death rattles is recommended. Background The death rattle is one of the most common symptoms in end-of-life patients. Previous research estimated the prevalence of death rattles to be between 12 and 92 percent. Although the exact etiology of this symptom is unknown, many assume it is caused by the patient's inability to cough or swallow oral and bronchial secretions. As a result, secretions from the airways accumulate in the throat, and when inhaled and exhaled air passes through the liquid, a rattling sound is produced. Although the effect on the patient itself is unclear, most experts believe that patients may not be aware of the symptom because of their decreased level of consciousness during the last days of life. Nevertheless, this symptom may cause significant anxiety and concern among family, caregivers, healthcare professionals, or even other patients nearby. Additionally, it causes families and caregivers distress as they witness their loved ones' suffering, which is possibly interpreted by the family as "choking to death". A previous study found that nurses caring for patients with death rattle reported that the patients' relatives felt the patient was suffering greatly, as if the patient were "gagging" or "drowning", while some relatives found it useful as a warning sign that the patient would die soon. However, there is no clear evidence that the death rattle is associated with respiratory distress in this group of patients. There are still few clinical tools available to evaluate death rattles in palliative care patients. The Back's scale is a well-known clinical tool for assessing the severity of respiratory congestion or death rattle. 
It was originally developed for use in a study comparing the efficacy of subcutaneous hyoscine hydrobromide and glycopyrrolate in reducing death rattles in patients entering the final stages of life in a specialist palliative care unit. Scale 0 is inaudible, scale 1 is heard close to the patient, scale 2 is clearly heard at the end of the bed in a quiet room, and scale 3 is clearly heard at approximately 20 feet (9.5 m) or at the door in a quiet room. This tool had been used in six previous studies, four of which reported the percentage of patients in each grade: scale 1 was received by 6-17 percent, scale 2 by 19-26 percent, and scale 3 by 5-11 percent. Although the authors evaluated the face validity of the Back's scale, no data on its validity or reliability were published. The Victoria Hospice Society of Canada then modified and developed the Back's scale into the Victoria Respiratory Congestion Scale (VRCS), which was first published in the Medical Care of the Dying textbook and is referenced on the Victoria Hospice Society website. This tool classifies the level of secretion sound into four levels: 0 = no congestion, 1 = audible at 12 inches (30 cm) from the patient's chest but not further, 2 = audible at the end of the bed but not further, and 3 = audible at the room's doorway. Compared to the Back's scale, we found that the VRCS provided more specific and clear instructions on how to use the tool, such as clarifying the distance between the measuring point and the patient's chest for a score of 1, indicating that the distance from the bed is based on an approximate single room, recommending reducing room noise as much as possible during the assessment, and recommending repeated measurements. We contacted the Victoria Hospice Society to obtain permission to use the Victoria Respiratory Congestion Scale (VRCS) and to inquire about the tool's validity and reliability. We learned that the VRCS had never been tested for validity and reliability. As a result, we were interested in translating the VRCS into Thai and conducting a study to test its validity and reliability in assessing the loudness of death rattles in palliative care patients nearing the end of their lives. Methods We initially requested permission from the Victoria Hospice Society to translate the original tool into Thai. The Victoria Respiratory Congestion Scale (VRCS) was translated into Thai and then back into English by two certified language experts. Three palliative care specialists checked the Thai version of the VRCS for accuracy. A cross-sectional study was then conducted in a specialized palliative care unit at Ramathibodi Hospital in Bangkok between September 2021 and January 2022 to determine the validity and reliability of the Thai VRCS. Ethics This research project was approved by the Human Research Ethics Committee, Faculty of Medicine, Ramathibodi Hospital, Mahidol University (Project No. COAL. MURA2021/712) on August 23rd, 2021. All methods were carried out in accordance with the approved study protocol under the Declaration of Helsinki. Participants were informed of the purpose and procedures of the study prior to its start and had the right to withdraw at any time. Written informed consent was obtained from all participants prior to participation. Inclusion and exclusion criteria Patients over the age of 18 with a prognosis of less than a week or a high likelihood of death within 48-72 h were eligible. 
The patient's prognosis was determined by the presence of more than two common signs and symptoms observed in the last few days of life, such as a decreased level of consciousness or increased sleepiness, confusion and restlessness, difficulty swallowing, inability to eat or drink, death rattles, inability to close the eyelids, air hunger, Cheyne-Stokes breathing or intermittent apnea, low blood pressure not associated with dehydration, pulselessness of the radial artery, and low urine output. Patients who withdrew from the study were excluded from the analysis. Data collection The data for the study were gathered using a standardized data record form. The data set comprised the patient's profile (age, gender, marital status, health insurance) and disease status (principal diagnosis, comorbidities, metastases). After receiving permission from the patient or relatives to participate in the study, two assessors, comprising palliative care physicians and nurses working in the palliative care unit, used the Thai version of the VRCS to assess the volume of secretion sounds. To blind the assessment scores, the two assessors separately wrote down the score level on the data record form and placed it in a sealed envelope; each assessor was unaware of the other assessor's score. Sound level metering During the same period in which the two assessors rated the VRCS, the researcher measured the sound level with a standard sound level meter. The 3M SoundPro SE and DL Series sound level meters meet the IEC 61672 class 2 standard, as recommended by the Speech Sound Level Measurement Guidelines, and measure sound level in decibels. Before each measurement, the sound meter was calibrated with an acoustic calibrator. Each measurement lasted one minute, and the researcher recorded the average or equivalent sound level (mean LAeq) as well as the maximum sound level in decibels. To avoid unwanted noise during the measurement, we used a sliding wall between the beds, all medical devices were muted, and all medical staff were asked to remain silent. The measurement was repeated twice, five minutes apart, and the average sound level of the two measurements was used to determine the correlation with the VRCS scores. The two VRCS assessors did not know the measurement results, and the researcher did not know the two assessors' VRCS scores. Statistical analysis The characteristics of the participants were presented as frequencies and percentages for categorical data and as means with standard deviations for continuous data if the data were normally distributed; otherwise, medians with ranges were used. The sample size was determined using the Sample Size Charts for Spearman and Kendall Coefficients, setting power at 80%, the significance level (α) at 0.05, and the alternative value (s1) at 0.4 (moderate correlation). A sample size of 40 patients was obtained, with the proportion of scores 0:1:2:3 calculated from the prevalence of each score in the systematic review as approximately 23:5:9:3 people in each group. The criterion-related validity of the VRCS was calculated using Spearman's correlation coefficient from the correlation between the sound level in decibels and the VRCS scores. The criteria used to interpret the degree of Pearson's and Spearman's correlation coefficients were based on the guidelines of Chan et al. 
If the correlation coefficient is 1, the correlation is perfect; between 0.80 and 0.99, very strong; between 0.60 and 0.79, moderate; between 0.30 and 0.59, fair; between 0.10 and 0.29, very low; and a correlation coefficient of 0 indicates no correlation. The two-way random-effects model with Cohen's weighted kappa was used to examine interrater reliability and agreement on ratings. The Landis and Koch guideline was used to interpret the kappa statistics: a kappa value between 0.81 and 1.00 is considered almost perfect; between 0.61 and 0.80, substantial; between 0.41 and 0.60, moderate; between 0.21 and 0.40, fair; between 0.00 and 0.20, slight; and a kappa of less than 0 indicates no agreement (poor). Stata version 18.0 was used to analyze the data, and the level of significance was set at 0.05. Results The study included forty palliative care patients who were nearing the end of their lives. Their ages ranged from 43 to 96 years, with a mean age of 75.3 years. Fifty-seven point five percent of the participants were female, and 55 percent had cancer. The non-cancer diagnoses included pneumonia (32.5%), septicemia (15%), end-stage renal disease (10%), cerebrovascular disease (7.5%), coronary artery disease (5%), heart failure (2.5%) and necrotizing fasciitis (2.5%). The most common types of cancer among patients with a cancer diagnosis were hepatobiliary cancer (22.7%), breast cancer (18.2%), colorectal cancer (18.2%), and cancer of the urinary tract (18.2%). The most common sites of metastasis were the lung (50%) and intraperitoneal (31.8%) metastases, as well as bone metastases (18.2%). Table 1 shows the characteristics of the study participants. Spearman's correlation coefficient was used to analyze the correlation between the Thai VRCS scores and the sound level measured with a standard sound meter. Spearman's rho was found to be 0.8822, p < 0.05. According to the guidelines of Chan et al., this level of correlation can be interpreted as a very strong, statistically significant correlation. The scatter plot of the correlation between the Thai version of the VRCS score and the mean sound level in decibels was also found to be linear and positive, as shown in Fig. 1. A sensitivity analysis was also performed to determine the correlation between the maximum sound level and the Thai VRCS scores, which revealed a Spearman's rho of 0.6422, p < 0.05; this level of correlation can be interpreted as moderate and statistically significant. Figure 2 depicts a scatter plot of the correlation between the Thai version of the VRCS score and the maximum sound level in decibels. Interrater reliability of the VRCS score To identify the agreement on ratings between the two assessors, interrater reliability was calculated using a two-way random-effects model. The analysis showed that interrater agreement was 95% and Cohen's weighted kappa was 0.92, which is considered almost perfect agreement according to Landis and Koch's guideline (see Table 4). 
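The two statistics reported above can be computed with standard libraries. The Python sketch below shows one way to obtain Spearman's rho and Cohen's weighted kappa on toy data; the scores and sound levels are invented, and the choice of quadratic weighting is an assumption, since the weighting scheme is not stated in the text.

```python
# Illustrative re-computation of the study's two key statistics on toy data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical data: VRCS scores (0-3) from two assessors and mean sound levels (dBA)
rater_a = np.array([0, 0, 1, 2, 3, 1, 0, 2, 3, 0])
rater_b = np.array([0, 0, 1, 2, 3, 1, 0, 2, 2, 0])
sound_dba = np.array([32, 31, 44, 52, 61, 45, 33, 50, 60, 30])

# Criterion-related validity: Spearman correlation between VRCS and sound level
rho, p_value = spearmanr(rater_a, sound_dba)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.4f})")

# Inter-rater reliability: weighted kappa between the two assessors
# (quadratic weighting is assumed here; linear weighting is the other common choice)
kappa_w = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted kappa = {kappa_w:.3f}")
```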
Discussion To the best of our knowledge, this is the first study to demonstrate the criterion-related validity and reliability of the Victoria Respiratory Congestion Scale, a clinical tool used to objectively assess death rattles in palliative care patients. The tool has been widely used in past research and clinical practice. The Thai version of the Victoria Respiratory Congestion Scale was found to be highly correlated with the sound level measured by a standard sound meter in this study. Criterion-related validity was very strong and statistically significant (Spearman's rho = 0.8822, p < 0.05), and interrater reliability was at a nearly perfect level and statistically significant (Cohen's weighted kappa = 0.9174, p < 0.05). The findings of our study are consistent with those of Downing's previous study, which demonstrated the concurrent validity of the original VRCS (p < 0.001). Our findings also revealed that the Thai VRCS has higher interrater reliability than one previous study, which found only a moderate level of reliability (k = 0.53, p < 0.001). However, we were unable to locate the original published data from the earlier study in order to investigate the differences further. The difference in interrater reliability between the two studies could be explained by the distance between the patient's bed and the room's doorway. In the original version, the VRCS score is 3 points if the assessor can hear the death rattles at the doorway; this distance may vary depending on the size of the room and affect the evaluation of VRCS scores. We contacted the Victoria Hospice Society to clarify the issue because there was no specific recommendation mentioned in the original tool. Their recommendation for VRCS = 3 was that the death rattles be heard at a distance of 12-16 feet from the patient's bed. As a result, we decided to use a standard distance of 4 m from the end of the bed as our reference in this study. The standardized measurement distance may help to increase the tool's interrater reliability. It is also worth noting that the VRCS and another commonly used tool, the Back's scale, have some differences. A score of 1 on the Back's scale indicates that the death rattle can be heard close to the patient, whereas a VRCS score of 1 indicates that the death rattle is audible at 12 inches (30 cm) from the patient's chest but not further away. There was also a difference in how the tools defined the distance to the doorway. According to the Back's scale, a score of 3 indicates that the death rattle is clearly heard at approximately 20 feet (9.5 m) or at the door in a quiet room, whereas a VRCS score of 3 indicates that the death rattle can be heard from the room's doorway (12-16 feet), based on the approximate size of a single room. Furthermore, the Back's scale was designed to be used in a quiet environment, whereas the VRCS can be used when ambient noise is kept to a minimum during the assessment, which is closer to actual palliative care settings. As a result, we believe the VRCS provides more specific instructions and is convenient to use in palliative care patients. Limitations There were some limitations to our study that should be mentioned. Although the proportion of patients in each death rattle score in our study is comparable to the findings of the previous systematic review, the fact that more than half of the study's participants obtained a VRCS score of 0 may have affected the results and led to an overestimate of the tool's reliability. 
Although we followed the standard sound measurement protocol, including the use of an IEC 61672 class 2 sound meter and the measuring techniques recommended by the Speech Sound Level Measurement Guidelines, some difficulties remained, such as attempting to avoid uncontrollable background noises in the palliative care unit, for example snoring or yelling from another confused patient. This may have had an impact on the precision of the noise level measurements as well as the VRCS assessments. However, these noises are very likely to occur in real clinical practice and are unavoidable. As a result, we believe that these factors reflect the practical application of this tool and should not have an impact on the validity and reliability of our study. We also did not conduct formal hearing tests for the assessors who rated the VRCS scores. In the case of an assessor's hearing impairment, this could have an impact on VRCS accuracy. Furthermore, this study was only conducted in a specialized palliative care unit, which was generally calmer than general medical wards. If the tool is used in other clinical settings, such as intensive care units or emergency rooms, where there may be more disturbing noise, the results may differ. This study included palliative care patients aged 18 and older, with the majority of participants being elderly. As a result, its use in pediatric populations may be limited and warrants further investigation. Conclusion When compared to the sound level measured by a standard sound meter, the Thai VRCS had very strong criterion-related validity (Spearman's rho = 0.8822, p < 0.05) and an almost perfect level of interrater reliability (Cohen's weighted kappa = 0.9174, p < 0.05). The Thai VRCS is recommended as a standard assessment tool for death rattles in adult palliative care patients. |
Results of school screening for scoliosis in the San Juan Unified School District, Sacramento, California. Annual routine school examination for scoliosis has been established in the San Juan Unified School District. Additionally, several parochial schools and other schools in the county or nearby towns have expressed interest in such a program. A rapid, effective method, taking no more than 30 seconds per child, has been used to detect spinal curvature. The program is beneficial for those identified with scoliosis, because early detection, followed by proper treatment, can prevent major surgery. The need for careful school nurse follow-up must be emphasized. A standing x-ray and evaluation by a qualified physician are imperative. If scoliosis is diagnosed, the school nurse can be a very effective contact in assisting the students by discussing exercise or brace care and by providing encouragement and general supportive help. |
Phenotypic and Genotypic Characterization of Nosocomial Staphylococcus aureus Isolates from Trauma Patients ABSTRACT Staphylococcus aureus is a major cause of nosocomial infections. During the period from March 1992 to March 1994, the patients admitted to the intensive care unit of the University of Maryland Shock Trauma Center were monitored for the development of S. aureus infections. Among the 776 patients eligible for the study, 60 (7.7%) patients developed 65 incidents of nosocomial S. aureus infections. Of the clinical isolates, 43.1% possessed a polysaccharide type 5 capsule, 44.6% possessed a type 8 capsule, and the remaining 12.3% had capsules that were not typed by the type 5 or type 8 antibodies. Six antibiogram types were noted among the infection-related isolates, with the majority of the types being resistant only to penicillin and ampicillin. It was noted that the majority of cases of pneumonia were caused by relatively susceptible strains, while resistant strains were isolated from patients with bacteremia and other infections. Only 16 (6.3%) of the isolates were found to be methicillin-resistant S. aureus (MRSA). DNA fingerprinting by pulsed-field gel electrophoresis showed 36 different patterns, with characteristic patterns being found for MRSA strains and the strains with different capsular types. Clonal relationships were established, and the origins of the infection-related isolates in each patient were determined. We conclude that (i) nosocomial infection-related isolates from the shock trauma patients did not belong to a single clone, although the predominance of a methicillin-resistant genotype was noted, (ii) most infection-related S. aureus isolates were relatively susceptible to antibiotics, but a MRSA strain was endemic, and (iii) for practical purposes, the combination of the results of capsular and antibiogram typing can be used as a useful epidemiological marker. |
Design and Implementation of an Intelligent Wheelchair Controlled by Multifunctional Parameters Physically challenged persons who suffer from different physical disabilities face many challenges in their day-to-day life when commuting from one place to another, and they sometimes have to depend on other people to move around. The purpose of this project is to build a multi-functional wheelchair, using an ADXL335 accelerometer and an ESP32 as the sensing and control hardware, that helps physically disabled people move from one place to another simply by giving direction commands by hand gesture, voice, or Bluetooth. The ADXL335 and ESP32 are used to drive the wheelchair in the backward, forward, left, or right direction. We have developed a multi-functional wheelchair for medically disabled people that can be operated in three modes: 1) Bluetooth (using the ESP32's Bluetooth module), 2) hand gesture (using the ADXL335 MEMS accelerometer), and 3) voice (via the ESP32). This makes the project very versatile. The fundamental aim of the project is to enable a handicapped person to move around their home without any help from another person. |
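A minimal sketch of how the gesture mode might work is given below, assuming MicroPython on the ESP32 with the analog ADXL335 outputs wired to two ADC pins. The pin numbers, thresholds, and command names are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical MicroPython sketch (ESP32): map ADXL335 tilt to wheelchair commands.
# Pin assignments, thresholds, and the motor-driver interface are assumptions.
from machine import ADC, Pin
import time

adc_x = ADC(Pin(34))          # ADXL335 X-axis analog output
adc_y = ADC(Pin(35))          # ADXL335 Y-axis analog output
adc_x.atten(ADC.ATTN_11DB)    # allow roughly 0-3.3 V input range
adc_y.atten(ADC.ATTN_11DB)

CENTER = 2048                 # mid-scale of the 12-bit ADC when the hand is level
DEADBAND = 400                # ignore small tilts

def direction(x, y):
    """Translate raw tilt readings into a movement command."""
    if y > CENTER + DEADBAND:
        return "FORWARD"
    if y < CENTER - DEADBAND:
        return "BACKWARD"
    if x > CENTER + DEADBAND:
        return "RIGHT"
    if x < CENTER - DEADBAND:
        return "LEFT"
    return "STOP"

while True:
    cmd = direction(adc_x.read(), adc_y.read())
    print(cmd)                # in the real device this would drive the motor controller
    time.sleep_ms(100)
```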
Dyspepsia: incidence of a non-ulcer disease in a controlled trial of ranitidine in general practice. Patients who presented to their family doctors with previously uninvestigated dyspepsia of at least two weeks' duration were recruited into a placebo controlled trial of treatment with ranitidine (150 mg twice daily) for six weeks. All patients were examined by endoscopy before treatment, and for those with macroscopical abnormalities the examination was repeated after treatment. Of the 604 patients recruited, 559 had endoscopy, of whom 171 (30%) had no apparent abnormality. Of the 388 patients remaining, one third had two or more lesions. The high incidence of underlying disease was coupled with low accuracy in unaided clinical diagnosis. After endoscopy 496 patients with persistent symptoms (median duration six to eight weeks) were randomly allocated to treatment and then reviewed every two weeks. Complete remission of symptoms occurred in 76% of patients who were taking ranitidine and in 55% who were taking placebo (p less than 0.000004). Of those with non-ulcer dyspepsia, significantly more became symptom free taking ranitidine compared with placebo (p less than 0.002). Ranitidine healed most duodenal ulcers (80%) and gastric ulcers (90%) within four weeks. Tolerance to ranitidine was good, and the incidence of complaints was similar on placebo. |
Effects of neonatal 6-hydroxydopa treatment on monoamine content of rat brain and peripheral tissues. Treatment of rats from birth with 6-hydroxydopa (6-OHDOPA) produced marked alterations in norepinephrine (NE) levels in the brain and spinal cord, but relatively slight changes in NE content of peripheral tissues, when rats were sacrificed at 5 weeks of age. In the neocortex, hippocampus and spinal cord, 6-OHDOPA (60 μg/g i.p., 1 to 3 injections at 48 hr intervals from birth) resulted in a 20% to 85% reduction in NE. In the pons-medulla, midbrain and cerebellum, however, NE levels were elevated by 35% to 100%. Only slight alterations in NE were found in the hypothalamus, heart and spleen. Striatal levels of dopamine were unaltered by 6-OHDOPA at 5 weeks. Likewise, serotonin content of the neocortex, cerebellum, pons-medulla, midbrain and hypothalamus was unchanged at 5 weeks, although slight elevations were seen in the neocortex and cerebellum at 2 weeks. The above effects indicate that neonatal 6-OHDOPA produces a relatively selective long-term alteration of central stores of NE in the rat. |
PBA-MoS2 nanoboxes with enhanced peroxidase activity for constructing a colorimetric sensor array for reducing substances containing the catechol structure. Some nanoperoxidase-based colorimetric sensors have been used to detect only one molecule at a time; thus, the simultaneous detection of various molecules coexisting in the same system remains a great challenge. In this work, an excellent nanoperoxidase, nickel-cobalt Prussian blue analogue-MoS2 nanoboxes (PBA-MoS2), has been successfully prepared by the hydrothermal method and used to construct a colorimetric sensing array to determine a series of reductive substances containing the catechol structure (such as catechol, epinephrine hydrochloride, procyanidin, caffeic acid and dopamine hydrochloride). The excellent peroxidase-like activity of PBA-MoS2 is verified by the chromogenic reaction of 3,3',5,5'-tetramethylbenzidine (TMB) in the presence of H2O2 within 2 min. The catalytic mechanism of PBA-MoS2 is attributed to reactive species, including holes (h+) and superoxide radicals (•O2−), generated during catalysis. A fast, economical H2O2 colorimetric sensing array has been constructed based on the PBA-MoS2 nanoperoxidase. In the presence of different reducing substances, the catalytic oxidation of TMB is restricted to different extents, accompanied by blue colour changes of varying degrees. Therefore, by combining the PBA-MoS2 nanoperoxidase with H2O2 and TMB, five reductive substances can be quantitatively distinguished by linear discriminant analysis (LDA) at the 2 mM level. |
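To illustrate the discrimination step, the sketch below applies scikit-learn's linear discriminant analysis to a simulated set of colorimetric responses. The feature values are placeholders, not the reported sensing data; in the actual work the feature matrix would hold the colour-change readouts of the PBA-MoS2/TMB/H2O2 array.

```python
# Illustrative sketch: discriminating analytes from colorimetric responses with LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
analytes = ["catechol", "epinephrine", "procyanidin", "caffeic acid", "dopamine"]

# Simulated absorbance changes at three wavelengths, 12 replicates per analyte
X = np.vstack([rng.normal(loc=[i * 0.1 + 0.2, 0.5 - i * 0.05, 0.3 + i * 0.02],
                          scale=0.02, size=(12, 3)) for i in range(5)])
y = np.repeat(analytes, 12)

lda = LinearDiscriminantAnalysis(n_components=2)
scores = lda.fit_transform(X, y)        # 2-D canonical scores for plotting clusters
print("Training accuracy:", lda.score(X, y))
```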
Testing models for molecular gas formation in galaxies: hydrostatic pressure or gas and dust shielding? Stars in galaxies form in giant molecular clouds that coalesce when the atomic hydrogen is converted into molecules. There are currently two dominant models for what property of the galactic disk determines its molecular fraction: either hydrostatic pressure driven by the gravity of gas and stars, or a combination of gas column density and metallicity. To assess the validity of these models, we compare theoretical predictions to the observed atomic gas content of low-metallicity dwarf galaxies with high stellar densities. The extreme conditions found in these systems are optimal to distinguish the two models, which are otherwise degenerate in nearby spirals. Locally, on scales < 100 pc, we find that the state of the interstellar medium is mostly sensitive to the gas column density and metallicity rather than hydrostatic pressure. On larger scales, where the average stellar density is considerably lower, both pressure and shielding models reproduce the observations, even at low metallicity. We conclude that models based on gas and dust shielding more closely describe the process of molecular formation, especially at the high resolution that can be achieved in modern galaxy simulations or with future radio/millimeter arrays. INTRODUCTION Theoretical arguments based on gravitational instability as well as observations of molecular gas reveal that low-temperature (T ∼ 10 K) and high-density (n ∼ 40 cm^-3) giant molecular clouds (GMCs) are the natural sites where stars form. Although individual GMCs can be resolved only in the Milky Way or in a handful of local galaxies (see references therein), CO observations of several nearby spirals show that star formation mostly occurs in molecular regions (e.g., Wong & Blitz 2002). At the same time, neutral atomic hydrogen (H I) remains the primordial constituent of the molecular phase (H2), playing an essential role in the formation of new stars, as shown by the low star-formation rate (SFR) (e.g., Boselli & Gavazzi 2006) and low molecular content found in H I-poor galaxies. Therefore, the transition from H I to H2 is a key process that drives and regulates star formation in galaxies. The problem of molecule formation has been studied extensively in the literature, mainly through two different approaches. The first is to model the formation of molecular gas empirically, starting from CO and H I maps in nearby galaxies. Following this path, Wong & Blitz (2002, hereafter WB02), Blitz & Rosolowsky (2004, hereafter BR04), and Blitz & Rosolowsky (2006, hereafter BR06) found that the molecular (Σ_H2) to atomic (Σ_HI) surface density ratio in disks is a function solely of the hydrostatic midplane pressure P_m, which is driven by both the stellar and gas density: R_H2 ∼ P_m^0.92 (hereafter BR model). The second approach models the microphysics that regulates the formation of H2 and its photodissociation. A detailed description should take into account the balance of H2 formation onto dust grains and its dissociation by Lyman-Werner (LW) photons and cosmic rays, together with a complete network of chemical reactions that involves several molecules generally found in the interstellar medium (ISM). Due to this complexity, many studies address mainly the detailed physics of H2 in individual clouds, without considering molecular formation on galactic scales (e.g., the pioneering work by van Dishoeck & Black 1986). 
Elmegreen made an early attempt to produce a physically motivated prescription for molecule formation in galaxies by studying the H I to H2 transition in both self-gravitating and diffuse clouds as a function of the external ISM pressure P_e and radiation field intensity j. This numerical calculation shows that the molecular fraction f_H2 = Σ_H2/Σ_gas ∼ P_e^2.2 j^-1, with Σ_gas = Σ_H2 + Σ_HI. More recently, properties of the molecular ISM have been investigated with hydrodynamical simulations by Robertson & Kravtsov, who concluded that the H2 destruction by the interstellar radiation field drives the abundance of molecular hydrogen and empirical relations such as the R_H2/P_m correlation. Using numerical simulations which include self-consistent metal enrichment, Gnedin et al. and Gnedin & Kravtsov have also stressed the importance of metallicity in regulating the molecular fraction and therefore the SFR. Similarly, Pelupessy et al. developed a subgrid model to track in hydrodynamical simulations the formation of H2 on dust grains and its destruction by UV irradiation in the cold gas phase and collisions in the warm gas phase. A different approach based entirely on first principles has been proposed in a series of papers by Krumholz et al. (2008, hereafter KMT08), Krumholz et al. (2009, hereafter KMT09), and McKee & Krumholz (2010, hereafter MK10). Their model (hereafter KMT model) describes the atomic-to-molecular transition in galaxies using a physically motivated prescription for the dust shielding and self-shielding of molecular hydrogen. This work differs from previous analyses mainly because it provides an analytic expression for f_H2 as a function of the total gas column density and metallicity (Z). Therefore, the KMT model can be used to approximate the molecular gas on galactic scales without a full radiative transfer calculation. In this paper we shall consider the BR and the KMT models as examples of the two different approaches used to describe the H I to H2 transition in galaxies. Remarkably, both formalisms predict values for f_H2 which are roughly consistent with atomic and molecular observations in local disk galaxies (see Sect. 4.1.3). The reason is that the BR model becomes dependent on the gas column density alone if the stellar density is fixed to typical values found in nearby galaxies (see the discussion in Sect. 5). Despite the observed agreement, there are significant conceptual differences: the BR model is empirical and does not address the details of the ISM physics, while the KMT model approximates physically motivated prescriptions for the H2 formation as functions of observables. Hence, although in agreement for solar metallicity at resolutions above a few hundred parsecs, the two prescriptions may not be fully equivalent in different regimes. It is still an open question whether molecule formation is mainly driven by hydrostatic pressure or UV radiation shielding over different spatial scales and over a large range of metallicities. A solution to this problem has important implications in several contexts. From a theoretical point of view, cosmological simulations of galaxy formation that span a large dynamic range will benefit from a simple prescription for molecular gas formation in order to avoid computationally intense radiative transfer calculations. Similarly, semi-analytic models or post-processing of dark-matter-only simulations will greatly benefit from a simple formalism that describes the molecular content in galaxies. 
Observationally, the problem of understanding the gas molecular ratio has several connections with future radio or millimeter facilities (e.g., ALMA, the Atacama Large Millimeter Array, or SKA, the Square Kilometre Array). In fact, these interferometers will allow high-resolution mapping of atomic and molecular gas across a large interval of redshift and galactic locations over which the metallicity and the intensity of the local photodissociating UV radiation vary significantly. In this work, we explore the validity of the KMT and BR models in nearby dwarf starbursts both locally (< 100 pc) and on larger scales (∼ 1 kpc). Their low metallicity (down to a few hundredths of the solar value) combined with the relatively high stellar densities found in these systems offers an extreme environment in which the similarity between the two models breaks down. In fact, for a fixed gas column density, high stellar density corresponds to high pressure and therefore a high molecular fraction in the BR model. Conversely, for a fixed gas column density, low metallicity in the KMT model results in a low molecular fraction (see Figure 1). We emphasize that the BR model was not designed to describe the molecular fraction on scales smaller than several hundred parsecs; indeed, Blitz & Rosolowsky explicitly warn against applying their model on scales smaller than about twice the pressure scale height of a galaxy. However, a number of theoretical models have extrapolated the BR model into this regime. The analysis we present at small scales (< 100 pc, comparable to the resolutions of simulations in which the BR model has been used) is aimed at highlighting the issues that arise from such an extrapolation. Furthermore, a comparison of the pressure and shielding (KMT) models across a large range of physical scales offers additional insight into the physical processes responsible for the atomic-to-molecular transition. The paper is organized as follows: after a brief review of the two models in Section 2, we present two datasets collected from the literature in Section 3. The comparison between models and observations is presented in Section 4, while discussion and conclusions follow in Sections 5 and 6. Throughout this paper we assume a solar photospheric abundance 12+log(O/H) = 8.69 from Asplund et al. Also, we make use of dimensionless gas surface densities Σ'_gas = Σ_gas/(M_⊙ pc^-2), stellar densities ρ'_star = ρ_star/(M_⊙ pc^-3), gas velocity dispersions v'_gas = v_gas/(km s^-1), and metallicities Z' = Z/Z_⊙. MODELS Here we summarize the basic concepts of the BR and KMT models which are relevant for our discussion. The reader should refer to the original works (WB02, BR04, BR06, KMT08, KMT09, and MK10) for a complete description of the two formalisms. The BR model The ansatz at the basis of the BR model is that the molecular ratio R_H2 is entirely determined by the midplane hydrostatic pressure P_m according to the power-law relation R_H2 = (P_m/P_*)^α. The pressure can be evaluated with an approximate solution for the hydrostatic equilibrium of a two-component (gas and stars) disk (Elmegreen 1989), P_m ≈ (π/2) G Σ_gas [Σ_gas + (v_gas/v_star) Σ_star], where Σ_star is the stellar surface density, and v_star and v_gas are the stellar and gas velocity dispersions, respectively. For a virialized disk, this expression can be rewritten in terms of the stellar volume density ρ_star. [Figure 1 caption, in part: for the KMT model, a clumping factor c = 1 is assumed. Blue compact dwarfs (BCDs) at low metallicities and high stellar densities are the optimal systems to disentangle the two models, which are degenerate in massive spiral galaxies with solar metallicity (compare the two black lines).] 
[Figure 1 caption, continued (right panel): models for the H I surface density as a function of the total gas column density, for the same parameters adopted in the left panel. While the KMT model exhibits a well-defined saturation in the atomic hydrogen, in the BR model Σ_HI increases asymptotically with Σ_gas.] Under the assumption that the stellar density exceeds the gas density, the contribution of the gas self-gravity can be neglected and the pressure reduces to P_m ∝ Σ'_gas ρ'_star^0.5 v'_gas, where Σ'_gas = Σ_gas/(M_⊙ pc^-2), ρ'_star = ρ_star/(M_⊙ pc^-3) and v'_gas = v_gas/(km s^-1). In deriving this expression, constants in cgs units have been added to match the corresponding equation in BR06. Combining this pressure with the power-law ansatz, the molecular ratio in the BR model becomes R_H2 = (P_m/P_*)^α, where the best-fit values P_* = (4.3 ± 0.6) × 10^4 K cm^-3 and α = 0.92 have been derived from CO and H I observations of local spiral galaxies (BR06). The KMT model The core of the KMT model is a set of two coupled integro-differential equations for the radiative transfer of LW radiation and the balance between the H2 photodissociation and its formation onto dust grains. Neglecting the H2 destruction by cosmic rays, the combined transfer-dissociation equation is ∇ · F* = -n σ_d c E* - (f_HI/f_diss) n² R. On the left-hand side, F* is the photon number flux integrated over the LW band. The first term on the right-hand side accounts for dust absorption, where n is the number density of hydrogen atoms, σ_d the dust cross section per hydrogen nucleus to LW-band photons, and E* the photon number density integrated over the LW band. The second term accounts for absorption due to photodissociation, expressed in terms of the H2 formation rate in steady-state conditions. Here, f_HI is the hydrogen atomic fraction, while f_diss is the fraction of absorbed radiation that produces dissociation rather than a de-excitation into a newly bound state. Finally, R expresses the formation rate of molecular hydrogen on dust grains. This equation can be integrated for a layer of dust and a core of molecular gas mixed with dust. The solution specifies the transition between a fully atomic layer and a fully molecular core and hence describes the molecular fraction in the system as a function of the optical depth at which F* = 0. Solutions can be rewritten as a function of two dimensionless constants, τ_R and χ. Here, τ_R is the dust optical depth for a cloud of size R, hence it specifies the dimensions of the system. Conversely, χ is the ratio of the rate at which the LW radiation is absorbed by dust grains to the rate at which it is absorbed by molecular hydrogen. To introduce these equations, which govern the microphysics of the H2 formation, into a formalism that is applicable on galactic scales, one has to assume that a cold neutral medium (CNM) is in pressure equilibrium with a warm neutral medium (WNM). Assuming further that dust and metals in the gas component are proportional to the total metallicity (Z' = Z/Z_⊙, in solar units), these relations can be rewritten as a function of the observed metallicity and gas surface density. Using the improved formalism described in MK10, the analytic approximation for the molecular fraction becomes f_H2 ≈ 1 - (3/4) s/(1 + 0.25 s) for s < 2 (and f_H2 ≈ 0 for s ≥ 2), with s = ln(1 + 0.6χ + 0.01χ²)/(0.6 τ_c), χ = 0.76 (1 + 3.1 Z'^0.365), and τ_c = 0.066 c Σ'_gas Z', where the clumping factor c ≥ 1 is introduced to compensate for averaging observed gas surface densities over scales larger than the typical scale of the clumpy ISM. Primed surface densities are in units of M_⊙ pc^-2. 
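A minimal numerical sketch of the two prescriptions as summarized above is given below. The BR power law uses the best-fit P_* and α quoted in the text; the numerical prefactor of the pressure (taken here as roughly 272 K cm^-3 in the adopted units) and the MK10 coefficients follow this reading of BR06 and MK10 and should be checked against those papers. A fixed gas velocity dispersion is adopted for simplicity.

```python
# Sketch comparing the two molecular-fraction prescriptions discussed here.
# The pressure prefactor and the MK10 coefficients are assumptions to verify
# against BR06 and MK10; the exponent and P_* follow the values quoted above.
import numpy as np

def f_h2_br(sigma_gas, rho_star, v_gas=8.0,
            p_star=4.3e4, alpha=0.92, prefactor=272.0):
    """BR model: R_H2 = (P_m/P_*)^alpha, with the stellar-gravity-dominated
    midplane pressure P_m/k ~ prefactor * Sigma'_gas * v'_gas * sqrt(rho'_star)."""
    p_m = prefactor * sigma_gas * v_gas * np.sqrt(rho_star)   # K cm^-3
    r_h2 = (p_m / p_star) ** alpha
    return r_h2 / (1.0 + r_h2)

def f_h2_kmt(sigma_gas, z_prime, clumping=1.0):
    """KMT/MK10 analytic approximation for the molecular fraction."""
    chi = 0.76 * (1.0 + 3.1 * z_prime ** 0.365)
    tau_c = 0.066 * clumping * sigma_gas * z_prime
    s = np.log(1.0 + 0.6 * chi + 0.01 * chi ** 2) / (0.6 * tau_c)
    return np.where(s < 2.0, 1.0 - 0.75 * s / (1.0 + 0.25 * s), 0.0)

sigma = np.logspace(0, 3, 5)                  # Sigma_gas in Msun/pc^2
print(f_h2_br(sigma, rho_star=0.1))           # typical spiral stellar density
print(f_h2_kmt(sigma, z_prime=1.0))           # solar metallicity
print(f_h2_kmt(sigma, z_prime=0.1))           # metal-poor dwarf
```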
2.3. Differences between the two models In Figure 1 we compare the BR and KMT models to highlight some behaviours that are relevant to our analysis. In the left panel, we present molecular fractions computed using the KMT (solid lines) and BR (dashed lines) formalisms. Different lines reflect three choices of metallicity for the KMT model (from right to left, Z' = 0.1 in blue, Z' = 1 in black, and Z' = 10 in red) and three stellar densities for the BR model (from right to left, ρ'_star = 0.001 in blue, ρ'_star = 0.1 in black, and ρ'_star = 10 in red). For a typical spiral disk with stellar mass M_star = 10^10 M_⊙, size R = 10 kpc, and stellar scale height h = 300 pc, the stellar density is of the order of ρ'_star ∼ 0.1. Figure 1 shows that, at solar metallicity and for a typical gas surface density Σ'_gas ∼ 10-100, the two models predict similar molecular fractions. To break the degeneracy, we apply model predictions to observations of blue compact dwarf galaxies (BCDs) or low-metallicity dwarf irregulars (dIrrs), characterized by high stellar density (ρ'_star ∼ 1-100) and low metallicity (Z' = 0.3-0.03). In these environments, for a fixed gas surface density (excluding the limit f_H2 → 1), the two models predict very different molecular-to-atomic ratios. In the right panel of Figure 1, we show the predicted atomic gas surface density as a function of the total gas surface density, for the same parameters selected in the left panel. Besides the dependence on the metallicity and stellar density, this plot reveals a peculiar difference between the two models. The KMT formalism exhibits a well-defined saturation threshold in Σ_HI for a fixed value of metallicity. This corresponds to the maximum H I column density that is required to shield the molecular complex from the LW-band photons. All the atomic hydrogen that exceeds this saturation level is converted into molecular gas. Conversely, the BR model has no saturation in the atomic gas surface density, which instead increases slowly as the total gas surface density increases. THE DWARF GALAXY SAMPLES We study the behaviour of the KMT and the BR models using two data sets compiled from the literature. Specifically, we have selected low-metallicity compact dwarf galaxies with sufficient observations to constrain gas densities, stellar masses, and metal abundances for a comparison with models. The first sample comprises 16 BCDs and dIrrs, for which quantities integrated over the entire galaxy are available. These objects constitute a low-resolution sample with which we study the two models on galactic scales (> 1 kpc). For seven of these galaxies, we also have high-resolution H I maps and Hubble Space Telescope (HST) optical images. With these objects, we construct a high-resolution sample, useful to study the two formalisms at the scale of individual star cluster complexes (< 100 pc). Both models depend on the total gas surface density. In principle, we could use CO emission, available in the form of integrated fluxes from the literature, to quantify Σ_gas and the molecular content of individual galaxies. However, recent studies of molecular hydrogen traced through a gas-to-dust ratio (e.g., Imara & Blitz 2007) support the idea that CO is a poor tracer of molecular hydrogen in low-metallicity environments, mostly due to its inability to self-shield. Therefore, CO seems an unreliable H2 tracer for these metal-poor galaxies. 
For this unfortunate reason, we avoid any attempt to precisely quantify Σ_H2, and rather use the observed H I column density as a lower limit on the total gas column density. As discussed in Section 2.3, the KMT model has a well-defined saturation threshold for Σ_HI, and this threshold constitutes an observationally testable prediction. The BR model does not have such a threshold and, at a given ρ_star, is in principle capable of producing arbitrarily high values of Σ_HI provided that the total gas density Σ_gas is sufficiently high. However, the extremely weak variation of Σ_HI with Σ_gas at large total gas column density (Σ_HI ∝ Σ_gas^0.08) means that the amount of total gas required to produce a given Σ_HI may be implausibly large. This effect allows us to check the BR model as well using only H I (Section 4), albeit not as rigorously as we can test the KMT model. We also check the robustness of our results in Appendix C, where we impose an upper limit on Σ_H2 either from SFRs, assuming a depletion time t_depl ∼ 2 Gyr for molecular gas, or from CO fluxes, using a conservative CO-to-H2 conversion. In the next sections, we discuss in detail the procedures adopted to derive gas surface densities and stellar densities for the two samples. The reader not interested in these rather technical aspects can find the analysis, discussion and conclusions starting from Section 4. 3.1. High-resolution sample Seven BCDs are found in the literature with high-resolution H I maps and with sufficient ancillary HST data to infer stellar masses on scales < 100 pc, typical of individual GMCs. A detailed description of how we compute Σ_HI and ρ_star in individual galaxies is provided in Appendix A, together with a list of relevant references. Here, we only summarize the general procedures we use. Stellar masses of individual clusters are in a few cases taken directly from the literature. Otherwise, we infer stellar masses from integrated light by comparing two methods. The first is based on age estimates, whenever those are available in the literature. In this case, we infer stellar masses from observed absolute magnitudes by comparing the K or V band luminosity with predictions at the given age by Starburst99 (SB99). This is done assuming an instantaneous burst, similar metallicity, and a Salpeter initial mass function (IMF) with lower and upper mass limits at 1 and 100 M_⊙, respectively. The second method is based on optical and near-infrared colors: in this case, we use mass-to-light (M/L) ratios inferred from colors (Bell & de Jong 2001), and the stellar masses are derived directly from observed luminosities. Usually, the two methods give similar results to within a factor of ∼ 2. Once the masses are known, we obtain stellar densities with sizes taken from the literature. If not available, we measure them by fitting a two-dimensional elliptical Gaussian to the clusters in HST images. For the closest objects, in order to avoid resolving individual stars, we fit binned surface brightness profiles with a one-dimensional Gaussian. Stellar masses are probably the most uncertain quantities in our study. In fact, our first method suffers from the rapid changes in the broadband output of a starburst at young ages (4-10 Myr), due to the onset of red supergiants, whose amplitude and time of onset depend on metallicity. Moreover, the initial mass function of the SB99 models and the lower-mass cutoff may introduce additional uncertainty, up to a factor of 3, considering a full range of systematic uncertainties. 
Instead, sources of error in the second method are the strong contribution of nebular continuum and line emission to the broadband colors of young starbursts. This can be a particularly severe problem in the K band because of recombination lines and free-free emission which in some cases constitutes as much as 50% of the broadband K magnitude (see ;Hunt et al., 2003. Despite this rather large uncertainty on the stellar densities, the results presented in the next sections can be considered rather robust. In fact, a variation in the density larger than the uncertainty would be required to significantly alter our conclusions. A more extensive discussion on this issue is presented in Section 4.1.2. To complete the data set, we add to the gas and stellar densities values for the metallicity, distances, and SFR indicators as collected from the literature. Individual references are provided in Appendix A and Tables 1 and 2. We derive integrated SFRs using H, 60m and radio free-free fluxes as different tracers. The final rates are given assuming the empirical calibrations by Kennicutt for the H, by Hopkins et al. for the 60m, and by Hunt et al. (2005a) for the radio free-free emission. We note that this last tracer is optimum in the absence of non-thermal emission, as typical in young starbursts. Since SFRs are used to set an upper limit on the total gas density assuming a given depletion time (Appendix C), we choose the maximum value whenever more than one indicator is found for a single galaxy. Total SFR surface densities are then calculated adopting the galaxy sizes from NED 4. A summary of the collected and derived data is presented in Table 1. The stellar densities in the high-resolution sample are generally quite high, and associated with massive compact star clusters, some of which are in the Super Star Cluster (SSC) category (e.g., O';;). Despite their extreme properties, none of the BCDs in the highresolution sample exceeds the maximum stellar surface 4 NASA/IPAC Extragalactic Database. density limit found by Hopkins et al., and most are 5−10 times below this limit. Interestingly, the stellar densities here are uncorrelated with metallicity, implying that some other parameter must play the main role in defining the properties of massive star clusters. Low-resolution sample We have collected a second sample from the literature by requiring only that quantities integrated over the entire galaxy be available. Due to the lower spatial resolution, this data set is suitable to study the KMT and BR models on larger scales (> 1 kpc). Our search yielded a total of 16 low-metallicity star-forming galaxies; among these are the 7 objects in the high-resolution sample. We have compiled gas and stellar densities, distances, and metallicity for these 16 objects, most of which are classified as BCDs, but some are dIrrs (Sm, Im), since they are more diffuse, larger in size, and more luminous (massive) than typical BCDs. Stellar masses are computed from Spitzer/IRAC fluxes following the formulation of Lee et al., as we summarize in Appendix B, together with a comment on the dominant sources of uncertainty. Stellar densities are then derived assuming spherical symmetry and the sizes inferred from the stellar component, as measured from IRAC images. The resolution of these images (∼1. 2) is a factor of 10 lower than the worst HST resolution, so that the compact regions are unresolved. 
This implies that the stellar densities derived for this sample are much lower than the values quoted for individual star cluster complexes. Moreover, for non-spherical (spheroidal) BCDs these densities correspond formally to lower limits; the volume of a prolate spheroid is smaller than the volume of a sphere by a factor (b/a) 0.5, with a and b the semi-major and semi-minor axis, respectively. In our sample, the mean axis-ratio is a/b ∼ 1.5 with a 0.5 standard deviation. This discrepancy is small enough to justify our assumption of spherical symmetry. In any case, the possible volume overestimate could partially compensate the potential overestimate of stellar density because of nebular emission contamination or free-free emission (see the discussion in Appendix B). For most objects, integrated H I fluxes are retrieved from HyperLeda 5 (). We then convert integrated fluxes into mean column densities using optical radii from NED, and assuming that the gas extends twice as far as the stellar component (see ;van a;). For I Zw 18, the integrated H I flux is not available in HyperLeda and we consider the flux published in de Vaucouleurs et al.. Similarly, for Mrk 71 we estimate the total atomic gas from available interferometric observations averaged over the entire galaxy. 12 CO(1 − 0) fluxes are available for most of the galaxies here considered (see Table 2). For three galaxies, the most metal-poor objects in our sample (SBS 0335−052 E, I Zw 18, and Mrk 71), we find only CO upper limits in the literature. Because we use CO fluxes only to set upper limits on H2 (Appendix C), we choose one of the largest CO-to-H 2 conversion measured to date (). It is worth noting that for extremely metal poor galaxies (e.g. 1 Zw 18) the adopted conversion factor may still underestimate the H 2 content. To make our limits even more conservative, we compare these values with H2 inferred from SFRs and we choose for each galaxy the maximum of the two. As with the high-resolution sample, we derive SFRs from H, 60m, or free-free emission (see Table 2 for references). Again, SFR densities are computed assuming the optical size as given by NED. Finally, we collect information on the metallicity and distances for each object. A summary of the data derived for the low-resolution sample is given in Table 2. 4. ANALYSIS 5 http://leda.univ-lyon1.fr 4.1. Testing models on small scales (< 100 pc) With the aim of testing how the BR and KMT models perform at high stellar density and low metallicity, we first compare both formalisms with the observed H I surface densities and stellar densities in the high-resolution data set. Since the average resolution of this sample is below 100 pc, this part of the analysis focuses mainly on the molecular fraction in individual GMCs and associations rather than on larger ISM spatial scales. As previously mentioned, the BR formalism was not developed to describe the molecular fraction in such small regions (BR06). Hence the results presented in this section are intended to assess possible pitfalls of extrapolating the pressure model to small scales. Furthermore, a comparison of the performances of the BR and KMT models below 100 pc provides insight into what quantities are relevant to the production of molecules on different size scales. As summarized in Section 2.2, the KMT formalism describes the molecular fraction as a function of the total gas column density and metallicity. 
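As a brief aside on the data reduction for the low-resolution sample described in Section 3.2 above: the conversion from an integrated H I flux to a mean column density is a short calculation. The sketch below assumes the standard single-dish relation M_HI ≈ 2.36 × 10^5 D² S_HI (D in Mpc, S_HI in Jy km s^-1) and spreads the mass over a disk extending to twice the optical radius, as in the text; the example flux, distance, and radius are hypothetical.

```python
import math

def mean_nhi(flux_jy_kms, dist_mpc, r_opt_kpc, gas_to_opt=2.0):
    """Mean HI column density from an integrated 21 cm flux.
    Assumes M_HI = 2.36e5 * D^2 * S [Msun] and an HI disk of radius gas_to_opt * r_opt."""
    m_hi = 2.36e5 * dist_mpc**2 * flux_jy_kms           # Msun
    r_pc = gas_to_opt * r_opt_kpc * 1.0e3               # pc
    sigma = m_hi / (math.pi * r_pc**2)                  # Msun / pc^2
    return sigma, sigma * 1.25e20                       # (Msun/pc^2, H atoms/cm^2)

sigma_hi, n_hi = mean_nhi(flux_jy_kms=5.0, dist_mpc=15.0, r_opt_kpc=1.0)  # hypothetical galaxy
print(f"Sigma_HI = {sigma_hi:.1f} Msun/pc^2  ->  N_HI = {n_hi:.2e} cm^-2")
```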
In the KMT model, a free parameter is the clumping factor c that maps the observed column density Σ_gas onto the relevant quantity in the model, i.e. the column density of the cold phase in individual clouds, Σ_comp = c Σ_gas. With resolutions coarser than ∼100 pc, beam smearing dilutes the density peaks and one must adopt c > 1 in order to recover the intrinsic gas surface density. However, given the high resolution of the HST images, we set c = 1 so that the KMT model has no free parameters. Conversely, as reviewed in Section 2.1, the BR model describes R_H2 as a function of the total column density and the stellar volume density. An additional parameter in this case is the gas velocity dispersion, set to v_gas = 8 km s^-1. Apart from a similar dependence on Σ_gas, a direct comparison between models and observations is not straightforward. We start our analysis by confronting each model with observations, and then attempt a comparison of both models and data. Here, and for the rest of this analysis, we correct the gas column density for helium with a standard coefficient of 1.36. We do not include corrections for projection effects because in dwarf galaxies a unique inclination angle is not well defined for a warped (non-planar) H I distribution or in a triaxial system. As discussed in Appendix A, interferometric H I observations do not achieve the resolution required to match the HST observations. A possible solution is to degrade the HST images to match the atomic hydrogen maps. However, since the exact value for c would be unknown at the resultant resolution (≳ 100 pc), we perform our analysis on scales < 100 pc, compatible with the HST images and where c → 1. For this reason, we express the observed H I column densities as lower limits on the local Σ_HI. This is because coarser spatial resolutions most likely average fluxes over larger areas, thus lowering the inferred peak column density. Indeed, whenever H I observations at different resolutions are compared, better resolution is associated with higher column densities. SBS 0335−052 E is an example: this BCD has Σ_HI = 7.4 × 10^20 cm^-2 in a 20.5″ × 15.0″ beam and 2 × 10^21 cm^-2 in a 3.4″ beam. The inferred H I column density is even higher, 7 × 10^21 cm^-2, within the smaller 2″ beam of HST/GHRS observations of Lyα absorption. In any case, the lower limits illustrated in Figure 2 prevent us from concluding that model and observations are in complete agreement, although this is strongly suggested. Adopting a conservative approach, this comparison shows that observations do not immediately rule out the KMT model on scales of < 100 pc; 5/7 of the galaxies here considered are consistent with the predicted curves. Although not crucial for the current and remaining analysis, the quoted metallicity may in some cases overestimate the dust and metal content which contributes to the H_2 formation. In fact, in the KMT model it is the CNM that plays the relevant role in regulating f_H2, and the assumption that the nebular metallicity reflects the metal abundances in the cold ISM may not hold in all cases. Specifically, the optically-inferred metallicity used here is dominated by the ionized phase. Studies of the metal enrichment of the neutral gas in metal-poor dwarfs show that the neutral phase can sometimes be less metal-enriched than the ionized medium (e.g. Lecavelier des Etangs et al.). Furthermore, galaxies with the lowest nebular metallicities have similar neutral gas abundances, while dwarfs with higher ionized nebular metallicities can have up to ∼7 times lower neutral ISM abundances.
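The resolution effect quoted above for SBS 0335−052 E (a peak column rising from ∼7 × 10^20 cm^-2 in a ∼20″ beam to ∼7 × 10^21 cm^-2 in a 2″ aperture) is easy to reproduce with a toy model: a compact Gaussian column-density peak averaged over a much larger Gaussian beam is diluted roughly in proportion to the beam area. The clump size and peak below are hypothetical and are not fitted to that galaxy.

```python
# Toy beam-dilution model: a Gaussian HI clump observed with Gaussian beams.
# For 2-D Gaussians the beam-averaged peak is N_peak * w_src^2 / (w_src^2 + w_beam^2),
# where w is the FWHM (the same ratio holds for the variances).
def beam_averaged_peak(n_peak, fwhm_src_pc, fwhm_beam_pc):
    s2, b2 = fwhm_src_pc**2, fwhm_beam_pc**2
    return n_peak * s2 / (s2 + b2)

n_peak = 7.0e21        # hypothetical intrinsic peak column [cm^-2]
src = 150.0            # hypothetical clump FWHM [pc]
for beam in (100.0, 500.0, 3000.0):   # illustrative beam FWHMs [pc]
    print(f"beam {beam:6.0f} pc -> observed peak {beam_averaged_peak(n_peak, src, beam):.1e} cm^-2")
```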
A recent interpretation of this offset between the neutral-phase and ionized-phase abundances is that, although mixing is effective in diffusing new metals from the ionized regions, the enrichment is modest because of the larger volume involved. This justifies the use of a single metallicity for multiple GMCs since, in the worst case, we would overestimate the local metal content by some factor. Data points, especially the ones at higher metallicity, would be offset to lower values and the parameter space common to data and model would increase, mitigating the discrepancy found at log Z ∼ −0.5. The BR model predictions below 100 pc Turning our attention to the BR model, we test its predictions (lines) against the observed Σ_HI (crosses) in Figure 3. Stellar densities are available for individual GMCs, measured from high-resolution HST images. Therefore, while multiple observations overlap in metallicity in Figure 2, here we show distinct data points for the same galaxy. Once again, different curves are for a selection of total gas surface densities. According to the BR model, despite the low metallicity, a high fraction of hydrogen is expected to be molecular because of the enhanced stellar density. However, Figure 3 illustrates that observations discourage the use of pressure models on scales < 100 pc. Even under the very conservative hypothesis that stellar densities are overestimated by a factor of 2 − 3 and that the total column densities can reach very high values (e.g. the long-dashed curve at Σ_gas = 10^4 M_⊙ pc^-2), observations mostly lie in the region not allowed by the extrapolation of the BR model. Compatibility between the extrapolation of the BR model and a good fraction of the data would require Σ_gas ≳ 10^10 M_⊙ pc^-2. A value that large would correspond to A_V > 10^5 with a dust-to-gas ratio that is 1% of the Milky Way value, and is thus ruled out by the fact that the star clusters are observable. Moreover, such a large Σ_gas would make the gas mass in the observed region larger than either the baryonic or the dark matter mass of the entire dwarf galaxy. Clearly, even though we cannot directly detect the molecular component of the gas, we can rule out the presence of such a large amount of gas on other grounds, and we can therefore conclude that the extrapolated BR model is incompatible with the observations. We give a more rigorous estimate of the maximum plausible value of Σ_gas in Appendix C. An additional tunable parameter in the BR model is the gas velocity dispersion, and a substantial change in v_gas can affect its predictions. In this paper, following BR04 and BR06, we adopt v_gas = 8 km s^-1. Since the pressure varies linearly with the velocity dispersion, we can solve for the value of v_gas required for the BR model to match the observations. We find that v_gas ≲ 2 km s^-1 is needed in order to have one half of the data points consistent with the model; this is in contrast with recent H I observations that show typical velocity dispersions v_gas > 5 km s^-1 (and in many cases v_gas > 10 km s^-1) in all the surveyed galaxies. It is worth mentioning that the use of the observed v_gas is not always appropriate, although unavoidable; for example, v_gas depends on the thermal velocity, and the gas in a cold medium has a lower velocity dispersion than that inferred from a multiphase ISM. We conclude that the disagreement found in Figure 3 cannot be explained with uncertainties on the velocity dispersion. Finally, we should assess whether the high pressure predicted by the model can be attributed to the use of hydrostatic equilibrium in a disk rather than in a sphere, which would be more appropriate for our systems.
Intuitively, this is not the case since the central pressure in a sphere of gas and stars cannot be lower than the midplane pressure of the disk. In fact, the central point in a sphere has to support the weight of the entire system, while each point in the midplane of a disk has to support only the pressure from the components along the vertical direction. This argument is substantiated by a quantitative analysis. A solution of the hydrostatic equilibrium equation for a sphere of gas and stars shows that the central gas pressure is enhanced by a quantity that depends on (v star /v gas ) 2. Therefore, to minimize an increase in the pressure due to the stellar component, the condition (v star /v gas ) ∼ 1 has to be satisfied. However, an increase in v gas is reflected by an increase in the gas pressure itself. In other words, the central pressure in a gas sphere with cold kinematics (low v gas ) receives a significant contribution from the stellar potential (v star /v gas > 1), while a gas sphere with hot kinematics (high v gas ) has an intrinsically higher gas pressure (v star /v gas < 1). A direct comparison between the two models A different way to visualize both the models and the observations for the high-resolution sample is shown in Figure 4, where we plot predictions for HI of the KMT model (solid lines) and the BR model (dashed lines) as a function of the total gas column density. Different curves in the KMT model are for the maximum, central, and minimum metallicity observed in the sample (from the bottom to the top, Z = 0.32 red, Z = 0.12 black, and Z = 0.03 blue). The curves for the BR model correspond to the maximum, central, and minimum stellar density (from the bottom to the top, log star = 3.86 red, log star = 1.38 black, and log star = −0.52 blue). As in the previous figures, observed lower limits on HI are superimposed. The yellow-shaded region in Figure 4 indicates the maximum parameter space allowed by the observations. This plot summarizes the two main results presented in the previous paragraphs. Observations of HI reveal that an extrapolation of the BR model below scales of 100 pc results in a significant overestimation of the molecular fraction. In fact, for the observed HI, exceedingly high total gas surface densities ( gas > 10 4 ) are required by the BR model to reproduce observations. As shown quantitatively in Appendix C, such high values appear to be unrealistic. Conversely, observations seem to suggest a good agreement between the KMT model and data. Also, comparing Figure 4 with Figure 1, it appears that the different behaviour of the two models for a similar gas column density is related to which quantity regulates the molecular fraction at the second order, subordinately to the gas column density. In fact, the discrepancy with observations and the BR model is associated with the high values of stellar densities, while the consistency between the observed H I column densities and the KMT model is fostered by the low value of metallicity that raises the atomic hydrogen saturation limit. 4.2. Testing models on galactic scales (> 1 kpc) In the second part of this analysis, we compare predictions from models and observations on larger scales (> 1 kpc), by considering spatially integrated quantities for a larger sample of BCDs. Before we start, it is worth mentioning that the condition star 20 (see BR04) which ensures the validity of equation holds also for the low-resolution data set. 
A comparison between models and global data In Figure 5 we present a comparison between observed H I surface densities and models, as previously done in Figure 4 for the high-resolution sample. Solid lines represent the KMT model, for the maximum (Z = 0.58 red), central (Z = 0.20 black), and minimum (Z = 0.03 blue) observed metallicity. Dashed lines are for the BR model for the maximum (log star = −1.45 red), central (log star = −2.00 black), and minimum (log star = −3.18 blue) stellar density. Lower limits on the total gas column density are computed for gas = HI (see Appendix C for a version of this figure that includes upper limits). A comparison of Figures 5 and 4 reveals that the BR model predicts for the galaxy as a whole a much higher H I surface density, compared with the predictions for regions smaller than 100 pc. By going from the highresolution sample to the low-resolution one, we lose the ability to analyse the local structure of the ISM, and are limited to average quantities which dilute the density contrasts in both gas and stars over many GMC complexes and aggregations. As a result, HI and star are lowered by one and two orders of magnitude respectively. This behaviour is reflected in the BR model as a decrease in the pressure by an order of magnitude, which now guarantees an overall agreement between data and the model. Conversely, the beam-smearing is accounted for in the KMT model by the clumping factor, here assumed to be c ∼ 5. Hence, despite the different spatial scales, the KMT formalism is able to account for the mean observed HI. A test with individual galaxies Although indicative of a general trend, Figure 5 does not allow a comparison of individual galaxies with models. This is particularly relevant for the KMT formalism that predicts a saturation in the H I column density as a function of the metallicity. To gain additional insight, we present in Figure 6 the predicted H I surface density ( HI,mod ) against the observed one ( HI,obs ) for individual galaxies. For the KMT model, we compute HI,mod using observed metallicities (red crosses). Since the total gas column density is unknown, we compute for each galaxy a range of HI,mod (shown with solid red lines) using upper and lower limits on the observed total gas surface density. Lower limits are derived assuming gas = HI,obs, while upper limits arise self-consistently from the H I saturation column density, naturally provided by the model. For the BR model, we instead compute only lower limits on HI,mod with stellar densities (blue squares), assuming gas = HI,obs (upper limits derived for HI,mod are presented in Appendix C). From Figure 6, the asymptotic behaviour of HI in the BR model prevents a tighter constraint on gas. In general, there is good agreement between observations and model predictions for all the galaxies, despite the low mean metallicity of this sample. Conversely, the KMT model allows a narrower interval of H I surface density because of the well defined atomic hydrogen saturation. Therefore Figure 6 provides a more severe test of the KMT formalism that nevertheless reproduces correctly most of the observations. Although for 4/16 galaxies the KMT predictions are inconsistent with the data, there seems to be no peculiar reason for the failure of the model for these objects (see Section 4.2.3). For many objects the agreement between models and observations occurs close to the lower limits on HI,mod, i.e. when gas = HI. 
We stress that this is not an obvious outcome of the assumption made on the total gas surface density, since high stellar densities in the BR model or high metallicity in the KMT model would imply a high molecular fraction irrespectively of HI,obs. If we change perspective for a moment and assume that models are a reliable description of the molecular hydrogen content, the observed trend suggests that low-metallicity galaxies are indeed H I rich. Perhaps, this is not surprising due to the reduced dust content at low metallicity and the consequent reduction of the shielding of molecular gas from the LW-band photons. Systematic effects for the KMT model Finally, in Figure 7 we explore whether the KMT model exhibits systematic effects within the range of values allowed by the low-resolution sample. For this purpose, we present the ratio HI,obs / HI,mod for the KMT model (red crosses) as a function of the observed metallicity (top panel) and stellar density (bottom panel). As in Figure 6, we display with solid lines the full interval of HI,mod. The lack of any evident trend either with metallicity or stellar density suggests that the KMT model is free from systematic effects. In particular, because the four deviant galaxies are not found at systematically high or low metallicity, the difficulties in assessing metallicity for the cold-phase gas could be the cause of such deviations. This hypothesis was touched upon in Section 4.1.1, where we comment on the possibility that metals in the cold gas can in some cases be several factors lower than that observed in the ionized gas. An example of the importance of a correct metallicity determination is provided in Figure 8, where we repeat the comparison between the KMT model and observations in individual galaxies after we arbitrarily redefine Z ≡ Z /3. As expected, for lower metallicity, the H I saturation moves to higher atomic surface densities and the KMT model better reproduces the observations. DISCUSSION The analysis presented in Sect. 4 indicates that extrapolations of the BR model based on pressure overpredict the molecular fraction on small spatial scales (< 100 pc), while this formalism recovers the observed values on larger ones. Such a failure of the BR model on small scales was predicted by BR06, who point out that the model does not properly account for the effects of gas selfgravity or local variations in the UV radiation field. Conversely, the KMT model based on gas and dust shielding is consistent with most of the observations both locally and on galactic scales, although it is more prone to observational uncertainties in the ISM structure (through the clumping factor) and the cold-phase metallicity. In this section, after a few comments on these points, we will focus on an additional result which emerges from our comparison: the molecular fraction in galaxies depends on the gas column density and metallicity, while it does not respond to local variations in pressure from enhancements in the stellar density. 5.1. The effect of self-gravity at small scales The problem of gas-self gravity is that it introduces an additional contribution to the force balance on scales typical for GMCs. In this case, pressure equilibrium with the external ISM is no longer a requirement for local stability and the empirical power law in equation may break down. 
However, since self-gravity enhances the internal pressure of a cloud compared to the ambient pressure that enters the equation, one would expect locally even higher molecular fractions than those predicted by an extrapolation of the BR model. This goes in the opposite direction to our results, since the observed molecular fraction is already overestimated. Hence, a different explanation must be invoked for the data-model discrepancy in Figure 3. The effect of the radiation field A reason for the high Σ_HI observed on small spatial scales is related to the intensity of the UV radiation field. In fact, regions which actively form stars probably have an enhanced UV radiation field compared to the mean galactic value. Since the BR formalism does not explicitly contain a dependence on the UV radiation field intensity, it is reasonable to expect discrepancies with observations. In contrast, the KMT model attempts to explicitly account for local variations in j, by considering how such variations affect conditions in the atomic ISM. This makes it more flexible than the BR model in scaling to environments where conditions vary greatly from those averaged over the entire galaxy. We stress here that the BR model is not completely independent of the radiation field, but simply does not account for a variation in j. Assuming the scaling relation f_H2 ∼ P^2.2 j^-1 (Elmegreen 1993), the BR formalism is commonly considered valid only when variations in pressure are much greater than those in j, allowing us to neglect the latter. Being empirically based, the BR model contains information on a mean j, common for nearby spirals. Therefore this model describes the molecular content as if it were regulated only by pressure. Hence, it can be applied to describe the molecular fraction only on scales large enough that variations in the local UV intensity are averaged over many complexes and, eventually, j approaches a mean macroscopic value similar to that found in the galaxies used to fit the BR model. This idea is quantitatively supported by recent numerical simulations. The molecular fraction of the galaxies simulated by Robertson & Kravtsov is consistent with the observed R_H2 ∼ P^0.92 only when the effects of the UV radiation field are taken into account. In fact, when neglecting the radiation field, a much shallower dependence R_H2 ∼ P^0.4 is found. In their discussion, the observed power-law index α ∼ 0.9 results from the combined effects of the hydrostatic pressure and the radiation field. Starting from f_H2 ∝ P^2.2 j^-1 (Elmegreen 1993), assuming a Kennicutt-Schmidt law in which j ∼ Σ_SFR ∼ Σ_gas^n, and under the hypothesis that the stellar surface density is related to the gas surface density via a star formation efficiency, Σ_star ∼ Σ_gas^ε, the relation R_H2 = P^α requires that (Robertson & Kravtsov 2008) α = [2.2 (1 + ε/2) − n] / (1 + ε/2). In the simulated galaxies, different star formation laws and efficiencies (mostly dependent on the galaxy mass) conspire to reproduce indices close to the observed α ∼ 0.92, in support of the idea that a mean value for j is implicitly included in the empirical fit at the basis of the BR model. 5.3. The effect of stellar density So far, we have discussed why the BR model extrapolated to small spatial scales is unable to predict the observed H I surface density, owing to its reliance on a fixed "typical" j.
Galaxies in our sample are selected to be metal-poor, but some of them have a higher SFR (median ∼ 0.6 M_⊙ yr^-1) and higher specific star formation rate (SSFR; median ∼ 10^-9 yr^-1) than observed in nearby spirals (< 10^-10 yr^-1). If the UV intensity were the only quantity responsible for the disagreement between observations and the BR model below 100 pc, we would also expect some discrepancies on larger scales for galaxies with enhanced star formation. However, such discrepancies are not observed. As discussed, on larger scales both the KMT and the BR models are able to reproduce observations, despite the different assumptions behind their predictions. We argue that there is an additional reason for the observed discrepancy between data and the extrapolation of the BR model below 100 pc. While both models depend to first order on the gas column density, our analysis favours a model in which the local molecular fraction in the ISM depends to second order on metallicity rather than on density, as in the BR model. To illustrate the arguments in support of this hypothesis, in Figure 9 we show the atomic gas surface density predicted by the BR model (black lines) as a function of the stellar density, with different curves for different values of the total gas surface density. Recalling that for the BR model P ∼ Σ_gas v_gas √ρ_star, we see from Figure 9 that the stellar density does not provide a major contribution to the variation in pressure when ρ_star ≲ 0.5 M_⊙ pc^-3, typical for large galactic regions (open triangle and square). In this regime, the predicted Σ_HI from the BR model (black lines) is mostly dependent on variations in the total gas column density alone. In fact, for a constant velocity dispersion, the pressure model becomes equivalent to models based on gas shielding. This reconciles on theoretical grounds the equivalence observed in local spirals between the BR and KMT (blue horizontal dashed line) models. At local scales, i.e. moving towards higher stellar density, ρ_star provides an important contribution to the total pressure and the extrapolation of the BR model predicts high molecular gas fractions, as seen in Figure 9. The expected Σ_HI drops accordingly, following a trend that is in disagreement with observations (open diamond). Conversely, the KMT model is insensitive to ρ_star and accounts only for variations of the gas column density and metallicity. In this case, the predicted Σ_HI (red horizontal dotted line) moves towards higher values, in agreement with observations. From this behaviour, we conclude that the stellar density is not a relevant quantity in determining the local molecular fraction. Furthermore, because the KMT formalism recovers the high observed H I column densities at low metallicity, dust and metals have to be important in shaping the molecular content of the ISM. This is further supported by Gnedin & Kravtsov, who show with simulations how the observed star formation rate, which mostly reflects the molecular gas fraction, depends on the metallicity. It follows that the observed dependence on the midplane pressure is only an empirical manifestation of the physics which actually regulates the molecular fraction, i.e. the effects of the UV radiation field and of the gas and dust shielding. Fixed stellar density for molecular transitions A final consideration regards the observational evidence that the transition from atomic to molecular gas occurs at a fixed stellar surface density with a small variance among different galaxies (BR04).
This result favors hydrostatic pressure models because if the atomic-tomolecular transition were independent of stellar surface density, there would be much more scatter in the stellar surface density than is observed. However, this empirical relation can also be qualitatively explained within a formalism based on UV radiation shielding. In fact, if the bulk of the star formation takes place in molecule dominated regions, the build-up of the stellar disks eventually will follow the molecular gas distribution, either directly or via a star formation efficiency (see Robertson & Kravtsov 2008;Gnedin & Kravtsov 2010). Since the transition from molecular to atomic hydrogen occurs at a somewhat well-defined gas column density (KMT09; WB02; ), it is plausible to expect a constant surface stellar density at the transition radius. While this picture would not apply to a scenario in which galaxies grow via subsequent (dry) mergers, recent hydrodynamical simulations (e.g. ) support a model in which stars in disks form from in-situ star formation from smoothly accreted cold or shock-heated gas. SUMMARY AND CONCLUSION With the aim of understanding whether the principal factor that regulates the formation of molecular gas in galaxies is the midplane hydrostatic pressure or shielding from UV radiation by gas and dust, we compared a pressure model (BR model;Wong & Blitz 2002;Blitz & Rosolowsky 2004 and a model based on UV photodissociation (KMT model; Krumholz et al., 2009McKee & Krumholz 2010) against observations of atomic hydrogen and stellar density in nearby metal-poor dwarf galaxies. Due to their low metallicity and high stellar densities, these galaxies are suitable to disentangle the two models, otherwise degenerate in local spirals because of their proportionality on the gas column density. Our principal findings can be summarized as it follows. -On spatial scales below 100 pc, we find that an extrapolation of the BR model (formally applicable above ∼ 400 pc) significantly underpredicts the observed atomic gas column densities. Conversely, observations do not disfavour predictions from the KMT model, which correctly reproduces the high H I gas surface densities commonly found at low metallicities. -Over larger spatial scales, with the observed and predicted H I surface density integrated over the entire galaxy, we find that both models are able to reproduce observations. -Combining our results with numerical simulations of the molecular formation in the galaxies ISM (Elmegreen 1989;Robertson & Kravtsov 2008) which indicates how the UV radiation field (j) plays an essential role in shaping the molecular fraction, we infer that the discrepancy between the BR model and observations on local scales is due partially to the model's implicit reliance on an average j, which breaks down at small scales. In contrast, the KMT model properly handles this effect. -Since on scales ∼ 1 kpc the BR model agrees with observations despite the low metallicity and high specific SFR in our sample, we infer that the discrepancy between pressure models and observations below 100 pc also arises from their dependence on stellar density. An increase in stellar density corresponds to an increase in the hydrostatic pressure which should, in the BR model, reduce the atomic gas fraction. No such trend is seen in the observations. 
-If we drop the dependence on the stellar density, the pressure model reduces to a function of the total gas column density and becomes equivalent to the KMT model, for a fixed velocity dispersion and metallicity. This provides a theoretical explanation for the observed agreement of the two models in local spirals. In conclusion, our analysis supports the idea that the local molecular fraction is determined by the amount of dust and gas which can shield H 2 from the UV radiation in the Lyman-Werner band. Pressure models are only an empirical manifestation of the ISM properties, with the stellar density not directly related to the H 2 formation. Although they are useful tools to characterize the molecular fraction on large scales, obviating the problem of determining the clumpy structure of the ISM or the metallicity in the cold gas as required by models based on shielding from UV radiation, pressure models should be applied carefully in environments that differ from the ones used in their derivation. These limitations become relevant in simulations and semi-analytic models, especially to describe high-redshift galaxies. Furthermore, a correct understanding of the physical processes in the ISM is crucial for the interpretation of observations, an aspect that will become particularly relevant once upcoming facilities such as ALMA will produce high-resolution maps of the ISM at high redshifts. Combining our analysis with both theoretical and observational efforts aimed at the description of the ISM characteristics and the SFR in galaxies, what emerges is a picture in which macroscopic (hence on galactic scales) properties are regulated by microphysical processes. Specifically, the physics that controls the atomic to molecular transition regulates (and is regulated by) the SFR, which sets the UV radiation field intensity. The ongoing star formation is then responsible for increasing the ISM metallicity and building new stars, reducing and polluting at the same time the primordial gas content. Without considering violent processes more common in the early universe or in clusters, this chain of events can be responsible for a self-regulated gas consumption and the formation of stellar populations. Future and ongoing surveys of galaxies with low-metallicity, active star formation and high gas fraction (e.g., LITTLE THINGS; Hunter et al.) will soon provide multifrequency observations suitable to test in more detail the progress that has been made on a theoretical basis to understand the process of star formation in galaxies. We thank Xavier Prochaska, Chris McKee, and Erik Rosolowsky for valuable comments on this manuscript and Robert da Silva for helpful discussions. We thank the referee, Leo Blitz, for his suggestions that helped to improve this work. We acknowledge support from the National Science Foundation through grant AST-0807739 (MRK), from NASA through the Spitzer Space Telescope Theoretical Research Program, provided by a contract issued by the Jet Propulsion Laboratory (MRK), and from the Alfred P. Sloan Foundation (MRK). MF is supported by NSF grant (AST-0709235). We acknowledge the usage of the HyperLeda database (http://leda.univ-lyon1.fr). This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. 
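Appendix A, which follows, derives cluster stellar masses and surface densities from photometry galaxy by galaxy. The arithmetic behind those estimates is compact enough to sketch here, using the numbers quoted below for the SE cluster of I Zw 18; the adopted solar absolute K magnitude (M_K,⊙ ≈ 3.3) is an assumption of this sketch.

```python
import math

M_K_SUN = 3.3                  # assumed solar absolute K magnitude

def mass_from_abs_mag(M_K, ml_k):
    """Stellar mass from an absolute K magnitude and an adopted (M/L)_K."""
    L_K = 10.0**(-0.4 * (M_K - M_K_SUN))          # Lsun
    return ml_k * L_K, L_K

def sigma_from_surface_brightness(mu_K, ml_k):
    """Stellar surface density [Msun/pc^2] from a K surface brightness [mag/arcsec^2]."""
    lum_density = 10.0**(0.4 * (M_K_SUN + 21.572 - mu_K))   # Lsun/pc^2
    return ml_k * lum_density

# SE cluster of I Zw 18, with the values quoted in Appendix A:
m_star, L_K = mass_from_abs_mag(M_K=-12.4, ml_k=0.12)
sigma_star = m_star / (math.pi * 56.0**2)                   # 56 pc radius from the Gaussian fit
print(f"L_K = {L_K:.2e} Lsun, M_star = {m_star:.2e} Msun, Sigma_star = {sigma_star:.1f} Msun/pc^2")
print(f"Surface-brightness cross-check: {sigma_from_surface_brightness(19.2, 0.12):.1f} Msun/pc^2")
```

Both routes land within a few per cent of each other here, which is the kind of internal consistency Appendix A uses to validate the adopted masses.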
I Zw 18 The main body of I Zw 18 consists of two main clusters, the north-west (NW) and the south-east (SE) components with an angular separation of ∼ 6. A third system, known as "Zwicky's flare" or "C" component, lies about 22 to the northwest of the NW cluster. H I maps are available from van Zee et al. (1998a), together with the HST/WFPC2 F814W image (0. 045 resolution). The H I peaks close to the fainter SE cluster, rather than to the NW where the stellar density is higher. Ly observations with HST/GHRS by Kunth et al. (2 2 beam) are also available for the NW cloud and have a better resolution than the VLA map. At the assumed distance of 13 Mpc (Izotov & Thuan 2004b), 1 = 63 pc. Stellar masses of the two massive clusters in I Zw 18 are not published. Hence, multiband integrated photometry of the two star clusters in I Zw 18 is taken from Hunt et al., together with cluster ages as modeled by them. Sizes of the clusters are measured from fitting Gaussians to the surface brightness profiles (not previously published). The SE cluster has an age of 10 Myr and, near where the distribution peaks (see van a), M K = −12.4 (in a 2 aperture). The lowest-metallicity SB99 models (Z = 0.001) give M K = −12.4, which implies a stellar mass of 2.3 10 5 M ⊙. With a K-band luminosity of 1.91 10 6 L ⊙, this would give a (M/L) K = 0.12, as inferred from the SB99 models. The Bell & de Jong predictions give (M/L) K = 0.17, on the basis of V − K and (M/L) K = 0.09, from V − J. Hence, the SB99 value of 0.12 is roughly consistent. We therefore adopt the value of 2.3 10 5 M ⊙ for the stellar mass of the SE cluster. We can check the inferred stellar mass by inspecting the K-band surface brightness at the SE peak (see Figure 5 in ), K = 19.2 mag arcsec −2. This gives K = 185.3 L ⊙ pc −2 and, assuming (M/L) K = 0.12 from SB99, we would have star = 22.2 M ⊙ pc −2. This is in good agreement with the value of star = 23.3 M ⊙ pc −2, inferred from the absolute luminosity (see above) and the measured radius of 56 pc. The NW cluster has an age of 3 Myr and M K = −13.25 (in a 2 aperture). The lowest-metallicity SB99 models give M K = −16.2, which is rather uncertain because of the rapid increase in luminosity at about 3 Myr when the most massive stars start evolving off of the main sequence, a phase which is not correctly described in models (). In fact, the observed V − H color of 0.29 is predicted by SB99 to occur at ∼10 Myr, not at 3 Myr, which is the best-fit photometric age. In any case, the inferred mass from this model is 6.6 10 values are quite low, roughly 6 times smaller than those predicted by Bell & de Jong from the observed colors of I Zw 18. We therefore use the latter M/L ratio. With V − K = 0.38, V − H = 0.29, and V − I = −0.04, we estimate (M/L) K = 0.11, 0.10, and 0.09, respectively. Therefore, adopting 0.10, we derive a stellar mass of 4.2 10 5 M ⊙. Repeating the calculations for V band, we find (M/L) V = 0.039, 0.034, and 0.033, respectively. Adopting 0.033, we would derive a similar stellar mass of 3.9 10 5 M ⊙. Again, the K-band surface brightness of the NW cluster ( K = 18.3 mag arcsec −2, 0. 5 resolution) gives a similar result. We find K = 536 L ⊙ pc −2, and, with (M/L) K = 0.10, becomes 53.6 M pc −2. With a cluster radius of 56 pc (0. 89), this would correspond to a cluster mass of 5.3 10 5 M ⊙, about 1.3 times that inferred from the lower-resolution photometry. Hence, to obviate problems of resolution (1 radius aperture or 63 pc, vs. 
a 56 pc radius measured from the HST image), we adopt the mean of these two measurements for the stellar mass of the NW cluster, namely 4.7 10 5 M ⊙. SBS 0335−052 E SBS 0335−052 E hosts six SSCs, with most of the star formation activity centered on the two brightest ones to the southeast. The H I distribution is published in Ekta et al., and the HST/ACS F555M image (0. 050 resolution), was published by Reines et al.. The H I map is of relatively low resolution (∼ 3. 4) and does not resolve the six SSCs individually since they are distributed (end-to-end) over roughly 2. 6. Ly observations with HST/GHRS by (2 2 beam) are also available. In our analysis, we use this column density, being at better resolution than the one derived from H I emission map. At the assumed distance of 53.7 Mpc, 1 = 260.3 pc. Stellar masses for individual clusters have been derived by Reines et al. by fitting the optical and UV spectral energy distributions, and we adopt these masses here. Comparison with masses inferred from K-band is unfruitful since nebular and ionized gas contamination make this estimate highly uncertain. We measure the size of the clusters by fitting two-dimensional Gaussians. They are unresolved at the HST/ACS resolution of 0. 050, but since they have the same size to within 13%, we assume the average radius of 18.2 pc. Therefore, the inferred mass densities result in lower limits. MRK 71 Mrk 71 (NGC 2363) is a complex of H II regions in a larger irregular galaxy, NGC 2366. There are two main knots of star-formation activity (see ), denoted A and B. A low-resolution H I map (12. 511. 5) is available from Thuan et al., but at this resolution we are unable to distinguish the two main clusters which are 5 apart. At the assumed distance of 3.44 Mpc (derived from Cepheids, ), 1 = 16.7 pc. Stellar masses of the two starburst knots in Mrk 71 are not published. Hence, V -band photometry was taken from Drissen et al., and I-band from, together with cluster ages as modeled by Drissen et al.. As for I Zw 18, sizes of the clusters were measured from fitting 1D Gaussians to the surface brightness profiles. The ). The latter value from B − V is a factor of 10 higher than the former from V − K, and highly inconsistent with the SB99 value of 0.01. The I-band photometry of knot A from gives a similar inconsistency. With I = 17.97 mag, and a corresponding absolute magnitude of 9.71, we would infer a stellar luminosity of 3.43 10 5 L ⊙. With Bell & de Jong (M/L) I values of 0.025 (from V − K) and 0.15 (from B − V ), we would derive stellar masses of 8.6 10 3 and 5.1 10 4 M ⊙, respectively. Since three values are roughly consistent (∼ 10 4 M ⊙ ), we adopt 1.2 10 4 M ⊙ as the mass for knot A. Knot B is slightly older than knot A (4 Myr) consistent with its Wolf-Rayet stars and strong stellar winds as inferred from P Cygni-like profiles in the UV (). It is also slightly fainter with V = 18.05 (after correcting for A V = 0.3 mag), corresponding to an absolute magnitude M V = −9.63. SB99 models (at 4 Myr) predict M V = −15.3, which would give a stellar mass of 5.4 10 3 M ⊙, and an implied (M/L) V ratio of 0.005. Again, we derive M/L ratios as a function of color from Bell & de Jong, and obtain (M/L) V = 0.06 from stellar V − K = 0.67 and (M/L) V = 0.15, from stellar B − V = −0.07 (). With a stellar V -band luminosity of 6.1 10 5 L ⊙, we would infer a stellar mass of 3.6 10 4 M ⊙ with (M/L) V = 0.06, and 9.2 10 4 M ⊙ with (M/L) V = 0.15. 
Both masses are larger than those inferred for knot A, inconsistently with the observation of Drissen et al. that knot B contains only ∼6% of the ionizing photons necessary to power the entire H II region. Nevertheless, knot A is supposedly enshrouded in dust (), so the situation is unclear. The I-band photometry of knot B from is not edifying. With I = 18.92 mag, and a corresponding absolute magnitude of −8.76, we would infer a stellar luminosity of 1.43 10 5 L ⊙. With Bell & de Jong (M/L) I values of 0.10 (from V − K) and 0.20 (from B − V ), we would derive stellar masses of 1.4 10 4 M ⊙ and 2.9 10 4 M ⊙, respectively. These values are all greater than the mass inferred for knot A, even though knot B is reputed to be intrinsically ∼16 times fainter (see above, and ). For this reason, we adopt the SB99 value of 5.4 10 3 M ⊙ as the mass for knot B. NGC 1140 NGC 1140 is an amorphous, irregular galaxy, and, like NGC 5253, has been reclassified over the course of time (b). Optically, it is dominated by a supergiant H II region encompassing ∼ 10 4 OB stars, far exceeding the stellar content of the giant H II region, 30 Doradus, in the LMC (b). The H II region is powered by several SSCs (b;de ), situated in a vertical strip, about 10 in length. High-resolution HST/WFPC2 images exist for this galaxy (b), and H I maps have been published by Hunter et al. (1994a). The resolution of the H I map (16 22 ) is insufficient to distinguish the clusters; given this resolution, the peak HI is certainly underestimated. Stellar masses have been determined for the SSCs in NGC 1140 by de Grijs et al., using a minimization technique which simultaneously estimates stellar ages and masses, metallicities and extinction using broadband fluxes. the broadband fluxes from 2 to 5 m Smith & Hancock 2009). For this reason K, , and magnitudes can potentially be poor indicators of stellar mass. An extreme case is SBS 0335−052 E, one of the lowest metallicity objects in the sample, where 50% of the K band emission is gas, and 13% is dust. Only 37% of the 2 m emission is stellar (). At 3.8 m (ground-based L band), the situation is even worse, with stars comprising only 6% of the emission. Hence, to mitigate the potential overestimate of the stellar mass from contaminated red colors, the minimum (bluest) colors were used to infer the mass-to-light ratio (because of its B − K dependence), and the K-band luminosity. A comparison between the mass-metallicity relation obtained with our inferred stellar masses and the sample in Lee et al. suggests that the use of the minimum stellar masses (bluest colors) is strongly advocated (with an error of a factor 2-3). CONSTRAINTS ON THE TOTAL GAS COLUMN DENSITY The analysis presented in the main text is entirely based on lower limits on the total gas surface density, because of the impossibility to reliably establish the H 2 abundances from the available CO observations. However, we can impose conservative upper limits on gas using indirect ways to quantify H2. For the high-resolution sample, we derive gas from the molecular gas column density as inferred by means of the SFRs, assuming a depletion time t depl ∼ 2 Gyr ) for molecular gas. Formally, this should correspond to an upper-limit on H2, mainly because we use SFRs integrated on scales which are much greater than the individual associations we are studying. However, since the H I surface density is not precisely known, we cannot regard gas as real upper limits, although we argue that they likely are. 
Using these limits, we can explicitly show that the disagreement between the extrapolation of the BR model and the observations presented in the Section 4.1.3 cannot simply be explained with high gas column densities. This is shown in Figure 10, where we present once again both the models and data, adding conservative estimates for gas, connected with a dotted line. The fact that the derived gas ∼ 2 − 3 10 3 are not enough to account for the observed discrepancy confirm the results inferred using lower limits only (Figure 4). Similarly, for the low resolution sample, we set conservative upper limits on gas assuming a CO-to-H 2 conversion factor X = 11 10 21 cm −2 K −1 km −1 s (). Being derived from one the highest X published to date, the inferred H2 are likely to be truly upper limits on the intrinsic H 2. However, whenever these are smaller than H2 as obtained from the SFRs combined with a depletion time t depl ∼ 2 Gyr, we assume conservatively the latter values. This may be warranted since some galaxies in our sample are at even lower metallicity than the one assumed in CO-to-H 2 conversion factor used here. These upper limits on H2 are shown in Figure 11 and can be used in turn to set upper limits on HI,mod for the BR model (see Figure 6 and Section 4.2.2), as in Figure 12. Because these upper limits on HI,mod exceed significantly the model expectations at low metallicity, we infer that some caution is advisable when extrapolating local empirical star formation laws to high redshift, in dwarf galaxies or in the outskirt of spiral galaxies (see Fumagalli & Gavazzi 2008). |
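Both of the indirect caps on the molecular column used in this appendix are short unit conversions. The sketch below implements them with the depletion time and CO-to-H_2 factor quoted in the text; the example SFR surface density and CO intensity are hypothetical, and the conversion 1 M_⊙ pc^-2 ≈ 6.25 × 10^19 H_2 molecules cm^-2 (two hydrogen atoms per molecule) is standard.

```python
# Upper limits on the molecular gas surface density [Msun/pc^2], Appendix C style.
T_DEPL_GYR = 2.0            # molecular gas depletion time adopted in the text
X_CO = 11.0e21              # cm^-2 (K km/s)^-1, the conservative conversion used in the text
H2_PER_MSUN_PC2 = 6.25e19   # H2 molecules per cm^2 for 1 Msun/pc^2 (2 m_H per molecule)

def sigma_h2_from_sfr(sigma_sfr_msun_yr_kpc2, t_depl_gyr=T_DEPL_GYR):
    """Sigma_H2 <= Sigma_SFR * t_depl, converted from Msun/yr/kpc^2 to Msun/pc^2."""
    return sigma_sfr_msun_yr_kpc2 * t_depl_gyr * 1.0e9 / 1.0e6

def sigma_h2_from_co(i_co_k_kms, x_co=X_CO):
    """Sigma_H2 <= X_CO * I_CO, converted from molecules/cm^2 to Msun/pc^2."""
    return x_co * i_co_k_kms / H2_PER_MSUN_PC2

print(sigma_h2_from_sfr(0.05))   # hypothetical Sigma_SFR = 0.05 Msun/yr/kpc^2  -> 100 Msun/pc^2
print(sigma_h2_from_co(0.5))     # hypothetical I_CO = 0.5 K km/s               -> ~88 Msun/pc^2
```

Whichever of the two limits is larger would then be adopted for each galaxy, mirroring the conservative choice described above.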
The prognostic significance of the alterations of pulmonary hemodynamics in patients with pulmonary arterial hypertension: a meta-regression analysis of randomized controlled trials Background Hemodynamic assessment in patients with pulmonary arterial hypertension (PAH) is essential for risk stratification and pharmacological management. However, the prognostic value of hemodynamic changes after treatment is less well established. Objectives We investigated the prognostic impact of the changes in hemodynamic indices, including mean pulmonary artery pressure (mPAP), pulmonary vascular resistance (PVR), right atrial pressure (RAP), and cardiac index (CI). We conducted this systematic review with meta-regression analysis of existing clinical trials. Methods We searched and identified all relevant randomized controlled trials from multiple databases. An analogous R² index was used to quantify the proportion of variance explained by each predictor in its association with PAH patients' prognosis. A total of 21 trials and 3306 individuals were enrolled. Results The changes in mPAP, PVR, RAP, and CI were all significantly associated with the change in 6MWD (∆6MWD). The change in mPAP had the highest explanatory power for ∆6MWD (R²_analog = 0.740). Additionally, the changes in mPAP, PVR, and CI were independently predictive of adverse clinical events. The change in mPAP had the highest explanatory power for the clinical events (R²_analog = 0.911). Furthermore, the change in PVR had the highest explanatory power for total mortality of PAH patients (R²_analog = 0.612). Conclusion Hemodynamic changes after treatment, including mPAP, PVR, CI, and RAP, were significantly associated with adverse clinical events or mortality in treated PAH patients. It is recommended that further studies be conducted to evaluate the changes in hemodynamic indices to guide drug titration. Systematic review registration PROSPERO CRD42019125157 Supplementary Information The online version contains supplementary material available at 10.1186/s13643-021-01816-0. Introduction Although there have been significant advances in pharmacological therapies in the past decade, pulmonary arterial hypertension (PAH) remains a progressive and fatal disease. The 2015 ESC/ERS Pulmonary Hypertension guidelines have strongly recommended comprehensive screening protocols for high-risk populations and subsequent early intervention. In addition, upfront combination therapy and aggressive escalation of medical therapy have also been suggested in the treatment of PAH patients. Given the variable long-term survival rates between patients, risk stratification has been endorsed in the clinical management of PAH. While the European guideline has proposed a risk prediction algorithm comprising 9 measures, Benza et al. also computed a risk score calculator for 1-year survival in 504 individuals from the Registry to Evaluate Early and Long-term PAH Disease Management (REVEAL Registry). However, routine clinical application has been limited by the complexity of these predictive algorithms.
Hoeper et al., therefore, validated a simplified risk stratification strategy for mortality, including World Health Organization functional class (WHO Fc), 6-min walking distance (6WMD), brain natriuretic peptide or its N-terminal fragment, right atrial pressure (RAP), and cardiac index (CI) in a cohort of 1588 PAH patients. Despite the existing prediction models, the prognostic significance of the changes of these parameters during treatment for patients with PAH has not been systematically examined. The pathophysiology of PAH is characterized by increased pulmonary vascular resistance (PVR) at the beginning, followed by elevated pulmonary arterial pressure (PAP), decreased cardiac output, and increased RAP. The published data have supported that the hemodynamic indices, including PVR, cardiac output, and RAP were predictive of clinical outcomes among PAH patients. However, it remains debated whether the changes in the hemodynamic parameters are predictive of clinical outcomes. Although the non-invasive variables have been widely recommended to assess the risks in PAH, the mismatch between pulmonary resistance and RV contractility remains the main cause of mortality. We, therefore, conducted a systemic review to investigate the prognostic values of the changes in hemodynamic indices in PAH. Methods The protocol for this review has been registered in PROS-PERO (registration number CRD42019125157), and the study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines version 2020 (Supplementary Table 1). Search strategy All relevant studies from EMBASE, MEDLINE, Cochrane Library, and PubMed through August 2021, were searched and identified using the following keywords and the Medical Subject Headings (MeSH) terms: Pulmonary hypertension, Pulmonary Arterial Hypertension, PH, and PAH. No language restrictions were applied on any of these searches. We limited our searches to randomized controlled trials (RCTs) that compare either the effects of any of the 9 drug classes (ERA, PDE5, PDGFR, Prostacyclin, Prostacyclin plus ERA, Rho-kinase, TXSI/TXRA, sGC) with placebo or the effects between 2 drug classes. In the process of formulating the search strategy, the research team not only revised and discussed the preliminary search results to find a consensus but also consulted the librarians of the research institution to refer to their suggestions. Given the study is the secondary analysis of the published data, the review was waived by the ethical committee of Taipei Veterans General Hospital. Inclusion and exclusion criteria Studies were eligible only if they reported any or all of the following outcomes: hospitalization for PAH, death due to PAH, total mortality, all adverse events of hospitalization and all-cause death, and exercise capacity (as measured by a 6-min walk distance, 6MWD). Additional studies were retrieved by manually checking the reference lists of reviews, meta-analyses, and original publications. Finally, we excluded RCT studies investigating pediatric PAH (age < 12) and those that did not report sequential measurements of cardiopulmonary hemodynamics, including mPAP, PVR, RAP, CI, or pulmonary artery wedge pressure (PAWP). For studies with more than one publication, only the studies with the largest number of participants in the trial were retained. The search of eligible studies was done separately by 2 investigators (W. Y. Yeh and W.M. Huang). 
The consensus was then reached through discussion and the arbitration of the principal investigator (H.M. Cheng). Of 603 articles identified by the initial search, 39 were retrieved for more detailed evaluation, and 21 trials were included in the study. The selection process of the literature search is shown in Fig. 1. The International prospective register of systematic reviews (PROSPERO) registration number of this study is CRD42019125157 (URL: https://www.crd.york.ac.uk/PROSPERO/). Data extraction To ensure unit consistency, standard deviations were all converted to standard errors (by dividing by the square root of the sample size), and 95% confidence intervals were converted to standard errors (by dividing the width of the interval by 3.92). If the actual data were presented only graphically, we used WebPlotDigitizer version 4.1 to interpolate the approximate data. In addition, for studies reporting PVR in Wood units, we multiplied this value by 80 to obtain the PVR in dyn·s·cm^-5. Data were extracted from papers by 2 investigators (W. Y. Yeh and C. J. Huang) independently, and differences in data extraction were resolved through discussions with the third investigator (H.M. Cheng). Data synthesis and statistical analysis Weighted meta-regression analysis was performed to examine the relationship between the hemodynamic changes before/after the interventions and the outcome variables included in this study, using Comprehensive Meta-Analysis version 3.3.070. For this analysis, the achieved differences between the changes in 6MWD (∆6MWD), and the event numbers of hospitalization and death, in the active treatment and control groups were considered. For the assessment of the regression coefficient of each hemodynamic parameter with ∆6MWD and clinical outcomes, the changes in mPAP (∆mPAP), PVR (∆PVR), RAP (∆RAP), and CI (∆CI) were entered into the meta-regression model separately, with adjustment for age, sex, and baseline WHO functional class. The prognostic values of ∆mPAP, ∆PVR, ∆RAP, ∆CI, and ∆6MWD were evaluated by using the univariate meta-regression model. For all meta-regression analyses, a random-effects model was used, and the analogous R-square value (R²_analog) was adopted to quantify the proportion of variance explained by the entered covariate(s) in the meta-regression. Tau² and the restricted maximum likelihood (REML) method were used to quantify the residual heterogeneity not explained by the covariate(s). If there were missing values of the hemodynamic parameters or outcomes in the enrolled studies, the missing data were excluded from the meta-regression analysis. Assessment of risk of bias and level of evidence The quality of the included randomized controlled trials was assessed using the Cochrane Risk of Bias tool. The following 7 main domains are used in the assessment: bias arising from the randomization; bias due to an inappropriate allocation process; bias due to blinding of participants or of outcome data assessment; bias due to missing outcome data; bias in measurement of the outcome; bias in selection of the reported result; and other bias that may significantly affect the interpretation of the results. Bias is assessed as a judgment of high, low, or unclear. Trials with a high or unclear risk of bias were considered to be at high risk of bias.
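The data handling described in the two subsections above reduces to a handful of conversions, plus the definition of the analogous R². A minimal sketch (the input numbers are hypothetical; R²_analog is computed as the proportional reduction in between-study variance, which is how this statistic is usually defined):

```python
def sd_to_se(sd, n):
    """Standard error from a standard deviation and a sample size."""
    return sd / n**0.5

def ci95_to_se(lower, upper):
    """Standard error from a 95% confidence interval (interval width / 3.92)."""
    return (upper - lower) / 3.92

def wood_to_dyn(pvr_wood_units):
    """Convert PVR from Wood units to dyn·s·cm^-5."""
    return pvr_wood_units * 80.0

def r2_analog(tau2_total, tau2_residual):
    """Proportion of between-study variance explained by the covariate(s)."""
    return 1.0 - tau2_residual / tau2_total

print(sd_to_se(12.0, 100))        # hypothetical: SD 12, n = 100      -> SE 1.2
print(ci95_to_se(-4.0, 10.0))     # hypothetical 95% CI (-4, 10)      -> SE ~3.6
print(wood_to_dyn(8.5))           # 8.5 Wood units                    -> 680 dyn·s·cm^-5
print(r2_analog(0.040, 0.010))    # hypothetical tau^2 values         -> 0.75
```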
The quality of the overall evidence and the strength of recommendation for ∆6MWD (Y1), all adverse events (Y2), total mortality (Y3), hospitalization for PAH (Y4), and death due to PAH (Y5) were further determined according to the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) approach using GRADEpro software (https://gradepro.org/), covering the dimensions of indirectness, imprecision, publication bias, and the certainty/importance of the overall evidence, with the indicator of effect modified to the adjusted regression coefficient of the meta-regression (Supplemental Table S1). Each item of the Cochrane Risk of Bias tool and the GRADE tool was also independently assessed by 2 investigators (S. H. Sung and W. Y. Yeh), and disparities arising during this assessment process were resolved by the principal investigator (H.M. Cheng). Characteristics of the included studies A total of 21 RCTs enrolling 3306 PAH patients, published between 1996 and 2013, were included in this analysis. Supplemental Table S2 shows the characteristics of each RCT; the mean age of the study populations ranged from 29 to 56 years. Of all the participants, 1097 received a placebo, and 2166 were treated with active drugs. The changes in hemodynamic indices, including ∆mPAP, ∆PVR, ∆RAP, and ∆CI, as well as ∆6MWD and the adverse events of mortality, death due to PAH, and hospitalization for PAH, are summarized in Supplemental Table S3. Meta-regression of the hemodynamic parameters on clinical outcomes The meta-regression analysis demonstrated that all of the changes in hemodynamic indices, including ∆mPAP, ∆PVR, ∆RAP, and ∆CI, correlated with ∆6MWD after accounting for age, sex, and baseline functional class (Table 1, Fig. 2). Increases in mPAP, PVR, and RAP were independently associated with less improvement in 6MWD (β = −7.1067, −0.1046, and −10.6923, respectively), and an increasing change in CI was independently related to greater improvement in 6MWD (β = 42.4492). Concerning the clinical outcomes, after accounting for age, sex, and baseline functional class, increased ∆mPAP and ∆PVR and decreased ∆CI were associated with more adverse clinical events (β = 0.1794, 0.0031, and −1.7544, respectively) (Table 1, Fig. 2). ∆mPAP was the variable with the highest explanatory power for ∆6MWD (R² analog = 0.74), and it also had the highest explanatory power for incident adverse events (R² analog = 0.91) among the 4 hemodynamic indices. On the other hand, increased ∆PVR and decreased ∆CI were related to higher mortality rates (β = 0.0022 and −1.2136, respectively). In addition, none of the changes in hemodynamic indices was significantly related to hospitalization for PAH or death due to PAH in the multivariate meta-regression analysis. Moreover, ∆6MWD did not correlate with any of the adverse events either. 
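For illustration, the unit harmonization and weighting steps described in the Methods can be expressed in a short computational sketch. This is our own simplified reconstruction, not the authors' analysis code: the actual analysis used Comprehensive Meta-Analysis with a random-effects (REML) model, whereas the regression below uses plain inverse-variance (fixed-effect style) weights, and the trial values shown are hypothetical.

```python
import numpy as np

def se_from_sd(sd, n):
    # Standard error from a standard deviation and sample size.
    return sd / np.sqrt(n)

def se_from_ci(lower, upper):
    # Standard error from a 95% confidence interval: width / 3.92.
    return (upper - lower) / 3.92

def pvr_wood_to_dyn(pvr_wood):
    # Wood units -> dyn*s*cm^-5 (multiply by 80).
    return pvr_wood * 80.0

def weighted_slope(x, y, se_y):
    # Inverse-variance weighted regression of trial effect y on covariate x.
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(se_y, float) ** 2
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta  # [intercept, slope]

# Hypothetical per-trial data: between-group change in mPAP (x) versus
# between-group change in 6MWD (y) with its standard error.
x = [-4.0, -2.5, -1.0, 0.5]
y = [45.0, 30.0, 12.0, -5.0]
se = [8.0, 6.0, 10.0, 7.0]
print(weighted_slope(x, y, se))
```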
Risks of bias among the included studies The risk of bias among the 21 RCTs was evaluated in 7 domains (Supplemental Figure S1). The aspects with a high or uncertain risk included allocation concealment (18 articles had no detailed description); blinding of outcome assessment (15 articles had no clear description, although 6MWD, hospitalizations, and mortality are objectively evaluated indicators that are less susceptible to subjective assessment); and selective reporting (13 articles provided neither a study protocol against which reported and unreported findings could be checked nor a specific statement that all expected outcomes were reported, so there was insufficient information to determine whether some study results may have been selectively reported). Because the main purpose of this study was to investigate the prognostic value of changes in pulmonary hemodynamics, the study results should be less susceptible to the aforementioned risks of bias. Therefore, all 21 studies were included in the subsequent meta-regression analysis. GRADE assessment In addition, the assessment of the quality of the body of evidence is shown in Supplemental Table S4. In the GRADE Evidence Profile, the level of evidence for ∆6MWD was downgraded because of an indirectness concern (surrogate endpoint). Hospitalization for PAH was also downgraded because of imprecision concerns (few events; n = 4). Considering the overall quality of the evidence, together with the advantages and disadvantages of the clinical interventions for patients with PAH, the consumption of medical resources, and patient values and preferences, we considered ∆6MWD as "Important" and rated the other outcomes as "Critically important." Discussion In this meta-regression analysis of 21 RCTs and 3306 participants, we demonstrated that all the changes in hemodynamic indices, including ∆mPAP, ∆PVR, ∆RAP, and ∆CI, were associated with ∆6MWD, independent of age, sex, and baseline functional class. However, only ∆mPAP, ∆PVR, and ∆CI, but not ∆RAP or ∆6MWD, were related to clinical adverse events after accounting for age, sex, and functional class. In addition, only ∆PVR and ∆CI were independently predictive of total mortality. None of the changes in hemodynamic indices was correlated with PAH hospitalization or death due to PAH. The study results may support the use of changes in the hemodynamic parameters, including ∆mPAP, for risk assessment in the management of PAH. Hemodynamic indices and the prognosis Although elevated mPAP is essential in the diagnosis of PAH and small increases in mPAP are independently associated with increased mortality in patients with borderline pulmonary hypertension, several studies did not demonstrate any association between mPAP and survival in PAH patients. Benza et al. have shown that RAP, but not mPAP, was associated with 1-year survival in the REVEAL registry of 2716 subjects. In contrast, among the hemodynamic indices, CI was suggested to be predictive of clinical outcomes in European cohorts of PAH. Since the difference between mPAP and PAWP is the product of cardiac output and PVR, the increase in PVR and the decrease in cardiac output that accompany the progression of PAH could partially cancel out their effects on mPAP. Conversely, an increase in mPAP could result from an increase in cardiac output due to improving PAH rather than from a rise in PVR due to deteriorating PAH. 
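Stated explicitly, the relation invoked here is the standard hemodynamic identity (pressures in mmHg, cardiac output CO in L/min, PVR in Wood units):

mPAP − PAWP = CO × PVR, so that mPAP = PAWP + CO × PVR.

A fall in cardiac output can therefore offset a rise in PVR and leave mPAP almost unchanged, and vice versa.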
Therefore, the determinants of mPAP could vary at different stages of right ventricular failure, and the prognostic value of mPAP in PAH patients is expected to be low. However, D'Alonzo et al. showed that a higher mPAP at the diagnosis of PAH conferred a greater risk of early death in a cohort of 194 patients. Moreover, Sitbon et al. identified a paradoxical correlation between low baseline mPAP and mortality in 178 patients with PAH in WHO functional class III or IV. In patients with severe PAH and right ventricular failure, a low mPAP may correlate better with low cardiac output than with low PVR, indicating worse outcomes. On the other hand, few studies have investigated the association between changes in hemodynamic indices and outcomes in patients with PAH. Weatherald et al. presented a PAH cohort of 981 patients who had undergone repeated hemodynamic surveys at a mean interval of 4.6 months. The results suggested that ∆mPAP and ∆PVR were significantly associated with death or lung transplantation in the whole study population, while ∆CI was only predictive of clinical outcomes in the subgroup of severe PAH patients. In the present study, we have shown that ∆mPAP, ∆PVR, ∆RAP, and ∆CI were all crudely correlated with clinical adverse events. After accounting for age, sex, and WHO functional class, ∆mPAP, ∆PVR, and ∆CI remained significantly related to clinical outcomes. The study results may support the inclusion of these indices in the simplified risk score for the prediction of disease outcomes. The 6-min walk distance The change from baseline in 6MWD (∆6MWD) has long served as a surrogate endpoint in clinical trials of PAH to evaluate the therapeutic efficacy of the study drugs. It is expected that the indirect measure of 6MWD may reflect clinically meaningful endpoints, such as quality of life and survival. The SERAPHIN study was arguably the first to adopt direct clinical outcomes as the primary endpoint, demonstrating that macitentan significantly reduced morbidity and mortality among patients with PAH; however, ∆6MWD was not associated with the long-term outcomes. In a meta-analysis of 16 short-term RCTs, Macchia et al. showed that ∆6MWD was not predictive of a survival benefit or of adverse clinical events. Updated meta-analyses have demonstrated again that ∆6MWD did not correlate with any of the composite clinical events, including mortality, hospitalization for PAH, lung transplantation, or the initiation of rescue therapy. The present study also found that ∆6MWD was not associated with clinical outcomes. These results support the use of morbidity and mortality, rather than ∆6MWD, as the primary endpoint in RCTs for PAH patients. Limitations The long-term prognostic value of hemodynamic changes has not yet been evaluated in large cohorts. Although meta-regression analysis may improve our understanding of the associations between hemodynamic indices and long-term clinical outcomes, the variation in baseline characteristics, study designs, and background therapies across the enrolled RCTs may bias the findings. Some RCTs were undertaken to prove the short-term effects of a novel drug, mainly on exercise capacity. Although others might have been designed to evaluate therapeutic effects on long-term mortality and morbidity, caution should be exercised in interpreting the correlations between hemodynamic changes and clinical outcomes. 
For patients with early PAH and preserved right ventricular function, the therapeutic changes in CI might be subtle, and the changes in mPAP may mainly reflect the changes in PVR. In patients with PAH and profound right ventricular failure, an improvement of CI accompanied by increased mPAP may indicate significant amelioration of right ventricular dysfunction, and better long-term outcomes would be expected. While connective tissue disease is the second most common etiology of PAH, it may damage the myocardium directly rather than through PAH. The inclusion of these subjects with distinct pathophysiology in the previously published RCTs may influence the findings observed in the present meta-regression analysis. In addition, lung transplantation was not reported as a separate endpoint in the majority of the studies. Although lung transplantation may be regarded as equivalent to a mortality event, the study was not able to analyze its impact because of insufficient data. Moreover, the study results are based on published RCTs, some of which were conducted before currently available PAH drugs reached the market. Future directions Given that pulmonary hemodynamics are essentially related to long-term survival, non-invasive assessment of risk features is currently encouraged in the management of PAH to improve guideline implementation. While cross-sectional hemodynamic evaluations have been predictive of clinical events, measures of the changes in hemodynamics may disclose further prognostic information on the response to PAH therapy. However, future studies are needed to evaluate the effectiveness of hemodynamic-guided treatment, stratified by PAH etiology and right ventricular function. Conclusions Progression of PAH is usually characterized by increasing mPAP, PVR, and RAP and decreasing CI, and this study aggregates the evidence from existing RCTs to verify the prognostic implications of these changes. In addition to the baseline and on-treatment hemodynamic measures, the present study demonstrates that ∆mPAP, ∆PVR, ∆RAP, and ∆CI were all significantly associated with the surrogate endpoint (∆6MWD). Furthermore, ∆PVR and ∆CI were significantly associated with mortality, and ∆mPAP, ∆PVR, and ∆CI correlated with adverse clinical events in PAH patients. Given that risk stratification is essential in the management of PAH, further studies are warranted to evaluate whether the changes in the hemodynamic indices could be used to evaluate therapeutic effects, in addition to the established clinical risk factors, including functional class, 6MWD, and NT-proBNP. |
Multichannel Electrotactile Feedback With Spatial and Mixed Coding for Closed-Loop Control of Grasping Force in Hand Prostheses Providing somatosensory feedback to the user of a myoelectric prosthesis is an important goal since it can improve the utility as well as facilitate the embodiment of the assistive system. Most often, the grasping force was selected as the feedback variable and communicated through one or more individual single channel stimulation units (e.g., electrodes, vibration motors). In the present study, an integrated, compact, multichannel solution comprising an array electrode and a programmable stimulator was presented. Two coding schemes (15 levels), spatial and mixed (spatial and frequency) modulation, were tested in able-bodied subjects, psychometrically and in force control with routine grasping and force tracking using real and simulated prosthesis. The results demonstrated that mixed and spatial coding, although substantially different in psychometric tests, resulted in a similar performance during both force control tasks. Furthermore, the ideal, visual feedback was not better than the tactile feedback in routine grasping. To explain the observed results, a conceptual model was proposed emphasizing that the performance depends on multiple factors, including feedback uncertainty, nature of the task and the reliability of the feedforward control. The study outcomes, specific conclusions and the general model, are relevant for the design of closed-loop myoelectric prostheses utilizing tactile feedback. |
The Impact of Glass Material on Growth and Biocatalytic Performance of Mixed-Species Biofilms in Capillary Reactors for Continuous Cyclohexanol Production In this study, the growth and catalytic performance of mixed-species biofilms consisting of photoautotrophic Synechocystis sp. PCC 6803 and chemoheterotrophic Pseudomonas sp. VLB120 were investigated. Both strains contained a cytochrome P450 monooxygenase enzyme system catalyzing the oxyfunctionalization of cyclohexane to cyclohexanol. Biofilm cultivation was performed in capillary glass reactors made of either borosilicate glass (Duran) or quartz glass, under different flow regimes. Four phases could be distinguished for mixed-species biofilm growth and development in the glass capillaries. The first phase represents the limited growth of the mixed-species biofilm under single-phase flow conditions. The second phase includes a rapid increase in biofilm spatial coverage after the start of the air segments. The third phase starts with the sloughing of large biofilm patches from well-grown biofilms, and the final phase consists of biofilm regrowth and the expansion of the spatial coverage. The catalytic performance of the mixed-species biofilm after the detachment process was compared to that of a well-grown biofilm. With an increase in the biofilm surface coverage, the cyclohexanol production rate improved from 1.75 to 6.4 g m−2 d−1, resulting in production rates comparable to those of the well-grown biofilms. In summary, high productivities can be reached for biofilms cultivated in glass capillaries, but stable product formation was disturbed by sloughing events. INTRODUCTION The ability of the microbial photosynthetic machinery to convert solar radiation into chemical energy for fixing carbon dioxide into value-added chemicals has attracted academic and industrial attention for several decades. Such photo-bioprocesses are currently confined to the production of niche market products, including astaxanthin and β-carotene, with product prices within the range of 100-1,000 €/kg (Pulz and Gross, 2004; Posten, 2009). The rapid progress in metabolic engineering and synthetic biology tools has broadened the available product spectrum for exploiting cyanobacteria as cell factories (Hays and Ducat, 2015; Erb and Zarzycki, 2016; Noreña-Caro and Benton, 2018). However, product titers and volumetric productivities obtained in these proof-of-concept studies have been very low compared to processes based on heterotrophic hosts such as E. coli or Pseudomonas. Some of these challenges could be circumvented by the cultivation of phototrophic organisms in a biofilm format, where cells are naturally immobilized in a self-produced extracellular polymeric matrix. Biofilms, in comparison to suspended cells, allow long retention times for slow-growing phototrophic organisms, confer high tolerance toward toxic chemicals, and generate high cell densities, resulting in a compact reactor design and continuous operation with high volumetric productivities. In this context, a mixed-species biofilm concept, comprising phototrophic Synechocystis sp. PCC 6803 and heterotrophic Pseudomonas sp. VLB120, was established in a capillary reactor made from polystyrene. This phototrophic biofilm reactor concept achieved high cell densities of up to 51.8 g BDW L−1. It was successfully used for converting cyclohexane to the corresponding alcohol and resulted in 98% substrate conversion and a stable product flux of 3.76 g m−2 d−1 for one month. 
For the continuous production of chemicals on a technical scale by using phototrophic biofilms, crucial factors are to obtain high and stable volumetric productivities. Additionally, the scale-up of the phototrophic capillary biofilm reactor system based on the numbering-up concept or by using glass monoliths needs to be validated (;aKreutzer et al.,,b, 2006. One major issue in this respect is the material used for the capillary biofilm reactor. In the above mentioned capillary reactor, polystyrene was used as a cheap and easy to handle material. However, it is subjected to rapid yellowing and embrittlement under UV light, limiting long-term outdoor applications (Yousif and Haddad, 2013). Additionally, this material is susceptible to deformation in the presence of organic solvents (Whelan and Whelan, 1994). Commercial photobioreactors mostly consist of glass due to its high chemical stability, non-solarizing effects during UV-light exposure, resistance to temperature variations, and exceptionally long service life up to 20 years (). In this work, we aim to evaluate glass as a possible material for capillary reactors and study its impact on mixed-species biofilm development and catalytic performance. For continuous operation and stable volumetric productivities, biofilm growth and detachment need to be balanced to acquire a defined amount of active biomass within the reactors (;Rumbaugh and Sauer, 2020). Conversely, significant biofilm detachment events, termed as sloughing, could severely affect biofilm catalytic performance (Morgenroth and Wilderer, 2000;). What fraction of mixed-species biofilm detaches in capillary reactors and how these affect the overall performance of a continuous system is not well understood. The objectives of the current work were (i) to evaluate mixed-species biofilm growth in glass capillaries under single-phase flow and segmented flow conditions (ii) to assess mixed-species biofilm stability based on the detachment process and (iii) to investigate the influence of detachment on the catalytic performance. Two glass materials, quartz and borosilicate glass were selected to study mixed-species biofilm development. The biofilm consisted of a phototrophic Synechocystis sp. PCC 6803 and a heterotrophic Pseudomonas sp. VLB120, both organisms are harboring a cytochrome P450 monooxygenase for cyclohexanol production from cyclohexane. The average cyclohexanol (CHOL) production rates of 4.72 g CHOL m −2 d −1 and 4.08 g CHOL m −2 d −1 could be reached for biofilms grown in quartz and borosilicate capillaries. Nevertheless, the overall biocatalytic performance for mixedspecies biofilms in both glass capillaries was influenced by detachment events. Bacterial Strains and Plasmids Bacterial strains and plasmids used in this study are listed in Table 1. Cultivation of Pseudomonas sp. VLB120 Overnight cultures were inoculated directly from a 10% glycerol stock and used for inoculation of 5 mL lysogeny broth medium grown at 30 C and 200 rpm (2.5 cm amplitude) in a Multitron Pro shaker (Infors). 20 mL of M9 * medium () pre-cultures were inoculated (1% v/v) from this overnight culture and incubated for 24 h in 100 mL baffled shake flasks. Minimal medium main cultures were subsequently inoculated to an OD 450 of 0.2 and grown under the same conditions for 8 h in 20 mL M9 * medium. Capillary Reactor System Setup Biofilms were cultivated in the capillary reactor system (Supplementary Figure S1A), as previously described (). Quartz (wall thickness (w.th.) 
1 mm, inner diameter (i.d) 3 mm) and borosilicate glass tubes (w.th. 1 mm, i.d 3.5 mm) were cut to a length of 20 cm to serve as capillary reactors. YBG11 medium (10 mM HEPES, 50 mM NaHCO 3 ) was supplied via Tygon tubing (LMT-55, 2.06 mm i.d., 0.88 mm w.th., Ismatec, Wertheim, Germany) using a peristaltic pump (ISM945D, Ismatec, Wertheim, Germany). The reactor system was inoculated by syringes through injection ports (ibidi GmbH, Martinsried, Germany) established in front of the capillary. Gas exchange for medium inlet as well as outlet and air segment generation was enabled through sterile filters (0.2 m, Whatman, Maidstone, United Kingdom). Cultivation was performed at room temperature (RT, 24 C), and fluorescent light tubes were used as a light source (50 mol m −2 s −1 ). When applied for the experiment, the air was introduced into the system via Tygon tubing connected by a T-connector in the form of air segments after 7 days of cultivation. Inoculation of the Capillary Reactor System The capillary reactors were inoculated with mixed-species cell suspensions (as described in section "Pre-mixing of Bacterial Strains") by purging 2 mL of each culture through the injection port. The medium flow was started 15 h after inoculation at a rate of ca. 52 L min −1 (flow velocity of 0.09-0.12 mm s −1 ). When appropriate, air segments were started 7 days after inoculation at a rate of ca. 52 L min −1 (flow velocity of 0.09-0.12 mm s −1 ), resulting in an increased overall flow rate of ca. 104 L min −1 in the capillaries. Cyclohexane Oxyfunctionalization in Capillary Biofilm Reactors Cyclohexane was supplied via the gas feed. Therefore, the T connector combining air feed with the medium supply was connected to a cyclohexane saturation bottle. A silicone tube was used for cyclohexane diffusion into the feed stream and was submerged in 80 mL cyclohexane in a closed 100 mL Schott glass bottle (Supplementary Figure S1B). Heterologous gene expression in Pseudomonas sp. VLB120_pCyp (Ps_CYP) and Synechocystis sp. PCC 6803_ pCyp (Syn_CYP) of the cytochrome P450 monooxygenase was induced after 39-41 days of cultivation by the addition of 1 mM IPTG to the medium and the cyclohexane feed was connected after one day to the capillary inlet using PTFE tubes. Liquid phase samples (1.2 mL) were collected from the reactor inlet and outlet. One mL sample was directly extracted by vigorous mixing for 2 min with 500 L of ice-cold diethyl ether (0.2 mM decane as internal standard) followed by centrifugation (17,000 g, 2 min, RT). The ether phase on top of the aqueous phase was removed, dried using anhydrous Na 2 SO 4, and applied for quantification by gas chromatography. O 2 Quantification in Gas and Liquid Phases Dissolved O 2 in the reactor outlet was quantified by a Clark-type flow-through sensor (OX-500 Oxygen Microsensor, Unisense, Aarhus, Denmark) connected to a microsensor amplifier (Microsensor multimeter, Unisense). When air segments were used, bubble traps (sealed with a PTFE coated silicone septum, Duran, Mainz, Germany) were attached to the capillary outlet and equilibrated for 24 h. Gas-phase (100 L) samples were obtained from the bubble traps using gas-tight syringes (Hamilton, Reno, United States) and quantified using a Trace 1310 gas chromatograph (Thermo Fisher Scientific, Waltham, United States). 
The gas chromatograph was equipped with a TG-BOND Msieve 5A capillary column (30 m, ID: 0.32 mm, film thickness: 30 m, Thermo Fisher Scientific) and a thermal conductivity detector operating at 100 C with a filament temperature of 300 C and a reference gas flow rate of 4 mL min −1. Argon gas was applied as carrier gas at a constant flow rate of 2 mL min −1. The sample injection temperature was set to 50 C with a split ratio of 2, and the oven temperature was kept constant at 35 C for 3 min. Determination of Biofilm Dry Weight For biomass quantifications, the capillary reactor setup was disassembled, and the biomass removed from the tubes. The collected biomass was concentrated by centrifugation (5000 g, 20 C, 7 min) in pre-dried and weight Pyrex tubes, dried again for 1 week at 80 C in an oven (Model 56, Binder GmbH, Tuttlingen, Germany) and was subsequently weighted. High Oxygen Amount Impedes Biofilm Development in the Single-Phase Medium Flow Conditions The impact of the capillary material on the mixed-species biofilm development was evaluated by inoculating quartz and borosilicate glass capillaries with a mixed culture (ratio 1:1) of Synechocystis sp. PCC 6803 (pCyp) and Pseudomonas sp. VLB120 (pCyp). Two biological replicates for each capillary material were analyzed. The system was operated with a continuous feed of aqueous medium supplied at a flow velocity of 0.09-0.12 mm s −1 for 14 days (Supplementary Figure S2). Oxygen evolution and total biomass formation were determined at the end of the experiment (Figure 1A). In both materials, biofilm formation followed a gradient from sufficient growth in the beginning to weak growth toward the end of the capillary. The color of the biofilm changed accordingly from dark green to yellow and reflected the viability of Synechocystis, which tends to "bleach" when the organism is stressed or starved. Oxygen accumulated over the capillary length and a maximum oxygen amount of 748 ± 16 mol L −1 was measured in the medium phase at the end of the borosilicate capillaries. Only 6.96 g biomass dry weight (BDW) m −2 developed under these conditions. The saturation concentration of O 2 in water at 25 C corresponds to 276 mol L −1 indicating a pronounced oxygen oversaturation within such capillary reactors. With the highest O 2 concentration of 827 ± 118 mol L −1 in the quartz capillary, the biofilm turned to a bright yellow color with the lowest biomass content 2.92 g BDW m −2 compared to the borosilicate capillary reactor ( Figure 1B). A similar effect of high O 2 concentrations causing a weak development of the mixed-species biofilm in polystyrene capillaries was described in the previous study (a). The transition of biofilm color from light green to yellow indicates impairment of photosystem II and associated photo-pigments (;a) due to oxygen oversaturation. Strategies to overcome oxygen stress are necessary to develop high cell density (HCD) phototrophic biofilms. Air Segments Relieve High Oxygen Stress but Facilitate Biofilm Flush Outs in Glass Capillaries In both glass capillary materials, the mixed-species biofilm grown in single-phase medium flow produced high amounts of O 2, subsequently restricting biofilm growth. In order to relieve the oxidative stress imposed by O 2, air segments were introduced into the medium flow. 
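As a rough consistency check of the operating conditions reported above (our own back-of-envelope calculation, not part of the study), assuming a feed rate of about 52 µL min−1 and inner diameters of 3.0 mm (quartz) and 3.5 mm (borosilicate):

```python
import math

def mean_velocity_mm_per_s(flow_ul_per_min, inner_diameter_mm):
    # Mean flow velocity in a circular capillary from the volumetric feed rate.
    q_mm3_per_s = flow_ul_per_min / 60.0                 # 1 uL = 1 mm^3
    area_mm2 = math.pi * (inner_diameter_mm / 2.0) ** 2
    return q_mm3_per_s / area_mm2

for d in (3.0, 3.5):
    print(f"i.d. {d} mm: {mean_velocity_mm_per_s(52, d):.2f} mm/s")

# Oxygen oversaturation relative to air-saturated water at 25 C (~276 umol/L):
for c in (748, 827):
    print(f"{c} umol/L is about {c / 276:.1f} times air saturation")
```

The computed velocities of roughly 0.12 and 0.09 mm s−1 bracket the reported range, and the measured outflow concentrations correspond to roughly 2.7 to 3.0 times air saturation.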
Within 24 h, the biofilm started to turn from yellow to green (Supplementary Figure S3), indicating that the high O2 concentration was being relieved and that the impairment of photosystem II and the associated photo-pigments was reversible. The segmented flow resulted in complete surface coverage by the dark green biofilm within 7 days (Figure 2). After the biofilm was well grown, large biofilm parts detached in both capillaries (Figure 2). The detachment of large biofilm chunks was frequently identified at the rear part of the capillary, and the amount of detached biofilm varied profoundly (Figure 2). In these experiments, sloughing events were observed after a total of 14 days of biofilm cultivation. After the detachment, the biofilm regenerated itself, which under these conditions took one to two weeks after the sloughing events to achieve maximum surface coverage. Overall, four phases of mixed-species biofilm growth and development could be distinguished in glass capillaries. Phase 1 comprises mixed-species biofilm growth under single-phase flow for 6-7 days; in this phase, biofilm growth and development are limited by oversaturated oxygen in the aqueous phase. In phase 2, the start of the air segments relieved oxygen stress and resulted in a rapid increase of biofilm spatial coverage. Phase 3 begins with the sloughing of large biofilm patches from well-grown biofilms. Phase 4 consists of biofilm regrowth and the expansion of the spatial coverage. When cultivated further, phases 3 and 4 were observed to recur. Stable Biocatalytic Performance of Mixed-Species Biofilms in Both Glass Materials Is Affected by Sloughing Events The catalytic performance of biofilms is strongly associated with their developmental phases. Therefore, we evaluated the mixed-species catalytic performance at different biofilm developmental phases. The mixed-species biofilm was grown as described above, and the biotransformation was initiated for mixed-species biofilms at a matured state (phase 2). Product formation was measured for 13 days (Figures 3B,C). In comparison to the previous experiments (Figure 2), phase 2 was prolonged up to day 50-54 for both glass capillaries (Figure 3); the reason for this difference in the time interval is not clear. In both capillaries, the production rate was at a maximum at the start and dropped to 1.0-1.2 g m−2 d−1 within the next 8 to 10 days. No detachments were recorded directly after starting the cyclohexane (CHX) feed. The average cyclohexanol (CHOL) production rate was 4.72 g CHOL m−2 d−1 over 10 days of biotransformation in the quartz capillary and 4.08 g CHOL m−2 d−1 over the first 8 days in the borosilicate capillary. The overoxidation of CHOL to cyclohexanone (CHON) was around 13% in both glass capillaries. In the quartz capillary, minor biofilm detachments were observed on day 52, leading to a drop in the total product amount from 1.25 to 1.01 mM. On day 54, most parts of the biofilm were flushed out (Figure 3A), and the total production rate dropped to 0.93 g m−2 d−1. In the borosilicate glass tube, two-thirds of the biofilm detached on day 51, leading to a drop in the cyclohexanol production rate from 3.70 g CHOL m−2 d−1 to 1.94 g CHOL m−2 d−1. A further decrease in the production rate to 1.27 g CHOL m−2 d−1 was identified on day 54. Overall, the total product formation in the quartz glass was slightly higher (17%) than in the borosilicate glass. 
Nevertheless, sloughing events were observed in both glass capillaries, resulting in a severe loss of biomass and, consequently, of production rate. Biofilm Regrowth in Borosilicate Capillaries in the Presence of Cyclohexane Leads to High Productivities In these experiments, we investigated mixed-species biofilm growth and biocatalytic performance after biofilm sloughing events (phase 3). As no major difference was observed for biofilm growth and development between the two glass materials, borosilicate was chosen for further experiments. A significant portion of biofilm detached after 36-39 days (Figure 4A) due to sloughing. FIGURE 2 | Four stages of biofilm development in a quartz glass capillary. The biofilm was grown without air segments for 7 days (phase 1) with a medium flow velocity of 0.09-0.12 mm s−1, and air segments were started subsequently at the same flow rate (phase 2), which led to a fast surface coverage of the entire capillary (phase 2, 13 days total cultivation). After that, sloughing events (phase 3) and biofilm regrowth were observed (phase 4). The biotransformation was initiated on day 41. Under the biotransformation conditions, the biofilms were able to recover completely in the following 7 days. In borosilicate capillary 1, the cyclohexanol production rate increased from 1.75 g CHOL m−2 d−1 to 6.4 g CHOL m−2 d−1, and it reached a steady state after day 45 (Figure 4B). In this case, the system seemed to be CHX limited, as 98% of the supplied substrate was converted. In borosilicate capillary 2, the cyclohexanol production rate increased from 1.8 g CHOL m−2 d−1 to 4.89 g CHOL m−2 d−1 between days 41 and 48 (Figure 4C). Here, the system was not CHX limited, as only 76% of the CHX feed was converted. In both cases, minor amounts (3-4%) of the overoxidation product CHON were detected. On days 49 and 55, the light was turned off to investigate the impact of photosynthesis on the heterologous reaction. During the first dark period, the O2 concentration in the air phase dropped by 5.0% and 3.7% in capillaries 1 and 2, respectively (Supplementary Figure S4). Additionally, the product amount in the outflow was significantly reduced, to below 0.15 mM, in both capillaries. Similar drops in O2 and CHOL production were seen during the second dark period on day 55. These results led to the conclusion that electrons provided by photosynthetic water splitting fueled the biotransformation reaction. After the regrowth of the biofilms in both capillaries under biotransformation conditions, sloughing effects were detected again. This detachment resulted in a significant drop in the product concentration on days 57 and 53 in capillaries 1 and 2, respectively. Overall, we could conclude that the catalytic performance of the mixed-species biofilm depends on the amount of available biomass within the capillary system. DISCUSSION Cyanobacteria are considered to be promising microorganisms for converting CO2, water, and solar radiation into value-added chemicals. The capability of cyanobacteria to assimilate CO2 depends primarily on the catalytic properties of ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco). However, Rubisco's inability to discriminate between O2 and CO2 results in both carboxylation and oxygenase activity, depending on their concentrations and the kinetic parameters. 
As oxygen is continuously generated by oxygenic photosynthesis during the light reaction, its amount within the photosynthetic cell is considered to be higher than the ambient oxygen level. FIGURE 4 | Images of the mixed-species biofilm cultivated in borosilicate capillaries (A). Flush-outs were found after 36 days; the biotransformation was initiated after 41 days (arrow in panel A) and product formation was recorded (panels B,C). Medium and air segments (from day 7 on) were fed at a flow velocity of 0.09-0.12 mm s−1. The cyclohexane feed is visualized as a dashed line, at 1.60 ± 0.3 mM for capillary 1 and 1.97 ± 0.70 mM for capillary 2. The arrows in panels (B,C) indicate light-off conditions. Abbreviation: CHX, cyclohexane. An excess oxygen concentration in functional photosynthetic cells not only affects the net carboxylation rate but can also generate reactive oxygen species (ROS), leading to oxidative damage of DNA, lipids, and proteins. In contrast to the suspended format, cyanobacterial cells growing in biofilms or microbial mats can reach oxygen concentrations several times higher (ca. 1000 µM) than air-saturated water, resulting in oxygen bubble formation. Correspondingly, we observed high oxygen concentrations of 748 to 827 µM in the reactor outflow (Figure 1). Under such a high oxygen content, the development of the mixed-species cyanobacterial (Syn. sp. PCC 6803) biofilms was severely affected, leading to a low biomass concentration in the glass capillaries. In previous work, high oxidative stress was overcome by introducing air segments, utilizing citrate catabolism in the Pseudomonas species, and employing an oxygen-dependent biotransformation reaction to enable HCD mixed-species phototrophic biofilms in polystyrene capillary reactors. In this work, air segments were introduced into the medium flow to extract excess O2 from the biofilm and thereby relieve the oxidative stress imposed on the cells. Overcoming the stress resulted in thick and dense biofilm growth (Figure 2). However, the mixed-species phototrophic biofilm growth and development in the glass capillaries showed four distinct phases over the investigated time frame, as compared to only two phases in polystyrene capillary reactors. Biofilm Sloughing: A Critical Issue in the Development of Stable Mixed-Species Biofilms Biofilm development is characterized by the constant removal of cells from the biofilm, either when small portions of the biofilm are lost through frictional forces (erosion) or when large fractions are lost through sloughing events (Rumbaugh and Sauer, 2020). Sloughing is considered to be an integral part of biofilm development and could lead to a loss of more than 90% of the biomass within one day (Figure 2). A similar event was described by Telgmann et al., where 80% of the biomass was lost due to the detachment of large biofilm portions. The reasons for sloughing are manifold; these include shear stresses and nutrient-to-oxygen gradients as well as changes in c-di-GMP levels inside the biofilm, e.g., by c-di-GMP degradation (Stewart and Franklin, 2008; Rumbaugh and Sauer, 2020). In the mixed-species biofilm, the change in biomass and thickness with biofilm growth (phase 2) in glass capillaries could increase the (aqueous-air) segmented flow velocity and subsequently intensify shear stresses and aqueous-air interfacial forces. These external fluidic forces might become more substantial than the biofilm adhesive forces, leading to sloughing or detachment. 
However, such events were not frequently observed in polystyrene capillaries (a). This outcome points out that mixed-species biofilms have weakly adhered to glass capillaries as compared to the polystyrene capillary. There is also a possibility that biofilm adhesion to the glass surfaces becomes weaker with growth due to nutrient starvation or a different stress response of the biofilm. In the mixedspecies biofilm, we observed yellow locations at the bottom of biofilm (Figure 2). These color changes (green to yellow) might indicate nutrient starvation or oxidative stress because of high local O 2 concentrations. Such stresses might weaken the cell attachment and overall biofilm structure, leading to biofilm dispersion and subsequent detachment (Ohashi and Harada, 1995;Bazin and Prosser, 2018). Overall, the increase of external fluidic forces with biofilm growth and the decrease of internal biofilm strength caused by the hydrolysis of the polymeric biofilm matrix or due to oxidative stresses could be possible reasons for biofilm sloughing. Biofilm Sloughing Severely Affects Mixed-Species Catalytic Performance Sloughing may have two different effects on the product formation rate: (I) No impact on reactor performance, when only inactive biomass is lost, (II) severe impact on reactor performance due to the removal of active biomass (). In our experiments, sloughing was observed after the mixedspecies biofilm reached a well-grown or matured stage, and this detachment was accompanied by a severe drop in the catalytic activity (Figures 3, 4). These results conclude that the lost biofilm consisted mainly of active biomass. An average production rate of 4.72 g CHOL m −2 d −1 (7.18 g L −1 d −1 ) could be reached for a maximum of 10 days for the biofilm grown in the quartz glass capillary ( Table 2). Still, for a continuous biofilm-based process, 10 days of stable biocatalytic performance is considerably short. Our previous study reported a stable production rate of 3.8 g CHOL m −2 d −1 for 31 days in a capillary reactor made from polystyrene (;a). In comparison to polystyrene capillaries, the glass surface has little texture, with an average roughness of 100 nm (for borosilicate glass) (). Here, a rougher surface might promote biofilm attachment and resistance against hydrodynamic forces to minimize sloughing events (). Furthermore, the light focusing effect of the glass-capillaries might lead to increased light input resulting in high photosynthetic activity and enhanced O 2 accumulation. Such high oxygen content could trigger sloughing events and, therefore, biofilm growth and development at different light intensities (low 25 E m −2 s −1, high 100 E m −2 s −1 ) or with light-dark cycles need to be investigated to minimize sloughing events and retain constant production rates in glass capillaries. CONCLUSION We observed four-phases for mixed-species biofilm growth in the glass-capillaries. It comprises biofilm growth, detachment, and regrowth. The change in flow condition from single to aqueousair segmented flow resulted in faster growth, improved surface coverage, and enhanced biomass formation. For mature biofilms, biofilm detachment via sloughing and regrowth was frequently observed in glass capillaries. The biocatalytic performance of mixed-species was evaluated at different developmental phases. For mature biofilms, an average production rate of 4.72 g CHOL m −2 d −1 for 10 days and 4.08 g CHOL m −2 d −1 for 8 days were obtained for quartz and borosilicate glass, respectively. 
Product formation was associated with biofilm biomass and increased with the re-growing biofilm. Maximum product formation of 6.5 g m −2 d −1 was observed in the borosilicate capillary, although both glass types showed comparable results. The presence of the toxic substrate cyclohexane did not hamper biofilm growth and spatial coverage. The utilization of glasses as capillary reactor materials offers several benefits, such as the light focusing effect and excellent stability against solvents or UV radiation. Nevertheless, sloughing events were observed to be higher compared to other capillary materials, e.g., polystyrene (a). Therefore, future research efforts on understanding sloughing mechanisms in the mixed-species biofilms and finding solutions to minimize them in glass capillaries are necessary. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. AUTHOR CONTRIBUTIONS RD and IH planned and conducted the experimental work and analyzed data. IH wrote the manuscript. RK and KB participated in designing experiments, data analysis, and correcting the manuscript. All authors contributed to the article and approved the submitted version. |
Winning Ways With Hydrogen Sulphide on the Namibian Shelf The shelf sediments off Namibia are some of the most unusual and extreme marine habitats because of their extremely high hydrogen sulphide concentrations. High surface productivity of the northern Benguela upwelling system provides benthic life with so much carbon that biotic processes must rely on innovative mechanisms to cope with perennial anoxia and toxic hydrogen sulphide. Bottom dwelling communities are forced to adapt lifestyles to deal physically, physiologically and behaviourally with these stressful conditions. The upside of hydrogen sulphide is that it fuels extensive mats of large sulphide-oxidizing bacteria on the seabed, which create detoxified habitat niches and food for the animals living there. The threat of hypoxic stress exacerbated by hydrogen sulphide is largely overcome in the water column by microbes that detoxify sulphide, allowing animals in the upper water layers to thrive in this productive upwelling area. The bearded goby Sufflogobius bibarbatus is a cornerstone species that successfully couples the inhospitable benthic environment with the pelagic. Benthic studies have as yet not characterized the sulphidic shelf communities, which have the potential to uncover biotic adaptations to toxic sulphide. This ancient shelf upwelling system has long operated under hypoxic pressure, balancing always the abundance of particulate food against oxygen limitation and hydrogen sulphide toxicity. Challenges faced by this unique system could include environmental changes related to climate change, or man-made physical disturbances of the anoxic, sulphide-rich seabed sediments. |
A LabVIEW Code for PolSK Encoding We have developed an integrated software module for use in free-space optical communication using Polarization Shift Keying. The module provides options to read the data to be transmitted from a file, convert these data to on/off codes for the laser diodes, and measure the state of polarization of the received optical pulses. The software bundle consists of separate transmitter and receiver components. The entire protocol involves handshaking commands, data transmission, and error correction based on a post-processed Hamming (7,4) code. The module is developed using LabVIEW, a proprietary software development IDE from National Instruments Inc., USA. Introduction PolSK involves encoding the message bits in the polarization state of a light pulse during transmission. The bits 0 and 1 are respectively mapped to two orthogonal states of polarisation. PolSK has several advantages since light can be decomposed into several sets of mutually orthogonal polarisations, such as Vertical/Horizontal, 45°/135°, or RCP/LCP combinations. In each of these cases, the two encodings are mutually orthogonal, in the sense that light polarised in one state will give a zero measurement in the other polarization. This results in an unambiguous measurement, except in the presence of noise. This also becomes particularly useful in multibit-per-symbol transmission, although the different bases are not mutually exclusive and hence can introduce ambiguity. We have developed an integrated software module to control and automate the above protocol using LabVIEW. The program involves all the relevant modules necessary for PolSK communication. Although it is developed with our particular laboratory setup in mind, it is independent of the hardware involved and can be adapted to any other similar or compatible hardware. The LabVIEW Program LabVIEW is a graphical programming interface developed and distributed by National Instruments Inc., USA. It has three distinct advantages over other programming environments: (i) the graphical method of programming makes it easier by removing the need to remember code words, (ii) it has built-in modules to interface with many different types of hardware units, and (iii) it can create an executable binary version, which can run on a different computer without installing LabVIEW, although this facility is available only in the professional versions. The ease of use and the availability of extensive built-in modules have made LabVIEW very popular for laboratory automation as well as for controlling communication protocols. While much of the earlier work uses LabVIEW options to control TCP/IP and built-in communication protocols, we use a control program working through a DAQ card, which makes the system independent of the hardware, i.e., the hardware consisting of diode lasers or APDs can easily be replaced without any modification of the software. The code has two independent parts, the transmitter and the receiver, each running on an independent computer. The synchronization between these two codes is obtained explicitly by the LabVIEW code and hence does not demand identical clock speeds or memory for the two computers. In other words, the LabVIEW code is independent of the hardware parameters of either computer. The code described here is particularly designed for use in our experimental setup, although it can easily be used for other schemes as it is, or at most with very little modification. 
Our setup is described in detail in an earlier communication and will be very briefly recounted here for the sake of completeness. The Transmitter The transmitter consists of two VCSEL lasers operating at 780 nm, placed about a polarizing beam splitter (PBS), as shown in figure 1. The need for the clock pulse is explained in the receiver section. The figure shows the initial module of the transmitter, which generates a random sequence of 0's and 1's, converts it into the relevant format, and writes it onto the digital port of the DAQ card using the DAQmx module of LabVIEW. This module was later replaced by one that reads actual data from a saved file on the disk and similarly sends it to the DAQ card's output, as shown in section 3. Receiver The receiver consists of another PBS whose outputs are incident on two avalanche photodiodes, APD1 and APD2. The APDs used in our setup were PCD-mini 200 modules from SenSL. Any other single-photon counting module that operates in Geiger mode and provides TTL pulses for each incident photon would work equally well. The TTL pulses from the APD modules are connected to the counter pins of the DAQ card NI-PCI-6320. This card has 4 counter inputs, each with 32-bit resolution and a rate of 25 MHz with an external clock. The receiver part of the LabVIEW code is designed to acquire these TTL pulses during the on-time of the clock pulse and total them. The code resets the total each time the clock pulse is off. This aspect required a somewhat roundabout method within the code, since the counter handling built into LabVIEW does not contain the reset feature, which is otherwise very important for our protocol. Operation of the protocol The generic aspects of the protocol are graphically represented in figure 3(a). As per this, the steps involved are: (i) Alice opens the protocol with a wake-up call; (ii) Bob acknowledges; (iii) Alice asks Bob to identify himself, so as to ensure that the data are given only to the authorized receiver; and (iv) Bob answers with a pre-agreed ID number. Transmitter module The LabVIEW module for the transmitter is shown in figure 5. This program reads a binary file from the hard disk and converts it into individual bits for serial transmission. These bits are then converted into 8-bit words and sent to the write-to-port module of the DAQ card. Receiver The LabVIEW component for the receiver runs on an independent computer, recording data from the two APD modules as shown in schematic 1. The program is started independently of the transmitter program and therefore begins with a waiting loop that continuously analyses the signals received at the input. Only if the signal corresponds to the previously agreed handshaking commands from Alice does the program go to the next step. The all-optical version of the module receives and analyses the signal. The complete module is shown in figure 6. It contains four subunits, which are described below and shown separately in the following figures for the sake of clarity. The first module is shown in figure 10. The two counters are initialized (top and bottom of the figure), and the received signal is compared against the 'Identify' code. The identification of Bob merely ensures that Alice is transmitting the message only to the authorized receiver and not to anyone else. As mentioned in the previous section, the module is built around the NI DAQ card PCI 6320 but in reality uses LabVIEW modules that are hardware independent and hence can be used with any other card with similar features. Figure 7: First module of the receiver. It initialises the relevant parameters of the DAQ card and waits in a 'while' loop until it receives an 'Identify' command from the transmitter. The APD modules provide TTL signals for every photon incident on them, which are fed to the counter inputs of the DAQ card. The LabVIEW module follows these steps: (i) create a Counter Input channel to Count Events; the Edge parameter is used to determine whether the counter will increment on rising or falling edges. (ii) Call the DAQmx Timing VI (Sample Clock) to configure the external sample clock timing parameters such as Sample Mode, Samples per Channel, and Sample Clock Source; the Edge parameter can be used to determine when a sample is taken. (iii) Call the Start VI to arm the counter and begin counting; the counter will be preloaded with the Initial Count. (iv) For finite measurements, the counter will stop reading data when the Samples per Channel have been received. The program takes input from the two APDs on two counter channels and totals the TTL signals obtained until the edge detector detects the fall of the clock pulse. Since the modules are not equipped with intermittent resetting of the counter, the counts within a clock pulse duration are obtained by subtracting the old total from the new total. The software also computes the 'State of Polarization' S as per equation 1, where APD 1,2 denote the counts on the respective APDs within the clock pulse duration. If the value of S is positive, the program assigns the data bit to 1, and if S is negative, the data bit is assigned to 0. As explained in the reference, this differential method of measurement provides a higher threshold against depolarization noise due to atmospheric effects. The second module takes care of handshaking, in particular that of sending ACK signals to Alice. The third module is the main subunit, which computes the SoP as per equation 1 and also saves the data to a file for further processing. Error Correction using Hamming codes The Hamming code is one of the industry-standard algorithms to detect and correct bit-flip errors during transmission. The most practical configuration is the (7,4) mode, wherein three parity bits (p1, p2, and p3) are added to four data bits (di, i = 1, 2, 3, 4), adding up to 7 bits. While in normal circumstances the parity bits are computed and interspersed with the data so as to make a 7-bit word as p1, p2, d1, p3, d2, d3, d4, we adopted a method wherein the data are sent first and the parity bits are computed and transmitted later, since we started working with random numbers initially. Figure 9: This is the main unit of the receiver, which takes the counts from the two counters, computes the State of Polarization (labelled as Degree of Polarization, DoP, on the module), and also saves these data to a file on disk. This method was particularly adopted with a view to future use in a Quantum Key Distribution protocol, wherein the data bits would be sent through the quantum channel while the parity bits would go through the classical channel. However, the mathematics related to the computation and use of the syndrome set is identical whether the parity bits are transmitted interspersed with the data bits or otherwise. The LabVIEW code for this part consists of the transmitter part computing the relevant parity bits and creating the generator matrix G. The receiver code has modules to use this matrix G, identify the error as per standard methods, and then correct it. conclusion We have prepared an integrated LabVIEW program for use in a free-space optical communication system using Polarization Shift Keying with binary coding. 
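To summarize the decoding logic in a hardware-independent form, the following short sketch (our own illustration in Python; the published implementation is a LabVIEW VI) reproduces the differential bit decision and the Hamming (7,4) syndrome correction described above. The normalized definition of S is an assumption, since only its sign is specified in the text.

```python
def sop_bit(n_apd1, n_apd2):
    # Differential bit decision from photon counts in one clock window:
    # a positive S is read as bit 1, a non-positive S as bit 0.
    s = (n_apd1 - n_apd2) / max(n_apd1 + n_apd2, 1)
    return 1 if s > 0 else 0

def hamming74_encode(d):
    # d = [d1, d2, d3, d4] -> codeword [p1, p2, d1, p3, d2, d3, d4] (even parity).
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # Compute the syndrome, correct at most one flipped bit, return the data bits.
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity check over positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s3       # 1-based position of the erroneous bit (0 = none)
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# A single bit flipped in transit is recovered:
word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1
assert hamming74_decode(word) == [1, 0, 1, 1]
```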
The program is designed to control two diode lasers, each providing light polarized in orthogonal directions, initially mapped to bits 0 and 1. The corresponding light polarization is measured at the receiver with the help of a polarizing beam splitter (PBS) and a pair of avalanche photodiodes (APDs). This measurement is in the form of a 'State of Polarization', which takes into account any polarization scrambling during traversal through the atmosphere. The differential method adopted provides a higher threshold against noise. The LabVIEW program presented here integrates all these aspects as well as all the relevant handshaking commands. It also includes a Hamming code error correction protocol, implemented as a post-processing step. Two main advantages of LabVIEW are its graphical programming interface and the many required modules that are already built in. In this program, we exploit the digital output of a DAQ card to pulse the lasers appropriately on the transmitter side, and time-synchronised counter acquisition to count the pulses from an SPCD module on the receiver side. The protocol is completely integrated and contains the handshaking protocols as well as the Hamming code for error correction. At the same time, it is also modular, so that individual components can be corrected or replaced as required. Full professional versions of LabVIEW also allow the creation of a stand-alone executable module, which can be run on independent computers. A future goal is to port the present protocol to embedded platforms such as an FPGA or a Raspberry Pi. |
Isolated fractures of the posterior tibial lip at the ankle as demonstrated by an additional projection, the "poor" lateral view. The routine x-ray examination of the ankle was evaluated because of the relative inability of the usual projections to show posterior tibial lip fractures. Multiple cases indicate the efficacy of the additional poor lateral view (with slight external rotation of the ankle) in demonstrating otherwise undisclosed isolated fractures of the posterior tibial lip. |
A Comparison Between Molecular Dynamics and Monte Carlo Simulations of a Lennard-Jones Fluid Molecular dynamics and Metropolis Monte Carlo simulations are used to study the behavior of a Lennard-Jones fluid. The relative sampling efficiencies of the two methods are compared. Both methods give identical equilibrium properties, but which method samples more efficiently depends on the system parameters. Molecular dynamics simulations, in addition to predicting thermodynamic properties accurately, also provide reliable estimates of dynamic properties like diffusivity. Monte Carlo can also approximate dynamics, but only when the trial move sizes are small. |
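To make the comparison concrete, here is a minimal, self-contained sketch (not the authors' code) of a single Metropolis Monte Carlo trial move for a Lennard-Jones fluid in reduced units; the small max_disp parameter reflects the point above that Monte Carlo only approximates dynamics when trial moves are kept small. All names (lj_pair_energy, metropolis_move, max_disp) are illustrative assumptions.

import numpy as np

def lj_pair_energy(r2):
    # Lennard-Jones pair energy in reduced units (epsilon = sigma = 1),
    # taking the squared separation r^2 as input.
    inv_r6 = 1.0 / r2 ** 3
    return 4.0 * (inv_r6 ** 2 - inv_r6)

def metropolis_move(positions, i, beta, box, max_disp, rng):
    # Attempt a random displacement of particle i and accept it with the
    # Metropolis criterion min(1, exp(-beta * dE)). Returns True if accepted.
    def energy_of(pos_i):
        dr = positions - pos_i
        dr -= box * np.round(dr / box)          # minimum-image convention
        r2 = np.einsum('ij,ij->i', dr, dr)
        r2 = np.delete(r2, i)                   # exclude self-interaction
        return lj_pair_energy(r2).sum()

    old = positions[i].copy()
    trial = old + (rng.random(3) - 0.5) * max_disp
    dE = energy_of(trial) - energy_of(old)
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        positions[i] = trial % box              # accept and wrap into the box
        return True
    return False

# Tiny usage example: a dilute random configuration and 1000 trial moves.
rng = np.random.default_rng(0)
box, n = 10.0, 50
positions = rng.random((n, 3)) * box
accepted = sum(
    metropolis_move(positions, rng.integers(n), beta=1.0,
                    box=box, max_disp=0.2, rng=rng)
    for _ in range(1000)
)
print(f"acceptance ratio: {accepted / 1000:.2f}")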
The development of human immune system mice and their use to study tolerance and autoimmunity Autoimmune diseases evolve from complex interactions between the immune system and self-antigens and involve several genetic attributes, environmental triggers and diverse cell types. Research using experimental mouse models has contributed key knowledge on the mechanisms that underlie these diseases in humans, but differences between the mouse and human immune systems can and, at times, do undermine the translational significance of these findings. The use of human immune system (HIS) mice extends the utility of mouse models, giving them greater relevance to human diseases. As the name conveys, these mice are reconstituted with mature human immune cells transferred directly from peripheral blood or via transplantation of human hematopoietic stem cells that nucleate the generation of a complex human immune system. The function of the human immune system in HIS mice has improved over the years with the stepwise development of better models. HIS mice exhibit key benefits of the murine animal model, such as small size, robust and rapid reproduction and ease of experimental manipulation. Importantly, HIS mice also provide an applicable in vivo setting that permits the investigation of the physiological and pathological functions of the human immune system and its response to novel treatments. With the growing popularity of HIS mice in the last decade, the potential of this model has been exploited for research in basic science, infectious diseases, cancer, and autoimmunity. In this review we focus on the use of HIS mice in autoimmune studies to stimulate further development of these valuable models. Introduction The immune system has evolved to ensure the development and maturation of functional lymphocytes able to recognize and quickly respond to foreign antigens while maintaining tolerance to self. Despite the presence of multiple tolerance checkpoints throughout lymphocyte development, a small fraction of autoreactive B and T cells escape and survive these mechanisms. These autoreactive lymphocytes are generally short-lived and anergic, remaining incapable of attacking self. However, under certain environmental and genetic settings, these clones are able to proliferate and respond to self-antigens; the degree and target of these responses determine the severity and tissue distribution of the clinical manifestations. These autoimmune responses can evolve into full-fledged chronic diseases for which there are no cures. Autoimmunity affects up to 5% of the human population, predominantly women, causing a variety of debilitating illnesses and significantly shortening the lifespan. While some autoimmune diseases have effective treatments (the administration of insulin in type 1 diabetes (T1D) is an example), other diseases, e.g., systemic lupus erythematosus (SLE) and rheumatoid arthritis (RA), rely mainly on broad anti-inflammatory and immunosuppressive drugs with mixed responses and a variety of side effects. In addition to the spontaneous development of autoimmunity caused by the synergy of genetic and environmental predisposing factors, we have recently been faced with an additional challenge: the rise of autoimmune complications that follow the use of novel immunotherapy protocols in cancer patients. Autoimmune diseases have a complex origin and development. Although our understanding of these diseases has vastly improved, their complexity has hampered a concrete elucidation of their causes. 
The biological mechanisms driving the onset of autoimmunity remain largely undefined, not least because they are multifaceted and differ between patients. Moreover, the variability in the response of patients to a given treatment is often unexplained. This creates a great and unmet need to better understand disease pathogenesis in order to treat a greater number of patients successfully, and also to develop protocols that may prevent disease onset altogether in people at risk. Over time, the escalation in scientific and technological exploration of human biology has been justifiably accompanied by a rise in ethical considerations and regulations. For instance, testing whether a new vaccine works as Jenner did at the end of the 18th century (i.e., inoculating an experimental vaccine into a child who is then challenged with a deadly virus) is unthinkable nowadays. Small mammals, including rabbits, rats and mice, but also larger non-human primates, have become accepted systems to model human diseases, understand mechanisms, and test new treatments. Mice have been particularly useful in this regard: they can live in small spaces, are easy to handle, reproduce robustly and, most importantly, their immune system and responses to infections and vaccines, as well as the development of autoimmune responses, are remarkably similar to those of humans. Mouse models exist that spontaneously develop autoimmune diseases, such as the non-obese diabetic (NOD) mouse with T1D, and the MRL and NZB×NZW strains that develop lupus. These strains are still heavily used as models for basic and translational research in autoimmunity. However, can we always be sure that mice faithfully represent human biology? This is definitely a problem when dealing with pathogens that do not infect mice, but it is also an issue when developing and testing new treatments that target human cells. The desire to improve the modeling of human diseases and to probe their molecular mechanisms has led investigators to entertain the idea of "humanizing" mice. This is a loose term with different meanings, but it usually refers to the introduction of a human component (an individual gene, a locus, a cell type, or even a tissue) into the mouse, to generate what we call humanized mice (hu-mice). Some of the alleles of the highly polymorphic Human Leukocyte Antigen (HLA) locus, which encodes the human MHC proteins, display the strongest association with autoimmunity. Their involvement in autoimmunity is based on the fact that MHC proteins present host and foreign peptides to T cells, thus playing a crucial role in T cell development and adaptive immune responses. For these reasons, HLA transgenic mice were among the first "humanized" animal models of autoimmune diseases. These models have become even more sophisticated with the introduction of additional genetic modifications; a recent study, for example, created C57BL/6 mice lacking mouse MHC II and instead expressing human HLA-DR4, a high-risk allele for T1D development, together with human CD80 for additional co-stimulation of T cells. Upon immunization with murine proinsulin-2, these mice developed a disease similar to human T1D, with myeloid and lymphoid murine cell infiltration into the pancreatic islets and the production of insulin-reactive autoantibodies. These and other studies have considerably advanced our knowledge of autoimmunity development and progression. Nevertheless, significant discoveries in mice on the mechanisms and treatment of diseases often do not replicate in human patients. 
Despite the many similarities, there are also clearly significant differences between the mouse and human immune systems, differences that have not all been resolved. Moreover, laboratory mice belong to inbred strains with limited genetic variation. Thus, discoveries made in the mouse must first be verified and further investigated in humans, particularly when considering the use of new drugs in patients based on knowledge acquired in mice. Since testing mechanisms and drugs in humans is hampered by ethical, practical, and safety reasons, the pressure to improve the "humanization" of mice has steadily increased, leading to the idea of transplanting these animals with cellular components of the human immune system. In this review we focus on hu-mice that carry a human immune system (HIS) either via transplantation of human blood cells or of hematopoietic stem cells. Development and improvement of human immune system mice Mouse models of murine autoimmunity remain important tools for investigating the immune system at the mechanistic level. On the other hand, HIS hu-mice offer the additional potential to test whether these mechanisms truly operate in the human immune system, to add genetic variation to the modeling of immune responses, and to test the effect of novel therapies on human cells. HIS hu-mice are generated by transplanting human hematopoietic cells, such as peripheral blood mononuclear cells (PBMCs), T cells, or hematopoietic stem cells (HSCs), into immunodeficient mice. After cell transplantation and the establishment of cell chimerism, these animals allow for experimentation on human immune cells in vivo, within a small animal model setting. PBMCs are typically injected intravenously (i.v.) into adult mice, either without conditioning or after a small irradiation dose. Transplant of HSCs, which are CD34+ cells generally isolated from umbilical cord blood after live births or from fetal liver after voluntary termination of pregnancy, requires sublethal irradiation of the recipient animals. Chemical myeloablation (e.g., with busulfan) has sometimes been used in place of radiation. HSC transplant is performed by injecting cells either into newborn mice, i.v. into the facial vein or directly into the liver, or into adult mice, typically by i.v. tail vein injection. To distinguish HIS hu-mice made with different sources of cells, we use the term hu-PBL for mice generated by the transfer of human PBMCs and hu-HSC for mice generated by the transfer of human HSCs. Improvement of the HIS hu-mouse model over the years followed the introduction of genetic modifications in recipient mice that eventually culminated in the development of more stringent immunodeficient host animals. The first of these genetic modifications came from the discovery of a natural mutation in the protein kinase, DNA-activated, catalytic polypeptide (Prkdc) gene found in Severe Combined Immunodeficiency (SCID) mice on the CB17 genetic background. Given the essential role of Prkdc in non-homologous end joining DNA repair during V(D)J recombination, the Prkdc SCID mutation leads to a severe deficiency in B and T lymphocytes, allowing for the engraftment of human cells in a mouse host without the issue of rejection by the adaptive immune system. In one of the first autoimmune studies using SCID mice, injection of human PBMCs from autoimmune patients was performed to determine whether this led to the development of autoantibodies and disease symptoms similar to those of patients. Indeed, autoantibodies were occasionally observed. 
However, disease manifestations did not develop, possibly because many of the human effector cells transferred into the mice did not survive long enough to generate a functional immune system. Furthermore, these studies were generally hampered by the development of graft versus host disease (GVHD) that arises in the context of MHC mismatch between donor and recipient cells. As it turns out, GVHD per se can cause the production of autoantibodies, confounding interpretations. Another limitation observed in this model was the high number of mouse NK cells, which can directly limit human cell engraftment. Moreover, the Prkdc SCID mutation also affects the ability of myeloid cells to repair DNA damage, a concern when exposing mice to the ionizing radiation required for the engraftment of human HSCs. Finally, while most SCID mice lack lymphocytes, as they age some accumulate functional (mouse) T and B cells due to a 'leaky' phenotype whereby alternate DNA repair mechanisms are able to rescue defective V(D)J gene recombination. These issues significantly affected the ability to use SCID mice as recipients of a transplanted human immune system. Not long after the discovery of the Prkdc SCID mutation, two different groups used the recently developed technique of homologous gene recombination to generate knock-out mice for the recombination activating genes Rag1 and Rag2. These RAG-deficient animals completely lacked mature B and T cells due to an absolute inability to perform B cell receptor (BCR) and T cell receptor (TCR) V(D)J gene rearrangements, events that are absolutely necessary for antigen receptor expression and subsequent signaling. Moreover, genetic deletion of the Rag1 or Rag2 genes had a permanent and specific impact on lymphocyte development but not elsewhere, meaning it could overcome both the radiosensitivity and the "leakiness" issues typical of SCID mice. Nevertheless, RAG knockout mice did not significantly improve the engraftment and maintenance of human cells because of the presence of mouse NK cells, which expand in number to fill the void left by the absence of mature B and T cells. In the meantime, to address the low human cell engraftment observed in CB17-SCID hu-mice, the Prkdc SCID mutation was backcrossed onto different genetic backgrounds including the NOD mouse strain. Human cell engraftment was greatly improved in NOD-SCID mice, both in percentage and in kinetics. In addition to developing diabetes, NOD mice are known to display poor NK cell activity, which likely contributed to the improved human chimerism. Nevertheless, even in NOD-SCID hu-mice the establishment of a human immune system still faced significant problems that restricted wider use of this model for human immunological studies. For instance, the NK cell population in NOD-SCID hu-mice was only diminished but not abolished, still causing some tissue rejection. Moreover, these mice displayed spontaneous development of thymic lymphomas with increased mortality after 5 months of age. In the mid-1990s, the realization that mutations in the interleukin-2 (IL-2) receptor γ-chain locus (IL2Rγ, or CD132) lead to severe immunodeficiency was finally instrumental in improving HIS hu-mice. IL2Rγ is an essential component for the intracellular signaling of the IL-2, IL-4, IL-7, IL-9, IL-15 and IL-21 cytokine receptors. Mutations in Il2rg result in X-linked severe combined immunodeficiency (XSCID) in humans and mice. 
This is characterized by a profound defect in T cell development but also an arrest of NK cell development due, at least in part, to a decreased ability to respond to IL-7 and IL-15, respectively. IL-7 is also important for B cell development in mice, further reducing the host lymphocyte population and increasing the chances of human cell repopulation. Targeted mutations of the Il2rg gene were introduced into the NOD-SCID background, creating first the NOG strain and later the NSG strain. They were also introduced into the BALB/c-Rag2-null background, creating the BRG strain, and into the NOD-Rag1-null background, creating the NRG strain. The inactivation of IL2Rγ led to a crucial improvement of human chimerism in HIS hu-mice, with superior human cell engraftment and the development of a functional adaptive immune system. With the creation of NSG, BRG and NRG immunodeficient mice, the use of HIS hu-mouse models became more widespread and was paralleled by an expansion of the range of areas investigated, including infections, transplantation, tumor responses and autoimmunity, as well as basic immunological studies. The quest for a "better" humanized mouse model led to the simultaneous development of immunodeficient strains in different genetic backgrounds. Among the mouse strains historically used to study immune responses, it was found that the BALB/c genetic background, in contrast to C57BL/6, allowed the establishment of human hematopoietic chimerism. Nevertheless, even when sharing the same genetic modifications, NSG or NRG HIS mice still exhibited higher human cell engraftment relative to BRG HIS mice. The basis for this difference was subsequently demonstrated by an elegant study from Danska and colleagues showing that the NOD, but not the BALB/c (or C57BL/6), variant of Signal Regulatory Protein-α (SIRP-α), a protein expressed on the surface of mouse macrophages, is able to bind human CD47 on human cells. This binding, which represents a "don't eat me" signal, inhibits phagocytosis, leading to higher human chimerism in NOD immunodeficient strains. This discovery guided the development of BRG mice harboring either the NOD or the human SIRP-α variant (strains named BRGS), resulting in levels of human hematopoietic engraftment similar to those of NOD strains. This line of study also led to the development of a BL/6-Rag2-null Il2rg-null strain that, upon deletion of the CD47 locus, allows the development of human hematopoietic cells. Despite these genetic manipulations, however, myeloablative irradiation still remains a common limiting requirement for the development of HIS mice. Such conditioning is necessary for opening niches in bone marrow microenvironments in which human HSCs can engraft, but it can also lead to tissue damage and other defects, particularly in SCID strains. The need for irradiation was relieved when mutations in the Kit gene (encoding c-Kit or CD117) were backcrossed into the BRG and NSG strains. C-Kit, the receptor for stem cell factor, is expressed on hematopoietic cells and is required for hematopoiesis. The defects in murine HSCs caused by the KitW41 mutation allow engraftment of multilineage human hematopoietic cells in intact (non-irradiated) NSGW41 mice at levels similar to those achieved in irradiated NSG mice. The response of conventional T cells relies on the ability of their TCRs to recognize (and bind) a specific peptide in the context of an MHC molecule that was expressed in the thymus during their initial development. 
Recognition of these selecting peptides occurs mostly on thymic epithelial cells, although thymic dendritic cells and B cells also contribute to some extent to the selection of the emerging T cell repertoire. In HIS mice generated by transplanting human HSCs, human T cells develop in the mouse thymus, where they simultaneously undergo maturation and selection via recognition of mouse MHC molecules expressed by mouse epithelial and dendritic cells. HIS mouse models in which the T cells are educated on, and therefore restricted to, mouse MHC are not ideal for studies that require these cells to coordinate their response with human antigen presenting cells (APCs), which express human MHC (i.e., HLA) molecules. From the early implementation of hu-mice, investigators interested in studying human T cell development and function collected fragments of thymus and liver tissue from the same human fetus, which were then implanted beneath the kidney capsule of SCID mice. This procedure led transplanted SCID mice to generate a transient wave of human CD4 and CD8 T cells that had undergone selection within the transplanted thymic fragments, on human HLA. The human T cells developing in these animals, however, were not functional due to the lack of human APCs. To achieve a HIS hu-mouse model in which T cells are HLA-educated and able to coordinate a functional immune response with human APCs, Lan and colleagues in 2006 transplanted human CD34+ cells isolated from human fetal liver into NOD-SCID mice that were implanted with liver and thymus from the same fetus, a model later named bone marrow-liver-thymus (BLT). In BLT hu-mice, the human T cells develop within the human thymus (implanted under the mouse kidney capsule), and are, therefore, HLA-restricted, while human APCs mature within the human liver or in the mouse bone marrow. Together, these cells can mount strong and coordinated immune responses. Since then, the BLT hu-mouse has become the model of choice for certain studies, such as HIV infection, both because of the robust production and more physiological MHC education of T cells and because of the enhanced presence of T cells and other human cells in mucosal tissue. A shortcoming of early BLT models was that they were prone to developing fatal GVHD, a major obstacle for studying long-term immune responses such as autoimmunity. Methods that reduce the number and/or function of the human T cells endogenous to the donor thymus, such as freezing and thawing the fetal thymus tissue before implantation and treating the thymus-implanted mice with anti-hCD2 antibodies, are some of the strategies shown to attenuate or altogether eliminate GVHD. Deleting MHC class I and II in the recipient mice has also been shown to greatly reduce GVHD, at least in hu-PBL animals. The generation of BLT hu-mice involves additional issues. One is the availability of fetal thymus, which is often not easy to procure. Other concerns are of an ethical and legal nature. In some countries, voluntary abortion and/or the use of related fetal tissue for research are not legal. Even where it is legal, some investigators do not approve of the use of this tissue for personal ethical reasons. In the USA, federal government agencies have recently imposed additional oversight for approval of grants proposing fetal tissue research, and federal researchers are currently not allowed to work with this tissue at all. An alternative to fetal thymus is neonatal thymus. 
This tissue can be acquired from pediatric cardiology units, as it is often discarded during infant heart surgery, and one recent study suggests it can function similarly to fetal thymus when implanted into mice, supporting the HLA education of developing human thymocytes. However, although neonatal thymus does not carry the ethical problems associated with the use of fetal tissue, it does bear the critical drawback of lacking a source of fully HLA-matched hematopoietic stem cells, reducing the functionality of the immune system it supports. In these animals, in fact, HLA screening of human cord blood or bone marrow HSCs must be performed to select those with partial allele matches to available thymi. In spite of the many improvements implemented over the years, HIS hu-mice in NSG, NRG and BRGS strains retain important limitations. This is because cytokines required for the development and maintenance of leukocytes are often produced by non-hematopoietic cells, and some of the mouse cytokines are not as functional on human cells as their human orthologs due to sequence divergence in ligands and/or receptors. To improve the generation and function of human leukocytes, different labs have generated recipient strains with transgenes encoding relevant human cytokines. For instance, replacing the mouse Il6 gene with human IL6 in BRGS mice has led to enhanced human thymopoiesis and peripheral T cell numbers, while also increasing the number of class-switched memory B cells and IgG production. Lymphocytes aside, the human myeloid populations are even more significantly impacted in HIS hu-mice. This is because, besides chemokine and receptor divergence, human myeloid cells have to compete during their development with the mouse host myeloid populations. The generation of NSG strains expressing human IL-3, GM-CSF and stem cell (or Steel) factor (strains named NSG-SGM3 or NSGS) has led to great improvement in the numbers of human neutrophils and monocytes, also positively impacting the numbers of CD4 T cells and mature B cells. Related genetic modifications (the introduction of human M-CSF, IL-3, GM-CSF, and thrombopoietin) have been implemented in BRG/S mice to generate strains named MITRG and MISTRG, which perform similarly to NSGS mice upon transplantation with human HSCs. Although these strains greatly improve human monocyte development, they also have some drawbacks, such as the mobilization of bone marrow HSCs, which reduces their regeneration potential, and the development of anemia, with a consequently shortened lifespan. An additional drawback is that these animals develop a dendritic cell population that is greatly skewed in favor of mouse instead of human cells. Deleting the mouse receptor tyrosine kinase Flk2/Flt3 gene has led to the development of a BRGS strain (named BRGSF) that, with the administration of human Flt3L, significantly improves the numbers of human DCs and their function, leading also to a better reconstitution and function of human NK cells and innate lymphoid cells (ILCs) in lymphoid and mucosal tissues. Other studies have shown that transgenic expression of human IL-15, or injection of human IL-15 coupled to IL-15 receptor alpha, substantially improves NK cell development and function in HIS mice generated in different genetic backgrounds. 
Finally, while the mutation of the Il2rg gene has tremendously improved the generation of a human immune system in mice, the deficiency in IL2Rγ/CD132 signaling has the unfortunate drawback of preventing the formation of secondary lymphoid tissue due to defective IL-7 responses and the consequent lack of lymphoid tissue inducer cells. A BRGS strain (named BRGST) with transgene-mediated overexpression of murine thymic stromal lymphopoietin (TSLP) was recently generated to obviate this defect. In addition to developing lymph nodes in larger numbers and size, BRGST HIS mice exhibit significantly higher numbers of T follicular helper cells, a more organized distribution of B and T cells in follicles, and improved IgG antibody responses. Using HIS mice to study tolerance and autoimmunity HIS mice represent an experimental in vivo model to study the complex dynamics of the human immune system in a controlled setting, a model in which extensive technical manipulations and analyses can be performed in ways that are not tenable in human studies. Not only can this animal model help elucidate human biological processes, it can also provide an ideal setting for testing new therapies. The HIS mouse model has been used to explore many diverse properties and functions of the human immune system [14]. Here we discuss studies in which this model was used to investigate properties related to immunological tolerance and autoimmunity. Tolerance in HIS mice Human T cells develop relatively well in HIS mice, but studies on T cell function must rely on a model in which the T cell repertoire is not only diverse but also tolerant to the host. Studies in mice have demonstrated that the development and selection of thymocytes rely on a regulated migration through distinct thymic microenvironments and the local interaction with different cell types. Moreover, T cell development has been well characterized to progress through several phenotypically distinct stages defined by the expression of the co-receptors CD4 and CD8 and the rearrangement of their T cell receptor (TCR) genes. Within the thymic cortex, cells at the CD4/CD8 double negative (DN) stage undergo V(D)J recombination of the β chain of the TCR genes, leading to the expression of the pre-TCR complex, an event followed by the expression of both co-receptors CD4 and CD8. These double positive (DP) cells, while still in the thymic cortex, undergo VJ recombination at the TCR α chain genes, leading to the expression of a mature TCR and the first repertoire selection event, known as positive selection. DP thymocytes that are able to bind weakly to the MHC-peptide complex present on the surface of cortical thymic epithelial cells (cTEC) receive survival stimuli and are positively selected to continue their development. After this event, DP thymocytes downregulate the expression of the co-receptor that was not engaged in binding the MHC-peptide complex, thus becoming (CD4 or CD8) single positive (SP) thymocytes. SP thymocytes migrate to the medullary region where they are tested for autoreactivity during the negative selection process. Negative selection is promoted mostly by medullary thymic epithelial cells (mTEC), but also by thymic conventional and plasmacytoid dendritic cells and by B cells. This process leads to the deletion of self-reactive T cells, i.e., T cells that bind too strongly to the MHC-peptide complex. 
As we described above, when human T cells develop in HIS mice transplanted only with human HSCs, they undergo maturation in a mouse thymus and, thus, they are mostly restricted by mouse MHC. In recipients that express transgenic human HLA alleles, or that have been implanted with fragments of a human thymus, the T cells are selected also, or entirely, on human HLA, depending on whether mouse MHC molecules and/or the mouse thymus are still present. Despite these differences, in all HIS models the thymic maturation of human conventional T cells has been found to be normal, resulting in the development of a peripheral T cell population with a diverse repertoire. On the other hand, the development of regulatory T cells (Tregs) is greatest when a human thymic organoid is present. An important question is whether the development of thymocytes in HIS mouse models includes the negative selection of autoreactive T cells and the generation of a peripheral T cell population tolerant to the host. In their seminal study, Traggiai and colleagues used a mixed lymphocyte reaction to show that the peripheral T cell population of hu-HSC HIS mice is tolerant to both mouse and human MHC, indicating that in this model, the human T cells developing in the mouse thymus undergo selection on MHC antigens of both species. Thus, their findings suggest that human APCs seeding the mouse thymus present antigens to human thymocytes and are partly responsible for the development of a tolerant T cell population. A recent study specifically investigated the ability of human thymic APCs to promote negative selection of CD8 T cells. To do so, BLT hu-mice were transplanted with a mix of human CD34 HSCs: some were transduced with vectors encoding either a MART1 or a control antigen, while others were transduced with a TCR specific for a MART1 peptide in the context of HLA-A2. In these animals, T cells expressing the MART1-specific TCR developed among other T cells and in the presence of APCs that either did or did not express the MART1 peptide. The mice with APCs expressing the MART1 peptide displayed profound clonal deletion of their developing MART1-reactive CD8 T cells, indicating an ability of human APCs to populate the mouse thymus and participate in the process of negative selection. Another recent study utilized high-throughput sequencing of TCR Complementarity-Determining Region 3 (CDR3) and single-cell TCR sequencing to demonstrate that NSG BLT hu-mice generate a very diverse T cell repertoire that develops via positive selection mediated by weak interaction with self-peptides. Overall, these data indicate that whether in the mouse thymus or in a human thymic organoid, human thymocytes developing in HIS mice are properly selected by both thymic epithelial cells and thymic APCs. In mammals after birth, B cells develop within the bone marrow tissue through a stepwise differentiation process. This process is regulated mostly in a cell-intrinsic manner by antigen receptors containing immunoglobulin (Ig) heavy and light chain proteins expressed following productive Ig V(D)J gene rearrangement events. Similar to the rearrangements that produce the TCR in T cells, these Ig gene rearrangements are random events, producing Ig genes encoding BCRs that can potentially bind foreign antigens but also those that bind self-antigens. 
The mechanisms of tolerance that purge the B cell population of autoreactive clones have been carefully investigated in mice by employing Ig transgenic and knock-in animals with Igs of defined antigen specificities. These studies have shown that newly generated autoreactive B cells with medium to high avidity for self-antigen are eliminated within the bone marrow by two mechanisms of central tolerance: receptor editing, a process using secondary Ig light chain gene rearrangements to edit the antigen receptor, or clonal deletion, which leads to cell death. In contrast, autoreactive B cells with low avidity for self-antigens are able to leave the bone marrow, but, for the most part, they are anergic and short-lived, which promotes tolerance in the peripheral B cell population. Our group has spent significant effort investigating the development and function of human B cells in HIS hu-mice. Using the model generated by transplanting human HSCs from umbilical cord blood into BRG and BRGS neonates, we have shown that human B cells are robustly produced in these animals and that their bone marrow development is largely normal. Once exported outside the bone marrow, however, newly generated human B cells are impaired in their development into fully mature cells, and we have found that their maturation is aided by the accumulation of T cells but not by the addition of human BAFF. We were curious to understand whether human autoreactive B cells are properly selected in HIS mice and, if so, to use this model to elucidate the mechanisms of human B cell tolerance. Peripheral B cell tolerance is implemented by both B cell-intrinsic and extrinsic mechanisms, but extrinsic restraint (or lack thereof) may be less than optimal in HIS hu-mice because of the suboptimal coordination of B and T cell responses. Given that central B cell tolerance is, to a large extent, a B cell-intrinsic process, we set up a HIS model to study the bone marrow selection of human autoreactive immature B cells. To achieve this goal we tailored a system established by Nemazee and colleagues to study central tolerance of wild-type mouse B cells. In this system, transgenic mice ubiquitously express a synthetic membrane antigen that serves as a 'self' antigen by incorporating antibody variable region genes specific for Ig constant regions. Thus, a recipient BRG strain expressing a synthetic self-antigen specific for the constant region of human Igκ would be expected to censor all Igκ+, but not Igλ+, human B cells while they develop in the bone marrow. Indeed, using this system we demonstrated that central B cell tolerance is intact in HIS mice. In this model, developing human autoreactive (Igκ+) B cells undergo central tolerance by both receptor editing and clonal deletion. This was shown by augmented expression of RAG1/2 genes and increased extrusion of Igκ excision DNA circles in bone marrow B cells, accompanied by a slightly reduced number of B cells entering the spleen. In these animals, the peripheral B cell population is almost exclusively composed of Igλ+ B cells, demonstrating that, similar to mouse B cell development, central tolerance for human B cells is extremely efficient. In agreement with our findings, repertoire studies have shown that newly generated B cells in the spleen of NSG HIS mice comprise only rare clones reacting with common self-antigens such as DNA and those expressed by HEp-2 cells. This further suggests that autoreactive human B cells developing in HIS mice are generally eliminated in the bone marrow. 
B cells in the spleen of HIS hu-mice also display reduced frequency of autoreactive VH4-34 clones relative to their bone marrow counterpart. These overall findings demonstrate that central B cell tolerance is intact in HIS hu-mice. Therefore, this model could also be used to elucidate the molecular mechanisms that either drive or impair the negative selection of autoreactive B cells. In such a study, NSG mice were transplanted with genetically altered human HSCs to demonstrate that central B cell tolerance is either reinforced or diminished by the respective expression of the AID gene or the PTPN22 autoimmune-associated variant, consistent with observations in patients with relevant genetics. Overall these studies show that HIS hu-mice are useful for investigating mechanisms of central lymphoid tolerance and repertoire generation. But can this model system be used to also study full-fledged autoimmunity? Autoimmunity is a complex process involving multiple cell types, genetic loci, and environmental drivers. Due to its complexity, achieving the development of disease models that faithfully represent the overall disease process requires the establishment of a sophisticated human immune system that can also interact with non-hematopoietic cell types and the environment. Though HIS hu-mice develop a human immune system, the function of this system is imperfect due to many differences in the biological environments of mice and humans. Thus, while some aspects of autoimmunity can be recapitulated and studied in this model, others are not yet attainable. Nevertheless, significant progress has been made in understanding the use and limitations of HIS mice for studying autoimmune diseases (Fig. 1, Table 1). Systemic lupus erythematosus and IPEX in HIS mice Systemic lupus erythematosus is one of the most studied autoimmune diseases. This disease is highly complex and diverse but appears to be generally dominated by pathogenic B cells and autoantibodies. While there are mouse strains that due to their genetic makeup spontaneously develop a lupus-like disease, it is also possible to trigger a similar dysfunction in healthy mice by treating these animals with a single injection of the long chain hydrocarbon pristane. When pristane was used in HIS hu-mice generated by transplanting fetal HSCs into NSG animals, it led to the development of key SLE features similar to those that develop in intact wild-type mice upon treatment. These features manifested with an increased production of anti-nuclear autoantibodies and pro-inflammatory cytokines, and also with the development of lupus nephritis and pulmonary serositis. While SLE is a multigenic disease where a diverse number of genetic loci contribute to a lesser or larger extent, other autoimmune diseases are caused by rare monogenic alterations. An example is Immunodysregulation Polyendocrinopathy Enteropathy X-linked (IPEX) syndrome, a multi-organ and fatal autoimmune disease that is caused by mutations in the FOXP3 gene and, thus, a defective generation of Tregs. A study by Goettel et al. showed that this monogenic autoimmune syndrome can be modeled in hu-HSC HIS mice by transplanting HSCs from a patient with IPEX into an NSG strain deficient for MHC-II and transgenic for HLA-DR1. This model can also be used to test genetic therapies, such as the transplantation of HSCs transduced with a lentivirus encoding a normal FOXP3 gene. 
Figure 1 legend (in part): These HSCs are then transplanted into either newborn or adult immunodeficient mice that have been preconditioned with a sub-lethal irradiation dose. HSCs are generally injected intravenously, but they can also be injected intra-hepatically in newborn mice. To set up the hu-BLT model, adult immunodeficient mice are transplanted with thymus fragments obtained either from human fetal tissue or from infants undergoing heart surgery. The thymic fragments are engrafted under the kidney capsule of recipient mice. In addition, these mice are sublethally irradiated and injected with human HSCs isolated from the liver of the same thymus donor fetal tissue (to achieve a complete HLA match), or from CB or adult bone marrow (BM) samples that have a partial HLA match to the fetal thymus. Injection of the HSCs can occur at the same time as the thymus engraftment or a few days after this surgery. In addition to the hu-HSC and hu-BLT models, HIS mice can be generated in a less sophisticated way by injecting immunodeficient mice with total human PBMCs (hu-PBL mice), or with T cells isolated from the blood or other tissues (e.g., the skin). These T cells can be injected as naive cells or after culture with APCs (e.g., from the donor's blood) and specific antigens to enrich for antigen-reactive T cell clones. The figure depicts autoimmune diseases that have been investigated so far using HIS mice: type 1 diabetes (T1D), which manifests in the pancreas; rheumatoid arthritis (RA), which exhibits joint inflammation; experimental autoimmune encephalomyelitis (EAE), a model of multiple sclerosis (MS), which manifests in the central nervous system; Sjogren's syndrome (SjS), which displays dryness of the eyes, mouth and tongue; alopecia areata, which leads to partial or complete hair loss; and systemic lupus erythematosus (SLE), which has multiple manifestations, skin rash and kidney disease being some of these. Alopecia, SjS, and EAE have only been investigated in HIS mice transplanted with human PBMCs or T cells. In addition to the studies of these autoimmune diseases, HIS mice are also starting to be utilized to study the autoimmune manifestations, or immune-related adverse events (irAEs), that arise following cancer immunotherapies with anti-PD1 and anti-CTLA4 antibodies. Type 1 diabetes in HIS mice With respect to type 1 diabetes and other autoimmune diseases, the immune system produced by HIS and BLT hu-mice might not be sufficiently functional to fully recapitulate all aspects of a disease, although it may still mimic some key features. A study that focused on the tissue recruitment of pancreas-specific human T cells showed that islet-reactive human T cell clones that were primed in vitro by autologous APCs pulsed with relevant peptides home to the pancreas of NOD-SCID mice that were pre-treated with streptozotocin to stress pancreatic β-cells. This tissue recruitment was contingent on proper in vivo antigen stimulation (i.e., the coinjection of antigen-pulsed APCs) and the pre-treatment of the animals with streptozotocin. These mice, however, did not develop diabetes, suggesting they lacked critical components of disease development. As mentioned earlier, the BLT model allows endogenous human T cells to be educated by human HLA antigens. 
This model is most often set up using thymic fragments and CD34+ HSCs isolated from the same fetus, which results in a complete match between the thymic "education" of human T cells and the HLAs expressed by human APCs in the peripheral tissue (although not to the MHC of mouse APCs and the mouse tissue itself). The BLT model can also be generated using fetal thymus expressing HLA alleles that partially match those expressed by HSCs isolated from the bone marrow of pediatric or adult individuals. This enables the development of BLT mice bearing immune systems from patients with distinct genetic characteristics, including those responsible for the onset of autoimmunity. In a pioneering study from the group of Megan Sykes, NSG animals received bone marrow HSCs from either T1D or healthy donors and were also implanted with fetal thymus fragments expressing the T1D-associated HLA*A201 and DRB*0302 and/or DQB*0301 HLA alleles to match those expressed by the transplanted HSCs. In these mice, while HSCs from T1D and control donors generated similar numbers of Treg cells, the peripheral T cells of T1D hu-mice showed an increased proportion of activated and memory T cells compared to controls, suggesting a decreased propensity for immune regulation. In addition to differences in T cell function, this group also showed that BLT hu-mice generated with HSCs from T1D or RA patients display B-cell abnormalities when compared to mice transplanted with HSCs from healthy controls. This was demonstrated by an increased frequency of autoreactive and polyreactive clones in the peripheral B cell population, indicating that HIS mice can be used to study the contribution of genetic polymorphisms to autoimmune-related B cell defects. These animals, however, did not display the tissue damage that is distinctive of T1D (e.g., insulitis) and RA (joint inflammation). Thus, this model does not appear to reflect critical aspects of disease pathogenesis. A potential explanation for the lack of disease in the above model is that their T cells do not react with peptides presented by mouse tissue cells and mouse APCs because they are only educated on human HLAs. In fact, insulitis was observed in a model in which CD4 T cells isolated from DRB1*0401 (DR4) T1D patients or healthy controls were pulsed with pancreatic peptides and then injected into NSG-DR4Tg mice in which mouse cells express DR4. However, only the pancreas of NSG-DR4Tg mice that received patient T cells displayed reduced insulin production, despite the fact that both T1D and control T cells infiltrated the organ to a similar extent. Moreover, the degree of lymphocytic infiltration of the islets varied significantly depending on the donor and the peptide used to stimulate the T cells, suggesting the contribution of additional genetic factors and/or individual variations in T cell repertoire. Nevertheless, these mice did not develop overt hyperglycemia, indicating they were lacking cell types (e.g., CD8 T cells) that are important for β-cell destruction. The establishment of full-fledged insulitis and T1D, in fact, appears to require additional antigen-specific T cell interactions. This was demonstrated in a recent study in which NSG-DQ8Tg recipients were transplanted with HLA-DQ8 human fetal thymus and HSCs to develop both CD4 and CD8 T cells and other human leukocytes. 
These mice were injected with streptozotocin to stress the pancreatic β-cells, adoptively transferred with autologous human CD4 T cells transduced to express a TCR reactive with the insulin B:9-23 peptide in the context of HLA-DQ8, and finally immunized with the B:9-23 peptide in Freund's complete adjuvant. While this represents a large number of sophisticated experimental manipulations to model the full pathologic state, it can instead be viewed as an opportunity for learning what is required to recapitulate whole human immune responses. In fact, it is because of what is lacking in hu-mice that we can sometimes learn which essential components need to be added in order to establish physiologic and pathologic immunity. This humanized T1D preclinical model can now be used to elucidate the immunopathogenesis of T1D and furthermore to test novel therapies. For instance, a recent translational study utilized NSG-DQ8Tg HIS mice generated by transplanting HLA-DQ8 HSCs to test the efficacy of T cell vaccines in promoting the expansion of functional Tregs. A two-week infusion with sub-immunogenic amounts of insulin mimetope peptides induced long-lasting and stable Foxp3+ Tregs that were efficient in suppressing the response of insulin-reactive effector T cells. Rheumatoid arthritis in HIS mice Rheumatoid arthritis research groups have also taken advantage of HIS hu-mice to investigate disease pathogenesis, environmental cues and pre-clinical treatments. Besides the study with BLT hu-mice described above in the T1D section, most studies have used either PBMCs or synovial tissue from RA patients as the source of human cells. As an example, Seyler and colleagues transplanted human inflamed synovium from RA patients into NOD-SCID mice to study how the B cell growth factors B Lymphocyte Stimulator (BLyS) and A Proliferation-Inducing Ligand (APRIL) contribute to B cell and T cell functions in this disease. By blocking these cytokines with a decoy receptor (TACI:Fc), they found that BLyS and APRIL regulate both B and T cell function, but that their effects vary depending on the presence or absence of germinal center structures, with a strong immunostimulatory effect on both types of lymphocytes only in germinal center-positive synovium. Similar studies, providing important contributions to disease treatment, have been described in a recent review of humanized mouse models of RA. Environmental factors play a crucial role in the development of autoimmune diseases; chief among these are infections. An investigation of the role of Epstein-Barr virus (EBV) in the development of RA found that NOG HIS hu-mice (hu-HSC) develop erosive arthritis with pannus formation and inflammatory cell infiltration when infected with EBV at a dose not promoting lymphomagenesis. In a different study, injection of Freund's complete adjuvant into the ankle or knee joints was sufficient to trigger acute inflammatory arthritis in NSG HIS hu-mice. The development of clinical and histological arthritis in these animals manifested with human immune cell infiltration, swelling, and bone erosion, symptoms that were decreased upon treatment with a TNF-α inhibitor. Other autoimmune diseases in HIS mice While the development of sophisticated HIS mouse models for T1D, RA and SLE has significantly progressed, those for other autoimmune diseases are lagging behind. 
For instance, Sjogren's syndrome (SjS) and multiple sclerosis (MS) have only been modeled in hu-PBL mice, and alopecia areata has only been modeled by transplanting skin and T cells. In a study using NSG mice injected with PBMCs from either SjS patients or healthy donors, recipients of SjS cells produced higher amounts of inflammatory cytokines relative to mice with control cells. Moreover, these mice displayed inflammation and lymphocyte infiltration specifically in salivary and lacrimal glands, the target organs of SjS. Hu-PBL mice were also used to induce experimental autoimmune encephalomyelitis (EAE), a model of MS. In this study, an NSG hu-PBL mouse model was developed by injecting a mix of healthy PBMCs and donor-matched dendritic cells (DCs) that were pulsed in vitro with myelin antigens. Animals were then boosted a week later by subcutaneous injection of immature DCs and myelin peptides emulsified in Freund's complete adjuvant. These treatments led to the development of subclinical inflammation in the central nervous system, as seen by the presence of infiltrating CD4 and CD8 T cells expressing IFN-γ and GM-CSF. In this model, the adoptive transfer of antigen-primed DCs was necessary for this outcome. However, the mice did not develop disease symptoms characteristic of EAE and MS, such as tail and leg paralysis, indicating an absence of important immunopathogenic mediators. Alopecia areata, an autoimmune condition directed at hair follicles and leading to the partial or even total loss of body hair, has been investigated in a humanized animal model in which hairless scalp biopsies from patients are engrafted in the skin of SCID mice. In the absence of human or mouse lymphocytes, the grafts were able to regrow some of the hair within several weeks. However, this hair growth was hampered if the grafts were injected with patient T cells isolated from affected skin patches and activated first by culture with hair follicle homogenates and peripheral blood APCs. These studies show that effector T cells reactive with hair follicles are present in the patients' skin. Although the SjS, EAE, and alopecia studies represent important contributions to understanding how human immune cells respond to self-antigens and cause tissue damage in vivo, there are significant limitations due to the models used. Hu-PBL mice feature a poorly diverse immune system, mainly composed of clones of mature and xenoreactive T cells that expand to occupy the peripheral niches. These cells are by nature not tolerant to the host antigens, causing the development of GVHD that kills the mice around 4 weeks after transfer. In the alopecia studies in which only hair follicle-reactive T cells were transferred, the model lacks potential regulatory cells that can reduce T cell effector functions. When human CD34+ HSCs are transplanted instead of PBMCs or isolated cell fractions, they lead to a more robust reconstitution of an immune system that is more complex and sophisticated, long-lived, self-replicating, functional, and also generally tolerant to the host. Autoimmune responses as a consequence of cancer immunotherapy In addition to the many autoimmune diseases that are driven by genetics and environment, newly developed immunotherapy protocols that are nowadays used in many cancer patients have the development of autoimmune responses as a side effect. 
These therapies use antibodies that block the inhibitory receptor PD-1 (or its ligand PD-L1) and CTLA-4, increasing (and at times unleashing) the response of T cells to tumors, but also to other tissues. HIS hu-mice are becoming a model of choice for preclinical investigation of tumor-specific immunotherapies, with or without chemotherapies. These studies demonstrate that HIS hu-mice develop an immune system capable of rejecting some but not all tumors, and that this response can be enhanced by the use of immune checkpoint inhibitors and modulated by some chemotherapies. Importantly, these animals appear to replicate many of the immune-related adverse events (irAEs) that have been observed in cancer patients treated with immune checkpoint inhibitor drugs, such as hepatitis, nephritis, dermatitis, adrenalitis, and development of anti-nuclear auto-antibodies. In addition, and similar to patients, these irAEs are more severe in mice treated with anti-CTLA-4 than with anti-PD-1 antibodies. Thus, the ability of immunotherapy to trigger aberrant autoimmune responses could be investigated through the use of HIS hu-mice. These animals could also be useful for testing ways to prevent irAEs following cancer therapy. Conclusions While many HIS hu-mouse models exist that use diverse recipient strains and human cell sources, none displays a human immune system that completely recapitulates all aspects of the "normal" human immune system. Nonetheless, the human immune system that becomes established in HIS hu-mice can often replicate complex responses that mimic those observed in human physiology and pathology. Several studies demonstrate that HIS hu-mice can be used as models to study autoimmune diseases, in their totality or in some of their components. The growing use of HIS mice in immunological research in the last 10 years has demonstrated that, with empirical methods, this model can be successfully adapted to virtually any study. The development of proper HIS hu-mouse models of autoimmunity is important for closing the knowledge gap on the initiation and progression of these diseases, a gap caused by the problems associated with accessing the disease target tissues in patients and establishing the mechanisms of disease. Moreover, human disease heterogeneity caused by genetic polymorphisms and differing environmental exposures renders treatment inadequate for many patients. This issue might find some answers in a personalized medicine approach based on the use of HIS hu-mice. Conflict of interest None. |
Bi-stable RF-MEMS switched capacitor based on metal-to-metal stiction This paper presents a new concept for the realization of a bi-stable RF-MEMS switched capacitor using a resistive contact. The main idea is to maintain the device in a given position using metal-to-metal stiction. The metal-to-metal contact is used only for mechanical purposes and has no electrical function. Pulses of 20 V and 10 s duration are used to switch the device from one stable position to the other. After disconnection, the device maintains its position with extremely little change over long periods. A relative shift of 0.06% in the on-state capacitance was measured over 4 days, with daily checks and storage in a closet in the laboratory environment. Periodic cycling every 5 minutes shows very little drift in both states of the RF-MEMS capacitor. Moreover, the device is fabricated using conventional MEMS processing steps, which permits a capacitive contrast of 3 to be obtained. |
Where Are the Regularly Pulsating Massive Stars? Radiation hydrodynamic simulations of massive stars combined with linear pulsation theory revealed the presence of very regular low-amplitude radial pulsations. Such pulsations were encountered during the core hydrogen burning as well as during the early core helium burning stage of evolution. For selected model stars we present light curves in various passbands which are derived from a posteriori radiative transfer computations on radiation hydrodynamic models. The results are processed with the aim of guiding observations to identify and monitor such regularly pulsating variable massive stars in nature. |