eurekaalert_id | eurekaalert_title | eurekaalert_text | doi | publication_year | publication_source | publication_title | publication_abstract |
---|---|---|---|---|---|---|---|
629165
|
Therapeutically robust correction, in vitro, of the most common cystic fibrosis mutation
|
BIRMINGHAM, Ala. - In experiments with isolated cystic fibrosis lung cells, University of Alabama at Birmingham researchers and colleagues from two other institutions have partially restored the lost function of those cells.
The work is proof-of-concept for using a yeast genetic model to find therapeutic targets, in this case for people with the most common cystic fibrosis mutation, called deltaF508-CFTR. This mutation affects close to 90 percent of patients with cystic fibrosis, and half of those have two copies of the mutation.
"The research is the first preclinical study, to our knowledge, that demonstrates therapeutic levels of deltaF508-CFTR function in primary patient cells," said John L. Hartman IV, M.D., associate professor in the UAB Department of Genetics and the Gregory Fleming James Cystic Fibrosis Research Center.
The work was recently published in PLOS Biology, and it will next be tested in animal models of cystic fibrosis, says Kathryn Oliver, the graduate student who did the laboratory work at UAB.
Cystic fibrosis is a progressive genetic disease marked by persistent lung infections that lead to lung damage and severe difficulty in breathing. The lungs of healthy people produce about two quarts of mucus a day. This mucus is transported up to the throat by the waving motion of hair-like cilia on the cells that line the respiratory tract, a conveyor-belt-like activity that removes bacteria, viruses and small particles that were inhaled into the lungs.
The defective gene in cystic fibrosis results in a thick, viscous mucus that resists transport. The gene product, called CFTR, is a tiny channel that pushes chloride ions across the cell membrane of secretory epithelial cells. Water moves along with those ions to lubricate the extracellular cilia and, consequently, the mucus. In cystic fibrosis patients, the channel is broken, so the cilia and mucus do not get hydrated.
UAB researchers, along with colleagues at McGill University, Montreal, Canada, and Emory University, Atlanta, restored the chloride channel by the additive effects of suppressing a ribosomal protein called Rpl12 and use of the investigational drug VX-809, or Lumacaftor. Together, these two treatments were able to restore CFTR chloride transport to 50 percent of the normal activity in bronchial epithelial cells. This level, if achieved in patients, could be enough to produce healthy lung function.
This work is proof-of-concept for discovering novel therapeutic targets for patients, using genomewide gene interaction analysis with a yeast homolog. The common ancestor of yeast and humans diverged about a billion years ago, but there is still enough functional conservation between some pairs of yeast and human genes that they can be substituted for each other.
"We are curious about the extent to which yeast genetic models can reveal gene interaction networks relevant to human disease," Hartman said. "For cystic fibrosis, this yeast phenomics approach appears to be very useful."
Details
Cystic fibrosis (CF) occurs when a person inherits a dysfunctional copy of the CFTR gene from each parent, leaving the patient with two mutated copies. CFTR stands for the CF transmembrane conductance regulator, an ATP-powered ion channel that spans the cell membrane. The most common CF mutation, deltaF508-CFTR, is a deletion of one phenylalanine amino acid residue from the CFTR protein. This deletion interferes with proper folding of the CFTR protein inside the cell, leading to degradation of the nascent polypeptide at the endoplasmic reticulum.
Yeast has a homolog to CFTR called YOR1, a pump that exports the mitochondrial toxin oligomycin, making the yeast resistant to oligomycin. From an evolutionary perspective, YOR1 is in the same protein family as CFTR and possesses the equivalent mutation to human deltaF508-CFTR, known as deltaF670-YOR1. DeltaF670-YOR1 is misfolded and degraded in yeast cells in a manner similar to deltaF508-CFTR in human lung cells. Thus, deltaF670-YOR1 was introduced systematically into every one of approximately 4,700 different yeast strains, each harboring loss of function in a single yeast gene. The purpose? To find targets for rescuing the misfolding of deltaF670-YOR1.
DeltaF670-YOR1 function was measured by changes in oligomycin resistance, detected with a new technology developed in the Hartman laboratory -- quantitative high-throughput cell array phenotyping, or Q-HTCP. This technology collects tens of thousands of growth curves simultaneously, allowing precise and accurate quantification of the growth response of all 4,700 mutant strains to inhibitory treatments such as oligomycin.
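To make the growth-curve quantification concrete, here is a minimal Python sketch of how a single strain's growth curve can be reduced to a few parameters by fitting a logistic growth model. The simulated data, parameter names (K, r, t_mid), and fitting choices are illustrative assumptions, not the Hartman laboratory's actual Q-HTCP pipeline.

```python
# Illustrative sketch only: fitting a logistic growth model to one well's
# growth curve, the kind of per-well quantification that high-throughput
# cell array phenotyping relies on. The data are simulated; the parameter
# names (K, r, t_mid) are generic, not taken from the paper.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t_mid):
    """Logistic growth: carrying capacity K, rate r, midpoint t_mid."""
    return K / (1.0 + np.exp(-r * (t - t_mid)))

# Simulated optical-density readings for one well (hours vs. OD).
t = np.linspace(0, 48, 49)
rng = np.random.default_rng(0)
od = logistic(t, K=1.2, r=0.25, t_mid=20) + rng.normal(0, 0.02, t.size)

# Fit and report the growth parameters; comparing K and r between treated
# and untreated wells gives a per-strain growth response.
params, _ = curve_fit(logistic, t, od, p0=[1.0, 0.1, 15.0])
K_fit, r_fit, tmid_fit = params
print(f"K={K_fit:.2f}, r={r_fit:.2f}/h, midpoint={tmid_fit:.1f} h")
```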
In these unbiased yeast studies, numerous genes, including RPL12, were identified as potential targets that could restore proper deltaF670-YOR1 folding and prevent endoplasmic reticulum-associated degradation, and therefore, by analogy, also rescue deltaF508-CFTR activity. Based on the yeast results, experiments with human CF cells were conducted using small interfering RNA to suppress levels of the corresponding target proteins, particularly RPL12, a ribosomal stalk protein involved in translation of mRNA. This inhibition of RPL12 increased the plasma membrane density, function and stability of deltaF508-CFTR at the apical surface of primary bronchial epithelial cells isolated from five different patients carrying deltaF508-CFTR.
Suppression of RPL12 slows the rate of translation elongation of nascent deltaF508-CFTR protein at the ribosomal surface, which may reduce the amount of misfolded proteins during synthesis. When RPL12 inhibition was combined with the small-molecule corrector VX-809, CFTR function in mutant cells increased to 50 percent of the wild-type level, which is well above the 30 percent threshold believed to be beneficial for patients with CF. VX-809 acts as a molecular chaperone to promote folding of deltaF508-CFTR, but the drug showed only modest benefits in clinical trials with patients who have two copies of the deltaF508-CFTR mutation.
Taken together, this work provides the first evidence that novel therapeutic strategies for human patients can be identified based on yeast studies, and that targeting a ribosomal protein (Rpl12) together with VX-809 can rescue CFTR function to therapeutically relevant levels.
###
Corresponding authors of the paper, "Ribosomal stalk protein silencing partially corrects the ΔF508-CFTR functional expression defect," are Hartman of UAB and Gergely L. Lukacs, McGill University. Co-authors are Oliver, Jingyu Guo and Mert Icyuz, UAB Department of Genetics and the Gregory Fleming James Cystic Fibrosis Research Center; Guido Veit, Pirjo M. Apaja, Doranda Perdomo, Aurelien Bidaud-Meynard and Sheng-Ting Lin, McGill University; and Eric J. Sorscher, Emory University.
|
10.1371/journal.pbio.1002462
| 2016 |
PLoS Biology
|
Ribosomal Stalk Protein Silencing Partially Corrects the ΔF508-CFTR Functional Expression Defect
|
The most common cystic fibrosis (CF) causing mutation, deletion of phenylalanine 508 (ΔF508 or Phe508del), results in functional expression defect of the CF transmembrane conductance regulator (CFTR) at the apical plasma membrane (PM) of secretory epithelia, which is attributed to the degradation of the misfolded channel at the endoplasmic reticulum (ER). Deletion of phenylalanine 670 (ΔF670) in the yeast oligomycin resistance 1 gene (YOR1, an ABC transporter) of Saccharomyces cerevisiae phenocopies the ΔF508-CFTR folding and trafficking defects. Genome-wide phenotypic (phenomic) analysis of the Yor1-ΔF670 biogenesis identified several modifier genes of mRNA processing and translation, which conferred oligomycin resistance to yeast. Silencing of orthologues of these candidate genes enhanced the ΔF508-CFTR functional expression at the apical PM in human CF bronchial epithelia. Although knockdown of RPL12, a component of the ribosomal stalk, attenuated the translational elongation rate, it increased the folding efficiency as well as the conformational stability of the ΔF508-CFTR, manifesting in 3-fold augmented PM density and function of the mutant. Combination of RPL12 knockdown with the corrector drug, VX-809 (lumacaftor) restored the mutant function to ~50% of the wild-type channel in primary CFTRΔF508/ΔF508 human bronchial epithelia. These results and the observation that silencing of other ribosomal stalk proteins partially rescue the loss-of-function phenotype of ΔF508-CFTR suggest that the ribosomal stalk modulates the folding efficiency of the mutant and is a potential therapeutic target for correction of the ΔF508-CFTR folding defect.
|
704100
|
PERK protein opens line of communication between inside and outside of the cell
|
PERK is known to detect protein folding errors in the cell. Researchers at the Laboratory of Cell Death Research & Therapy at KU Leuven (University of Leuven, Belgium) have now revealed a hidden perk: the protein also coordinates the communication between the inside and the outside of the cell. These findings open up new avenues for further research into treatments for cancer, Alzheimer's, and diabetes.
Proteins such as insulin are properly formed in the endoplasmic reticulum (ER), one of the biggest membrane structures in the cell. The ER works like an assembly line and folds the proteins into a three-dimensional shape that is essential for them to function. When there is a problem in the 'protein folding assembly line', the accumulation of misfolded proteins can lead to diseases such as Alzheimer's, cancer, and diabetes.
An essential component of this protein folding factory is PERK. "This protein is known to play a crucial role in maintaining ER functions and restoring them if necessary," explains Patrizia Agostinis, head of the KU Leuven Laboratory of Cell Death Research & Therapy. "When PERK detects protein folding errors in the ER it prompts the nucleus of the cell to take action."
Patrizia Agostinis, Alex van Vliet, and other team members have now discovered an additional function of PERK. Agostinis: "We found that PERK also coordinates the communication between the protein folding factory (the ER) and the skin of the cell (the plasma membrane). When the protein folding factory detects low calcium levels, the plasma membrane needs to let calcium flow back in. After all, calcium is crucial for the proper functioning of the protein folding factory - the ER, where the calcium is stored - and for the overall health of the cell. And this is where PERK comes in: the protein establishes contact between the two cell components so that they can work together to restore the calcium level."
"This entire process, which is regulated by PERK, takes place in a matter of minutes or even seconds," Alex van Vliet adds. "That's one of the reasons why it went unnoticed until now. We used a new method to reveal the underlying mechanism, and were surprised to find that PERK can control the movement of the ER towards the plasma membrane by modifying the skeleton of the cell."
The newly discovered role of PERK opens up promising therapeutic avenues. "But we must not get ahead of ourselves," Agostinis emphasizes. "This is fundamental research. Much more work needs to be done before we can even start thinking of new treatments that target this new function of PERK."
###
|
10.1016/j.molcel.2017.01.020
| 2017 |
Molecular Cell
|
The ER Stress Sensor PERK Coordinates ER-Plasma Membrane Contact Site Formation through Interaction with Filamin-A and F-Actin Remodeling
|
Loss of ER Ca2+ homeostasis triggers endoplasmic reticulum (ER) stress and drives ER-PM contact sites formation in order to refill ER-luminal Ca2+. Recent studies suggest that the ER stress sensor and mediator of the unfolded protein response (UPR) PERK regulates intracellular Ca2+ fluxes, but the mechanisms remain elusive. Here, using proximity-dependent biotin identification (BioID), we identified the actin-binding protein Filamin A (FLNA) as a key PERK interactor. Cells lacking PERK accumulate F-actin at the cell edges and display reduced ER-PM contacts. Following ER-Ca2+ store depletion, the PERK-FLNA interaction drives the expansion of ER-PM juxtapositions by regulating F-actin-assisted relocation of the ER-associated tethering proteins Stromal Interaction Molecule 1 (STIM1) and Extended Synaptotagmin-1 (E-Syt1) to the PM. Cytosolic Ca2+ elevation elicits rapid and UPR-independent PERK dimerization, which enforces PERK-FLNA-mediated ER-PM juxtapositions. Collectively, our data unravel an unprecedented role of PERK in the regulation of ER-PM appositions through the modulation of the actin cytoskeleton.
|
823549
|
Disrupting parasites' family planning could aid malaria fight
|
Malaria parasites know good times from bad and plan their offspring accordingly, scientists have found, in a development that could inform new treatments.
Scientists have found that the reproduction strategy used by the disease-causing parasites is more sophisticated than previously thought, and is similar to that seen in more complex organisms.
The findings could help researchers better predict how the parasites respond to adverse conditions, such as treatment with anti-malarial drugs.
Scientists at the Universities of Edinburgh and Toronto used a mathematical model, combined with experiments, to examine when malaria parasites opt to put greater efforts into reproduction.
To survive in a host such as a person or animal, the parasites replicate asexually in the blood, causing disease. They must produce specialised sexual forms in order to reproduce and spread the infection to new hosts.
The team discovered that parasites alter how much effort they invest in survival versus reproduction, according to how well they can grow inside a host.
When conditions are good, and parasites are growing well, they can afford to reproduce and spread to new hosts, researchers found. In poor conditions however, parasites delay reproduction and divert efforts to replicating asexually, prioritising survival in the host. This can make infections harder to clear, the team says.
If conditions are catastrophically bad and the parasite population plummets - following treatment with a strong dose of anti-malarial drugs, for instance - they invest as much as possible in reproduction in a last-ditch effort to spread to new hosts.
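As a rough aid to intuition, the toy Python sketch below encodes the three qualitative cases described above (good conditions, poor conditions, catastrophic losses) as a function mapping the fraction of asexual parasites lost to a conversion rate. The thresholds and values are invented purely for illustration; they are not the fitted model from the study.

```python
# Toy sketch of the qualitative allocation pattern described above: conversion
# to sexual (transmissible) stages as a function of how badly the in-host
# asexual population has been perturbed. All numbers are invented.
def conversion_rate(fraction_of_asexuals_lost):
    """Return an illustrative conversion rate for a given fraction of asexuals lost."""
    if fraction_of_asexuals_lost < 0.1:
        return 0.05   # good conditions: steady investment in transmission
    elif fraction_of_asexuals_lost < 0.8:
        return 0.01   # poor conditions: reproductive restraint, prioritize survival
    else:
        return 0.20   # catastrophic losses: terminal investment in transmission

for loss in (0.0, 0.5, 0.95):
    print(f"loss={loss:.0%} -> conversion rate {conversion_rate(loss):.0%}")
```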
Developing treatments that prompt parasites to invest more in reproduction and less in the disease-causing asexual stages, while also blocking their spread to other hosts, could help to combat the disease, the team says.
The study is published in the journal PLOS Pathogens. It was supported by NERC, BBSRC, the Royal Society, the FNR of Luxembourg, Wellcome, the Human Frontiers Science Program, and the Natural Sciences and Engineering Research Council of Canada.
Dr Petra Schneider, of the University of Edinburgh's School of Biological Sciences, who led the study, said: "It is really exciting to discover that these small blood parasites follow the same reproductive strategies as more complex animals, like insects, birds and mammals. Being able to predict how parasites balance reproduction and survival could improve the outcomes of treatment."
###
|
10.1371/journal.ppat.1007371
| 2018 |
PLoS Pathogens
|
Adaptive plasticity in the gametocyte conversion rate of malaria parasites
|
Sexually reproducing parasites, such as malaria parasites, experience a trade-off between the allocation of resources to asexual replication and the production of sexual forms. Allocation by malaria parasites to sexual forms (the conversion rate) is variable but the evolutionary drivers of this plasticity are poorly understood. We use evolutionary theory for life histories to combine a mathematical model and experiments to reveal that parasites adjust conversion rate according to the dynamics of asexual densities in the blood of the host. Our model predicts the direction of change in conversion rates that returns the greatest fitness after perturbation of asexual densities by different doses of antimalarial drugs. The loss of a high proportion of asexuals is predicted to elicit increased conversion (terminal investment), while smaller losses are managed by reducing conversion (reproductive restraint) to facilitate within-host survival and future transmission. This non-linear pattern of allocation is consistent with adaptive reproductive strategies observed in multicellular organisms. We then empirically estimate conversion rates of the rodent malaria parasite Plasmodium chabaudi in response to the killing of asexual stages by different doses of antimalarial drugs and forecast the short-term fitness consequences of these responses. Our data reveal the predicted non-linear pattern, and this is further supported by analyses of previous experiments that perturb asexual stage densities using drugs or within-host competition, across multiple parasite genotypes. Whilst conversion rates, across all datasets, are most strongly influenced by changes in asexual density, parasites also modulate conversion according to the availability of red blood cell resources. In summary, increasing conversion maximises short-term transmission and reducing conversion facilitates in-host survival and thus, future transmission. Understanding patterns of parasite allocation to reproduction matters because within-host replication is responsible for disease symptoms and between-host transmission determines disease spread.
|
651870
|
Strategic classroom intervention can make big difference for autism students
|
Special training for teachers may mean big results for students with autism spectrum disorder, according to Florida State University and Emory University researchers.
In a new study, children whose teachers received specialized training "were initiating more, participating more, having back-and-forth conversations more, and responding to their teachers and peers more frequently," said researcher Lindee Morgan.
Morgan and FSU Autism Institute Director Amy Wetherby were co-principal investigators of a three-year, 60-school study that measured the effectiveness of a curriculum, called SCERTS, designed specifically for teachers of students with autism spectrum disorder (ASD).
SCERTS (pronounced "serts") was developed in 2006. It targets the most significant challenges presented by ASD, spelled out in its acronym: "SC" for social communication, "ER" for emotional regulation, and "TS" for transactional support (developing a partnership of people at school and at home who can respond to the ASD child's needs and interests and enhance learning).
The team reported its results this month in the Journal of Consulting and Clinical Psychology. Morgan, the lead author, worked at the Autism Institute when the study was conducted and now is at Emory's School of Medicine. Co-author Wetherby was one of the developers of the SCERTS curriculum.
"There is now a solid body of research on treatments for preschool children with ASD," Wetherby said. "However, this study is one of only a few demonstrating the efficacy of a treatment for school-age children. And the most impressive part is it was conducted in public school classrooms with a good mix of general and special education teachers."
ASD refers to a group of complex neurodevelopment disorders characterized by restricted and repetitive patterns of behavior and difficulties with social communication and interaction.
The research team enlisted the participation of 60 schools in 10 districts: one in California, two in Georgia and seven in Florida (Gadsden, Jackson, Leon, Okaloosa, Taylor, Volusia and Wakulla counties). They randomly matched pairs of schools for the study.
In each pair, one school was designated ATM, for "autism training modules." Its students got regular classroom teaching supplemented only by a website where modules related to autism were available to teachers. The other school was designated CSI, for "classroom SCERTS intervention." Its participating teachers received three days of SCERTS training--plus regular coaching, access to extra reference materials and videos of themselves in the classroom.
Morgan said the team was delighted with the results showing how CSI schools outperformed ATM schools. One of the study's strongest features, she said, was that teachers could watch the videos and see for themselves how the classroom had changed.
"Our primary outcome measure was a direct observation tool, which is basically unheard of in educational intervention research," she said. "Video was a very tedious process. However, it's such a great measure to see what both teachers and students are using in the classroom."
In addition, she said, a parent report and several teacher measures also showed that the students in the CSI group outperformed the ATM group.
"There is a pressing need to change the landscape of education for school-age students with ASD," the paper concluded. "This work has the potential to contribute to this change by providing a feasible, comprehensive model of intervention that can be implemented in a variety of educational placements and settings."
Morgan said CSI could benefit teachers and all students, not just those on the autism spectrum.
"General education teachers in most states aren't required to have autism training," she said. "And yet they find themselves with kids with autism because that's the law. These days, more than 70 percent of kids on the spectrum have no intellectual disabilities. Therefore, schools are moving more toward modifying and adapting the mainstream classroom in ways that are not only helpful for kids with autism but also good for all the students. I remember some of our kindergarten teachers saying afterward: 'Putting this in place helped my whole class.'"
|
10.1037/ccp0000314
| 2018 |
Journal of Consulting and Clinical Psychology
|
Cluster randomized trial of the classroom SCERTS intervention for elementary students with autism spectrum disorder.
|
This cluster randomized trial (CRT) evaluated the efficacy of the Classroom Social, Communication, Emotional Regulation, and Transactional Support (SCERTS) Intervention (CSI) compared with usual school-based education with autism training modules (ATM). Sixty schools with 197 students with autism spectrum disorder (ASD) in 129 classrooms were randomly assigned to CSI or ATM. Mean student age was 6.79 years (SD 1.05) and 81.2% were male. CSI teachers were trained on the model and provided coaching throughout the school year to assist with implementation. A CRT, with students nested within general and special education classrooms nested within schools, was used to evaluate student outcomes. The CSI group showed significantly better outcomes than the ATM group on observed measures of classroom active engagement with respect to social interaction. The CSI group also had significantly better outcomes on measures of adaptive communication, social skills, and executive functioning, with Cohen's d effect sizes ranging from 0.31 to 0.45. These findings support the preliminary efficacy of CSI, a classroom-based, teacher-implemented intervention for improving active engagement, adaptive communication, social skills, executive functioning, and problem behavior within a heterogeneous sample of students with ASD. This makes a significant contribution to the literature by demonstrating efficacy of a classroom-based, teacher-implemented intervention with a heterogeneous group of students with ASD using both observed and reported measures.
|
715636
|
Spontaneous retinal waves simulate optical flow before neonatal mice can see
|
Like dreaming of walking through a world they've not yet experienced, the retinas of neonatal mice practice for what mature eyes must later process by generating spontaneous patterns of activity that mimic the perception of directional movement through space, according to a new study. Essential functions in the mammalian visual system, including the ability to locate objects and detect motion, are present even at the first onset of vision. Optic flow, the perceived relative motion of objects and surfaces that seemingly stream by a field of vision during movement, is one of these functions. However, how the visual system organizes its functional characteristics before visual sensory experience is even possible remains unclear. And, while previous studies have revealed spontaneous retinal activity prior to functional vision, the role of this spontaneous activity in visual system development is unknown. Xinxin Ge and colleagues examined the spontaneous activity of ganglion cells in mice at multiple ages throughout development in vivo and discovered an intrinsic mechanism in the developing retina that prepares the downstream visual system for motion detection before the newborn mice can see. According to Ge et al., spontaneous waves of retinal activity during this transient window flow in the same pattern as would be produced if the mouse was physically moving through the environment. This patterned, spontaneous activity effectively trains the visual system and the associated brain circuits to process directional information and to interpret movement through space at eye opening.
|
10.1126/science.abd0830
| 2021 |
Science
|
Retinal waves prime visual motion detection by simulating future optic flow
|
The ability to perceive and respond to environmental stimuli emerges in the absence of sensory experience. Spontaneous retinal activity prior to eye opening guides the refinement of retinotopy and eye-specific segregation in mammals, but its role in the development of higher-order visual response properties remains unclear. Here, we describe a transient window in neonatal mouse development during which the spatial propagation of spontaneous retinal waves resembles the optic flow pattern generated by forward self-motion. We show that wave directionality requires the same circuit components that form the adult direction-selective retinal circuit and that chronic disruption of wave directionality alters the development of direction-selective responses of superior colliculus neurons. These data demonstrate how the developing visual system patterns spontaneous activity to simulate ethologically relevant features of the external world and thereby instruct self-organization.
|
621844
|
Kids twice as likely to eat healthy after watching cooking shows with healthy food
|
Philadelphia, January 8, 2020 - Television programs featuring healthy foods can be a key ingredient in leading children to make healthier food choices now and into adulthood.
A new study in the Journal of Nutrition Education and Behavior, published by Elsevier, found kids who watched a child-oriented cooking show featuring healthy food were 2.7 times more likely to make a healthy food choice than those who watched a different episode of the same show featuring unhealthy food.
Researchers asked 125 10- to 12-year-olds, with parental consent, at five schools in the Netherlands to watch 10 minutes of a Dutch public television cooking program designed for children, and then offered them a snack as a reward for participating. Children who watched the healthy program were far more likely to choose one of the healthy snack options - an apple or a few pieces of cucumber - than one of the unhealthy options - a handful of chips or a handful of salted mini-pretzels.
"The findings from this study indicate cooking programs can be a promising tool for promoting positive changes in children's food-related preferences, attitudes, and behaviors," said lead author Frans Folkvord, PhD, of Tilburg University,Tilburg, Netherlands.
This study was conducted at the children's schools, which could represent a promising alternative for children learning healthy eating behaviors. Prior research has found youth are more likely to eat nutrient-rich foods including fruits and vegetables if they were involved in preparing the dish, but modern reliance on ready-prepared foods and a lack of modeling by parents in preparing fresh foods have led to a drop in cooking skills among kids.
"Providing nutritional education in school environments instead may have an important positive influence on the knowledge, attitudes, skills, and behaviors of children," Dr. Folkvord said.
This study indicates that the visual prominence of healthier options, in both food choice and portion size, on TV cooking programs leads young viewers to crave those healthier choices and then act on those cravings.
The effect that exposure to healthier options has on children is strongly influenced by personality traits. For example, children who are reluctant to try new foods are less likely to develop a desire for healthier choices after watching a TV program featuring healthy foods than children who enjoy trying new foods. As they grow older, though, children start to feel more responsible for their eating habits and can fall back on information they learned earlier. Researchers believe this may indicate that watching programs with healthier options can still have a positive impact on children's behavior, even if that impact is delayed until they are older.
"Schools represent the most effective and efficient way to reach a large section of an important target population, which includes children as well as school staff and the wider community," Dr. Folkvord commented. "Positive peer and teacher modeling can encourage students to try new foods for which they exhibited distaste previously."
Poor dietary habits during childhood and adolescence have multiple negative effects on several health and wellness indicators, including achievement and maintenance of healthy weights, growth and development patterns, and dental health.
"The likelihood of consuming fruits and vegetables among youth and adults is strongly related to knowing how to prepare most fruits and vegetables. Increased cooking skills among children can positively influence their consumption of fruit and vegetables in a manner that will persist into adulthood," Dr. Folkvord added.
|
10.1016/j.jneb.2019.09.016
| 2019 |
Journal of Nutrition Education and Behavior
|
Watching TV Cooking Programs: Effects on Actual Food Intake Among Children
|
To test the effects of a cooking program on healthy food decisions. An experimental between-subjects design with 3 conditions: healthy, unhealthy, and control. Class settings in 5 different schools. One hundred twenty-five children between 10 and 12 years of age. Video clips of a cooking program containing healthy foods versus a cooking program containing unhealthy foods versus a control program. Healthy versus unhealthy food choice. Logistic regression analysis, with the control condition as a reference in the first contrast test and the unhealthy food condition as a reference in the second contrast, to examine effects on food choice between conditions. Children who watched the cooking program with healthy foods had a higher probability of selecting healthy food than children who watched the cooking program with unhealthy foods (P = .027) or the control condition (P = .039). These findings indicated a priming effect of the foods the children were exposed to, showing that nutrition education guided by reactivity theory can be promising. Cooking programs may affect the food choices of children and could be an effective method, in combination with other methods, to improve their dietary intake.
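To illustrate the kind of analysis described above, here is a minimal sketch of a logistic regression of food choice on condition, fit twice with different reference categories to mirror the two contrast tests. The data are simulated and the column names are hypothetical; this uses statsmodels and is not the authors' code.

```python
# Illustrative sketch only: logistic regression of healthy food choice on
# experimental condition, re-fit with two different reference categories as
# described in the abstract. Data are simulated; column names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_per_group = 42
conditions = ["control", "unhealthy", "healthy"]
df = pd.DataFrame({
    "condition": np.repeat(conditions, n_per_group),
    # Simulated probability of a healthy snack choice per condition.
    "healthy_choice": np.concatenate([
        rng.binomial(1, p, n_per_group) for p in (0.35, 0.30, 0.60)
    ]),
})

# Contrast 1: control condition as the reference category.
m1 = smf.logit(
    "healthy_choice ~ C(condition, Treatment(reference='control'))", data=df
).fit(disp=False)

# Contrast 2: unhealthy condition as the reference category.
m2 = smf.logit(
    "healthy_choice ~ C(condition, Treatment(reference='unhealthy'))", data=df
).fit(disp=False)

print(m1.summary2().tables[1])  # healthy vs. control
print(m2.summary2().tables[1])  # healthy vs. unhealthy
```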
|
848776
|
Visualizing danger from songbird warning calls
|
Watch out! Snake!
The moment you hear this, you cannot help but imagine a slithering creature, as your body prepares for a possible attack. In human conversation, hearing a particular word (e.g., "snake") can cause a listener to retrieve a specific mental image, even if there is nothing in the field of vision.
This cognition was once thought to be unique to humans. Now it turns out that songbirds have a similar ability.
A new study in PNAS reveals that a small songbird, the Japanese tit (Parus minor), can retrieve a visual image of a predator from specific alarm calls, providing the first evidence that a nonhuman animal can 'see' the object to which certain vocalizations refer.
"The Japanese tit produces particular alarm calls when, and only when, encountering a predatory snake," explains Toshitaka Suzuki at the Center for Ecological Research, Kyoto University, and author of this study.
Using audio playback of calls and a short stick cut from a tree branch, the researcher discovered that simply hearing snake-specific calls causes the birds to perceive an otherwise inanimate object as a real snake.
In the experiment, snake-specific alarm calls were played while the birds approached a stick being moved in a serpentine fashion--up a tree trunk or along the ground. The birds notably did not respond to the same stick when hearing other calls, or if the stick's movement was not snake-like, indicating that, before seeing a real snake, they retrieve a snake image from specific alarm calls, causing them to become more sensitive to objects resembling snakes.
"With a snake's image in mind, tits can efficiently search out a snake regardless of its spatial position," says Suzuki.
Upon encountering a real snake, the birds typically make a close approach, hovering over it and spreading their wings and tail, as if to deter the snake from attacking. The birds in this study likewise made an approach, but did not exhibit such distraction behavior.
"They may have realized that the stick was not a real snake once they got close enough."
Suzuki was inspired by his previous work showing that the Japanese tit alters its response to snake-specific alarm calls depending on circumstances. If such alarms are heard while in a nest cavity, the birds immediately flee as if to evade an attack. In contrast, when outside the nest, they look at the ground near the nesting tree as if searching for a snake.
"These birds do not respond to the calls in a uniform way, but appear to retrieve a snake image and then decide how to deal with the predator according to the circumstance," he explains.
Over the last three decades, field biologists have revealed that many animals, such as monkeys and meerkats, produce specific calls for specific types of food or predators.
"Retrieval of mental images may also be involved in other animal communication systems," adds Suzuki. "Uncovering cognitive mechanisms for communication in wild animals can give insights into the origins and evolution of human speech."
|
10.1073/pnas.1718884115
| 2018 |
Proceedings of the National Academy of Sciences
|
Alarm calls evoke a visual search image of a predator in birds
|
Significance: In human speech, words often cause listeners to retrieve visual mental images of target objects. In nonhuman animal communication systems, many key, language-like features have been demonstrated, but there is still no evidence that animal signals evoke mental images of objects in receivers. Japanese tits produce specific alarm calls when encountering a predatory snake. Here, I show that simply hearing these calls causes tits to become more visually perceptive to objects resembling snakes (moving sticks). This result indicates that before having detected a real snake, tits retrieve its visual image from snake-specific alarm calls and use this to search out snakes. This study provides evidence for a call-evoked visual search image in a nonhuman animal.
|
725001
|
Combination pack battles cancer
|
For efficient cancer therapy with few side effects, the active drug should selectively attain high concentration in the tumor. In the journal Angewandte Chemie, scientists have introduced a new approach, in which two synergistic drug components are combined into a dimer. This dimer can be incorporated into polymeric nanotransporters at exceptionally high concentration. The components are activated when the dimer is split within the tumor. In addition, they enable use of two different imaging techniques.
Polymeric micelles are the most important nanotransporters used in treating tumors. Despite improved transport systems, many challenges must still be overcome: insufficient loading, premature release of the drug, no ability to monitor distribution of the drug, and limited accumulation of the drug within the tumor tissue. Longjiang Zhang, Guizhi Zhu, Xiaoyuan Chen, and their team have approached these problems from the other direction. Instead of improving the transporter, they improved the cargo.
The scientists from the National Institutes of Health in Bethesda, USA, and Nanjing University, China, have used a simple but effective trick: they connected two drugs, camptothecin and a special photosensitizer, to make a dimer. Micelles can very efficiently be loaded with an unusually large amount of the dimeric freight (59%). The dimers are less hydrophilic than their individual components, allowing them to be more easily introduced into the hydrophobic interior of the micelles. For the same reason, the dimers do not exit the micelles as they travel through the blood vessels. This reduces undesirable side effects.
Both of the components of the initially inactive dimer are connected by a disulfide bridge that can only be broken by a glutathione-dependent reaction cascade. Glutathione is a small protein that is present in high concentration in many tumors. Both of the drugs are only activated after the dimer is split within the tumor cells.
When the area of the tumor is irradiated with laser light, the photosensitizer converts normal oxygen into highly reactive singlet oxygen, which damages the cell and causes an oxygen deficiency. Camptothecin inhibits hypoxia-inducible factor 1α, which helps cells to withstand oxygen deficiency. This boosts the cytotoxic effect of the photosensitizer. Another effect of camptothecin is that it damages the tumor cells' DNA.
In addition, the photosensitizer is a fluorescent dye and it can bind the radioisotope copper-64, which enables visualization with both fluorescence imaging and positron emission tomography (PET). Quantitative PET allows for precise monitoring of the dimer, as well as confirmation of its pharmacokinetics and biodistribution in vivo.
Experiments with cell cultures and tumor-bearing mice demonstrated that this new method markedly improved transport and accumulation of the drug in tumors, produced fewer side effects, and shrank tumors to a significantly greater degree than administration of the unbound individual components.
###
About the Author
Dr. Xiaoyuan (Shawn) Chen is a Senior Investigator and Chief of the Laboratory of Molecular Imaging and Nanomedicine (LOMIN), National Institute of Biomedical Imaging and Bioengineering (NIBIB), National Institutes of Health (NIH). His lab focuses on developing theranostics to diagnose and treat cancer and other diseases.
https://www.nibib.nih.gov/about-nibib/staff/xiaoyuan-chen
|
10.1002/anie.201801984
| 2018 |
Angewandte Chemie International Edition
|
Polymeric Nanoparticles with a Glutathione‐Sensitive Heterodimeric Multifunctional Prodrug for In Vivo Drug Monitoring and Synergistic Cancer Therapy
|
Polymeric micelle‐based drug delivery systems have dramatically improved the delivery of small molecular drugs, yet multiple challenges remain to be overcome. A polymeric nanomedicine has now been engineered that possesses an ultrahigh loading (59%) of a glutathione (GSH)‐sensitive heterodimeric multifunctional prodrug (HDMP) to effectively co‐deliver two synergistic drugs to tumors. An HDMP comprising the chemotherapeutic camptothecin (CPT) and the photosensitizer 2‐(1‐hexyloxyethyl)‐2‐devinyl pyropheophorbide‐α (HPPH) was conjugated via a GSH‐cleavable linkage. The intrinsic fluorogenicity and label‐free radio‐chelation (64Cu) of HPPH enabled direct drug monitoring by fluorescence imaging and positron emission tomography (PET). Quantitative PET imaging showed that the HDMP significantly improves drug delivery to tumors. The high synergistic therapeutic efficacy of HDMP‐loaded NPs highlights the rational design of HDMP and presents exciting opportunities for polymer NP‐based drug delivery.
|
594662
|
New work showcases the chemistry of an upcoming fuel cell electrolyte
|
Tsukuba, Japan - As far back as the 1930s, inventors have commercialized fuel cells as a versatile source of power. Now, researchers from Japan have highlighted the impressive chemistry of an essential component of an upcoming fuel cell technology.
In a study recently published in The Journal of Physical Chemistry Letters, researchers from the University of Tsukuba have revealed successive proton transport--energy transfer--in an advanced carbon-based crystal for future fuel cells, and the chemistry that underpins this phenomenon.
Such crystals are exciting as solid electrolytes--energy transfer media--in upcoming fuel cell technologies. Solid electrolytes have advantages, such as high power efficiency and long-term stability, that liquid electrolytes can lack. Solid electrolytes based on imidazole are a common focus of study. Researchers hypothesize that crystals of imidazolium hydrogen succinate can exhibit successive proton transport, also known as proton jumping. Until now, this had not been rigorously confirmed, something the researchers at the University of Tsukuba aimed to address.
"A wide range of lab work and computer simulations are consistent with unidirectional proton transport in crystals of imidazolium hydrogen succinate," says lead and senior author of the study, Professor Yuta Hori. "Because this hypothesis requires further testing, we computed the molecular energy versus molecular geometry of our crystals, and compared our results with experimental data."
To do this, the researchers studied known crystal structures to investigate a chemical structure known as hydrogen bonds. Hydrogen dynamics on these bonds facilitate proton transport within the crystals and can be characterized experimentally by infrared spectroscopy.
"The spectroscopy results were clear," explains Hori. "We found that at 100°C, compared with 30°C, there was a shift to higher energy in a peak that pertains to proton transport."
Furthermore, the researchers' calculated peaks--those corresponding to chemical units that strongly contribute to hydrogen bonding--were consistent with the experimental data.
"We used these results to construct a model that traced how a proton is transferred from one imidazole unit to another," says Hori. "Our calculated potential energy surface provided geometric and energetic data that are consistent with proton jumping."
Fuel cells are used today to power a wide range of civil infrastructure and technologies, and typically produce few emissions. Improving the utility of fuel cells in more diverse applications, achieved in part by understanding how they work, will help minimize wasted power in the coming years.
|
10.1021/acs.jpclett.1c01280
| 2021 |
The Journal of Physical Chemistry Letters
|
Proton Conduction Mechanism for Anhydrous Imidazolium Hydrogen Succinate Based on Local Structures and Molecular Dynamics
|
Anhydrous organic crystalline materials incorporating imidazolium hydrogen succinate (Im-Suc), which exhibit high proton conduction even at temperatures above 100 °C, are attractive for elucidating proton conduction mechanisms toward the development of solid electrolytes for fuel cells. Herein, quantum chemical calculations were used to investigate the proton conduction mechanism in terms of hydrogen-bonding (H-bonding) changes and restricted molecular rotation in Im-Suc. The local H-bond structures for proton conduction were characterized by vibrational frequency analysis and compared with corresponding experimental data. The calculated potential energy surface involving proton transfer (PT) and imidazole (Im) rotational motion showed that PT between Im and succinic acid was a rate-limiting step for proton transport in Im-Suc and that proton conduction proceeded via the successive coupling of PT and Im rotational motion based on a Grotthuss-type mechanism. These findings provide molecular-level insights into proton conduction mechanisms for Im-based (or -incorporated) H-bonding organic proton conductors.
|
931230
|
Discovery of universal adversarial attacks for quantum classifiers
|
10.1093/nsr/nwab130
| 2021 |
National Science Review
|
Universal Adversarial Examples and Perturbations for Quantum Classifiers
|
Quantum machine learning explores the interplay between machine learning and quantum physics, which may lead to unprecedented perspectives for both fields. In fact, recent works have shown strong evidence that quantum computers could outperform classical computers in solving certain notable machine learning tasks. Yet, quantum learning systems may also suffer from the vulnerability problem: adding a tiny, carefully crafted perturbation to the legitimate input data would cause the systems to make incorrect predictions at a notably high confidence level. In this paper, we study the universality of adversarial examples and perturbations for quantum classifiers. Through concrete examples involving classifications of real-life images and quantum phases of matter, we show that there exist universal adversarial examples that can fool a set of different quantum classifiers. We prove that for a set of $k$ classifiers with each receiving input data of $n$ qubits, an $O(\frac{\ln k}{2^n})$ increase of the perturbation strength is enough to ensure a moderate universal adversarial risk. In addition, for a given quantum classifier we show that there exist universal adversarial perturbations, which can be added to different legitimate samples and turn them into adversarial examples for the classifier. Our results reveal the universality perspective of adversarial attacks for quantum machine learning systems, which would be crucial for practical applications of both near-term and future quantum technologies in solving machine learning problems.
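For intuition only, here is a minimal classical sketch of the universal-perturbation idea: one perturbation, found by gradient ascent, that degrades several classifiers simultaneously. The models are toy logistic-regression classifiers on synthetic data; nothing here reproduces the quantum classifiers, encodings, or bounds from the paper.

```python
# Illustrative sketch only: a single (targeted) "universal" perturbation that
# pushes the predictions of several classifiers toward one class for every
# input. The models are toy logistic-regression classifiers standing in for
# the quantum classifiers analyzed in the paper.
import numpy as np

rng = np.random.default_rng(0)
n, d, k, eps = 400, 20, 3, 2.0

# Synthetic data and k slightly different linear classifiers that all fit it.
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)
classifiers = [w_true + 0.3 * rng.normal(size=d) for _ in range(k)]

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def accuracy(w, delta):
    return float(np.mean((sigmoid((X + delta) @ w) > 0.5) == y))

# Gradient ascent on the mean log-probability of class 1 across all k
# classifiers, projected onto an L2 ball of radius eps (perturbation budget).
delta = np.zeros(d)
for _ in range(200):
    grad = sum(np.mean(1.0 - sigmoid((X + delta) @ w)) * w for w in classifiers) / k
    delta += 0.5 * grad
    if np.linalg.norm(delta) > eps:
        delta *= eps / np.linalg.norm(delta)

for i, w in enumerate(classifiers):
    print(f"classifier {i}: clean acc {accuracy(w, 0.0):.2f}, "
          f"perturbed acc {accuracy(w, delta):.2f}")
```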
|
|
808180
|
Children with Alagille Syndrome have malformed bile ducts
|
Serious liver and heart problems can affect children with Alagille Syndrome early in life. While there is as yet no cure, researchers at Karolinska Institutet in Sweden have discovered that the liver disease part of the syndrome is caused by specific malformations of the bile ducts. The results, which are published in the journal Gastroenterology, were discovered with the aid of a new mouse model that can now be used to develop and test new therapies.
About 2 in 100,000 children are born with the rare genetic disease known as Alagille Syndrome. Some of them become very ill with chronic liver and heart problems, sometimes so serious that they require a transplant. The liver problems can also give rise to severe itching. Other possible symptoms of the disease, which is usually caused by different mutations of the JAGGED1 gene, are deformities of the eyes or bones, and sometimes growth disorders. The children can also develop problems with other organs, such as the kidneys. Little is currently understood about how the disease can develop and each symptom is treated separately.
Using mice with a mutation in JAGGED1 and similar liver and heart problems, the researchers have discovered that the mutation not only affects the development of certain cell types, but also controls the actual formation of the liver's bile ducts.
By substituting a specific amino acid in a so-called "Notch ligand" encoded by JAGGED1, they found that this single point mutation can interfere with the important Notch signaling system and disrupt communication between the Notch ligand and the Notch receptors. The interaction with the Notch 1 receptor failed, while communication with the Notch 2 receptor was possible.
"The discovery is important and opens up possibilities for new, more specific treatments," says Emma Andersson, assistant professor at Karolinska Institutet's Department of Biosciences and Nutrition. "We hope to be able to use our mouse model to understand the disease better, predict which children will need a transplant and ultimately find a cure."
The researchers also obtained liver biopsies from patients, which they studied using RNA sequencing.
"The liver samples were the most important piece of the puzzle for our study," says Dr Andersson. "Thanks to them, we were able to verify that the results from our mouse model and cell experiments were actually relevant to humans and patients. I'm extremely grateful for these donations. By comparing RNA sequencing with the Human Protein Atlas, we've also been able to identify new markers for the bile ducts that confirm the malformations that develop in patients with Alagille Syndrome."
|
10.1053/j.gastro.2017.11.002
| 2017 |
Gastroenterology
|
Mouse Model of Alagille Syndrome and Mechanisms of Jagged1 Missense Mutations
|
Alagille syndrome is a genetic disorder characterized by cholestasis, ocular abnormalities, characteristic facial features, heart defects, and vertebral malformations. Most cases are associated with mutations in JAGGED1 (JAG1), which encodes a Notch ligand, although it is not clear how these contribute to disease development. We aimed to develop a mouse model of Alagille syndrome to elucidate these mechanisms. Mice with a missense mutation (H268Q) in Jag1 (Jag1+/Ndr mice) were outbred to a C3H/C57bl6 background to generate a mouse model for Alagille syndrome (Jag1Ndr/Ndr mice). Liver tissues were collected at different timepoints during development and analyzed by histology, and liver organoids were cultured and analyzed. We performed transcriptome analysis of Jag1Ndr/Ndr livers and livers from patients with Alagille syndrome, cross-referenced to the Human Protein Atlas, to identify commonly dysregulated pathways and biliary markers. We used species-specific transcriptome separation and ligand-receptor interaction assays to measure Notch signaling and the ability of JAG1Ndr to bind or activate Notch receptors. We studied signaling of JAG1 and JAG1Ndr via NOTCH1, NOTCH2, and NOTCH3 and the resulting gene expression patterns in parental and NOTCH1-expressing C2C12 cell lines. Jag1Ndr/Ndr mice had many features of Alagille syndrome, including eye, heart, and liver defects. Bile duct differentiation, morphogenesis, and function were dysregulated in newborn Jag1Ndr/Ndr mice, with aberrations in cholangiocyte polarity, but these defects improved in adult mice. Jag1Ndr/Ndr liver organoids collapsed in culture, indicating structural instability. Whole-transcriptome sequence analyses of liver tissues from mice and patients with Alagille syndrome identified dysregulated genes encoding proteins enriched at the apical side of cholangiocytes, including CFTR and SLC5A1, as well as reduced expression of IGF1. Exposure of Notch-expressing cells to JAG1Ndr, compared with JAG1, led to hypomorphic Notch signaling, based on transcriptome analysis. JAG1-expressing cells, but not JAG1Ndr-expressing cells, bound soluble NOTCH1 extracellular domain, quantified by flow cytometry. However, JAG1 and JAG1Ndr cells each bound NOTCH2, and NOTCH2 signaling was reduced, but not completely inhibited, in response to JAG1Ndr compared with JAG1. In mice, expression of a missense mutant of Jag1 (Jag1Ndr) disrupts bile duct development and recapitulates Alagille syndrome phenotypes in heart, eye, and craniofacial dysmorphology. JAG1Ndr does not bind NOTCH1, but binds NOTCH2, and elicits hypomorphic signaling. This mouse model can be used to study other features of Alagille syndrome and organ development.
|
879018
|
Tropical forest response to drought depends on age
|
Tropical trees respond to drought differently depending on their ages, according to new research led by a postdoctoral scientist at the University of Wyoming.
Mario Bretfeld, who works in the lab of UW Department of Botany Professor Brent Ewers, is the lead author of an article that appears today (Monday) in the journal New Phytologist, one of the top journals in the field of plant controls over the water cycle. The research was conducted in collaboration with the Smithsonian Tropical Research Institute (STRI).
"The paper provides some very interesting insights into how forest age interacts with drought to determine how much water is produced from tropical forests," Ewers says. "This work has implications for the operation of the Panama Canal, as well as providing fundamental insights into how forests control the water cycle."
The research team compared responses to drought in 8-, 25- and 80-year-old forest patches in the Agua Salud project, a 700-hectare land-use experiment collaboration with the Panama Canal Authority, Panama's Ministry of the Environment and other partners. The team measured water use in 76 trees representing more than 40 different species in forests of different ages in the Panama Canal watershed during an especially extended drought resulting from El Niño conditions in 2015 and 2016.
The information gained from the study is critical to understanding how tropical forests respond to the severe and frequent droughts predicted by climate change scenarios, says Jefferson Hall, staff scientist at STRI. He notes that, globally, 2016 registered as the warmest year since climate records began to be compiled.
"Droughts can be really hard on tropical forests," Hall says. "Too much heat, low humidity and not enough water can drastically alter which trees survive. We found that forest age matters."
Water moves from soil into roots, through stems and branches into tree leaves, where some of it is used for photosynthesis. Most of this water is released into the atmosphere -- a process called transpiration. Transpiration, or plant water use, can be measured using sap flow sensors in the stem.
"Transpiration is regulated by external factors -- for example, how dry the atmosphere is and how much water is available in the soil -- as well as internal factors, such as differences in the structure and function of wood and leaves," Bretfeld says. "Our results indicate that the factors most important for regulation of transpiration in young forests had to do with their ability to access water in the soil, whereas older forests were more affected by atmospheric conditions."
During the record drought, water use increased significantly in the oldest forests, whose expansive root systems supplied trees with water from deep soil layers and allowed for maintenance of transpiration on typically sunny and hot days. Trees in younger forests suffered from a lack of water, probably because their shallower root systems could not access water stored deeper in the ground. In response, trees in younger forests regulated the amount of water they were using during the dry period.
"All trees are not created equal. Their species and age matter. We are working on designing techniques we're calling 'smart reforestation,' making decisions about which tree species to plant to achieve different land-use objectives," Hall says. "This study is the perfect example of the link between basic and applied science, because it highlights the need to consider drought tolerance as we reforest wet, yet drought-prone areas."
###
This research was made possible with funding from the U.S. National Science Foundation, Stanley Motta, the Silicon Valley Foundation and the Heising-Simons Foundation.
STRI, headquartered in Panama City, Panama, is a unit of the Smithsonian Institution. The institute furthers the understanding of tropical biodiversity and its importance to human welfare; trains students to conduct research in the tropics; and promotes conservation by increasing public awareness of the beauty and importance of tropical ecosystems.
|
10.1111/nph.15071
| 2018 |
New Phytologist
|
Plant water use responses along secondary forest succession during the 2015–2016 El Niño drought in Panama
|
Summary: Tropical forests are increasingly being subjected to hotter, drier conditions as a result of global climate change. The effects of drought on forests along successional gradients remain poorly understood. We took advantage of the 2015–2016 El Niño event to test for differences in drought response along a successional gradient by measuring the sap flow in 76 trees, representing 42 different species, in 8-, 25- and 80-yr-old secondary forests in the 15-km² 'Agua Salud Project' study area, located in central Panama. Average sap velocities and sapwood-specific hydraulic conductivities were highest in the youngest forest. During the dry season drought, sap velocities increased significantly in the 80-yr-old forest as a result of higher evaporative demand, but not in younger forests. The main drivers of transpiration shifted from radiation to vapor pressure deficit with progressing forest succession. Soil volumetric water content was a limiting factor only in the youngest forest during the dry season, probably as a result of less root exploration in the soil. Trees in early-successional forests displayed stronger signs of regulatory responses to the 2015–2016 El Niño drought, and the limiting physiological processes for transpiration shifted from operating at the plant–soil interface to the plant–atmosphere interface with progressing forest succession.
|
963102
|
Studying the OCD cycle
|
Ikoma, Japan – Scientists from the Nara Institute of Science and Technology (NAIST), the Advanced Telecommunications Research Institute International, and Tamagawa University have demonstrated that obsessive-compulsive disorder (OCD) can be understood as a result of imbalanced learning between reinforcement and punishment. On the basis of empirical tests of their theoretical model, they showed that asymmetries in brain calculations that link current results to past actions can lead to disordered behavior. Specifically, this can happen when the memory trace signal for past actions decays differently for good and bad outcomes. In this case, “good” means the result was better than expected, and “bad” means that it was worse than expected. This work helps to explain how OCD develops.
OCD is a mental illness involving anxiety, characterized by intrusive and repetitious thoughts, called obsessions, coupled with certain repeated actions, known as compulsions. Patients with OCD often feel unable to change their behavior even when they know that the obsessions or compulsions are not reasonable. In severe cases, these may render the person incapable of leading a normal life. Compulsive behaviors, such as washing hands excessively or repeatedly checking whether doors are locked before leaving the house, are attempts to temporarily relieve anxiety caused by obsessions. Until now, however, how this cycle of obsessions and compulsions becomes reinforced was not well understood.
Now, a team led by researchers at NAIST has used reinforcement learning theory to model the disordered cycle associated with OCD. In this framework, a choice that leads to a better-than-predicted outcome (a positive prediction error) becomes more likely, while a choice that leads to a worse-than-expected outcome (a negative prediction error) is suppressed. When implementing reinforcement learning, it is also important to consider delays alongside positive and negative prediction errors: the outcome of a given choice generally arrives only after some delay, so reinforcement and punishment must be assigned to recent choices within a certain time frame. This is called credit assignment, and it is implemented as a memory trace in reinforcement learning theory. Ideally, memory trace signals for past actions would decay at equal speed for both positive and negative prediction errors, but this cannot be completely realized in discrete neural systems. Using simulations, the NAIST scientists found that agents implicitly learn obsessive-compulsive behavior when the trace decay factor for memory traces of past actions related to negative prediction errors (𝜈-) is much smaller than that related to positive prediction errors (𝜈+). Seen the other way around, the window onto past actions is much narrower for negative prediction errors than for positive ones. “Our model, with imbalanced trace decay factors (𝜈+ > 𝜈-), successfully represents the vicious circle of obsession and compulsion characteristic of OCD”, say co-first authors Yuki Sakai and Yutaka Sakai.
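To make the idea of asymmetric credit assignment concrete, below is a minimal Python sketch of a toy agent that keeps separate memory (eligibility) traces for positive and negative prediction errors. The two-choice task, learning rate and decay values are hypothetical illustrations; this is not the authors' implementation, only a sketch of the 𝜈+/𝜈- asymmetry the model describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_agent(nu_plus, nu_minus, alpha=0.1, n_steps=5000, n_actions=2):
    """Toy two-choice agent with separate memory traces for positive and
    negative prediction errors (illustrative only, not the paper's model)."""
    q = np.zeros(n_actions)           # learned action preferences
    e_plus = np.zeros(n_actions)      # trace consulted when the prediction error is positive
    e_minus = np.zeros(n_actions)     # trace consulted when the prediction error is negative
    p_reward = np.array([0.7, 0.3])   # hypothetical reward probabilities of the two choices
    for _ in range(n_steps):
        p = np.exp(q - q.max()); p /= p.sum()      # softmax choice
        a = rng.choice(n_actions, p=p)
        e_plus *= nu_plus                          # traces decay with their own factors...
        e_minus *= nu_minus
        e_plus[a] += 1.0                           # ...and mark the action just taken
        e_minus[a] += 1.0
        r = float(rng.random() < p_reward[a])
        delta = r - q[a]                           # prediction error
        trace = e_plus if delta > 0 else e_minus   # credit assignment depends on its sign
        q += alpha * delta * trace
    return q

balanced = run_agent(nu_plus=0.6, nu_minus=0.6)
imbalanced = run_agent(nu_plus=0.6, nu_minus=0.01)   # nu+ >> nu-, the imbalance in the OCD model
print("balanced:", balanced, "imbalanced:", imbalanced)
```

Running the balanced and imbalanced settings side by side illustrates how a much smaller 𝜈- narrows the window of past actions that negative prediction errors can reach, while positive prediction errors still reach far back.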
To test this prediction, the researchers had 45 patients with OCD and 168 healthy control subjects play a computer-based game with monetary rewards and penalties. Patients with OCD showed much smaller 𝜈- compared with 𝜈+, as predicted by computational characteristics of OCD. In addition, this imbalanced setting of trace decay factors (𝜈+ > 𝜈-) was normalized by serotonin enhancers, which are first-line medications for treatment of OCD. “Although we think that we always make rational decisions, our computational model proves that we sometimes implicitly reinforce maladaptive behaviors,” says corresponding author, Saori C. Tanaka.
Although it is currently difficult to identify treatment-resistant patients based upon their clinical symptoms, this computational model suggests that patients with highly imbalanced trace decay factors may not respond to behavioral therapy alone. These findings may one day be used to determine which patients are likely to be resistant to behavioral therapy before commencement of treatment.
|
10.1016/j.celrep.2022.111275
| 2,022 |
Cell Reports
|
Memory trace imbalance in reinforcement and punishment systems can reinforce implicit choices leading to obsessive-compulsive behavior
|
We may view most of our daily activities as rational action selections; however, we sometimes reinforce maladaptive behaviors despite having explicit environmental knowledge. In this study, we model obsessive-compulsive disorder (OCD) symptoms as implicitly learned maladaptive behaviors. Simulations in the reinforcement learning framework show that agents implicitly learn to respond to intrusive thoughts when the memory trace signal for past actions decays differently for positive and negative prediction errors. Moreover, this model extends our understanding of therapeutic effects of behavioral therapy in OCD. Using empirical data, we confirm that patients with OCD show extremely imbalanced traces, which are normalized by serotonin enhancers. We find that healthy participants also vary in their obsessive-compulsive tendencies, consistent with the degree of imbalanced traces. These behavioral characteristics can be generalized to variations in the healthy population beyond the spectrum of clinical phenotypes.
|
768920
|
Discovery points the way to better and cheaper transparent conductors
|
Researchers at the University of Liverpool have made a discovery that could improve the conductivity of a type of glass coating which is used on items such as touch screens, solar cells and energy efficient windows.
Coatings are applied to the glass of these items to make them electrically conductive whilst also allowing light through. Fluorine-doped tin dioxide is one of the materials used in commercial low-cost glass coatings because it simultaneously transmits light and conducts electrical charge, but it turns out that tin dioxide has as-yet-untapped potential for improved performance.
In a paper published in the journal Advanced Functional Materials, physicists identify the factor that has been limiting the conductivity of fluorine-doped tin dioxide, which should be highly conductive because fluorine atoms substituted on oxygen lattice sites are each expected to give an additional free electron for conduction.
The scientists report, using a combination of experimental and theoretical data, that for every two fluorine atoms that give an additional free electron, another one occupies a normally unoccupied lattice position in the tin dioxide crystal structure.
Each so-called "interstitial" fluorine atom captures one of the free electrons and thereby becomes negatively charged. This reduces the electron density by half and also results in increased scattering of the remaining free electrons. These effects combine to limit the conductivity of fluorine-doped tin dioxide compared with what would otherwise be possible.
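A back-of-the-envelope calculation makes the halving explicit; the defect densities below are hypothetical, chosen only to mirror the reported 2:1 ratio of donors to compensating interstitials.

```python
# Hypothetical defect densities (per cm^3), chosen only to mirror the reported 2:1 ratio
n_substitutional_F = 2.0e20                 # F on O sites: each donates one free electron
n_interstitial_F = n_substitutional_F / 2   # interstitial F: each traps one free electron

electrons_donated = n_substitutional_F
electrons_trapped = n_interstitial_F
n_free = electrons_donated - electrons_trapped

print(n_free / electrons_donated)   # 0.5 -> the free-electron density is halved
```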
PhD student Jack Swallow, from the University's Department of Physics and the Stephenson Institute for Renewable Energy, said: "Identifying the factor that has been limiting the conductivity of fluorine doped tin dioxide is an important discovery and could lead to coatings with improved transparency and up to five times higher conductivity, reducing cost and enhancing performance in a myriad of applications from touch screens, LEDs, photovoltaic cells and energy efficient windows."
The researchers now intend to address the challenge of finding alternative novel dopants that avoid these inherent drawbacks.
|
10.1002/adfm.201701900
| 2,017 |
Advanced Functional Materials
|
Self‐Compensation in Transparent Conducting F‐Doped SnO<sub>2</sub>
|
Abstract The factors limiting the conductivity of fluorine‐doped tin dioxide (FTO) produced via atmospheric pressure chemical vapor deposition are investigated. Modeling of the transport properties indicates that the measured Hall effect mobilities are far below the theoretical ionized impurity scattering limit. Significant compensation of donors by acceptors is present with a compensation ratio of 0.5, indicating that for every two donors there is approximately one acceptor. Hybrid density functional theory calculations of defect and impurity formation energies indicate the most probable acceptor‐type defects. The fluorine interstitial defect has the lowest formation energy in the degenerate regime of FTO. Fluorine interstitials act as singly charged acceptors at the high Fermi levels corresponding to degenerately n‐type films. X‐ray photoemission spectroscopy of the fluorine impurities is consistent with the presence of substitutional F<sub>O</sub> donors and interstitial F<sub>i</sub> in a roughly 2:1 ratio in agreement with the compensation ratio indicated by the transport modeling. Quantitative analysis through Hall effect, X‐ray photoemission spectroscopy, and calibrated secondary ion mass spectrometry further supports the presence of compensating fluorine‐related defects.
|
604941
|
Questionnaire predicts likelihood of unprotected sex, binge drinking
|
ITHACA, N.Y. - Researchers in the social sciences have been searching for a holy grail: an accurate way to predict who is likely to engage in problematic behavior, like using drugs.
In a new study, Valerie Reyna, professor of human development at Cornell University, and Evan Wilhelms of Vassar College have debuted a new questionnaire that significantly outperforms 14 other gold-standard measures frequently used in economics and psychology. The measure's 12 simple questions ask in various ways whether one agrees with the principle "sacrifice now, enjoy later." Their study, "Gist of Delay of Gratification: Understanding and Predicting Problem Behaviors," appeared earlier this year in the Journal of Behavioral Decision Making.
"People who get drunk frequently, party with drugs, borrow money needlessly or have unprotected sex disagreed more with the concept 'sacrifice now, enjoy later' than people who didn't do these things," Reyna said. "Instead, they leaned more toward 'have fun today and don't worry about tomorrow.'"
Having fun is generally good, she said. "But not being able to delay gratification can interfere with education, health and financial well-being, and the impact is greater for young people," she added.
The researchers conducted four studies to get their results, comparing the measure, the Delay-of-gratification Gist Scale, against 14 others. The Gist Scale's questions include, "I wait to buy what I want until I have enough money," "I think it is better to save money for the future" and "I am worried about the amount of money I owe." Money is used as a "stand-in" or proxy for tempting rewards.
The first study asked 211 college students to take the Gist Scale and other measures that predict poor financial outcomes. The second and third studies, with 845 and 393 college students, respectively, compared the new measure against others involving delay discounting. With 47 teens and adult participants, the fourth study compared the Gist Scale against a widely used measure of impulsivity.
The Gist Scale is not only more accurate, it's also shorter and simpler - some other measures are more than twice as long. It is also gender and age neutral, meaning it can be taken by anyone.
Reyna points out that cultures all over the world have aphorisms that encourage the ability to delay gratification. That skill can improve with practice, she said.
"Sometimes we send young people very mixed messages about struggle. I think it's extremely important for them to know that struggle and pain are part of life and to be expected," she said. "Staying the course, keeping your eyes on the prize - these values make a difference. And they can be taught and they can be practiced."
###
|
10.1002/bdm.1977
| 2,016 |
Journal of Behavioral Decision Making
|
The Gist of Delay of Gratification: Understanding and Predicting Problem Behaviors
|
Abstract Delay of gratification captures elements of temptation and self‐denial that characterize real‐life problems with money and other problem behaviors such as unhealthy risk taking. According to fuzzy‐trace theory, decision makers mentally represent social values such as delay of gratification in a coarse but meaningful form of memory called “gist.” Applying this theory, we developed a gist measure of delay of gratification that does not involve quantitative trade‐offs (as delay discounting does) and hypothesize that this construct explains unique variance beyond sensation seeking and inhibition in accounting for problem behaviors. Across four studies, we examine this Delay‐of‐gratification Gist Scale by using principal components analyses and evaluating convergent and divergent validity with other potentially related scales such as Future Orientation, Propensity to Plan, Time Perspectives Inventory, Spendthrift‐Tightwad, Sensation Seeking, Cognitive Reflection, Barratt Impulsiveness, and the Monetary Choice Questionnaire (delay discounting). The new 12‐item measure captured a single dimension of delay of gratification, correlated as predicted with other scales, but accounted for unique variance in predicting such outcomes as overdrawing bank accounts, substance abuse, and overall subjective well‐being. Results support a theoretical distinction between reward‐related approach motivation, including sensation seeking, and inhibitory faculties, including cognitive reflection. However, individuals' agreement with the qualitative gist of delay of gratification, as expressed in many cultural traditions, could not be reduced to such dualist distinctions nor to quantitative conceptions of delay discounting, shedding light on mechanisms of self‐control and risk taking. Copyright © 2016 John Wiley & Sons, Ltd.
|
971827
|
How well do state-of-the-art climate models simulate sea level?
|
According to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, the global mean sea level has risen faster since 1900 than over any preceding century in the last 3000 years. This makes hundreds of coastal cities and millions of people vulnerable to a threat of higher water levels. State-of-the-art climate models provide a crucial means to study how much and how soon sea levels will rise. However, to what extent these models are able to represent sea level variations remains an open issue. Thus, they should be evaluated before they can be adopted to forecast future sea-level changes.
In a paper recently published in Atmospheric and Oceanic Science Letters, Dr Zhuoqi He from the South China Sea Institute of Oceanology led a team to assess the performance of climate models in simulating the sea level over the low-to-mid latitudes of the globe. The results indicated that the models simulated the long-term mean sea level relatively well. However, strong biases were apparent when the models tried to reproduce the sea level variance. For example, almost all of them underestimated the interannual signals over the subtropics where strong western boundary currents prevail.
“This bias is at least partially due to the misrepresentation of ocean processes because of the relatively low resolution of their historical simulations. We can see that the nearshore bias is reduced as the model resolution is increased,” explains Dr He.
“Understanding the causes of model misrepresentation is important towards improving the simulation skills of models, and our study helps in this respect by identifying a direction for future model development to reduce model biases.”
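The kind of comparison described here - a climatological mean-state bias plus interannual variance checked against observations - can be sketched in a few lines of numpy. The array shapes, variable names and the assumption of a common grid are illustrative, not the study's actual analysis code.

```python
import numpy as np

# model_dsl, obs_dsl: hypothetical arrays of monthly dynamic sea level,
# shape (n_months, n_lat, n_lon), already regridded to a common grid,
# with n_months divisible by 12.
def evaluate_dsl(model_dsl, obs_dsl):
    # bias of the climatological mean state
    mean_bias = model_dsl.mean(axis=0) - obs_dsl.mean(axis=0)

    # interannual variance: variance of annual means across years
    def interannual_variance(x):
        annual_means = x.reshape(-1, 12, *x.shape[1:]).mean(axis=1)
        return annual_means.var(axis=0)

    # ratio < 1 where the model underestimates observed interannual variability
    variance_ratio = interannual_variance(model_dsl) / interannual_variance(obs_dsl)
    return mean_bias, variance_ratio
```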
|
10.1016/j.aosl.2022.100288
| 2,022 |
Atmospheric and Oceanic Science Letters
|
Performance of CMIP6 models in simulating the dynamic sea level: Mean and interannual variance
|
Observational data from satellite altimetry were used to quantify the performance of CMIP6 models in simulating the climatological mean and interannual variance of the dynamic sea level (DSL) over 40°S–40°N. In terms of the mean state, the models generally agree well with observations, and high consistency is apparent across different models. The largest bias and model discrepancy is located in the subtropical North Atlantic. As for simulation of the interannual variance, good agreement can be seen across different models, yet the models present a relatively low agreement with observations. The simulations show much weaker variance than observed, and bias is apparent over the subtropics in association with strong western boundary currents. This nearshore bias is reduced considerably in HighResMIP models. The underestimation of DSL interannual variance is at least partially due to the misrepresentation of ocean processes in the CMIP6 historical simulation with its relatively low resolution. The results identify directions for future model development towards a better understanding of the mean and interannual variability of DSL. Abstract (translated from the Chinese): This study compares satellite altimetry data with the dynamic sea level simulated by phase 6 of the Coupled Model Intercomparison Project (CMIP6), focusing on the dynamic sea level (DSL) over 40°S–40°N, to assess the models' ability to reproduce its mean state and interannual variability. The results show that the models agree closely with observations for the DSL mean state, with small inter-model differences; the subtropical North Atlantic is the region with the most pronounced bias and inter-model spread. For DSL interannual variability, the models are highly consistent with one another but differ clearly from observations, generally underestimating the interannual variance of DSL, with the largest errors near the subtropical western boundary currents. Model resolution affects the ability of CMIP6 to reproduce meso- and small-scale ocean processes, which may be one reason for the errors in the CMIP6 historical simulations.
|
645021
|
Shiny mega-crystals that build themselves
|
To really appreciate what a team of researchers led by Maksym Kovalenko and Maryna Bodnarchuk has achieved, it is best to start with something mundane: Crystals of table salt (also known as rock salt) are familiar to anyone who has ever had to spice up an overly bland lunch. Sodium chloride - NaCl in chemical terms - is the name of the helpful chemical; it consists of positively charged sodium ions (Na+) and negatively charged chloride ions (Cl-). You can imagine the ions as beads that strongly attract each other, forming densely packed and rigid crystals like the ones we can see in a saltshaker.
Many naturally occurring minerals consist of ions - positive metal ions and negative ions, which arrange themselves into different crystal structures depending on their relative sizes. In addition, there are structures such as diamond and silicon: These crystals consist of only one kind of atoms - carbon in the case of diamond -, but, similar to minerals, the atoms are also held together by strong bonding forces.
Novel building blocks for a new kind of matter
What if all these strong bonding forces between atoms could be eliminated? In the realm of atoms, with all the quantum mechanics at play, this would not yield a molecule or solid-state matter, at least at ambient conditions. "But modern chemistry can produce alternative building blocks that can indeed have vastly different interactions than those between atoms," says Maksym Kovalenko, Empa researcher and professor of chemistry at ETH Zurich. "They can be as hard as billiard balls in the sense that they sense each other only when colliding. Or they can be softer on the surfaces, like tennis balls. Moreover, they can be built in many different shapes: not just spheres, but also cubes or other polyhedra, or more anisotropic entities."
Such building blocks are made of hundreds or thousands of atoms and are known as inorganic nanocrystals. Kovalenko's team of chemists at Empa and ETH is able to synthesize them in large quantities with a high degree of uniformity. Kovalenko and Bodnarchuk, and some of their colleagues the world over, have been working for about 20 years now with these kinds of building blocks. The scientists call them "Lego materials" because they form long-range ordered dense lattices known as superlattices.
It had long been speculated that mixing different kinds of nanocrystals would allow the engineering of completely new supramolecular structures. The electronic, optical or magnetic properties of such multicomponent assemblies would be expected to be a mélange of the properties of the individual components. In the early years, the work had focused on mixing spheres of different sizes, resulting in dozens of various superlattices with packing structures that mimic common crystal structures, such as table salt - albeit with crystal unit cells ten- to 100-times larger.
With their latest article in "Nature", the team led by Kovalenko and Bodnarchuk has now managed to expand that knowledge a great deal further: They set out to study a mixture of different shapes - spheres and cubes to start with. This seemingly simple deviation from the mainstream immediately led to vastly different observations. Moreover, the chosen cubes, namely colloidal cesium lead halide perovskite nanocrystals, are known as some of the brightest light emitters developed to date, ever since their invention by the same team six years ago. The superlattices the researchers obtained are not only peculiar as far as their structure is concerned, but also with respect to some of their properties. In particular, they exhibit superfluorescence - that is, the light is emitted in a collective manner and much faster than the same nanocrystals can accomplish in their conventional state, embedded in a liquid or a powder.
Entropy as an ordering force?
Upon mixing spheres and cubes, wondrous things happen: The nanocrystals arrange themselves to form structures familiar from the world of minerals such as perovskites or rock salt. All these structures, however, are 100-times larger than their counterparts in conventional crystals. What's more: A perovskite-like structure had never before been observed in the assembly of such non-interacting nanocrystals.
Especially curious: These highly ordered structures are created solely by the force of entropy - that is, the perpetual endeavor of nature to cause maximum disorder. What a perfect joke of nature! This paradoxical assembly occurs because, during crystal formation, the particles tend to use the space around them most efficiently in order to maximize their freedom of motion during the late stages of solvent evaporation, i.e. before they are "frozen" in their eventual crystal lattice positions. In this regard, the shape of the individual nanocrystals plays a crucial role - soft-perovskite cubes allow for a much denser packing than what is attainable in all-spherical mixtures. Thus, the force of entropy causes the nanocrystals to always arrange in the densest possible packing - as long as they are designed such that they do not attract or repel each other by other means, such as electrostatics.
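The packing argument can be made concrete with standard geometry. The values below are textbook space-filling fractions, not numbers reported in the paper.

```python
import math

# Textbook space-filling fractions, not values from the paper:
fcc_sphere_packing = math.pi / (3 * math.sqrt(2))   # densest possible packing of equal spheres
cube_packing = 1.0                                   # cubes can tile space completely

print(f"spheres (FCC packing): {fcc_sphere_packing:.3f}")   # ~0.741
print(f"cubes (space-filling): {cube_packing:.3f}")         # 1.000 -> cubes allow denser packing
```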
The dawn of a new science
"We have seen that we can make new structures with high reliability," says Maksym Kovalenko. "And this now raises many more questions; we are still at the very beginning: What physical properties do such weakly bonded superlattices exhibit and what is the structure-property relationship? Can they be used for certain technical applications, say, in optical quantum computing or in quantum imaging? According to what mathematical laws do they form? Are they truly thermodynamically stable or only kinetically trapped?" Kovalenko is now searching for theorists who might be able to predict what may yet happen.
"We will eventually discover completely new classes of crystals," he speculates, "ones, for which there are no natural models. They will then have to be measured, classified and described." Having written the first chapter in the textbook for a new kind of chemistry, Kovalenko is more than ready to deliver his share to make that happen as fast as possible. "We are now experimenting with disk- and cylinder-shaped nanocrystallites. And we're very excited to see the new structures they enable", he smiles.
|
10.1038/s41586-021-03492-5
| 2,021 |
Nature
|
Perovskite-type superlattices from lead halide perovskite nanocubes
|
|
541069
|
A trio that could spell trouble: Many with dementia take risky combinations of medicines
|
People over 65 shouldn't take three or more medicines that act on their brain and nervous system, experts strongly warn, because the drugs can interact and raise the risk of everything from falls to overdoses to memory issues.
But a new study finds that 1 in 7 people with dementia who live outside nursing homes are taking at least three of these drugs.
Even if they received the drugs to calm some of dementia's more troubling behavioral issues, the researchers say, taking them in combination could accelerate their loss of memory and thinking ability, and raise their chance of injury and death.
The new study is published in JAMA by a team led by a University of Michigan geriatric psychiatrist who has studied the issue of medication for dementia-related behaviors for years.
It's based on data from 1.2 million people with dementia covered by Medicare and focuses on medications such as antidepressants, sedatives used as sleep medications, opioid painkillers, antipsychotics, and anti-seizure medications.
More than 831,000 of the entire study population received at least one of the medications at least once during the study period in 2018. More than 535,000 of them -- nearly half of all the people with dementia in the study -- took one or two of them for more than a month.
But the researchers focused on the 13.9% of the study population who took three or more drugs that act on the central nervous system, and took them for more than a month. They dubbed this "CNS-active polypharmacy."
That level of use goes beyond the limits recommended by the internationally accepted guidelines called the Beers criteria.
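As a rough illustration of how such a criterion might be flagged from prescription-fill records, here is a simplified pandas sketch. The column names, data layout and greedy day-counting are hypothetical simplifications, not the study's claims-processing pipeline.

```python
import pandas as pd

# fills: one row per prescription fill, with hypothetical columns
#   patient_id, drug_class (e.g. 'antidepressant'), start (a date), days_supply (int).
# Flags patients exposed to >= 3 CNS-active classes for more than 30 consecutive days.
def flag_cns_polypharmacy(fills, min_classes=3, min_consecutive_days=31):
    flagged = []
    for pid, grp in fills.groupby("patient_id"):
        # which CNS-active classes cover each calendar day
        coverage = {}
        for _, row in grp.iterrows():
            for day in pd.date_range(row.start, periods=row.days_supply):
                coverage.setdefault(day, set()).add(row.drug_class)
        # days on which at least min_classes distinct classes overlap
        qualifying = sorted(d for d, classes in coverage.items() if len(classes) >= min_classes)
        # longest run of consecutive qualifying days
        best, run = (1, 1) if qualifying else (0, 0)
        for prev, cur in zip(qualifying, qualifying[1:]):
            run = run + 1 if (cur - prev).days == 1 else 1
            best = max(best, run)
        if best >= min_consecutive_days:
            flagged.append(pid)
    return flagged
```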
Key message for patients, caregivers and providers
The federal government has long targeted use of these medications in nursing homes, but not in people who live at home or in less-regulated settings like assisted living facilities.
"Dementia comes with lots of behavioral issues, from changes in sleep and depression to apathy and withdrawal, and providers, patients and caregivers may naturally seek to address these through medications," says Donovan Maust, M.D., M.S., the lead author of the study and an associate professor of psychiatry at Michigan Medicine, U-M's academic medical center.
"But the evidence supporting the use of many of them in people with dementia is pretty thin," he says, "while there is a lot of evidence about the risks, especially when there are multiple medications layered on top of one another."
Maust and his colleagues suggest that regular prescription drug reviews could help spot risky combinations, especially ones of three or more drugs that act on the brain and nervous system. Medicare covers such appointments with providers or pharmacists.
"It appears that we have a lot of people on a lot of medications without a very good reason," he says.
Classes of drugs studied
Antipsychotics have received the most attention for their risk to people with dementia, and 47% of those taking three or more of the medications in the study received at least one antipsychotic, most often Seroquel (quetiapine).
Even though such antipsychotics aren't approved for people with dementia, they're often prescribed to such patients for agitation and sleep issues, and more, Maust notes.
But two other classes of drugs were even more commonly prescribed to patients in the CNS polypharmacy group. Nearly all (92%) of those on three or more of the medications were taking an antidepressant, and 62% were taking an anti-seizure medication.
One drug in that last class, gabapentin, accounted for one-third of all the days of prescription supply that the patients in the study received during the study period.
While gabapentin is approved to treat epilepsy, few of these older adults had a seizure disorder. The vast majority of prescriptions were probably for other reasons because it is commonly prescribed off-label as a pain medication or to help with anxiety, Maust explains.
Another 41% of the people in the three-or-more medication group were taking a benzodiazepine, such as lorazepam (Ativan), often used for anxiety or agitation in people with dementia.
Maust's past work on benzodiazepine prescribing in older adults focused on long-term use, variation by geographic region, and the effects of national efforts to reduce the use of such drugs because of their risks.
New approaches needed
Maust says that providers and caregivers have the right motivation for trying to address dementia-related behaviors through medication: to reduce distress in the patients, and sometimes also in the caregivers.
Often the long-term goal is to make it possible for the person with dementia to avoid having to move to a long-term care facility. The high death toll of people with dementia in such facilities during the COVID-19 pandemic may increase that motivation, he notes.
And the lack of information for clinicians on the use of these drugs in dementia makes every prescription a judgment call.
But it's important to know that prescribing a medication combination that might be safe in younger people can be dangerous in older ones. Those with reduced cognitive abilities may be especially sensitive to potential risks. The changes in brain chemistry and response to medication that come with age and with dementia alter the reaction to these drug combinations.
For instance, opioid pain medications already come with a black-box warning against combining them with other drugs that affect the central nervous system, for any user. But these combinations can be especially risky in older adults. Yet 32% of the people in the study group were taking an opioid, most often hydrocodone.
Even as people with dementia receive drugs that act on their central nervous system for behavioral reasons, those same drugs may hasten their cognitive decline. For instance, a clinical trial of the antidepressant citalopram (Celexa) as a way to treat dementia-related agitation showed that in just nine weeks, participants lost a measurable portion of their cognitive ability.
"It's important for family members and providers to communicate often about what symptoms are happening, and what might be done with non-medication interventions such as physical therapy or sleep hygiene, as well as medications, to address them," Maust says. "Talk about what medications the patient is on, why they're on each one, and whether it might be worthwhile to try tapering some of them because the symptom that prompted the prescription originally might have waned over time."
In some cases, the medications may even be prescribed in response to the distress that a caregiver feels when seeing their loved one behave in a certain way. Connecting caregivers with resources through nonprofit organizations, or their local Area Agency on Aging, could help them support their loved ones better.
The researchers are now looking at which providers prescribed each of the medications to the patients taking three or more of the medications, to look for patterns and opportunities to educate providers or put systems in place after hospitalizations or other events.
###
In addition to Maust, who is a member of the U-M Institute for Healthcare Policy and Innovation and the VA Center for Clinical Management Research, the study was performed by a team that includes IHPI members and staff Myra Kim, Sc.D., M.A., Julie Bynum, M.D., Kenneth Langa, M.D., Ph.D., Chiang-Hua Chang, Ph.D., Kara Zivin, Ph.D. and Erica Solway, Ph.D., former U-M psychiatry professor Helen Kales, M.D., now at the University of California-Davis, and senior author Stephen Marcus, Ph.D. of the University of Pennsylvania.
The study was supported by a grant from the National Institute on Aging (AG056407).
|
10.1001/jama.2021.1195
| 2,021 |
JAMA
|
Prevalence of Central Nervous System–Active Polypharmacy Among Older Adults With Dementia in the US
|
Importance: Community-dwelling older adults with dementia have a high prevalence of psychotropic and opioid use. In these patients, central nervous system (CNS)–active polypharmacy may increase the risk for impaired cognition, fall-related injury, and death. Objective: To determine the extent of CNS-active polypharmacy among community-dwelling older adults with dementia in the US. Design, Setting, and Participants: Cross-sectional analysis of all community-dwelling older adults with dementia (identified by International Classification of Diseases, Ninth Revision, Clinical Modification or International Statistical Classification of Diseases and Related Health Problems, Tenth Revision diagnosis codes; N = 1 159 968) and traditional Medicare coverage from 2015 to 2017. Medication exposure was estimated using prescription fills between October 1, 2017, and December 31, 2018. Exposures: Part D coverage during the observation year (January 1-December 31, 2018). Main Outcomes and Measures: The primary outcome was the prevalence of CNS-active polypharmacy in 2018, defined as exposure to 3 or more medications for longer than 30 days consecutively from the following classes: antidepressants, antipsychotics, antiepileptics, benzodiazepines, nonbenzodiazepine benzodiazepine receptor agonist hypnotics, and opioids. Among those who met the criterion for polypharmacy, duration of exposure, number of distinct medications and classes prescribed, common class combinations, and the most commonly used CNS-active medications also were determined. Results: The study included 1 159 968 older adults with dementia (median age, 83.0 years [interquartile range {IQR}, 77.0-88.6 years]; 65.2% were female), of whom 13.9% (n = 161 412) met the criterion for CNS-active polypharmacy (32 139 610 polypharmacy-days of exposure). Those with CNS-active polypharmacy had a median age of 79.4 years (IQR, 74.0-85.5 years) and 71.2% were female. Among those who met the criterion for CNS-active polypharmacy, the median number of polypharmacy-days was 193 (IQR, 88-315 polypharmacy-days). Of those with CNS-active polypharmacy, 57.8% were exposed for longer than 180 days and 6.8% for 365 days; 29.4% were exposed to 5 or more medications and 5.2% were exposed to 5 or more medication classes. Ninety-two percent of polypharmacy-days included an antidepressant, 47.1% included an antipsychotic, and 40.7% included a benzodiazepine. The most common medication class combination included an antidepressant, an antiepileptic, and an antipsychotic (12.9% of polypharmacy-days). Gabapentin was the most common medication and was associated with 33.0% of polypharmacy-days. Conclusions and Relevance: In this cross-sectional analysis of Medicare claims data, 13.9% of older adults with dementia in 2018 filled prescriptions consistent with CNS-active polypharmacy. The lack of information on prescribing indications limits judgments about clinical appropriateness of medication combinations for individual patients.
|
572771
|
Preliminary study of 300+ COVID-19 patients suggests convalescent plasma therapy effective
|
HOUSTON-(Aug. 12, 2020) - A preliminary analysis of an ongoing study of more than 300 COVID-19 patients treated with convalescent plasma therapy at Houston Methodist suggests the treatment is safe and effective. The results, which appear now in The American Journal of Pathology, represent one of the first peer-reviewed publications in the country to assess the efficacy of convalescent plasma.
From March 28, when Houston Methodist became the first academic medical center in the nation to infuse critically ill COVID-19 patients with plasma donated from recovered patients, research physicians have used the treatment on 350 patients. The study tracked severely ill COVID-19 patients admitted to Houston Methodist's system of eight hospitals from March 28 through July 6.
These latest results from Houston Methodist, which now measure medical effectiveness, offer valuable scientific evidence that transfusing critically ill COVID-19 patients with high-antibody plasma early in their illness - transfusion within 72 hours of hospitalization proved most effective - reduced the mortality rate.
The study, titled "Treatment of COVID-19 Patients with Convalescent Plasma Reveals a Signal of Significantly Decreased Mortality," was led by principal investigator Eric Salazar, M.D., Ph.D., assistant professor of Pathology and Genomic Medicine with the Houston Methodist Hospital and Research Institute and corresponding author James M. Musser, M.D., Ph.D., chair of the Department of Pathology and Genomic Medicine at Houston Methodist.
"Our studies to date show the treatment is safe and, in a promising number of patients, effective," Musser said. "While convalescent plasma therapy remains experimental and we have more research to do and data to collect, we now have more evidence than ever that this century-old plasma therapy has merit, is safe and can help reduce the death rate from this virus."
The research team found that those treated early in their illness with donated plasma that has the highest concentration of anti-COVID-19 antibodies are more likely to survive and recover than similar patients who were not treated with convalescent plasma. Patients with a history of severe reactions to blood transfusions, those with underlying uncompensated and untreatable end-stage disease and patients with fluid overload or other conditions that would increase the risk of plasma transfusion were excluded.
The patients were tracked for 28 days after plasma transfusion and compared to a control group of similar COVID-19 patients who did not receive convalescent plasma. An observational propensity score-matched analysis was used to balance the characteristics of participants and allow for an objective interpretation of the results at this stage.
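For readers unfamiliar with the technique, a bare-bones propensity-score matching sketch in Python looks roughly like the following; the covariate list, logistic-regression model and greedy 1:1 nearest-neighbour matching are illustrative assumptions, not the study's exact protocol.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# df: hypothetical patient table with a binary 'treated' column and the covariates below
COVARIATES = ["age", "sex", "bmi", "comorbidity_score", "baseline_ventilation"]

def match_controls(df):
    # 1) estimate each patient's propensity to receive plasma from the covariates
    ps_model = LogisticRegression(max_iter=1000).fit(df[COVARIATES], df["treated"])
    df = df.assign(pscore=ps_model.predict_proba(df[COVARIATES])[:, 1])
    treated = df[df.treated == 1]
    controls = df[df.treated == 0].copy()   # assumes more controls than treated patients
    # 2) greedy 1:1 nearest-neighbour matching on the propensity score, without replacement
    pairs = []
    for idx, row in treated.iterrows():
        best = (controls.pscore - row.pscore).abs().idxmin()
        pairs.append((idx, best))
        controls = controls.drop(best)
    return pairs  # list of (treated_index, matched_control_index)
```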
Several studies have measured safety, showing that the more than 34,000 COVID-19 patients in the U.S. who have received plasma transfusions for COVID-19 experienced minimal adverse effects.
|
10.1016/j.ajpath.2020.08.001
| 2,020 |
American Journal Of Pathology
|
Treatment of Coronavirus Disease 2019 Patients with Convalescent Plasma Reveals a Signal of Significantly Decreased Mortality
|
Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2, has spread globally, and proven treatments are limited. Transfusion of convalescent plasma collected from donors who have recovered from COVID-19 is among many approaches being studied as potentially efficacious therapy. We are conducting a prospective, propensity score-matched study assessing the efficacy of COVID-19 convalescent plasma transfusion versus standard of care as treatment for severe and/or critical COVID-19. We present herein the results of an interim analysis of 316 patients enrolled at Houston Methodist hospitals from March 28 to July 6, 2020. Of the 316 transfused patients, 136 met a 28-day outcome and were matched to 251 non-transfused control COVID-19 patients. Matching criteria included age, sex, body mass index, comorbidities, and baseline ventilation requirement 48 hours from admission, and in a second matching analysis, ventilation status at day 0. Variability in the timing of transfusion relative to admission and titer of antibodies of plasma transfused allowed for analysis in specific matched cohorts. The analysis showed a significant reduction (P = 0.047) in mortality within 28 days, specifically in patients transfused within 72 hours of admission with plasma with an anti-spike protein receptor binding domain titer of ≥1:1350. These data suggest that treatment of COVID-19 with high anti-receptor binding domain IgG titer convalescent plasma is efficacious in early-disease patients.
|
906424
|
Hydrocortisone effects on neurodevelopment for extremely low birthweight infants
|
Hydrocortisone is one of the 15 most frequently prescribed medications in extremely low birth weight (≤1000 g) infants in the newborn intensive care unit (NICU).
Despite widespread use, the effects on neurodevelopmental outcomes of stress doses of hydrocortisone or of dosing after 1 week of age have not been assessed in randomized trials. Additionally, the benefit of giving hydrocortisone in relation to infant risk of developing bronchopulmonary dysplasia (BPD) is not well documented.
"Despite advances in perinatal care, one of every two extremely low birth weight infants develops bronchopulmonary dysplasia and/or neurodevelopmental impairments," says Nehal Parikh, DO, principal investigator in the Center for Perinatal Research at The Research Institute at Nationwide Children's Hospital. "We've been using hydrocortisone in these patients without randomized, placebo-controlled studies."
BPD is characterized by systemic inflammation, pointing toward a potential link among BPD, abnormal brain development and neurodevelopment. The idea that an anti-inflammatory medication, such as hydrocortisone, could offer benefits in all of these areas has taken hold without necessarily having data to support it.
In a new study recently published in PLoS ONE, Dr. Parikh and colleagues assess the long- and short-term effects of stress doses of hydrocortisone after 1 week of age via randomized, double-blind, placebo-controlled trial. Extremely low birth weight, ventilator-dependent infants between 10 and 21 days old were administered a seven-day taper of hydrocortisone or a saline placebo while being treated at Children's Memorial Hermann Hospital in Texas. The study period ended with neurodevelopmental checks in the 18 to 22 month corrected age period.
"Unlike dexamethasone, higher/stress dose hydrocortisone does not appear to be associated with brain injury or neurodevelopmental impairments," Dr. Parikh says.
The team found no evidence that the doses of hydrocortisone used in the study prevented BPD, but Dr. Parikh suggests that, even though lower doses of hydrocortisone do not appear to benefit the lungs, they may improve cognitive outcomes.
"Hydrocortisone as prescribed in our trial did not reduce the risk of BPD when compared to placebo, possibly because we used anti-inflammatory doses that were not sufficiently high and/or the enrolled population of extremely preterm infants was too sick," Dr. Parikh says.
As a result of this work, Dr. Parikh suggests neonatologists be more liberal with the use of stress dose hydrocortisone for relative adrenal insufficiency in extremely preterm infants (a common condition that can result in refractory hypotension) as the current data do not indicate adverse effects on neurodevelopment.
In follow-up work, Dr. Parikh and his team at Nationwide Children's are examining the relationship between hydrocortisone and risk of a common inflammatory brain abnormality - diffuse excessive high signal intensity (DEHSI) - which occurs in up to 75 percent of extremely preterm infants.
"Data from our pilot trial suggests a trend towards reduced cognitive deficits in hydrocortisone treated preterm infants," says Dr. Parikh. "Our initial study may have lacked sufficient power to show a significant difference. However, we have shown that increasing DEHSI volume is a significant predictor of lower cognitive score. Thus, examining a short term surrogate measure of cognitive outcomes, such as DEHSI, may reveal that stress dose hydrocortisone reduces risk of DEHSI and may indeed improve neurodevelopmental outcomes if tested in a larger trial."
###
Reference:
Parikh NA, Kennedy KA, Lasky RE, Tyson JE. Neurodevelopmental outcomes of extremely preterm infants randomized to stress dose hydrocortisone. PLoS ONE. 2015 Sep 16;10(9):e0137051.
|
10.1371/journal.pone.0137051
| 2,015 |
PLoS ONE
|
Neurodevelopmental Outcomes of Extremely Preterm Infants Randomized to Stress Dose Hydrocortisone
|
Objective To compare the effects of stress dose hydrocortisone therapy with placebo on survival without neurodevelopmental impairments in high-risk preterm infants. Study Design We recruited 64 extremely low birth weight (birth weight ≤1000g) infants between the ages of 10 and 21 postnatal days who were ventilator-dependent and at high-risk for bronchopulmonary dysplasia. Infants were randomized to a tapering 7-day course of stress dose hydrocortisone or saline placebo. The primary outcome at follow-up was a composite of death, cognitive or language delay, cerebral palsy, severe hearing loss, or bilateral blindness at a corrected age of 18–22 months. Secondary outcomes included continued use of respiratory therapies and somatic growth. Results Fifty-seven infants had adequate data for the primary outcome. Of the 28 infants randomized to hydrocortisone, 19 (68%) died or survived with impairment compared with 22 of the 29 infants (76%) assigned to placebo (relative risk: 0.83; 95% CI, 0.61 to 1.14). The rates of death for those in the hydrocortisone and placebo groups were 31% and 41%, respectively (P = 0.42). Randomization to hydrocortisone also did not significantly affect the frequency of supplemental oxygen use, positive airway pressure support, or need for respiratory medications. Conclusions In high-risk extremely low birth weight infants, stress dose hydrocortisone therapy after 10 days of age had no statistically significant effect on the incidence of death or neurodevelopmental impairment at 18–22 months. These results may inform the design and conduct of future clinical trials. Trial Registration ClinicalTrials.gov NCT00167544
|
563089
|
Humans' construction 'footprint' on ocean quantified for first time
|
In a world-first, the extent of human development in oceans has been mapped. An area totalling approximately 30,000 square kilometres - the equivalent of 0.008 percent of the ocean - has been modified by human construction, a study led by Dr Ana Bugnot from the University of Sydney School of Life and Environmental Sciences and the Sydney Institute of Marine Science has found.
The extent of ocean modified by human construction is, proportion-wise, comparable to the extent of urbanised land, and greater than the global area of some natural marine habitats, such as mangrove forests and seagrass beds.
When calculated as the area modified inclusive of flow-on effects to surrounding areas, for example, due to changes in water flow and pollution, the footprint is actually two million square kilometres, or over 0.5 percent of the ocean.
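Those percentages are easy to sanity-check, assuming a global ocean area of roughly 361 million square kilometres (a standard figure, not one stated in the study).

```python
# Sanity check of the quoted percentages, assuming a global ocean area of ~361 million km^2
ocean_area_km2 = 361e6
direct_footprint_km2 = 30_000      # physical footprint of built structures
flow_on_footprint_km2 = 2_000_000  # footprint including flow-on effects

print(f"{direct_footprint_km2 / ocean_area_km2:.4%}")   # ~0.0083%, reported as 0.008 percent
print(f"{flow_on_footprint_km2 / ocean_area_km2:.2%}")  # ~0.55%, i.e. over 0.5 percent
```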
The oceanic modification includes areas affected by tunnels and bridges; infrastructure for energy extraction (for example, oil and gas rigs, wind farms); shipping (ports and marinas); aquaculture infrastructure; and artificial reefs.
Dr Bugnot said that ocean development is nothing new, yet, in recent times, it has rapidly changed. "It has been ongoing since before 2000 BC," she said. "Then, it supported maritime traffic through the construction of commercial ports and protected low-lying coasts with the creation of structures similar to breakwaters.
"Since the mid-20th century, however, ocean development has ramped up, and produced both positive and negative results.
"For example, while artificial reefs have been used as 'sacrificial habitat' to drive tourism and deter fishing, this infrastructure can also impact sensitive natural habitats like seagrasses, mudflats and saltmarshes, consequently affecting water quality.
"Marine development mostly occurs in coastal areas - the most biodiverse and biologically productive ocean environments."
Future expansion 'alarming'
Dr Bugnot, joined by co-researchers from multiple local and international universities, also projected the rate of future ocean footprint expansion.
"The numbers are alarming," Dr Bugnot said. "For example, infrastructure for power and aquaculture, including cables and tunnels, is projected to increase by 50 to 70 percent by 2028.
"Yet this is an underestimate: there is a dearth of information on ocean development, due to poor regulation of this in many parts of the world.
"There is an urgent need for improved management of marine environments. We hope our study spurs national and international initiatives, such as the EU Marine Strategy Framework Directive, to greater action."
The researchers attributed the projected expansion to people's increasing need for defences against coastal erosion and inundation due to sea level rise and climate change, as well as to their transportation, energy extraction, and recreation needs.
|
10.1038/s41893-020-00595-1
| 2,020 |
Nature Sustainability
|
Current and projected global extent of marine built structures
|
The sprawl of marine construction is one of the most extreme human modifications to global seascapes. Nevertheless, its global extent remains largely unquantified compared to that on land. We synthesized disparate information from a diversity of sources to provide a global assessment of the extent of existing and projected marine construction and its effects on the seascape. Here we estimated that the physical footprint of built structures was at least 32,000 km2 worldwide as of 2018, and is expected to cover 39,400 km2 by 2028. The area of seascape modified around structures was 1.0–3.4 × 106 km2 in 2018 and was projected to increase by 50–70% for power and aquaculture infrastructure, cables and tunnels by 2028. In 2018, marine construction affected 1.5% (0.7–2.4%) of global Exclusive Economic Zones, comparable to the global extent of urban land estimated at 0.02–1.7%. This study provides a critical baseline for tracking future marine human development.
|
983702
|
Beyond greenspace and bluespace
|
You’ve heard it before: exposure to nature is good for you. Most research on environment and human health focuses on landscapes dominated by vegetation and bodies of liquid water. However, these so-called “green” and “blue” spaces only describe a fraction of Earth’s ecosystems. What about other landscapes, such as caves, deserts or glaciers?
A new paper is the first review to draw attention to the potential value of environments beyond greenspace and bluespace. The authors created a rubric that classifies an environment by its natural elements rather than by confusing color-coded language, yielding three distinct categories: landscapes dominated by plants, by rocks and minerals, or by water, including frozen landscapes. They identified existing studies that focused on landscapes beyond greenspace and bluespace, including those dominated by solid-state water, such as polar regions. In addition to cataloguing the benefits themselves, the authors identified the mechanisms, or pathways, underlying those benefits.
“We looked at studies focused on wild places—expeditions to the Arctic, cave therapy in China and others focused on high mountain landscapes,” said Alessandro Rigolon, assistant professor of city and metropolitan planning, and co-author of the review. “There was a breadth of different kinds of landscapes that show some beneficial factors. That was surprising to me—some of these places are not what you would consider this hospitable, right? Where humans have settled in small numbers. Why is that?”
The review, published Jan. 15, 2023, in the journal Science of the Total Environment, found that health outcomes from landscapes dominated by solid-state water or by rocks and minerals resulted from both shorter-term (viewing images) and longer-term (living in the landscape) exposure. The reported benefits span a spectrum, from improved emotional and mental states to medical treatments for allergies. The mechanisms underlying the health benefits consisted of common theories from the bluespace/greenspace literature, such as restoration, and less discussed pathways, such as self-determination and place attachment. There were also risks associated with exposure to these environments, including mobility issues, seasonal affective disorder and allergies. The authors say that much more research is needed to understand the restorative potential and therapeutic possibilities beyond greenspaces and bluespaces.
Landscapes dominated by solid-state water
The authors found little evidence of benefits from short-term exposure to landscapes dominated by solid-state water. However, cruises to the North Pole and guided glacier hikes indicate a fascination with ice-dominated landscapes, suggesting an emotional benefit that has yet to be uncovered. Alpine regions in the European Alps, Himalayas, Andes and the Wasatch Mountains’ “Greatest Snow on Earth” attract crowds of visitors every year, suggesting that outdoor recreation activities promote fitness and generate emotional and social benefits.
The authors also found clues to the health benefits of longer exposure to icy/snowy landscapes from studies of polar expeditions or military deployment. One review of Antarctic psychological research found that the lack of modern conveniences and responsibilities improved people’s moods and emotions. Other studies found that living in polar spaces for long periods spurred personal growth and improved well-being. The authors note that the study subjects were expedition members or soldiers that were trained to have a higher adaptability to extreme environments, so findings from this research might not apply to broader populations.
Landscapes dominated by rocks and minerals
Landscapes dominated by rocks and minerals include caves and both cold and dry-heat deserts. Caves are subterranean environments that often lack plants due to low sunlight. The authors found no evidence of benefits from short-term exposure to caves, but they point out that caves with stalactites and stalagmites attract more than 70 million visitors globally every year. Given their appeal, caves may promote positive emotional responses. The authors also found evidence that cave climates may be therapeutic for physical ailments. For example, speleotherapy involves breathing the unique air in caves, and halotherapy involves breathing air with airborne dry salt in an enclosed space that mimics salt caves. These therapies require longer-term exposure, and numerous studies have outlined their potential to treat afflictions from asthma and skin allergies to chronic obstructive pulmonary disease.
Based on the authors’ review, only dry-heat deserts have been researched for potential health benefits. In one study, college students from Saudi Arabia who viewed one minute of video of a familiar coastal desert performed better on a memory test than students who viewed video of an unfamiliar temperate forest.
“This finding suggests that people’s familiarity with the predominant natural landscape where they grew up might play a role in the benefits that such landscape brings to them,” said Rigolon. Other studies that exposed participants to images or short walks in the desert found that they have a calming effect.
Longer-term exposure also impacted people. One experimental study found that during a four-day trip to the Utah desert, participants’ brain activity suggested their environment held their attention. Studies in Kenya and along the Israel/Jordan border revealed that living in the desert supported physical and mental well-being by offering freedom of movement and a sense of peace. Some medicinal therapies originated in desert landscapes, such as Uyghur sand therapy, which uses sand heated by the sun to treat chronic osteoarthritis.
What drives health benefits?
To develop their new framework, the authors adapted three factors in greenspace/bluespace literature that link nature exposure to health. The first, harm reduction, refers to components of the landscape that mitigate noise, heat or air pollution. The researchers found no evidence of harm reduction from the rock/mineral and solid-state-water landscapes but included it for future research outcomes. The second, restoring capacity, refers to recovery from negative states, such as stress or fatigue. The third, building capacity, refers to natural landscapes’ ability to promote health through experiences, such as promoting social cohesion or physical activity.
The authors found evidence that these landscapes do promote restoring and building capacities. Many studies revealed that they restored attention and reduced stress, and some covered therapies that addressed post-traumatic stress disorder.
What’s next?
Though the narrative review revealed some exciting health benefits from non-greenspaces/bluespaces, much more research is needed to address the gaps in the literature. Rigolon and colleagues have also published a paper that presents a new dataset mapping all accessible and recreational public lands in the U.S., from local fields to national parks, including natural environments dominated by rocks and minerals. This dataset will allow them to study associations between rock/mineral-dominated landscapes and health outcomes on a broader scale than has been done so far. Locally, they’re also exploring whether guided walks through a garden and arboretum have positive impacts on older adults with dementia.
Rigolon concluded, “Living in Utah, it was exciting to find that desert and snowcapped landscapes have similar health benefits to landscapes dominated by vegetation. They show that tourism in our snowcapped mountains and red rock country can have great benefits for people’s mental health. One more reason to spend time outdoors.”
Co-authors include Hansen Li of Southwest University; Matthew Browning, Shuai Yuan, Olivia McAnirlin and Nazanin Hatami of Clemson University; Lincoln Larson of North Carolina State University; Derrick Taff and Jacob Benfield of The Pennsylvania State University; S.M. Labib of Utrecht University; and Peter Kahn Jr. of the University of Washington.
Science of The Total Environment
10.1016/j.scitotenv.2022.159292
Literature review
Not applicable
Beyond “bluespace” and “greenspace”: A narrative review of possible health benefits from exposure to other natural landscapes
15-Jan-2023
|
10.1016/j.scitotenv.2022.159292
| 2,022 |
The Science of The Total Environment
|
Beyond “bluespace” and “greenspace”: A narrative review of possible health benefits from exposure to other natural landscapes
|
Numerous studies have highlighted the physical and mental health benefits of contact with nature, typically in landscapes characterized by plants (i.e., "greenspace") and water (i.e., "bluespace"). However, natural landscapes are not always green or blue, and the effects of other landscapes are worth attention. This narrative review attempts to overcome this limitation of past research. Rather than focusing on colors, we propose that natural landscapes are composed of at least one of three components: (1) plants (e.g., trees, flowering plants, grasses, sedges, mosses, ferns, and algae), (2) water (e.g., rivers, canals, lakes, and oceans), and/or (3) rocks and minerals, including soil. Landscapes not dominated by plants or liquid-state water include those with abundant solid-state water (e.g., polar spaces) and rocks or minerals (e.g., deserts and caves). Possible health benefits of solid-state water or rock/mineral dominated landscapes include both shorter-term (e.g., viewing images) and longer-term (e.g., living in these landscapes) exposure durations. Reported benefits span improved emotional and mental states and medical treatment resources for respiratory conditions and allergies. Mechanisms underlying the health benefits of exposure consist of commonly discussed theories in the "greenspace" and "bluespace" literature (i.e., instoration and restoration) as well as less discussed pathways in that literature (i.e., post-traumatic growth, self-determination, supportive environment theory, and place attachment). This is the first review to draw attention to the potential salutogenic value of natural landscapes beyond "greenspace" and "bluespace." It is also among the first to highlight the limitations and confusion that result from classifying natural landscapes using color. Since the extant literature on natural landscapes - beyond those with abundant plants or liquid-state water - is limited in regard to quantity and quality, additional research is needed to understand their restorative potential and therapeutic possibilities.
|
917664
|
New potential approach to treat atopic dermatitis
|
The skin of humans and animals is densely populated by fungi. It is suspected that a small yeast species called Malassezia, which besides bacteria and viruses is part of the microflora of healthy skin, strengthens the body's defenses and prepares the immune system for dangerous pathogens - much like certain bacteria do. Unlike with the bacteria, however, little has so far been known about the physiological processes that keep the ubiquitous fungus in check.
Immunologists at the University of Zurich have now shown that our immune system is responsible for maintaining the balance on our skin. The researchers were able to demonstrate that, in mice as well as in humans, Malassezia fungi stimulate the immune system to produce the cytokine interleukin-17. "If this cytokine isn't released or if the immune cells that produce interleukin-17 are missing, there is nothing to stop the fungus from growing and infesting the skin," explains Salomé LeibundGut-Landmann, professor of immunology and head of the immunology section at the Vetsuisse Faculty of UZH.
Fungus can encourage skin allergy
But what happens when the balance between the fungus and the immune system on the surface of our body is lost? There is some evidence that the usually harmless Malassezia fungus plays a role when it comes to atopic dermatitis. In this chronic inflammatory skin allergy, which affects up to 20 percent of children and 10 percent of adults, the immune system overreacts to antigens from the environment, for example house dust mites. This can lead to eczema, which is characterized by dry, inflamed and itching skin lesions, typically on the neck, forearms and legs. It is also one of the most common skin diseases in dogs.
The current study confirms that interleukin-17 production by certain immune cells, which normally provide protection against uncontrolled fungal growth on the skin, also contributes to the development of symptoms characteristic of atopic dermatitis. The fungus becomes an allergen on the skin, so to speak, and triggers an overreaction of the immune system with the respective inflammatory characteristics. This finding is supported by experiments with cells from atopic dermatitis patients carried out in cooperation with the University Hospital Zurich and ETH Zurich.
Treatment with therapeutic antibodies
"The findings of our study suggest that therapeutic antibodies that neutralize the effect of interleukin-17 could be an effective treatment for atopic dermatitis. These antibodies already exist and are being used to treat psoriasis with great success," says LeibundGut-Landmann.
However, it remains to be studied why the immune response against the Malassezia fungus can become pathological and why the normally protective mechanisms break down in atopic dermatitis patients.
###
|
10.1016/j.chom.2019.02.002
| 2,019 |
Cell Host & Microbe
|
The Skin Commensal Yeast Malassezia Triggers a Type 17 Response that Coordinates Anti-fungal Immunity and Exacerbates Skin Inflammation
|
Commensal fungi of the mammalian skin, such as those of the genus Malassezia, are associated with atopic dermatitis and other common inflammatory skin disorders. Understanding of the causative relationship between fungal commensalism and disease manifestation remains incomplete. By developing a murine epicutaneous infection model, we found Malassezia spp. selectively induce IL-17 and related cytokines. This response is key in preventing fungal overgrowth on the skin, as disruption of the IL-23-IL-17 axis compromises Malassezia-specific cutaneous immunity. Under conditions of impaired skin integrity, mimicking a hallmark of atopic dermatitis, the presence of Malassezia dramatically aggravates cutaneous inflammation, which again was IL-23 and IL-17 dependent. Consistently, we found a CCR6+ Th17 subset of memory T cells to be Malassezia specific in both healthy individuals and atopic dermatitis patients, whereby the latter showed enhanced frequency of these cells. Thus, the Malassezia-induced type 17 response is pivotal in orchestrating antifungal immunity and in actively promoting skin inflammation.
|
472492
|
Blood test offers improved breast cancer detection tool to reduce use of breast biopsy
|
NEW YORK, May 23, 2017 - A new study published in Clinical Breast Cancer demonstrates that Videssa® Breast, a multi-protein biomarker blood test to detect breast cancer, can help inform better decision-making after abnormal mammogram or other breast imaging results and potentially reduce use of biopsy by up to 67 percent. The study evaluated the performance of Videssa Breast among women under age 50.
"With about 1.6 million breast biopsies performed each year,1 the implications of a blood test that can help clinicians confidently rule out breast cancer and avoid a potentially unnecessary biopsy are tremendous," said Judith K. Wolf, MD, Chief Medical Officer of Provista Diagnostics, Inc.
"We know imaging has limitations, especially among women under age 50 who, because of confounding factors, are more difficult to image. This research shows that Videssa Breast can be a powerful new tool in the diagnostic toolbox for clinicians."
The study, "A Non-Invasive Blood-Based Combinatorial Proteomic Biomarker Assay to Detect Breast Cancer in Women Under the Age of 50 Years" demonstrated the performance of Videssa Breast from two prospective trials that enrolled 545 women, ages 25-50, with abnormal or difficult-to-interpret imaging (BI-RADS 3 and 4). The overall performance of Videssa Breast in women with a breast cancer prevalence of 5.87 percent, resulted in a sensitivity of 87.5 percent, specificity of 83.8 percent, positive predictive value (PPV) of 25.2 percent and a negative predictive value (NPV) of 99.1 percent.
The study notes that the high NPV helps clinicians identify patients who are highly unlikely to have breast cancer. Depending on age, approximately 70 to 90 percent of breast biopsies are benign.1,2 The improved PPV of Videssa Breast over imaging - 25.2 percent vs. 8.8 percent - can increase the percentage of biopsies that yield a breast cancer diagnosis from one in 11 to one in four.
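As context for these figures, PPV and NPV are not intrinsic properties of a test; they follow from sensitivity, specificity and disease prevalence by Bayes' rule. The short sketch below is purely illustrative (it is not Provista's analysis code) and simply checks that the published 25.2 percent PPV and 99.1 percent NPV are what the reported sensitivity, specificity and 5.87 percent prevalence imply:

```python
# Minimal sketch: derive PPV and NPV from sensitivity, specificity and prevalence.
# Illustrative only; not the study's statistical code.
def ppv_npv(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence                 # true-positive fraction of those tested
    fp = (1 - specificity) * (1 - prevalence)     # false-positive fraction
    tn = specificity * (1 - prevalence)           # true-negative fraction
    fn = (1 - sensitivity) * prevalence           # false-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# Values reported in the press release: 87.5% sensitivity, 83.8% specificity,
# 5.87% breast cancer prevalence in the study population.
ppv, npv = ppv_npv(0.875, 0.838, 0.0587)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")        # about 25.2% and 99.1%
```

The same arithmetic underlies the biopsy-yield claim: at imaging's reported 8.8 percent PPV roughly one biopsy in 11 finds cancer, while at 25.2 percent roughly one in four does.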
"When a mammogram yields an abnormal result, the challenge for every clinician is to decide which patients need follow-up, further imaging or biopsy," said Josie R. Alpers, MD, a radiologist specializing in mammography and diagnostic radiology at Avera McKennan Hospital & University Health Center and a study co-author. "A test that is well-validated in a prospective trial means clinicians have a new way to accurately identify which patients may or may not need additional follow-up."
Videssa Breast has been studied in two prospective, randomized, multi-center and blinded clinical trials enrolling more than 1,350 patients ages 25-75. The current publication is the first prospective study of a proteomic assay composed of serum protein biomarkers and tumor-associated autoantibodies used to detect breast cancer in women with abnormal imaging results. The data featured in the current Clinical Breast Cancer publication come from the first trial and from cohort one of the second trial. Data from the over-50 cohort will be featured in upcoming publications. Videssa Breast is currently in limited clinical use through an early access program.
|
10.1016/j.clbc.2017.05.004
| 2,017 |
Clinical Breast Cancer
|
A Noninvasive Blood-based Combinatorial Proteomic Biomarker Assay to Detect Breast Cancer in Women Under the Age of 50 Years
|
Despite significant advances in breast imaging, the ability to detect breast cancer (BC) remains a challenge. To address the unmet needs of the current BC detection paradigm, 2 prospective clinical trials were conducted to develop a blood-based combinatorial proteomic biomarker assay (Videssa Breast) to accurately detect BC and reduce false positives (FPs) from suspicious imaging findings. Provista-001 and Provista-002 (cohort one) enrolled Breast Imaging Reporting and Data System 3 or 4 women aged under 50 years. Serum was evaluated for 11 serum protein biomarkers and 33 tumor-associated autoantibodies. Individual biomarker expression, demographics, and clinical characteristics data from Provista-001 were combined to develop a logistic regression model to detect BC. The performance was tested using Provista-002 cohort one (validation set). The training model had a sensitivity and specificity of 92.3% and 85.3% (BC prevalence, 7.7%), respectively. In the validation set (BC prevalence, 2.9%), the sensitivity and specificity were 66.7% and 81.5%, respectively. The negative predictive value was high in both sets (99.3% and 98.8%, respectively). Videssa Breast performance in the combined training and validation set was 99.1% negative predictive value, 87.5% sensitivity, 83.8% specificity, and 25.2% positive predictive value (BC prevalence, 5.87%). Overall, imaging resulted in 341 participants receiving follow-up procedures to detect 30 cancers (90.6% FP rate). Videssa Breast would have recommended 111 participants for follow-up, a 67% reduction in FPs (P < .00001). Videssa Breast can effectively detect BC when used in conjunction with imaging and can substantially reduce unnecessary medical procedures, as well as provide assurance to women that they likely do not have BC.
|
962201
|
Obscure gastrointestinal bleeding: rebleeding rates and rebleeding predictors found
|
Obscure gastrointestinal bleeding (OGIB) is defined as gastrointestinal bleeding from a source that cannot be determined even after upper or lower gastrointestinal endoscopy is performed. It is an intractable disease that can cause repeated bloody stools and anemia without an identifiable cause, and may require frequent blood transfusions. Although the pathogenesis of OGIB remains largely unclear, it is assumed that in most cases, the bleeding is from the small intestine.
Capsule endoscopy (CE) is a useful and noninvasive procedure for evaluating OGIB. Previous studies have shown that patients with severe comorbidities have a higher rate of positive CE findings for OGIB — meaning that mucosal breaks, vascular lesions, tumors, or blood retention were observed. Additionally, when the initial CE fails to identify bleeding lesions in OGIB, repeated CE can detect lesions at a higher rate. However, there have been no reports with a sufficiently large number of cases on the long-term outcomes of OGIB evaluated by CE and the risk of rebleeding.
Addressing this shortcoming, a research group led by Dr. Koji Otani from the Osaka Metropolitan University Graduate School of Medicine followed up on 389 patients who underwent CE as their initial small intestinal examination for OGIB and evaluated the risk of rebleeding over the long term. In addition, the team evaluated the risk of rebleeding in OGIB, in which no source of rebleeding was found in any part of the gastrointestinal tract, including the small intestine.
The analysis showed that the overall cumulative rebleeding rate during the five years after CE was 41.7%. In patients with positive CE findings, the cumulative rebleeding rate was 48.0%. The cumulative rebleeding rate in patients who underwent therapeutic intervention for positive CE findings was 31.8%.
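Cumulative rebleeding rates of this kind are typically read off a Kaplan-Meier curve built from each patient's time to rebleeding, with patients who never rebled treated as censored. The sketch below is a generic illustration with made-up follow-up times, not the study's data or code:

```python
import numpy as np

def cumulative_rebleeding(time_months, rebled):
    """Kaplan-Meier estimate of the cumulative rebleeding probability 1 - S(t).

    time_months : follow-up time for each patient, in months
    rebled      : 1 if rebleeding occurred at that time, 0 if the patient was censored
    """
    t = np.asarray(time_months, float)
    e = np.asarray(rebled, int)
    surv, out = 1.0, {}
    for ti in np.unique(t[e == 1]):                  # step only at observed rebleeding times
        at_risk = np.sum(t >= ti)                    # patients still being followed at ti
        events = np.sum((t == ti) & (e == 1))
        surv *= 1.0 - events / at_risk               # product-limit update of survival
        out[float(ti)] = round(1.0 - surv, 3)        # cumulative rebleeding probability
    return out

# Hypothetical cohort of eight patients (months of follow-up; 1 = rebled, 0 = censored).
print(cumulative_rebleeding([6, 12, 18, 24, 30, 40, 55, 60],
                            [1,  0,  1,  0,  1,  0,  1,  0]))
```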
Furthermore, overt OGIB, anticoagulant use, positive balloon-assisted enteroscopy findings after CE, and iron supplementation without therapeutic intervention were found to be independent predictors of rebleeding. Among the components of the Charlson Comorbidity Index, which scores the burden of comorbidities, liver cirrhosis was an independent predictor of rebleeding in patients with OGIB.
“If capsule endoscopy can be used to properly diagnose and lead to therapeutic intervention, the risk of rebleeding can be reduced,” concluded Dr. Otani. “Even if the endoscopy does not detect any lesions, adequate follow-up is necessary. Here at Osaka Metropolitan University, we have been utilizing this tool clinically since its early days and have accumulated some of the world's leading clinical data. This study revealed a high rebleeding rate in OGIB patients and clarified the effects of rebleeding predictors and therapeutic intervention. We have high expectations that this will lead to better medical care in the future.”
|
10.1016/j.gie.2022.07.012
| 2,022 |
Gastrointestinal Endoscopy
|
Long-term rebleeding rate and predictive factors of rebleeding after capsule endoscopy in patients with obscure GI bleeding
|
The incidence of rebleeding in obscure GI bleeding (OGIB) remains unclear. This study used capsule endoscopy (CE) to determine the long-term rebleeding rate and predictive factors for rebleeding in patients with OGIB. This single-center, observational study enrolled consecutive patients with OGIB who underwent CE as the first small intestinal examination between March 2004 and December 2015 and were followed up through medical records or letters. Three hundred eighty-nine patients were included in the analysis. Survival curve analysis showed that the overall cumulative rebleeding rate in OGIB during the 5 years was 41.7%. Multivariate analysis using the Cox proportional hazards model revealed that overt OGIB (hazard ratio [HR], 2.017; 95% confidence interval [CI], 1.299-3.131; P = .002), anticoagulants (HR, 1.930; 95% CI, 1.093-3.410; P = .023), positive balloon-assisted enteroscopy findings after CE (HR, 2.927; 95% CI, 1.791-4.783; P < .001), and iron supplements without therapeutic intervention (HR, 2.202; 95% CI, 1.386-3.498; P = .001) were associated with rebleeding, whereas a higher minimum hemoglobin level (HR, .902; 95% CI, .834-.975; P = .009) and therapeutic intervention (HR, .288; 95% CI, .145-.570; P < .001) significantly reduced the risk of rebleeding. Among the Charlson Comorbidity Index components, liver cirrhosis was an independent predictor associated with rebleeding in patients with OGIB (HR, 4.362; 95% CI, 2.622-7.259; P < .001) and in patients with negative CE findings (HR, 8.961; 95% CI, 4.424-18.150; P < .001). Rebleeding is common during the long-term follow-up of patients with OGIB. Careful follow-up is required for patients with liver cirrhosis or previous massive bleeding.
|
538978
|
How a positive work environment leads to feelings of inclusion among employees
|
BINGHAMTON, NY - Fostering an inclusive work environment can lead to higher satisfaction, innovation, trust and retention among employees, according to new research from Binghamton University, State University of New York.
Kim Brimhall, assistant professor of social work at Binghamton University's College of Community and Public Affairs, noticed how the nonprofit sector generally suffers from high employee-turnover rates, low work performance and deficits among the leadership, and wanted to find out what could be done to break this cycle. She partnered with a large nonprofit hospital in Los Angeles, surveying employees on topics such as leader engagement, inclusion, innovation, job satisfaction and perceived quality of care. The full study also included one-on-one qualitative interviews, as well as several organizational observations.
Analyzing the data, Brimhall found that leaders who seek the input of organizational members from all job positions and encourage everyone, regardless of educational background or job responsibilities, to take initiative and participate in work-related processes are more likely to increase feelings of inclusion. This then leads to increased innovation, employee job satisfaction and quality of services in nonprofit organizations.
"When nonprofit organization members believe that they are valued for their unique personal characteristics and are recognized as important members of the organization, employee engagement, trust, satisfaction, commitment and retention improve," wrote Brimhall. "Leader engagement, that is, a leader's ability to actively engage all organizational members in critical decision making, may foster a climate for inclusion and positive organizational outcomes, such as a climate for innovation, job satisfaction and perceived quality of care."
These findings are applicable across national settings and to the effective management of nonprofit organizations internationally, Brimhall wrote.
She hopes to develop economically practical, evidence-based tools that leaders can utilize to create inclusive work environments. She is partnering with another large nonprofit hospital to conduct an experimental study testing workplace interventions. These tools could help employees feel included and possibly lead to more innovation in the workplace and overall improvement in their feelings toward their job, which would then translate to improved quality of care given to clients.
The paper, "Inclusion is Important...But How Do I Include? Examining the Effects of Leader Engagement on Inclusion, Innovation, Job Satisfaction, and Perceived Quality of Care in a Diverse Nonprofit Health Care Organization," was published in Nonprofit and Voluntary Sector Quarterly.
###
|
10.1177/0899764019829834
| 2,019 |
Nonprofit and Voluntary Sector Quarterly
|
Inclusion Is Important . . . But How Do I Include? Examining the Effects of Leader Engagement on Inclusion, Innovation, Job Satisfaction, and Perceived Quality of Care in a Diverse Nonprofit Health Care Organization
|
Nonprofit leaders and managers are recognizing the benefits of creating inclusive organizations in which everyone feels valued and appreciated, yet little is known about how leaders can foster workplace inclusion. This study examined the relationships among leader engagement, inclusion, innovation, job satisfaction, and perceived quality of care in a diverse nonprofit health care organization. Data were collected at three points in 6-month intervals from a U.S. nonprofit hospital. Multilevel path analysis indicated significant direct associations between leader engagement, inclusion, and innovation. Innovation was directly linked to improved job satisfaction and perceived quality of care. Significant indirect effects were found from leader engagement to increased job satisfaction and perceived quality of care through increased climates for inclusion and innovation. Findings suggest that nonprofit leaders who engage others in critical organizational processes can help foster an inclusive climate that leads to increased innovation, employee job satisfaction, and perceived quality of care.
|
479421
|
Western diet may increase risk of gut inflammation, infection
|
Eating a Western diet impairs the immune system in the gut in ways that could increase risk of infection and inflammatory bowel disease, according to a study from researchers at Washington University School of Medicine in St. Louis and Cleveland Clinic.
The study, in mice and people, showed that a diet high in sugar and fat causes damage to Paneth cells, immune cells in the gut that help keep inflammation in check. When Paneth cells aren't functioning properly, the gut immune system is excessively prone to inflammation, putting people at risk of inflammatory bowel disease and undermining effective control of disease-causing microbes. The findings, published May 18 in Cell Host & Microbe, open up new approaches to regulating gut immunity by restoring normal Paneth cell function.
"Inflammatory bowel disease has historically been a problem primarily in Western countries such as the U.S., but it's becoming more common globally as more and more people adopt Western lifestyles," said lead author Ta-Chiang Liu, MD, PhD, an associate professor of pathology & immunology at Washington University. "Our research showed that long-term consumption of a Western-style diet high in fat and sugar impairs the function of immune cells in the gut in ways that could promote inflammatory bowel disease or increase the risk of intestinal infections."
Paneth cell impairment is a key feature of inflammatory bowel disease. For example, people with Crohn's disease, a kind of inflammatory bowel disease characterized by abdominal pain, diarrhea, anemia and fatigue, often have Paneth cells that have stopped working.
Liu and senior author Thaddeus Stappenbeck, MD, PhD, chair of the Department of Inflammation and Immunity at Cleveland Clinic, set out to find the cause of Paneth cell dysfunction in people. They analyzed a database containing demographic and clinical data on 400 people, including an assessment of each person's Paneth cells. The researchers found that high body mass index (BMI) was associated with Paneth cells that looked abnormal and unhealthy under a microscope. The higher a person's BMI, the worse his or her Paneth cells looked. The association held for healthy adults and people with Crohn's disease.
To better understand this connection, the researchers studied two strains of mice that are genetically predisposed to obesity. Such mice chronically overeat because they carry mutations that prevent them from feeling full even when fed a regular diet. To the researchers' surprise, the obese mice had Paneth cells that looked normal.
In people, obesity is frequently the result of eating a diet rich in fat and sugar. So the scientists fed normal mice a diet in which 40% of the calories came from fat or sugar, similar to the typical Western diet. After two months on this chow, the mice had become obese and their Paneth cells looked decidedly abnormal.
"Obesity wasn't the problem per se," Liu said. "Eating too much of a healthy diet didn't affect the Paneth cells. It was the high-fat, high-sugar diet that was the problem."
The Paneth cells returned to normal when the mice were put back on a healthy mouse diet for four weeks. Whether people who habitually eat a Western diet can improve their gut immunity by changing their diet remains to be seen, Liu said.
"This was a short-term experiment, just eight weeks," Liu said. "In people, obesity doesn't occur overnight or even in eight weeks. People have a suboptimal lifestyle for 20, 30 years before they become obese. It's possible that if you have Western diet for so long, you cross a point of no return and your Paneth cells don't recover even if you change your diet. We'd need to do more research before we can say whether this process is reversible in people."
Further experiments showed that a molecule known as deoxycholic acid, a secondary bile acid formed as a byproduct of the metabolism of gut bacteria, forms the link between a Western diet and Paneth cell dysfunction. The bile acid increases the activity of two immune molecules -- farnesoid X receptor and type 1 interferon -- that inhibit Paneth cell function.
Liu and colleagues now are investigating whether fat or sugar plays the primary role in impairing Paneth cells. They also have begun studying ways to restore normal Paneth cell function and improve gut immunity by targeting the bile acid or the two immune molecules.
|
10.1016/j.chom.2021.04.004
| 2,021 |
Cell Host & Microbe
|
Western diet induces Paneth cell defects through microbiome alterations and farnesoid X receptor and type I interferon activation
|
Intestinal Paneth cells modulate innate immunity and infection. In Crohn's disease, genetic mutations together with environmental triggers can disable Paneth cell function. Here, we find that a western diet (WD) similarly leads to Paneth cell dysfunction through mechanisms dependent on the microbiome and farnesoid X receptor (FXR) and type I interferon (IFN) signaling. Analysis of multiple human cohorts suggests that obesity is associated with Paneth cell dysfunction. In mouse models, consumption of a WD for as little as 4 weeks led to Paneth cell dysfunction. WD consumption in conjunction with Clostridium spp. increased the secondary bile acid deoxycholic acid levels in the ileum, which in turn inhibited Paneth cell function. The process required excess signaling of both FXR and IFN within intestinal epithelial cells. Our findings provide a mechanistic link between poor diet and inhibition of gut innate immunity and uncover an effect of FXR activation in gut inflammation.
|
809276
|
The 1950s: The decade in which gravity physics became experimental
|
In the 1950s and earlier, the gravity theory of Einstein's general relativity was largely a theoretical science. In a new paper published in EPJ H, Jim Peebles, a physicist and theoretical cosmologist who is currently the Albert Einstein Professor Emeritus of Science at Princeton University, New Jersey, USA, shares a historical account of how the experimental study of gravity evolved.
This review examines the broad range of new approaches initiated in the late 1950s, following through to the transition of experimental gravity physics to become a normal and accepted part of physical science in the late 1960s. Highlighting the importance of advances in technology in changing the lines of investigation in the field, it also emphasises the need for physical theories to be empirically tested, because experience shows that this can yield surprising results.
In this context, the review examines the role of scientists such as the US physicist Robert Dicke in changing the former perspective. At that time, Dicke made the mid-career decision to lead a research group dedicated to the experimental study of gravity, following new research directions inspired by old arguments associated with Ernst Mach and Paul Dirac.
In the mid-1950s, the experimental exploration of gravity physics was generally considered uninteresting, because it seemed that little could be done to better test general relativity theory. Now, the empirical basis for inflation, or other ideas about the role of gravity in the very early universe, is considered to be necessarily schematic, because better experiments don't appear to be feasible.
The community was surprised by the abundance of evidence that has grown out of the emergence of experimental gravity physics. Indeed, experimental findings show that the theory Einstein completed a century ago matches an abundance of experimental and observational evidence on scales ranging from the laboratory to the Solar System, and even to the observable universe. This experience suggests that there may be further surprising empirical developments to come, perhaps related to deeper tests of the nature of gravity, and perhaps ones that can tell us more about how the world began.
###
References: Robert Dicke and the naissance of experimental gravity physics, 1957-1967. P. J. E. Peebles (2016), European Physical Journal H, DOI 10.1140/epjh/e2016-70034-0
|
10.1140/epjh/e2016-70034-0
| 2,016 |
The European Physical Journal H
|
Robert Dicke and the naissance of experimental gravity physics, 1957–1967
|
The experimental study of gravity became much more active in the late 1950s, a change pronounced enough to be termed the birth, or naissance, of experimental gravity physics. I present a review of developments in this subject since 1915, through the broad range of new approaches that commenced in the late 1950s, and up to the transition of experimental gravity physics to what might be termed a normal and accepted part of physical science in the late 1960s. This review shows the importance of advances in technology, here as in all branches of natural science. The role of contingency is illustrated by Robert Dicke's decision in the mid-1950s to change directions in mid-career, to lead a research group dedicated to the experimental study of gravity. The review also shows the power of nonempirical evidence. Some in the 1950s felt that general relativity theory is so logically sound as to be scarcely worth the testing. But Dicke and others argued that a poorly tested theory is only that, and that other nonempirical arguments, based on Mach's Principle and Dirac's Large Numbers hypothesis, suggested it would be worth looking for a better theory of gravity. I conclude by offering lessons from this history, some peculiar to the study of gravity physics during the naissance, some of more general relevance. The central lesson, which is familiar but not always well advertised, is that physical theories can be empirically established, sometimes with surprising results.
|
479499
|
Solved: the mystery of how dark matter in galaxies is distributed
|
The gravitational force under which the Universe has evolved, from an almost uniform state at the Big Bang until now, when matter is concentrated in galaxies, stars and planets, is provided by what is termed 'dark matter'. But in spite of the essential role that this extra material plays, we know almost nothing about its nature, behaviour and composition, which is one of the basic problems of modern physics. In a recent article in Astronomy & Astrophysics Letters, scientists at the Instituto de Astrofísica de Canarias (IAC)/University of La Laguna (ULL) and of the National University of the North-West of the Province of Buenos Aires (Junín, Argentina) have shown that the dark matter in galaxies follows a 'maximum entropy' distribution, which sheds light on its nature.
Dark matter makes up 85% of the matter of the Universe, but its existence shows up only on astronomical scales. That is to say, due to its weak interaction, the net effect can only be noticed when it is present in huge quantities. As it cools down only with difficulty, the structures it forms are generally much bigger than planets and stars. Because the presence of dark matter shows up only on large scales, the discovery of its nature probably has to be made by astrophysical studies.
MAXIMUM ENTROPY
To say that the distribution of dark matter is organized according to maximum entropy (which is equivalent to 'maximum disorder' or 'thermodynamic equilibrium') means that it is found in its most probable state. To reach this 'maximum disorder' the dark matter must have had to collide within itself, just as gas molecules do, so as to reach equilibrium in which its density, pressure, and temperature are related. However, we do not know how the dark matter has reached this type of equilibrium.
"Unlike the molecules in the air, for example, because gravitational action is weak, dark matter particles ought hardly to collide with one another, so that the mechanism by which they reach equilibrium is a mystery", says Jorge Sánchez Almeida, an IAC researcher who is the first author of the article. "However if they did collide with one another this would give them a very special nature, which would partly solve the mystery of their origin", he adds.
The maximum entropy of dark matter has been detected in dwarf galaxies, which have a higher ratio of dark matter to total matter than have more massive galaxies, so it is easier to see the effect in them. However, the researchers expect that it is general behaviour in all types of galaxies.
The study implies that the distribution of matter in thermodynamic equilibrium has a much lower central density than astronomers have assumed for many practical applications, such as the correct interpretation of gravitational lenses or the design of experiments to detect dark matter by its self-annihilation.
This central density is basic for the correct interpretation of the curvature of light by gravitational lenses: if the centre is less dense, the effect of the lens is weaker. To use a gravitational lens to measure the mass of a galaxy one needs a model; if this model is changed, the measurement changes.
The central density also is very important for the experiments which try to detect dark matter using its self-annihilation. Two dark matter particles could interact and disappear in a process which is highly improbable, but which would be characteristic of their nature. For two particles to interact they must collide. The probability of this collision depends on the density of the dark matter; the higher the concentration of dark matter, the higher is the probability that the particles will collide.
"For that reason, if the density changes so will the expected rate of production of the self-annihilations, and given that the experiments are designed on the prediction of a given rate, if this rate were very low the experiment is unlikely to yield a positive result", says Sánchez Almeida.
Finally, thermodynamic equilibrium for dark matter could also explain the brightness profile of the galaxies. This brightness falls with distance from the centre of a galaxy in a specific way, whose physical origin is unknown, but for which the researchers are working to show that it is the result of an equilibrium with maximum entropy.
SIMULATION VERSUS OBSERVATION
The density of dark matter in the centres of galaxies has been a mystery for decades. There is a strong discrepancy between the predictions of the simulations (a high central density) and what is observed (a low value). Astronomers have put forward many types of mechanisms to resolve this major disagreement.
In this article, the researchers have shown, using basic physical principles, that the observations can be reproduced on the assumption that the dark matter is in equilibrium, i.e., that it has maximum entropy. The consequences of this result could be very important because they indicate that the dark matter has interchanged energy with itself and/or with the remaining "normal" (baryonic) matter.
"The fact that equilibrium has been reached in such a short time, compared with the age of the Universe, could be the result of a type of interaction between dark matter and normal matter in addition to gravity", suggests Ignacio Trujillo, an IAC researcher and a co-author of this article. "The exact nature of this mechanism needs to be explored, but the consequences could be fascinating to understand just what is this component which dominates the total amount of matter in the Universe".
|
10.1051/0004-6361/202039190
| 2,020 |
Astronomy and Astrophysics
|
The principle of maximum entropy explains the cores observed in the mass distribution of dwarf galaxies
|
Cold dark matter (CDM) simulations predict a central cusp in the mass distribution of galaxies. This prediction is in stark contrast with observations of dwarf galaxies that show a central plateau or “core” in their density distribution. The proposed solutions to this core-cusp problem can be classified into two types. One invokes feedback mechanisms produced by the baryonic component of the galaxies and the other assumes that the properties of the dark matter particle depart from the CDM hypothesis. Here we propose an alternative yet complementary explanation. We argue that cores are unavoidable in the self-gravitating systems of maximum entropy that result from non-extensive statistical mechanics. Their structure follows from the Tsallis entropy, which is attributed to systems with long-range interactions. Strikingly, the mass density profiles predicted by such thermodynamic equilibrium match the observed cores without any adjustment or tuning. Thus, the principle of maximum Tsallis entropy explains the presence of cores in dwarf galaxies.
|
880045
|
Being married may help prolong survival in cancer patients
|
New research has uncovered a link between being married and living longer among cancer patients, with the beneficial effect of marriage differing by race/ethnicity and place of birth. Published early online in CANCER, a peer-reviewed journal of the American Cancer Society, the findings have important public health implications, given the rising numbers of unmarried individuals in the United States in addition to the growing aging population.
For the analysis, a team led by Scarlett Lin Gomez, PhD, of the Cancer Prevention Institute of California, and María Elena Martínez, PhD, of the University of California, San Diego School of Medicine, assessed information on nearly 800,000 adults in California who were diagnosed in 2000 to 2009 with invasive cancer and were followed through 2012.
The investigators found that unmarried cancer patients had higher death rates than married patients. For males, the rate of death was 27 percent higher among those who were unmarried compared with those who were married. For females, the rate was 19 percent higher among unmarried patients. These patterns were minimally explained by greater economic resources among married patients, including having private health insurance and living in higher socioeconomic status neighborhoods.
The beneficial effect of being married on survival differed across racial/ethnic groups. Among men and women, whites benefitted the most from being married while Hispanics and Asian Pacific Islanders benefitted less. Also, Hispanic and Asian/Pacific Islander cancer patients who were born in the United States experienced a greater benefit than those born outside the country.
"While other studies have found similar protective effects associated with being married, ours is the first in a large population-based setting to assess the extent to which economic resources explain these protective effects," said Dr. Gomez. "Our study provides evidence for social support as a key driver." The findings indicate that physicians and other health professionals who treat unmarried cancer patients should ask if there is someone within their social network available to help them physically and emotionally.
Also, with the number of unmarried adults growing in the United States and the number of cancer patients also growing due to the aging population, the results have important public health implications. "Research is needed to understand the specific reasons behind these associations so that future unmarried patients can receive interventions to increase their chances of survival," said Dr. Martinez.
###
Article: "Effects of marital status and economic resources on survival after cancer: a population-based study." Scarlett Lin Gomez, Susan Hurley, Alison J. Canchola, Theresa H. M. Keegan, Iona Cheng, James D. Murphy, Christina Clarke, Sally L. Glaser, and María Elena Martínez. CANCER; Published Online: April 11, 2016 (DOI: 10.1002/cncr.29885).
URL Upon Publication: http://doi.wiley.com/10.1002/cncr.29885
Article: "Differences in marital status and mortality by race/ethnicity and nativity among California cancer patients." María Elena Martínez, Kristin Anderson, James D. Murphy, Susan Hurley, Alison J. Canchola, Theresa H. M. Keegan, Iona Cheng, Christina Clarke, Sally L. Glaser, and Scarlett L. Gomez. CANCER; Published Online: April 11, 2016 (DOI: 10.1002/cncr.29886).
URL Upon Publication: http://doi.wiley.com/10.1002/cncr.29886
|
10.1002/cncr.29885
| 2,016 |
Cancer
|
Effects of marital status and economic resources on survival after cancer: A population‐based study
|
BACKGROUND Although married cancer patients have more favorable survival than unmarried patients, reasons underlying this association are not fully understood. The authors evaluated the role of economic resources, including health insurance status and neighborhood socioeconomic status (nSES), in a large California cohort. METHODS From the California Cancer Registry, we identified 783,167 cancer patients (386,607 deaths) who were diagnosed during 2000 through 2009 with a first primary, invasive cancer of the 10 most common sites of cancer-related death for each sex and were followed through 2012. Age-stratified and stage-stratified Cox proportional hazard models were used to estimate hazard ratios (HRs) and 95% confidence intervals (95% CIs) for all-cause mortality associated with marital status, adjusted for cancer site, race/ethnicity, and treatment. RESULTS Compared with married patients, unmarried patients had an elevated risk of mortality that was higher among males (HR, 1.27; 95% CI, 1.26-1.29) than among females (HR, 1.19; 95% CI, 1.18-1.20; P interaction < .001). Adjustment for insurance status and nSES reduced the marital status HRs to 1.22 for males and 1.15 for females. There was some evidence of synergistic effects of marital status, insurance, and nSES, with relatively higher risks observed for unmarried status among those who were under-insured and living in high nSES areas compared with those who were under-insured and living in low nSES areas (P interaction = 6.8 × 10⁻⁹ among males and 8.2 × 10⁻⁸ among females). CONCLUSIONS The worse survival of unmarried than married cancer patients appears to be minimally explained by differences in economic resources. Cancer 2016;122:1618-25. © 2016 American Cancer Society.
|
982511
|
Propeller advance paves way for quiet, efficient electric aviation
|
Electrification is seen as having an important role to play in the fossil-free aviation of tomorrow. But electric aviation is battling a trade-off dilemma: the more energy-efficient an electric aircraft is, the noisier it gets. Now, researchers at Chalmers University of Technology, Sweden, have developed a propeller design optimisation method that paves the way for quiet, efficient electric aviation.
In recent years, electrification has been described as having an important role in reducing emissions from future aviation. Due to the challenges posed by longer ranges, interest is chiefly focused on electric propeller planes covering shorter distances. Propellers connected to electric motors are considered the most efficient propulsion system for regional and domestic flights.
But even though the airplanes themselves are electric, their propellers cause another kind of emission: noise. The noise from the propeller blades wouldn't just disturb air passengers. Future electric aircraft will need to fly at relatively low altitudes, with noise disturbance reaching residential areas and animal life.
Battling a trade-off dilemma
This is where the research community faces a dilemma. The ambition of developing electric aircraft that are both quiet and energy-efficient is somewhat thwarted by a trade-off problem.
“We can see that the more blades a propeller has, the lower the noise emissions. But with fewer blades, propulsion becomes more efficient and the electric aircraft can fly for longer. In that sense, there is a trade-off between energy efficiency and noise. This is something of an obstacle for electric aircrafts that are both quiet and efficient,” explains Hua-Dong Yao, Associate Professor and researcher in fluid dynamics and marine technology at Chalmers University of Technology.
An optimised design for quiet and efficient propellers
But now, Hua-Dong Yao and his research colleagues may be one step closer to a solution. They have succeeded in isolating and exploring the noise that occurs at the tip of the propeller blades, or “tip vortices”, a known but less well-explored source of noise. In isolating this noise, the researchers were able to fully understand its role in relation to other noise sources generated by propeller blades. By adjusting a range of propeller parameters, such as pitch angle, chord length and number of blades, the team found a way to optimise the propeller design and even out the trade-off effect between efficiency and noise. The method, described in the study published in the journal Aerospace, can now be used in the design process of quieter propellers for future electric aircraft.
“Modern aircraft propellers usually have two to four blades, but we’ve found that by using six blades designed using our optimisation framework, you can develop a propeller that’s both relatively efficient and quiet. The propeller achieves a noise reduction of up to 5-8 dBA* with only a 3.5 per cent thrust penalty, compared to a propeller with three blades. That’s comparable to the noise reduction of someone going from speaking in a normal conversation voice to the sound you would perceive in a quiet room,” says Hua-Dong Yao.
* A-weighted decibel (dBA or dB(A)) is an expression of the relative loudness of sounds as perceived by the human ear. A-weighting gives more value to frequencies in the mid-range of human hearing and less value to frequencies at the edges as compared to a flat audio decibel measurement. A-weighting is the standard for determining hearing damage and noise pollution.
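For reference, the A-weighting correction itself has a standard closed-form definition (IEC 61672-1). The sketch below simply evaluates that published curve at a few frequencies; it is unrelated to the propeller simulations in the study:

```python
import math

def a_weighting_db(f_hz):
    """A-weighting correction in dB at frequency f_hz (IEC 61672-1 analytic form)."""
    f2 = f_hz ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20.0 * math.log10(ra) + 2.00   # offset chosen so the correction is ~0 dB at 1 kHz

for f in (100, 1000, 4000, 10000):
    print(f"{f:>6} Hz: {a_weighting_db(f):+.1f} dB")
# Low frequencies are strongly attenuated and the 1-4 kHz band is weighted most heavily,
# mirroring the sensitivity of human hearing described above.
```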
More about the scientific publication
Aerospace
10.3390/aerospace9120825
Computational simulation/modeling
Not applicable
Blade-Tip Vortex Noise Mitigation Traded-Off against Aerodynamic Design for Propellers of Future Electric Aircraft
15-Dec-2022
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
|
10.3390/aerospace9120825
| 2,022 |
Aerospace
|
Blade-Tip Vortex Noise Mitigation Traded-Off against Aerodynamic Design for Propellers of Future Electric Aircraft
|
We study noise generation at the blade tips of propellers designed for future electric aircraft propulsion and, furthermore, analyze the interrelationship between noise mitigation and aerodynamics improvement in terms of propeller geometric designs. Classical propellers with three or six blades and a conceptual propeller with three joined dual-blades are compared to understand the effects of blade tip vortices on the noise generation and aerodynamics. The dual blade of the conceptual propeller is constructed by joining the tips of two sub-blades. These propellers are designed to operate under the same freestream flow conditions and similar electric power consumption. The Improved Delayed Detached Eddy Simulation (IDDES) is adopted for the flow simulation to identify high-resolution time-dependent noise sources around the blade tips. The acoustic computations use a time-domain method based on the convective Ffowcs Williams–Hawkings (FW-H) equation. The thrust of the 3-blade conceptual propeller is 4% larger than the 3-blade classical propeller and 8% more than the 6-blade one, given that they have similar efficiencies. Blade tip vortices are found emitting broadband noise. Since the classical and conceptual 3-blade propellers have different geometries, especially at the blade tips, they introduce deviations in the vortex development. However, the differences are small regarding the broadband noise generation. As compared to the 6-blade classical propeller, both 3-blade propellers produce much larger noise. The reason is that the increased number of blades leads to the reduced strength of tip vortices. The findings indicate that the noise mitigation through the modification of the blade design and number can be traded-off by the changed aerodynamic performance.
|
920138
|
Sexing ancient cremated human remains is possible through skeletal measurements
|
Ancient cremated human remains, despite being deformed, still retain sexually diagnostic physical features, according to a study released January 30, 2019 in the open-access journal PLOS ONE by Claudio Cavazzuti of Durham University, UK and colleagues. The authors provide a statistical approach for identifying traits that distinguish male and female remains within a population.
The ability to determine the sex of ancient human remains is essential for archaeologists tracking demographic data and cultural practices across civilizations. Large burial assortments can provide representative samples of ancient populations, but the process of cremation, which has been popular for millennia, warps and fragments bone, altering skeletal measurements that archaeologists might otherwise use to sex an individual. Few studies have attempted to identify skeletal traits that are sexually diagnostic after cremation. Thus, archaeologists lack a reliable method to sex cremated remains in the absence of external clues such as gendered grave goods.
Cavazzuti and colleagues aimed to resolve this deficiency by measuring 24 skeletal traits across 124 cremated individuals with clearly engendered grave goods (such as weapons for men and spindle whorls for women) from five Italian necropolises dating between the 12th and 6th centuries BCE. Assuming that gender largely correlates to sex, the authors statistically compared sex to variation in anatomical traits. Of the 24 traits examined, eight predicted sex with an accuracy of 80% or more, a reliability score similar to those obtained for uncremated ancient remains.
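The 80% figure refers to cross-validated classification accuracy computed trait by trait, with sex labels taken from the gendered grave goods. A generic sketch of that kind of check on hypothetical measurements (using scikit-learn here for convenience, not the authors' own statistical code) could look like this:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical single trait (e.g. a long-bone diameter in mm) for 124 individuals
# whose sex is inferred from gendered grave goods: 1 = male, 0 = female.
male = rng.normal(loc=16.0, scale=1.2, size=60)
female = rng.normal(loc=14.0, scale=1.2, size=64)
X = np.concatenate([male, female]).reshape(-1, 1)
y = np.concatenate([np.ones(60), np.zeros(64)])

# Cross-validated accuracy of a simple discriminant classifier on this one measurement;
# in the study, traits scoring at or above 80% were considered reliably diagnostic.
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```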
The authors conclude that anatomical sex determination is possible in cremated remains, though they caution that the measurements identified in this study differ from those used to sex modern cremated remains, indicating that sexually diagnostic traits differ between populations across time and space. Nonetheless, they suggest that, for ancient populations with large sample sizes, the statistical methods used in this study may be able to differentiate male and female remains.
Cavazzuti adds: "This is a new method for supporting the sex determination of human cremated remains in antiquity. Easy, replicable, reliable."
###
Citation: Cavazzuti C, Bresadola B, d'Innocenzo C, Interlando S, Sperduti A (2019) Towards a new osteometric method for sexing ancient cremated human remains. Analysis of Late Bronze Age and Iron Age samples from Italy with gendered grave goods. PLoS ONE 14(1): e0209423. https://doi.org/10.1371/journal.pone.0209423
Funding: This work was supported by H2020 Marie Skłodowska-Curie Actions "Ex-SPACE", Exploring Social permeability of Ancient Communities of Europe grant no. 702930 to CC. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.
|
10.1371/journal.pone.0209423
| 2,019 |
PLoS ONE
|
Towards a new osteometric method for sexing ancient cremated human remains. Analysis of Late Bronze Age and Iron Age samples from Italy with gendered grave goods
|
Sex estimation of human remains is one of the most important research steps for physical anthropologists and archaeologists dealing with funerary contexts and trying to reconstruct the demographic structure of ancient societies. However, it is well known that in the case of cremations sex assessment might be complicated by the destructive/transformative effect of the fire on bones. Osteometric standards built on unburned human remains and contemporary cremated series are often inadequate for the analysis of ancient cremations, and frequently result in a significant number of misclassifications. This work is an attempt to overcome the scarcity of methods that could be applied to pre-proto-historic Italy and serve as methodological comparison for other European contexts. A set of 24 anatomical traits were measured on 124 Bronze Age and Iron Age cremated individuals with clearly engendered grave goods. Assuming gender largely correlated to sex, male and female distributions of each individual trait measured were compared to evaluate sexual dimorphism through inferential statistics and Chakraborty and Majumder's index. The discriminatory power of each variable was evaluated by cross-validation tests. Eight variables yielded an accuracy equal to or greater than 80%. Four of these variables also show a similar degree of precision for both sexes. The most diagnostic measurements are from radius, patella, mandible, talus, femur, first metatarsal, lunate and humerus. Overall, the degree of sexual dimorphism and the reliability of estimates obtained from our series are similar to those of a modern cremated sample recorded by Gonçalves and collaborators. Nevertheless, mean values of the male and female distributions in our case study are lower, and the application of the cut-off point calculated from the modern sample to our ancient individuals produces a considerable number of misclassifications. This result confirms the need to build population-specific methods for sexing the cremated remains of ancient individuals.
|
873977
|
Big galaxies steal star-forming gas from their smaller neighbours
|
Large galaxies are known to strip the gas that occupies the space between the stars of smaller satellite galaxies.
In research published today, astronomers have discovered that these small satellite galaxies also contain less 'molecular' gas at their centres.
Molecular gas is found in giant clouds in the centres of galaxies and is the building material for new stars. Large galaxies are therefore stealing the material that their smaller counterparts need to form new stars.
Lead author Dr Adam Stevens is an astrophysicist based at UWA working for the International Centre for Radio Astronomy Research (ICRAR) and affiliated to the ARC Centre of Excellence in All Sky Astrophysics in 3 Dimensions (ASTRO 3D).
Dr Stevens said the study provides new systematic evidence that small galaxies everywhere lose some of their molecular gas when they get close to a larger galaxy and its surrounding hot gas halo.
"Gas is the lifeblood of a galaxy," he said.
"Continuing to acquire gas is how galaxies grow and form stars. Without it, galaxies stagnate.
"We've known for a long time that big galaxies strip 'atomic' gas from the outskirts of small galaxies.
"But, until now, it hadn't been tested with molecular gas in the same detail."
ICRAR-UWA astronomer Associate Professor Barbara Catinella said galaxies don't typically live in isolation.
"Most galaxies have friends," she says.
"And when a galaxy moves through the hot intergalactic medium or galaxy halo, some of the cold gas in the galaxy is stripped away.
"This fast-acting process is known as ram pressure stripping."
The research was a global collaboration involving scientists from the University of Maryland, Max Planck Institute for Astronomy, University of Heidelberg, Harvard-Smithsonian Center for Astrophysics, University of Bologna and Massachusetts Institute of Technology.
Molecular gas is very difficult to detect directly.
The research team took a state-of-the-art cosmological simulation and made direct predictions for the amount of atomic and molecular gas that should be observed by specific surveys on the Arecibo telescope in Puerto Rico and the IRAM 30-meter telescope in Spain.
They then took the actual observations from the telescopes and compared them to their original predictions.
The two were remarkably close.
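For readers curious how such a prediction-versus-observation comparison can be framed, the snippet below computes the median satellite-minus-central offset in molecular gas fraction per stellar-mass bin. It is a toy sketch with synthetic data, not the team's mock-observation pipeline; every array and value is an assumption for illustration only.

```python
# Toy sketch (synthetic data) of a satellite-vs-central comparison: the
# median offset in molecular gas fraction per stellar-mass bin. This is
# not the team's mock-observation pipeline; every array here is invented.
import numpy as np

def median_h2_deficit(log_mstar, log_fh2, is_satellite, bin_edges):
    """Median log H2 fraction of satellites minus centrals, per mass bin."""
    deficits = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (log_mstar >= lo) & (log_mstar < hi)
        sat = np.median(log_fh2[in_bin & is_satellite])
        cen = np.median(log_fh2[in_bin & ~is_satellite])
        deficits.append(sat - cen)
    return np.array(deficits)

rng = np.random.default_rng(1)
n = 2000
log_mstar = rng.uniform(9.0, 11.5, n)          # log10 stellar mass, toy values
is_sat = rng.random(n) < 0.3                   # ~30% satellites, assumed
# Toy model: satellites carry a ~0.2 dex deficit on top of random scatter.
log_fh2 = -1.0 - 0.3 * (log_mstar - 9.0) + rng.normal(0, 0.3, n) - 0.2 * is_sat
edges = np.arange(9.0, 11.6, 0.5)
print(median_h2_deficit(log_mstar, log_fh2, is_sat, edges))
```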
Associate Professor Catinella, who led the Arecibo survey of atomic gas, says the IRAM 30-meter telescope observed the molecular gas in more than 500 galaxies.
"These are the deepest observations and largest sample of atomic and molecular gas in the local Universe," she says.
"That's why it was the best sample to do this analysis."
The team's finding fits with previous evidence that suggests satellite galaxies have lower star formation rates.
Dr Stevens said stripped gas initially goes into the space around the larger galaxy.
"That may end up eventually raining down onto the bigger galaxy, or it might end up just staying out in its surroundings," he said.
But in most cases, the little galaxy is doomed to merge with the larger one anyway.
"Often they only survive for one to two billion years and then they'll end up merging with the central one," Dr Stevens said.
"So it affects how much gas they've got by the time they merge, which then will affect the evolution of the big system as well.
"Once galaxies get big enough, they start to rely on getting more matter from the cannibalism of smaller galaxies."
|
10.1093/mnras/staa3662
| 2,020 |
Monthly Notices of the Royal Astronomical Society
|
Molecular hydrogen in IllustrisTNG galaxies: carefully comparing signatures of environment with local CO and SFR data
|
We examine how the post-processed content of molecular hydrogen (H$_2$) in galaxies from the TNG100 cosmological, hydrodynamic simulation changes with environment at $z\!=\!0$, assessing central/satellite status and host halo mass. We make close comparisons with the carbon monoxide (CO) emission survey xCOLD GASS where possible, having mock-observed TNG100 galaxies to match the survey's specifications. For a representative sample of host haloes across $10^{11}\!\lesssim\!M_{\rm 200c}/{\rm M}_{\odot}\!<\!10^{14.6}$, TNG100 predicts that satellites with $m_*\!\geq\!10^9\,{\rm M}_{\odot}$ should have a median deficit in their H$_2$ fractions of $\sim$0.6 dex relative to centrals of the same stellar mass. Once observational and group-finding uncertainties are accounted for, the signature of this deficit decreases to $\sim$0.2 dex. Remarkably, we calculate a deficit in xCOLD GASS satellites' H$_2$ content relative to centrals of 0.2--0.3 dex, in line with our prediction. We further show that TNG100 and SDSS data exhibit continuous declines in the average star formation rates of galaxies at fixed stellar mass in denser environments, in quantitative agreement with each other. By tracking satellites from their moment of infall in TNG100, we directly show that atomic hydrogen (HI) is depleted at fractionally higher rates than H$_2$ on average. Supporting this picture, we find that the H$_2$/HI mass ratios of satellites are elevated relative to centrals in xCOLD GASS. We provide additional predictions for the effect of environment on H$_2$ -- both absolute and relative to HI -- that can be tested with spectral stacking in future CO surveys.
|
494505
|
Right electrolyte doubles novel two-dimensional material's ability to store energy
|
10.1038/s41560-019-0339-9
| 2,019 |
Nature Energy
|
Influences from solvents on charge storage in titanium carbide MXenes
|
Pseudocapacitive energy storage in supercapacitor electrodes differs significantly from the electrical double-layer mechanism of porous carbon materials, which requires a change from conventional thinking when choosing appropriate electrolytes. Here we show how simply changing the solvent of an electrolyte system can drastically influence the pseudocapacitive charge storage of the two-dimensional titanium carbide, Ti3C2 (a representative member of the MXene family). Measurements of the charge stored by Ti3C2 in lithium-containing electrolytes with nitrile-, carbonate- and sulfoxide-based solvents show that the use of a carbonate solvent doubles the charge stored by Ti3C2 when compared with the other solvent systems. We find that the chemical nature of the electrolyte solvent has a profound effect on the arrangement of molecules/ions in Ti3C2, which correlates directly to the total charge being stored. Having nearly completely desolvated lithium ions in Ti3C2 for the carbonate-based electrolyte leads to high volumetric capacitance at high charge–discharge rates, demonstrating the importance of considering all aspects of an electrochemical system during development.
|
|
903038
|
Investigating plasma levels as a biomarker for Alzheimer's disease
|
A Centre for Healthy Brain Ageing (CHeBA) paper published in Current Alzheimer Research presents the first detailed study of the relationship between plasma levels of two amyloid beta peptides (Aβ1-40 and Aβ1-42), brain volumetrics (measures of brain size, which shrinks in Alzheimer's disease) and cognitive performance, in an investigation of whether plasma levels could serve as a biomarker for Alzheimer's disease (AD).
Lead author on the paper and head of CHeBA's Proteomics Group at the University of New South Wales, Dr Anne Poljak, said that since amyloid beta (Aβ) peptides are the main component of the amyloid plaques found in Alzheimer patients' brains, changes in levels of Aβ in blood plasma may provide a biomarker for detecting increased risk or early diagnosis of disease.
"While Aβ has traditionally been measured using cerebrospinal fluid, plasma presents a more accessible sample for routine collection and screening although results to date have been variable," Dr Poljak said.
The study examined age-matched cognitively normal controls (n=126), individuals with amnestic mild cognitive impairment (aMCI, n=89) from CHeBA's Sydney Memory & Ageing Study, as well as individuals with Alzheimer's disease (AD, n=39).
Plasma levels of the two peptides and the Aβ1-42/1-40 ratio were lower in aMCI and Alzheimer's disease than in cognitively normal controls, and lower levels of Aβ1-42 were associated with lower global cognition and hippocampal volume and higher levels of white matter hyperintensities (which are believed to contribute to Alzheimer's disease). A genetic component was also identified, with associations between Aβ1-40 and cognitive and brain volume measures predominantly observed in individuals carrying the ε4 allele, while the opposite was observed in non-carriers. Longitudinal analysis revealed greater decline in global cognition and memory for the highest quintiles of Aβ1-42 and the ratio measure.
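As an illustration of the derived measures mentioned here, the short sketch below builds the Aβ1-42/1-40 ratio and the quintile groups used for the longitudinal comparison. The column names and values are hypothetical placeholders, not the Sydney Memory & Ageing Study data.

```python
# Hypothetical sketch of the derived measures described above: the
# Abeta 1-42/1-40 ratio and quintile groups for longitudinal comparison.
# Column names and values are placeholders, not the Sydney MAS data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "abeta40_pg_ml": rng.normal(150, 30, 254),   # invented plasma levels
    "abeta42_pg_ml": rng.normal(40, 10, 254),
})
df["ratio_42_40"] = df["abeta42_pg_ml"] / df["abeta40_pg_ml"]
# Quintile labels (1 = lowest fifth, 5 = highest fifth) for the ratio.
df["ratio_quintile"] = pd.qcut(df["ratio_42_40"], 5, labels=[1, 2, 3, 4, 5])
print(df.groupby("ratio_quintile", observed=True)["ratio_42_40"].mean())
```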
Director of CHeBA and co-author on the paper, Professor Perminder Sachdev, said he was encouraged by the findings.
"These findings certainly suggest that plasma Aβ measures may serve as biomarkers of Alzheimer's disease," he said.
|
10.2174/1567205013666151218150202
| 2,015 |
Current Alzheimer Research
|
The Relationship Between Plasma Aβ Levels, Cognitive Function and Brain Volumetrics: Sydney Memory and Ageing Study
|
Objectives: Determine whether (1) a relationship exists between plasma amyloid-β (Aβ)1- 40 and 1-42 peptide levels, brain volumetrics and cognitive performance in elderly individuals with and without amnestic mild cognitive impairment (aMCI), (2) plasma Aβ peptide levels differ between apolipoprotein E (APOE) ε4 carriers and non-carriers and (3) longitudinal changes in cognition and brain volume relate to Aβ levels. Methods: Subjects with aMCI (n = 89) and normal cognition (n = 126) were drawn from the Sydney Memory and Aging Study (Sydney MAS), a population based study of non-demented 70-90 year old individuals; 39 Alzheimer’s disease (AD) patients were recruited from a specialty clinic. Sydney MAS participants underwent brain MRI scans and were assessed on 19 cognitive measures and were APOE ε4 genotyped. Plasma levels of Aβ1-40 and 1-42 were quantified using ELISA. Results: Wave1 plasma levels of Aβ peptides and Aβ1−42/1-40 ratio were lower in aMCI and AD, and Aβ1−42 was positively associated with global cognition and hippocampal volume and negatively with white matter hyperintensities. The relationships of Aβ1-40 and Aβ1-42 were predominantly observed in ε4 allele carriers and non-carriers respectively. Longitudinal analysis revealed greater decline in global cognition and memory for the highest quintiles of Aβ1−42 and the ratio measure. Conclusion: Plasma Aβ levels and the Aβ1−42/1-40 ratio are related to cognition and hippocampal volumes, with differential associations of Aβ1-40 and Aβ1-42 in ε4 carriers and non-carriers. These data support the Aβ sink model of AD pathology, and suggest that plasma Aβ measures may serve as biomarkers of AD. Keywords: Aβ1-40, Aβ1-42, APOE, brain volume, cognition, neuropsychological test, MRI, plasma, white matter hyperintensities.
|
929201
|
An experimental loop for simulating nuclear reactors in space
|
Nuclear thermal propulsion, which uses heat from nuclear reactions to energize a propellant, could be used one day in human spaceflight, possibly even for missions to Mars. Its development, however, poses a challenge. The materials used must be able to withstand high heat and bombardment by high-energy particles on a regular basis.
Will Searight, a nuclear engineering doctoral student at Penn State, is contributing to research that could make these advancements more feasible. He published findings from a preliminary design simulation in Fusion Science and Technology, a publication of the American Nuclear Society.
To better investigate nuclear thermal propulsion, Searight simulated a small-scale laboratory experiment known as a hydrogen test loop. The setup mimics a reactor's operation in space, where flowing hydrogen travels through the core and propels the rocket — at temperatures up to nearly 2,200 degrees Fahrenheit. Searight developed the simulation using dimensions from detailed drawings of tie tubes, the components that make up much of the test loop through which hydrogen flows. Industry partner Ultra Safe Nuclear Corporation (USNC) provided the drawings.
“Understanding how USNC’s components behave in a hot hydrogen environment is crucial to bringing our rockets to space,” Searight said. “We’re thrilled to be working with one of the main reactor contractors for NASA’s space nuclear propulsion project, which is seeking to produce a demonstration nuclear thermal propulsion engine within a decade.”
Advised by Leigh Winfrey, associate professor and undergraduate program chair of nuclear engineering, Searight used Ansys Fluent, a modeling software, to design a simulation loop from a stainless steel pipe with an outer diameter of about two inches. In the model, the loop connects to a hydrogen pump and circulates hot hydrogen through a test section adjacent to a heating element.
Searight found that while consistent heating of hydrogen to 2,200 degrees Fahrenheit was possible, it was necessary to include a heating element directly above the test section to prevent a reduction in heating. Data collected from the modeling software showed that the flow of hydrogen through the test section was smooth and uniform, reducing uneven distribution of heat through the loop that could jeopardize the setup’s safety and lifespan. Analysis of the results also verified that stainless steel would allow for more convenient and cost-effective construction of the loop.
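To give a sense of the sizing arithmetic behind such a loop, the back-of-envelope calculation below estimates the ideal heater power needed to raise a steady hydrogen flow to the target temperature. The mass flow rate is an assumed example and losses are ignored; nothing here is a figure from the paper or the Ansys Fluent model.

```python
# Back-of-envelope heater sizing for a steady hydrogen flow: P = m_dot * cp * dT.
# The mass flow rate is an assumed example, losses and property changes are
# ignored, and nothing here comes from the paper or the Ansys Fluent model.
CP_H2 = 14_300.0            # J/(kg K), approximate specific heat of hydrogen
T_IN_K = 300.0              # assumed inlet temperature, K
T_OUT_K = 1200.0 + 273.15   # roughly the 2,200 F target quoted above, in kelvin

def heater_power_w(mass_flow_kg_s, t_in=T_IN_K, t_out=T_OUT_K, cp=CP_H2):
    """Ideal steady-state heater power for the given hydrogen mass flow."""
    return mass_flow_kg_s * cp * (t_out - t_in)

print(f"{heater_power_w(0.001) / 1e3:.1f} kW for 1 g/s of hydrogen")
```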
“We are excited to take the first steps in developing a unique capability for extreme environment simulation at Penn State,” Winfrey said. “This preliminary work will enable us to pursue research that could have a major impact on the future of space exploration.”
With further research, Searight’s preliminary work could enable expanded testing of materials that could one day be implemented to create faster, more efficient space travel using reactor-fueled rockets.
Recently, Searight received the George P. Shultz and James W. Behrens Graduate Scholarship from ANS. Searight will use the award to support his future work on the test loop. The $3,000 scholarship honors Shultz, a nuclear nonproliferation advocate and Presidential Medal of Freedom recipient who died in February, and Behrens, a previous ANS board member who held numerous positions in the national security sector.
A NASA Small Business Innovation Research contract supported this work.
Fusion Science & Technology
10.1080/15361055.2021.1913373
Preliminary Design of a Hot Hydrogen Test Loop for Plasma-Material–Interaction Evaluation
28-Aug-2021
|
10.1080/15361055.2021.1913373
| 2,021 |
Fusion Science & Technology
|
Preliminary Design of a Hot Hydrogen Test Loop for Plasma-Material–Interaction Evaluation
|
One of the most pressing issues in the commercial development of fusion energy is the design and testing of high-temperature materials that can withstand high heat and particle fluxes while maintaining desirable structural and material performance. This challenge is also present in advanced fission reactor and nuclear thermal propulsion (NTP) system development, and experimental data generated from common material candidates provide novel cross-disciplinary validation and verification of model development. To this end, a hot hydrogen test loop capable of producing circulating hydrogen at temperatures up to 1200°C is being designed and constructed at The Pennsylvania State University, with the immediate intent to study the effects of hydrogen exposure on NTP component materials. These materials can include metals, ceramics, and any materials combination of interest. This work details the preliminary design work behind the current loop design, demonstrating effective operation at the current temperature requirement, and will inform higher-temperature designs where plasma effects become more significant.
|
506684
|
Teach yourself everyday happiness with imagery training
|
Flashbacks of scenes from traumatic events often haunt those suffering from psychiatric conditions, such as Post Traumatic Stress Disorder (PTSD). "The close relationship between the human imagery system and our emotions can cause deep emotional perturbations", says Dr Svetla Velikova of Smartbrain in Norway. "Imagery techniques are often used in cognitive psychotherapy to help patients modify disturbing mental images and overcome negative emotions." Velikova and her team set out to see if such techniques could become self-guided and developed at home, away from the therapist's chair.
Healthy people are also emotionally affected by what we see and the images we remember. Velikova explains, "if we visually remember an image from an unpleasant interaction with our boss, this can cause an increased level of anxiety about our work and demotivation." There is great interest in ways to combat such everyday negative emotional responses through imagery training. But she warns, "this is a challenging task and requires a flexible approach. Each day we face different problems and a therapist teaches us how to identify topics and strategies for imagery exercises."
To find out if we can train ourselves to use imagery techniques and optimize our emotional state, Velikova and co-workers turned to 30 healthy volunteers. During a two-day workshop the volunteers learnt a series of imagery techniques. They learnt how to cope with negative emotions from past events through imagery transformation, how to use positive imagery for future events or goals, and techniques to improve social interactions and enhance their emotional balance in daily life. They then spent the next 12 weeks training themselves at home for 15-20 minutes a day, before attending another similar two-day workshop.
Velikova compared the results of participant psychological assessment and brain activity, or electroencephalographic (EEG), measurement, before and after the experiment. "The psychological testing showed that depressive symptoms were less prominent. The number of those with subthreshold depression, expressing depressive symptoms but not meeting the criteria for depression, was halved. Overall, volunteers were more satisfied with life and perceived themselves as more efficient" she explains.
Following analysis, the EEG data showed significant changes in the beta activity in the right medial prefrontal cortex of the brain. Velikova notes that this region is known to be involved in imagining pleasant emotions and contributing to the degree of satisfaction with life. There were also changes in the functional connectivity of the brain, including increased connectivity between the temporal regions from both hemispheres, which Velikova attributes to enhanced coordination of networks linked to processing of images. She concludes, "this combination of EEG findings also suggests a possible increase in the activity of GABA (gamma-aminobutyric acid), well known for its anti-anxiety and antidepressant properties."
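For readers who want a concrete feel for band-specific EEG measures like the beta activity mentioned above, the snippet below estimates beta-band power from a synthetic single-channel signal with a Welch periodogram. It is only a simplified stand-in; the study itself used 19-channel recordings analysed with LORETA source and connectivity methods.

```python
# Simplified stand-in for a band-specific EEG measure: beta-band (13-30 Hz)
# power from a synthetic single-channel signal via a Welch periodogram.
# The real study analysed 19-channel recordings with LORETA, not this.
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)                 # one minute of synthetic data
rng = np.random.default_rng(3)
eeg = rng.normal(0, 1, t.size) + 0.5 * np.sin(2 * np.pi * 22 * t)  # toy beta rhythm

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
beta = (freqs >= 13) & (freqs < 30)
beta_power = np.sum(psd[beta]) * (freqs[1] - freqs[0])   # rectangle integration
print(f"beta-band power: {beta_power:.3f} (arbitrary units)")
```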
Velikova and co-workers' results indicate that self-guided emotional imagery training has great potential to improve the everyday emotional wellbeing in healthy people. The team is now further exploring how the approach affects the cognitive function of healthy people. With minimal professional intervention, this technique could be developed to be a cost-effective aid for those with subthreshold depression. It could also be promoted by businesses to help improve workforce morale and drive up productivity.
###
|
10.3389/fnhum.2016.00664
| 2,017 |
Frontiers in Human Neuroscience
|
Can the Psycho-Emotional State be Optimized by Regular Use of Positive Imagery? Psychological and Electroencephalographic Study of Self-Guided Training
|
The guided imagery training is considered as an effective method and therefore widely used in modern cognitive psychotherapy, while less is known about the effectiveness of self-guided training. The present study investigated the effects of regular use of self-guided positive imagery, applying both subjective (assessment of the psycho-emotional state) and objective (electroencephalographic, EEG) approaches to research. Thirty healthy subjects participated in the cognitive imagery-training program for twelve weeks. The schedule began with group training with an instructor for two days, where the participants learned various techniques of positive imagery, after which they continued their individual training at home. Psychological and EEG evaluations were applied at the baseline and at the end of the training period. The impact of training on the psycho-emotional states of the participants was evaluated through: Center for epidemiologic studies- Depression (CES-D) 20 item scale, Satisfaction with life scale (SWLS) and General Self-Efficacy scale (GSE). EEGs (19-channels) were recorded at rest with eyes closed. EEG analysis was performed using Low resolution electromagnetic tomography (LORETA) software that allows the comparison of current source density (CSD) and functional connectivity (lagged phase and coherence) in the default mode network before and after a workout. Initial assessment with CES-D indicated that 22 participants had subthreshold depression. After the training participants had less prominent depressive symptoms (CES-D, p=0.002), were more satisfied with their lives (SWLS, p=0.036), and also evaluated themselves as more effective (GSE, p=0.0002). LORETA source analysis revealed an increase in the CSD in the right mPFC (Brodmann area 10) for beta-2 band after training (p=0.038). LORETA connectivity analysis demonstrated an increase in lagged coherence between temporal gyruses of both hemispheres in the delta band, as well as between the Posterior cingulate cortex and right BA21 in the theta band after a workout. Since mPFC is involved in emotional regulation, functional changes in this region can be seen in line with the results of psychological tests and their objective validation. A possible activation of the GABAergic system is discussed. Self-guided positive imagery (after instructions) can be helpful for emotional self-regulation in healthy subjects and has the potential to be useful in subthreshold depression.
|
778992
|
Gut bacteria may hold key to treating autoimmune disease
|
Defects in the body's regulatory T cells (T reg cells) cause inflammation and autoimmune disease by altering the type of bacteria living in the gut, researchers from The University of Texas Health Science Center at Houston have discovered. The study, "Resetting microbiota by Lactobacillus reuteri inhibits T reg deficiency-induced autoimmunity via adenosine A2A receptors," which will be published online December 19 in The Journal of Experimental Medicine, suggests that replacing the missing gut bacteria, or restoring a key metabolite called inosine, could help treat children with a rare and often fatal autoimmune disease called IPEX syndrome.
T reg cells suppress the immune system and prevent it from attacking the body's own tissues by mistake. Defects in T reg cells therefore lead to various types of autoimmune disease. Mutations in the transcription factor Foxp3, for example, disrupt T reg function and cause IPEX syndrome. This inherited autoimmune disorder is characterized by a variety of inflammatory conditions including eczema, type I diabetes, and severe enteropathy. Without a stem cell transplant from a suitable donor, IPEX syndrome patients usually die before the age of two.
Autoimmune diseases can also be caused by changes in the gut microbiome, the population of bacteria that reside within the gastrointestinal tract. In the study, the team led by Yuying Liu and J. Marc Rhoads at The University of Texas Health Science Center at Houston McGovern Medical School find that mice carrying a mutant version of the Foxp3 gene show changes in their gut microbiome at around the same time that they develop autoimmune symptoms. In particular, the mice have lower levels of bacteria from the genus Lactobacillus. The researchers discovered that by feeding the mice with Lactobacillus reuteri, they could "reset" the gut bacterial community and reduce the levels of inflammation, significantly extending the animals' survival.
Bacteria can secrete metabolic molecules that have large effects on their hosts. The levels of a metabolite called inosine were reduced in mice lacking Foxp3 but were restored to normal after resetting the gut microbiome with L. reuteri. The researchers found that, by binding to cell surface proteins called adenosine A2A receptors, inosine inhibits the production of Th1 and Th2 cells. These pro-inflammatory T cell types are elevated in Foxp3-deficient mice, but their numbers are diminished by treatment with either L. reuteri or inosine itself, reducing inflammation and extending the animals' life span.
"Our findings suggest that probiotic L. reuteri, inosine, or other A2A receptor agonists could be used therapeutically to control T cell-mediated autoimmunity," says Yuying Liu.
###
Conflict of interest statement: Some of the authors of this study, including Yuying Liu and J. Marc Rhoads, have a patent application pending on use of inosine and A2A agonists in IPEX syndrome.
He, B., et al. 2017. J. Exp. Med. https://doi.org/10.1084/jem.20160961
About The Journal of Experimental Medicine
The Journal of Experimental Medicine (JEM) features peer-reviewed research on immunology, cancer biology, stem cell biology, microbial pathogenesis, vascular biology, and neurobiology. All editorial decisions are made by research-active scientists in conjunction with in-house scientific editors. JEM provides free online access to many article types from the date of publication and to all archival content. Established in 1896, JEM is published by The Rockefeller University Press. For more information, visit jem.org.
Visit our Newsroom, and sign up for a weekly preview of articles to be published. Embargoed media alerts are for journalists only.
Follow JEM on Twitter at @JExpMed and @RockUPress.
|
10.1084/jem.20160961
| 2,016 |
The Journal of Experimental Medicine
|
Resetting microbiota by <i>Lactobacillus reuteri</i> inhibits T reg deficiency–induced autoimmunity via adenosine A2A receptors
|
Regulatory T (T reg) cell deficiency causes lethal, CD4+ T cell–driven autoimmune diseases. Stem cell transplantation is used to treat these diseases, but this procedure is limited by the availability of a suitable donor. The intestinal microbiota drives host immune homeostasis by regulating the differentiation and expansion of T reg, Th1, and Th2 cells. It is currently unclear if T reg cell deficiency–mediated autoimmune disorders can be treated by targeting the enteric microbiota. Here, we demonstrate that Foxp3+ T reg cell deficiency results in gut microbial dysbiosis and autoimmunity over the lifespan of scurfy (SF) mouse. Remodeling microbiota with Lactobacillus reuteri prolonged survival and reduced multiorgan inflammation in SF mice. L. reuteri changed the metabolomic profile disrupted by T reg cell deficiency, and a major effect was to restore levels of the purine metabolite inosine. Feeding inosine itself prolonged life and inhibited multiorgan inflammation by reducing Th1/Th2 cells and their associated cytokines. Mechanistically, the inhibition of inosine on the differentiation of Th1 and Th2 cells in vitro depended on adenosine A2A receptors, which were also required for the efficacy of inosine and of L. reuteri in vivo. These results reveal that the microbiota–inosine–A2A receptor axis might represent a potential avenue for combatting autoimmune diseases mediated by T reg cell dysfunction.
|
723710
|
Rheumatoid arthritis and systemic lupus erythematosus during COVID-19 quarantine period
|
In the Philippines, in the early months of the COVID-19 pandemic, there was a supply shortage of hydroxychloroquine and methotrexate. Limited access to medication and the life changes caused by the COVID-19 pandemic may prompt patients with rheumatoid arthritis (RA) or systemic lupus erythematosus (SLE) to experience disease flares.
The researchers investigated self-reported symptoms of disease flares among patients with rheumatoid arthritis or systemic lupus erythematosus during the COVID-19 pandemic. They collected information through online surveys from 512 patients with SLE or RA. The data included sociodemographic characteristics, self-reported physical symptoms, health service utilization, and availability of hydroxychloroquine and methotrexate.
Of the respondents, 79% had lupus and 21% had RA. One-third had contact with their attending physician during the two-month quarantine period before the survey. Hydroxychloroquine had been prescribed to 82% and methotrexate to 23.4%, of whom 68.6% and 65%, respectively, reported "irregular" intake of these medicines due to lack of availability. 66.2% reported good health status, and 24% had no symptoms during the two weeks before the survey. The most common symptoms were joint pain, muscle pain, headache, and skin rash; 5% of respondents had a combination of all four.
Irregular supply of hydroxychloroquine among patients with SLE was associated with more frequent appearance of muscle pain or rash. Irregular supply of methotrexate among RA patients prescribed hydroxychloroquine and methotrexate was linked with more frequent occurrence of joint pains with or without swelling. Irregular supply of hydroxychloroquine was associated with less frequent occurrence of dizziness in RA patients.
There was a significant association between the irregular supply of hydroxychloroquine or methotrexate and the presence of muscle pain, rash, or joint pains during the two weeks prior to the survey.
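The association statements above rest on simple contingency-table comparisons. The sketch below shows one common way to test such an association, using invented 2x2 counts; the numbers are illustrative only and do not reproduce the study's tables or p-values.

```python
# Illustrative association test for an "irregular supply vs symptom" question,
# using a 2x2 table and Fisher's exact test. The counts are invented and do
# not reproduce the study's tables or p-values.
from scipy.stats import fisher_exact

#                      symptom present, symptom absent
irregular_supply = [70, 100]
regular_supply = [40, 113]
odds_ratio, p_value = fisher_exact([irregular_supply, regular_supply])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```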
Read the full-text of the article here: https://benthamopen.com/ABSTRACT/TORJ-15-16
###
The Open Rheumatology Journal
|
10.2174/1874312902115010016
| 2,021 |
The Open Rheumatology Journal
|
Self-Reported Symptoms in a Cohort of Rheumatoid Arthritis and Systemic Lupus Erythematosus during the COVID-19 Quarantine Period
|
Background: During the first three months of the COVID-19 pandemic in the Philippines, there was a supply shortage of hydroxychloroquine and methotrexate. Limited access to medication and the life changes resulting from the COVID-19 pandemic may predispose patients with rheumatoid arthritis (RA) or systemic lupus erythematosus (SLE) to disease flares. Objective: This study aimed to investigate self-reported symptoms of disease flares among patients with rheumatoid arthritis or systemic lupus erythematosus during the COVID-19 pandemic. Methods: A total of 512 completed online surveys from patients with SLE or RA were collected. The data included sociodemographic characteristics, self-reported physical symptoms, health service utilization, and availability of hydroxychloroquine and methotrexate. Results: Seventy-nine percent of respondents had lupus, while 21% had RA. One-third of the cohort had contact with their attending physician during the two-month quarantine period prior to the survey. Eighty-two percent were prescribed hydroxychloroquine and 23.4% were prescribed methotrexate; but 68.6% and 65%, respectively, had “irregular” intake of these medicines due to unavailability. The current health status was reported as good by 66.2%; 24% had no symptoms during the two-week period prior to the survey. The most common symptoms experienced were joint pain (51%), muscle pain (35%), headache (26.8%), and skin rash (19.1%). Five percent had a combination of these four most common symptoms. Irregular supply of hydroxychloroquine among patients with SLE (n=323) was associated with more frequent occurrence of muscle pain (40.6% vs 27.9%, p=0.03) or rash (27.4% vs 11.7%, p<0.001). Irregular supply of methotrexate among RA patients prescribed hydroxychloroquine and methotrexate (n=36) was associated with more frequent occurrence of joint pains with or without swelling (73.9% vs 38.5%, p=0.04). Irregular supply of hydroxychloroquine was associated with less frequent occurrence of dizziness (0 vs 66.7%, p<0.001) among RA patients (n=18). Conclusion: In our cohort of RA and SLE, the majority reported at least one symptom that may indicate disease flare. There was a significant association between the irregular supply of hydroxychloroquine or methotrexate with the presence of muscle pain, rash, or joint pains during the 14-day period prior to the survey.
|
750429
|
New process allows 3-D printing of nanoscale metal structures
|
For the first time, it is possible to create complex nanoscale metal structures using 3-D printing, thanks to a new technique developed at Caltech.
The process, once scaled up, could be used in a wide variety of applications, from building tiny medical implants to creating 3-D logic circuits on computer chips to engineering ultralightweight aircraft components. It also opens the door to the creation of a new class of materials with unusual properties that are based on their internal structure. The technique is described in a study that will be published in Nature Communications on February 9.
In 3-D printing--also known as additive manufacturing--an object is built layer by layer, allowing for the creation of structures that would be impossible to manufacture by conventional subtractive methods such as etching or milling. Caltech materials scientist Julia Greer is a pioneer in the creation of ultratiny 3-D architectures built via additive manufacturing. For instance, she and her team have built 3-D lattices whose beams are just nanometers across--far too small to be seen with the naked eye. These materials exhibit unusual, often surprising properties; Greer's team has created exceptionally lightweight ceramics that spring back to their original shape, spongelike, after being compressed.
Greer's group 3-D prints structures out of a variety of materials, from ceramics to organic compounds. Metals, however, have been difficult to print, especially when trying to create structures with dimensions smaller than around 50 microns, or about half the width of a human hair.
At the nanoscale, 3-D printing works by having a high-precision laser zap specific locations in the liquid material with just two photons, or particles of light. This provides enough energy to harden liquid polymers into solids, but not enough to fuse metal.
"Metals don't respond to light in the same way as the polymer resins that we use to manufacture structures at the nanoscale," says Greer, professor of materials science, mechanics, and medical engineering in Caltech's Division of Engineering and Applied Science. "There's a chemical reaction that gets triggered when light interacts with a polymer that enables it to harden and then form into a particular shape. In a metal, this process is fundamentally impossible."
Greer's graduate student Andrey Vyatskikh came up with a solution. He used organic ligands--molecules that bond to metal--to create a resin containing mostly polymer, but which carries along with it metal that can be printed, like a scaffold.
In the experiment described in the Nature Communications paper, Vyatskikh bonded together nickel and organic molecules to create a liquid that looks a lot like cough syrup. They designed a structure using computer software, and then built it by zapping the liquid with a two-photon laser. The laser creates stronger chemical bonds between the organic molecules, hardening them into building blocks for the structure. Since those molecules are also bonded to the nickel atoms, the nickel becomes incorporated into the structure. In this way, the team was able to print a 3-D structure that was initially a blend of metal ions and nonmetal, organic molecules.
Vyatskikh then put the structure into an oven that slowly heated it up to 1,000 degrees Celsius (around 1,800 degrees Fahrenheit) in a vacuum chamber. That temperature is well below the melting point of nickel (1,455 degrees Celsius, or about 2,650 degrees Fahrenheit) but is hot enough to vaporize the organic materials in the structure, leaving only the metal. The heating process, known as pyrolysis, also fused the metal particles together.
In addition, because the process vaporized a significant amount of the structure's material, its dimensions shrank by 80 percent, but it maintained its shape and proportions.
"That final shrinkage is a big part of why we're able to get structures to be so small," says Vyatskikh, lead author on the Nature Communications paper. "In the structure we built for the paper, the diameter of the metal beams in the printed part is roughly 1/1000th the size of the tip of a sewing needle."
Greer and Vyatskikh are still refining their technique; right now, the structure reported on in their paper includes some voids left behind by the vaporized organic materials as well as some minor impurities. Also, if the technique is to be of use to industry, it will need to be scaled up to produce much more material, says Greer. Although they started with nickel, they are interested in expanding to other metals that are commonly used in industry but are challenging or impossible to fabricate in small 3-D shapes, such as tungsten and titanium. Greer and Vyatskikh are also looking to use this process to 3-D print other materials, both common and exotic, such as ceramics, semiconductors, and piezoelectric materials (materials with electrical effects that result from mechanical stresses).
###
The study is titled "Additive Manufacturing of 3D Nano-Architected Metals." Co-authors include Caltech Resnick Sustainability Institute Postdoctoral Scholar in Applied Physics and Materials Science Akira Kudo and mechanical engineering graduate student Carlos Portela as well as collaborators Stéphane Delalande of the Centre Technique de Vélizy in France and Xuan Zhang of Tsinghua University in China. Funding for this research came from the Department of Defense.
|
10.1038/s41467-018-03071-9
| 2,018 |
Nature Communications
|
Additive manufacturing of 3D nano-architected metals
|
Most existing methods for additive manufacturing (AM) of metals are inherently limited to ~20–50 μm resolution, which makes them untenable for generating complex 3D-printed metallic structures with smaller features. We developed a lithography-based process to create complex 3D nano-architected metals with ~100 nm resolution. We first synthesize hybrid organic–inorganic materials that contain Ni clusters to produce a metal-rich photoresist, then use two-photon lithography to sculpt 3D polymer scaffolds, and pyrolyze them to volatilize the organics, which produces a >90 wt% Ni-containing architecture. We demonstrate nanolattices with octet geometries, 2 μm unit cells and 300–400-nm diameter beams made of 20-nm grained nanocrystalline, nanoporous Ni. Nanomechanical experiments reveal their specific strength to be 2.1–7.2 MPa g−1 cm3, which is comparable to lattice architectures fabricated using existing metal AM processes. This work demonstrates an efficient pathway to 3D-print micro-architected and nano-architected metals with sub-micron resolution. Most current methods for additive manufacturing of complex metallic 3D structures are limited to a resolution of 20–50 µm. Here, the authors developed a lithography-based process to produce 3D nanoporous nickel nanolattices with octet geometries and a resolution of 100 nm.
|
953210
|
400 GW wind, solar power per year to meet 1.5 C Paris Agreement
|
What will it cost to reach the goal of the Paris Agreement and limit global warming to 1.5 degrees Celsius? Is it at all possible? And what if we aim for two degrees instead? Will it be cheaper?
These are some of the questions that researchers from Aarhus University, in collaboration with German researchers, have tried to answer by modelling the green transition of the sector-coupled European energy system, which also takes into account fossil fuel-dependent industries. The research has just been published in the prestigious journal Joule.
The answer is that, if we are to have a climate-neutral energy supply before 2050 and keep the global temperature increase to 1.5 degrees, we need to start installing a lot of solar and wind power, while at the same time investing massively in Power-to-X technologies and including at least some carbon capture.
"We can reach the goal of the Paris Agreement, but there’s a price to pay. Among other things, it will require massive installation of wind and solar energy amounting to 400 GW of new capacity every year. This aligns well with the Danish Government's goal of four times more wind and solar energy by 2030, but the goal also applies to all European countries," says Associate Professor Marta Victoria, an expert in energy systems modelling and solar photovoltaics at Aarhus University, Department of Mechanical and Production Engineering.
The model confirms the need for installation of 400 GW solar and wind energy in the years 2025-2035. This is far above the European historical maximum of approx. 50 GW. Furthermore, the existing energy grid needs strengthening, so that it can sustain this significant deployment of fluctuating, renewable energy sources.
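The headline numbers invite a quick back-of-envelope check. The snippet below simply multiplies the quoted 400 GW per year over a ten-year window and compares it with the roughly 50 GW historical maximum; it is plain arithmetic on the figures in the text, not output from the energy-system model.

```python
# Plain arithmetic on the figures quoted in the text; not model output.
ANNUAL_BUILD_GW = 400     # new wind and solar capacity per year, from the text
HISTORICAL_MAX_GW = 50    # approximate European historical maximum, from the text
N_YEARS = 10              # rough length of the 2025-2035 window

cumulative_tw = ANNUAL_BUILD_GW * N_YEARS / 1000
print(f"cumulative new capacity over {N_YEARS} years: {cumulative_tw:.1f} TW")
print(f"required scale-up vs historical maximum: {ANNUAL_BUILD_GW / HISTORICAL_MAX_GW:.0f}x")
```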
The deployment of renewable energy will lead to comprehensive electrification of European societies. The parts of industry and the transport sector that cannot be electrified, for example aviation, shipping and freight transport, have a great need for green fuels and chemicals with a high energy density:
"Here, Power-to-X including hydrogen production, play a crucial role, and these technologies will also be used as storage media to help balance solar and wind energy production" says Marta Victoria.
She continues,
"It’s also important to develop technologies that can capture CO2 from the atmosphere. Without these, it will be almost impossible to meet the challenge posed by the Paris Agreement and keep us within 1.5 degrees Celsius as suggested in the Agreement."
The associate professor and the rest of the research group have also considered whether it would be cheaper and more cost-effective to be a little less ambitious about the climate. The answer is both yes and no, because, among other things, it depends on the cost associated with the stronger climate change impacts of the 2 C option compared with the 1.5 C option.
“If we assume that the 2 C option suffers from more serious climate change impacts, the economic consequences of this will outweigh the costs of the 1.5 C option,” says Marta Victoria, although she confirms that a 2 C option will require a significantly lower annual installation rate for wind and solar power.
Marta Victoria stresses that this research has only looked at what it will take to achieve political goals, for example a maximum temperature increase of 1.5 degrees Celsius. The research is based on all known forms of energy and high-resolution time data for all European countries, and it describes the most cost-effective, cheapest and fastest way to reach the goals. The high-resolution model was run in PRIME, a high-performance computing cluster at Aarhus University, Department of Mechanical and Production Engineering.
The research was coordinated by Aarhus University and it was carried out in collaboration with researchers from the Technical University of Berlin.
Joule
10.1016/j.joule.2022.04.016
Computational simulation/modeling
Speed of technological transformations required in Europe to achieve different climate goals
18-May-2022
|
10.1016/j.joule.2022.04.016
| 2,022 |
Joule
|
Speed of technological transformations required in Europe to achieve different climate goals
|
Europe's contribution to global warming will be determined by the cumulative emissions until climate neutrality is achieved. In this paper, we investigate alternative transition paths under carbon budgets corresponding to temperature increases between 1.5 and 2C. We use PyPSA-Eur-Sec, an open model of the sector-coupled European energy system with high spatial and temporal resolution. All the paths entail similar technological transformations, but the timing of the scale-up of important technologies like water electrolysis, carbon capture and hydrogen networks differs in the model. In our results, solar PV, onshore and offshore wind become the cornerstone of a net-zero energy system enabling the decarbonisation of other sectors via direct electrification (e.g. heat pumps and electric vehicles) or indirect electrification (e.g. using synthetic fuels). Under the cost and performance assumptions applied, for a social cost of carbon (SCC) of 120EUR/tCO2, transition paths under 1.5 and 1.6C budgets are, respectively, 8%, and 1% more expensive than the 2C-budget because building assets earlier costs more. These pathways also see a faster ramp-up of new technologies before 2035. Under these assumptions, the 1.5C-budget is cost-optimal in our model, if SCC of at least 300 EUR/tCO2 is considered. Moreover, we discuss the strong implications of the SCC and discount rate assumed when comparing alternative paths. We also analyse the consequences of different assumptions on the cost and potential of CO2 sequestration.
|
891866
|
A new player helping viruses hijack their hosts
|
A particular long noncoding RNA gives viruses a replication boost as they infect their hosts, helping them alter their host cell's metabolism to their advantage, scientists report. The finding reveals a new way that viruses interact with their hosts to survive, and identifies a potential target for developing broad-acting antiviral therapeutics. Viruses thrive in the hosts they infect because they alter the metabolic networks of these organisms, though just which molecules and mechanisms allow viruses to prosper in this way has been unclear. Identifying them is critical for a broader understanding of viral infection, which in turn helps in developing antiviral strategies. Here, Pin Wang and colleagues sought to explore host-virus interactions outside of those controlled by type 1 interferon. They focused specifically on long noncoding RNAs, whose function in virally infected cells has been unclear. Working in mouse and human cells, they identified a novel long noncoding RNA, which they call lncRNA-ACOD1, that was induced by infection with multiple viruses. Its presence enhanced replication of these viruses through interaction with a particular metabolic enzyme, the researchers report. Critically, in cells deficient in this long noncoding RNA, viral replication was weaker, substantiating the molecule's role as a helper in the viral effort to hijack a host.
###
|
10.1126/science.aao0409
| 2,017 |
Science
|
An interferon-independent lncRNA promotes viral replication by modulating cellular metabolism
|
Host RNA helps promote viral replication Viruses exploit host metabolic networks for survival. Wang et al. identified a long noncoding RNA (lncRNA) that enhances replication of multiple viruses in both mouse and human cells (see the Perspective by Kotzin et al ). The expression of this cytoplasmic lncRNA was induced by viruses and independent of type I interferon. The lncRNA directly bound to and stimulated the metabolic enzyme glutamic-oxaloacetic transaminase. This viral strategy may have relevance for clinical diseases involving metabolic dysfunction and viral infection. Science , this issue p. 1051 ; see also p. 993
|
791909
|
Firefighters exposed to more potentially harmful chemicals than previously thought
|
CORVALLIS, Ore. - A new Oregon State University study suggests that firefighters are more likely to be exposed to potentially harmful chemicals while on duty compared to off duty.
The on-duty firefighters in the Kansas City, Missouri, area experienced higher exposures to polycyclic aromatic hydrocarbons, or PAHs, a family of chemicals known to have the potential to cause cancer. They were also exposed to 18 PAHs that had not been reported as firefighting exposures in earlier research.
The study, funded by the Federal Emergency Management Agency, is published in the journal Environment International.
The results are important because previous studies have shown that firefighters have an increased risk of developing cancer and other damaging health effects, said study lead Kim Anderson, an environmental chemist and Extension specialist in OSU's College of Agricultural Sciences.
PAHs are a large group of chemical compounds that contain carbon and other elements. They form naturally after almost any type of combustion, both natural and human-created. In addition to burning wood, plants and tobacco, PAHs are also in fossil fuels.
"We don't have enough data to profile the source of the PAHs, but we know PAHs appear from combustion, and obviously combustion is their work," Anderson said. "They are also putting on a heavy load of protective gear that has PAHs, and they use cleaning products that have PAHs."
The firefighters in the study wore personal passive samplers in the shape of a military-style dog tag made of silicone on an elastic necklace. The tags are made of the same material as OSU's patented silicone wristbands that Anderson's lab has been using for several years to study chemical exposure in humans and cats.
This study demonstrates that the dog tags, which absorb chemicals from the air and skin, appear to be a reliable sampling technology necessary for assessing chemical exposures in firefighters, Anderson said.
"I'm quite confident those exposures existed but if you don't have something to help you find them you don't know for sure," Anderson said. "Certainly, we found that it's a lot more than what people had thought."
For their study, the researchers sampled individual firefighters' exposures at two departments - the Raytown Fire Protection District and Southern Platte Fire Protection District. They defined the Raytown department as a "high call volume" department, with a historic average of 12 fire calls per month, and the Southern Platte department as "low call volume," with less than two calls per month historically.
After completing a survey on demographics, occupational history, and suspected current exposures, the recruited firefighters wore a dog tag during the next 30 on- and off-shift days. During fire calls, tags were worn over clothing but underneath their gear. The firefighters were instructed to wear the dog tags continuously during all regular activities, including eating, showering and sleeping. Sampling occurred from November 2018 to April 2019.
When they analyzed the dog tags that were returned to Anderson's lab at Oregon State, 45 unique PAHs, of which 18 have not been previously reported as firefighting exposures, were detected. PAH exposures increased as the number of fires a participant responded to increased. PAH concentrations were not only higher when on-duty compared to off-duty, but also higher from the high call volume department compared to the low call volume department.
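For readers interested in how the on-duty versus off-duty difference can be quantified, the sketch below computes a Cohen's d effect size, the measure reported in the abstract, from two invented sets of summed PAH concentrations. The data are placeholders, not the study's tag measurements.

```python
# Sketch of quantifying the on-duty vs off-duty difference with Cohen's d,
# the effect size reported in the abstract. The concentration arrays are
# invented placeholders, not the study's dog-tag measurements.
import numpy as np

def cohens_d(a, b):
    """Effect size between two independent samples, using the pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                         (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

rng = np.random.default_rng(4)
on_duty = rng.lognormal(mean=1.0, sigma=0.5, size=56)    # summed PAHs, toy units
off_duty = rng.lognormal(mean=0.7, sigma=0.5, size=54)
print(f"Cohen's d (on- vs off-duty): {cohens_d(on_duty, off_duty):.2f}")
```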
Each of the participating firefighters has been provided a report on their basic health information and chemical exposure, Anderson said. The participants also received a fact sheet about firefighters and cancer risk. The fact sheet includes some simple steps firefighters can take to reduce their exposure to harmful chemicals, such as always wearing their personal protective equipment, taking a shower after each fire and before ending their shift, and cleaning their gear after every fire.
|
10.1016/j.envint.2020.105818
| 2,020 |
Environment International
|
Discovery of firefighter chemical exposures using military-style silicone dog tags
|
Occupational chemical hazards in the fire service are hypothesized to play a role in increased cancer risk, and reliable sampling technologies are necessary for conducting firefighter chemical exposure assessments. This study presents the military-style dog tag as a new configuration of silicone passive sampling device to sample individual firefighters' exposures at one high and one low fire call volume department in the Kansas City, Missouri metropolitan area. The recruited firefighters (n = 56) wore separate dog tags to assess on- and off-duty exposures (ndogtags = 110), for a total of 30 24 h shifts. Using a 63 PAH method (GC-MS/MS), the tags detected 45 unique PAHs, of which 18 have not been previously reported as firefighting exposures. PAH concentrations were higher for on- compared to off-duty tags (0.25 < Cohen's d ≤ 0.80) and for the high compared to the low fire call volume department (0.25 ≤ d < 0.70). Using a 1530 analyte screening method (GC-MS), di-n-butyl phthalate, diisobutyl phthalate, guaiacol, and DEET were commonly detected analytes. The number of fire attacks a firefighter participated in was more strongly correlated with PAH concentrations than firefighter rank or years in the fire service. This suggested that quantitative data should be employed for firefighter exposure assessments, rather than surrogate measures. Because several detected analytes are listed as possible carcinogens, future firefighter exposure studies should consider evaluating complex mixtures to assess individual health risks.
|
879818
|
IU study finds slight shift in attitudes toward bisexuals, from negative to neutral
|
While positive attitudes toward gay men and lesbians have increased over recent decades, a new study led by researchers at IU's Center for Sexual Health Promotion shows attitudes toward bisexual men and women are relatively neutral, if not ambivalent.
The study, led by Brian Dodge, associate professor in the Department of Applied Health Science and associate director of the Center for Sexual Health Promotion at Indiana University's School of Public Health-Bloomington, was recently published in PLOS ONE, an open-access, peer-reviewed online journal. Dodge and his colleagues are presenting the data today at the Annual Meeting of the American Public Health Association in Denver, Colorado.
The study is only the second to explore attitudes toward bisexual men and women -- those with the capacity for physical, romantic and/or sexual attraction to more than one sex or gender -- in a nationally representative sample. It is also the first to do so with a sample of gay, lesbian and other-identified individuals (pansexual, queer and other identity labels), in addition to those who identify as heterosexuals. The nationally representative sample was taken from the Center for Sexual Health Promotion's 2015 National Survey of Sexual Health and Behavior, one wave of data from an ongoing population-based survey of adults and adolescents in the U.S.
"While recent data demonstrates dramatic shifts in attitude (from negative to positive) toward homosexuality, gay/lesbian individuals and same-sex marriage in the U.S., most of these surveys do not ask about attitudes toward bisexuality or bisexual individuals," Dodge said. "And many rely on convenience sampling strategies that are not representative of the general population of the U.S."
The study looked at five negative connotations, found in previous studies, associated with bisexual men and women, including the idea that they are confused or in transition regarding their sexual orientation, that they are hypersexual and that they are vectors of sexually transmitted diseases.
The research showed that the largest share of male and female respondents, more than one-third, neither agreed nor disagreed with the attitudinal statements. In regard to whether bisexual men and women are capable of being faithful in a relationship, nearly 40 percent neither agreed nor disagreed.
Those who identified as "other" had the most positive attitudes toward bisexuality, followed by gay/lesbian respondents and then heterosexuals.
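To make the scale-based comparison concrete, the sketch below scores a hypothetical five-item agreement scale, loosely modelled on the BIAS sub-scales described above, and summarises the mean score by identity group. The data, group proportions, and scoring direction are all invented for illustration.

```python
# Invented example of scoring a five-item agreement scale, loosely modelled
# on the BIAS sub-scales described above, and summarising by identity group.
# Data, group proportions, and scoring direction are all hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n = 300
identity = rng.choice(["heterosexual", "gay/lesbian", "other"],
                      size=n, p=[0.8, 0.1, 0.1])
items = pd.DataFrame(rng.integers(1, 6, size=(n, 5)),      # 1-5 agreement ratings
                     columns=[f"item_{i}" for i in range(1, 6)])
scores = pd.DataFrame({
    "identity": identity,
    "scale_score": items.mean(axis=1),   # per-respondent mean across items
})
print(scores.groupby("identity")["scale_score"].agg(["mean", "count"]))
```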
Age played a factor in the results, with participants under the age of 25 indicating more positive attitudes toward bisexual men and women. Income and education also played a role: Higher-income participants were more likely to report more positive attitudes toward bisexual men and women, in addition to participants with higher levels of education.
Overall, attitudes toward bisexual women were more positive than attitudes toward bisexual men.
"While our society has seen marked shifts in more positive attitudes toward homosexuality in recent decades, our data suggest that attitudes toward bisexual men and women have shifted only slightly from very negative to neutral," Dodge said. "That nearly one-third of participants reported moderately to extremely negative attitudes toward bisexual individuals is of great concern given the dramatic health disparities faced by bisexual men and women in our country, even relative to gay and lesbian individuals."
Bisexual men and women face a disproportionate rate of physical, mental and other health disparities in comparison to monosexuals -- those who identify as exclusively heterosexual or exclusively homosexual, Dodge said. Although research has not determined the cause, Dodge said that negative attitudes and stigma associated with bisexuality could play a role.
Data from the National Survey of Sexual Health and Behavior shows that approximately 2.6 percent of adult men and 3.6 percent of adult women in the U.S. identify as bisexual. For females, that number is more than double the number of women who identify as lesbian, 0.9 percent. When it comes to adolescents, 1.5 percent of male adolescents (age 14 to 17) and 8.4 percent of female adolescents identify as bisexual.
Dodge said he hopes the results emphasize the need for efforts to decrease negative stereotypes and increase acceptance of bisexual individuals as a component of broader initiatives aimed at tolerance of sexual and gender minority individuals.
"After documenting the absence of positive attitudes toward bisexual men and women in the general U.S. population, we encourage future research, intervention and practice opportunities focused on assessing, understanding and eliminating biphobia -- for example, among clinicians and other service providers -- and determining how health disparities among bisexual men and women can be alleviated," he said.
|
10.1371/journal.pone.0164430
| 2,016 |
PLoS ONE
|
Attitudes toward Bisexual Men and Women among a Nationally Representative Probability Sample of Adults in the United States
|
As bisexual individuals in the United States (U.S.) face significant health disparities, researchers have posited that these differences may be fueled, at least in part, by negative attitudes, prejudice, stigma, and discrimination toward bisexual individuals from heterosexual and gay/lesbian individuals. Previous studies of individual and social attitudes toward bisexual men and women have been conducted almost exclusively with convenience samples, with limited generalizability to the broader U.S. Our study provides an assessment of attitudes toward bisexual men and women among a nationally representative probability sample of heterosexual, gay, lesbian, and other-identified adults in the U.S. Data were collected from the 2015 National Survey of Sexual Health and Behavior (NSSHB), via an online questionnaire with a probability sample of adults (18 years and over) from throughout the U.S. We included two modified 5-item versions of the Bisexualities: Indiana Attitudes Scale (BIAS), validated sub-scales that were developed to measure attitudes toward bisexual men and women. Data were analyzed using descriptive statistics, gamma regression, and paired t-tests. Gender, sexual identity, age, race/ethnicity, income, and educational attainment were all significantly associated with participants' attitudes toward bisexual individuals. In terms of responses to individual scale items, participants were most likely to "neither agree nor disagree" with all attitudinal statements. Across sexual identities, self-identified other participants reported the most positive attitudes, while heterosexual male participants reported the least positive attitudes. As in previous research on convenience samples, we found a wide range of demographic characteristics were related with attitudes toward bisexual individuals in our nationally-representative study of heterosexual, gay/lesbian, and other-identified adults in the U.S. In particular, gender emerged as a significant characteristic; female participants' attitudes were more positive than male participants' attitudes, and all participants' attitudes were generally more positive toward bisexual women than bisexual men. While recent population data suggest a marked shift in more positive attitudes toward gay men and lesbian women in the general population of the U.S., the largest proportions of participants in our study reported a relative lack of agreement or disagreement with all affective-evaluative statements in the BIAS scales. Findings document the relative lack of positive attitudes toward bisexual individuals among the general population of adults in the U.S. and highlight the need for developing intervention approaches to promote more positive attitudes toward bisexual individuals, targeted toward not only heterosexual but also gay/lesbian individuals and communities.
|
976381
|
Forecasting earthquakes that get off schedule
|
Results of a new study by Northwestern University researchers will help earthquake scientists better deal with seismology’s most important problem: when to expect the next big earthquake on a fault.
Seismologists commonly assume that big earthquakes on faults are pretty regular and that the next quake will occur after approximately the same amount of time as between the previous two. Unfortunately, the Earth often doesn’t work that way. Although earthquakes sometimes come sooner or later than expected, seismologists didn’t have a way to describe this.
Now they do. The Northwestern research team of seismologists and statisticians has developed an earthquake probability model that is more comprehensive and realistic than what is currently available. Instead of just using the average time between past earthquakes to forecast the next one, the new model considers the specific order and timing of previous earthquakes. It helps explain the puzzling fact that earthquakes sometimes come in clusters — groups with relatively short times between them, separated by longer times without earthquakes.
“Considering the full earthquake history, rather than just the average over time and the time since the last one, will help us a lot in forecasting when future earthquakes will happen,” said Seth Stein, William Deering Professor of Earth and Planetary Sciences in the Weinberg College of Arts and Sciences. “When you're trying to figure out a team's chances of winning a ball game, you don't want to look only at the last game and the long-term average. Looking back over additional recent games can also be helpful. We now can do a similar thing for earthquakes."
The study, titled “A More Realistic Earthquake Probability Model Using Long-Term Fault Memory,” was published recently in the Bulletin of the Seismological Society of America. Authors of the study are Stein, Northwestern professor Bruce D. Spencer and recent Ph.D. graduates James S. Neely and Leah Salditch. Stein is a faculty associate of Northwestern’s Institute for Policy Research (IPR), and Spencer is an IPR faculty fellow.
"Earthquakes behave like an unreliable bus,” said Neely, now at the University of Chicago. “The bus might be scheduled to arrive every 30 minutes, but sometimes it’s very late, other times it’s too early. Seismologists have assumed that even when a quake is late, the next one is no more likely to arrive early. Instead, in our model if it’s late, it’s now more likely to come soon. And the later the bus is, the sooner the next one will come after it.”
Traditional model and new model
The traditional model, used since a large earthquake in 1906 destroyed San Francisco, assumes that slow motions across the fault build up strain, all of which is released in a big earthquake. In other words, a fault has only short-term memory — it "remembers" only the last earthquake and has "forgotten" all the previous ones. This assumption goes into forecasting when future earthquakes will happen and then into hazard maps that predict the level of shaking for which earthquake-resistant buildings should be designed.
However, “Large earthquakes don’t occur like clockwork,” Neely said. “Sometimes we see several large earthquakes occur over relatively short time frames and then long periods when nothing happens. The traditional models can’t handle this behavior.”
In contrast, the new model assumes that earthquake faults are smarter — have longer-term memory — than seismologists assumed. The long-term fault memory comes from the fact that sometimes an earthquake didn't release all the strain that built up on the fault over time, so some remains after a big earthquake and can cause another. This explains earthquakes that sometimes come in clusters.
"Earthquake clusters imply that faults have long-term memory,” said Salditch, now at the U.S. Geological Survey. “If it's been a long time since a large earthquake, then even after another happens, the fault's ‘memory’ sometimes isn't erased by the earthquake, leaving left-over strain and an increased chance of having another. Our new model calculates earthquake probabilities this way."
For example, although large earthquakes on the Mojave section of the San Andreas fault occur on average every 135 years, the most recent one occurred in 1857, only 45 years after one in 1812. Although this wouldn’t have been expected using the traditional model, the new model shows that because the 1812 earthquake occurred after a 304-year gap since the previous earthquake in 1508, the leftover strain caused a sooner-than-average quake in 1857.
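The intuition behind long-term fault memory can be illustrated with a toy simulation: strain accumulates steadily, the yearly chance of a large earthquake grows as stored strain builds up, and each earthquake releases only part of that strain. The Python sketch below uses these illustrative assumptions rather than the published LTFM equations, but it reproduces the qualitative behaviour described above, in which an unusually long gap tends to be followed by a shorter-than-average one.

import random

def simulate_fault(years=200_000, loading_rate=1.0, threshold=150.0,
                   hazard_exponent=6, release_fraction=0.5, seed=1):
    """Toy fault with long-term memory (illustrative assumptions, not the
    published LTFM equations): strain accumulates steadily, the yearly quake
    probability rises steeply as stored strain approaches `threshold`, and
    each quake releases only `release_fraction` of the stored strain."""
    rng = random.Random(seed)
    strain, event_years = 0.0, []
    for year in range(years):
        strain += loading_rate
        probability = min(1.0, (strain / threshold) ** hazard_exponent)
        if rng.random() < probability:
            event_years.append(year)
            strain *= 1.0 - release_fraction  # partial strain release
    return event_years

events = simulate_fault()
gaps = [b - a for a, b in zip(events, events[1:])]
mean_gap = sum(gaps) / len(gaps)
# Leftover strain after a late quake should shorten the following interval.
after_long = [nxt for prev, nxt in zip(gaps, gaps[1:]) if prev > 1.5 * mean_gap]
print(f"mean gap: {mean_gap:.0f} years")
print(f"mean gap after an unusually long gap: {sum(after_long) / len(after_long):.0f} years")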
"It makes sense that the specific order and timing of past earthquakes matters," said Spencer, a professor of statistics. "Many systems' behavior depends on their history over a long time. For example, your risk of spraining an ankle depends not just on the last sprain you had, but also on previous ones."
|
10.1785/0120220083
| 2,022 |
Bulletin of the Seismological Society of America
|
A More Realistic Earthquake Probability Model Using Long-Term Fault Memory
|
ABSTRACT Forecasts of the probability of a large earthquake occurring on a fault during a specific time interval assume that a probability distribution describes the interevent times between large earthquakes. However, current models have features that we consider unrealistic. In these models, earthquake probabilities remain constant or even decrease after the expected mean recurrence interval, implying that additional accumulated strain does not make an earthquake more likely. Moreover, these models assume that large earthquakes release all accumulated strain, despite evidence for partial strain release in earthquake histories showing clusters and gaps. As an alternative, we derive the necessary equations to calculate earthquake probabilities using the long-term fault memory (LTFM) model. By accounting for partial strain release, LTFM incorporates the specific timing of past earthquakes, which commonly used probability models cannot do, so it can forecast gaps and clusters. We apply LTFM to the southern San Andreas fault as an example and show how LTFM can produce better forecasts when clusters and gaps are present. LTFM better forecasts the exceptionally short interevent time before the 1857 Fort Tejon earthquake. Although LTFM is more complex than existing models, it is more powerful because (unlike current models) it incorporates fundamental aspects of the strain accumulation and release processes causing earthquakes.
|
669282
|
Sales of sugar-sweetened drinks in Jamie's Italian restaurants fall by 11 percent after 10p levy
|
Introducing a small levy of 10 pence per drink to the price of sugar-sweetened beverages (SSBs) sold in Jamie's Italian restaurants across the UK is likely to have contributed to a significant decline in SSB sales, according to new research published in the Journal of Epidemiology & Community Health.
The study was led by the London School of Hygiene & Tropical Medicine with the University of Cambridge, and funded by the National Institute for Health Research. After adjusting for general trends in sales it found that adding a 10 pence levy to SSBs sold in 37 Jamie's Italian restaurants, combined with activities such as re-designing menus, offering new lower sugar drinks and related publicity, was associated with an 11% decline in sales of SSBs per customer 12 weeks after the levy was introduced. A decline in sales of 9.3% per customer was still observed six months after the levy was introduced. The authors say further research with a longer follow-up is required to assess whether this is sustained. Reductions were greatest in restaurants with higher SSB sales per customer.
Consumption of sugar-sweetened beverages is associated with obesity, type 2 diabetes, cardiovascular disease and tooth decay. Decreasing consumption of SSBs can reduce body weight and weight gain in children and adolescents. In the UK, SSBs are thought to account for up to half of the excess calories consumed per day by children. Adults consume an average of 50 calories per day from SSBs.
Cutting consumption of SSBs is therefore seen as important in improving public health but the most effective way of encouraging this change in behaviour is less clear. Financial measures, alongside wider strategies, are one option but there is limited evidence on the effectiveness of such measures on either the sale or consumption of SSBs.
In September 2015 Jamie's Italian, a chain of restaurants founded by chef Jamie Oliver, added a 10p levy to the price of its non-alcoholic SSBs. At the same time, the chain reorganised the non-alcoholic beverage menu into two sections: SSBs and 'other' beverages which included fresh fruit juices, bottled waters and diet cola. In addition, fruit spritzers (fruit juice mixed with water) were added to the main non-alcoholic beverage menu. The menu also explained the decision to implement the levy and that proceeds would go directly to a Children's Health Fund which supports children's health initiatives. Using sales data from 37 Jamie's Italian restaurants, this study explored the effects of these changes on sales of all types of SSBs.
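For readers curious how an adjustment for underlying sales trends works in practice, the sketch below shows a bare-bones interrupted time series regression in Python. The file and column names are hypothetical, and the published analysis was richer (itemised point-of-sale data, seasonality adjustment and multilevel models across the 37 restaurants), so this is only a minimal illustration of estimating a level change and a slope change at the date the levy began.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical weekly data: one row per restaurant-week ('restaurant_id') with
# SSBs sold per customer ('ssb_per_customer'); 'week' counts weeks relative to
# the levy (negative = pre-levy). Names are assumptions for illustration only.
df = pd.read_csv("weekly_ssb_sales.csv")

df["post"] = (df["week"] >= 0).astype(int)        # step (level) change at the levy
df["weeks_since_levy"] = df["post"] * df["week"]  # slope change after the levy

# Segmented regression: pre-existing trend, level change, and trend change,
# with standard errors clustered by restaurant.
model = smf.ols("ssb_per_customer ~ week + post + weeks_since_levy", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["restaurant_id"]})
print(result.summary())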
Steven Cummins, Professor of Population Health at the London School of Hygiene & Tropical Medicine, who led the study, said: "Obesity, type 2 diabetes and cardiovascular disease are among the most pressing global health challenges facing the world today. Evidence suggests that excessive consumption of sugar-sweetened beverages is an important contributor to these potentially life-threatening conditions but we still don't have a clear answer on how best to encourage people to consume fewer of them.
"Our study showed that a combination of the levy, menu changes and clearly explaining to customers why it was introduced and that the proceeds would go directly to a worthy cause, looks to have had a relatively large effect on consumer behaviour given the small size of the levy. This type of 'complex intervention' has also been shown to be successful in economic studies of levies on alcohol."
The study also found there was a general decrease in the numbers of non-alcoholic beverages sold per customer, with the exception of fruit juice, which increased by 22% after six months. Sales of diet cola and bottled waters also declined.
Professor Cummins said: "A possible reason for this decline could be that more people were choosing tap water, but data on tap water orders was not available as it was not recorded on the restaurant's sales system. Overall, our study suggests that a small levy on sugar-sweetened drinks sold in restaurants, coupled with complementary activities, may have the potential to change consumer behaviour and reduce the consumption of these drinks which are associated with major health risks."
Dr Laura Cornelsen, Assistant Professor in Public Health Economics and MRC Career Development Fellow at the London School of Hygiene & Tropical Medicine, co-led the study. She said: "This study raises interesting questions about how fiscal interventions may work beyond the effect of the price increase. For example, how big a role did the menu change or the text introducing the levy play? Taxes and levies on sugar-sweetened drinks are regularly framed as health related measures and such framing might be having an effect of its own, including on both the quantity and the range of beverages sold."
Professor Martin White, Director of the NIHR's Public Health Research (PHR) Programme and a study author, is conducting an evaluation of the impact of the UK Government's forthcoming levy on sugary drinks with Professor Cummins and others.
Professor White said: "This is an important piece of research which explored how a fiscal intervention and associated publicity affected consumer choices in a national restaurant chain. Further research is needed to work out how such levies can best be used in different settings and circumstances. Obesity, heart disease, diabetes and tooth decay are serious conditions that have a huge impact on health. Population interventions of this sort could be vital in combating them."
The authors acknowledge limitations of their study including the fact that by taking advantage of a change that was already happening, they could only directly observe the changes that occurred, not necessarily the causes. The study was also undertaken in just one chain of restaurants.
|
10.1136/jech-2017-209947
| 2,017 |
Journal of Epidemiology & Community Health
|
Change in non-alcoholic beverage sales following a 10-pence levy on sugar-sweetened beverages within a national chain of restaurants in the UK: interrupted time series analysis of a natural experiment
|
Background This study evaluates changes in sales of non-alcoholic beverages in Jamie’s Italian, a national chain of commercial restaurants in the UK, following the introduction of a £0.10 per-beverage levy on sugar-sweetened beverages (SSBs) and supporting activity including beverage menu redesign, new products and establishment of a children’s health fund from levy proceeds. Methods We used an interrupted time series design to quantify changes in sales of non-alcoholic beverages 12 weeks and 6 months after implementation of the levy, using itemised electronic point of sale data. Main outcomes were number of SSBs and other non-alcoholic beverages sold per customer. Linear regression and multilevel random effects models, adjusting for seasonality and clustering, were used to investigate changes in SSB sales across all restaurants (n=37) and by tertiles of baseline restaurant SSB sales per customer. Results Compared with the prelevy period, the number of SSBs sold per customer declined by 11.0% (−17.3% to −4.3%) at 12 weeks and 9.3% (−15.2% to −3.2%) at 6 months. For non-levied beverages, sales per customer of children’s fruit juice declined by 34.7% (−55.3% to −4.3%) at 12 weeks and 9.9% (−16.8% to −2.4%) at 6 months. At 6 months, sales per customer of fruit juice increased by 21.8% (14.0% to 30.2%) but sales of diet cola (−7.3%; −11.7% to −2.8%) and bottled waters (−6.5%; −11.0% to −1.7%) declined. Changes in sales were only observed in restaurants in the medium and high tertiles of baseline SSB sales per customer. Conclusions Introduction of a £0.10 levy on SSBs alongside complementary activities is associated with declines in SSB sales per customer in the short and medium term, particularly in restaurants with higher baseline sales of SSBs.
|
660357
|
Smarter strategies
|
Though small and somewhat nondescript, quagga and zebra mussels pose a huge threat to local rivers, lakes and estuaries. Thanks to aggressive measures to prevent contamination, Santa Barbara County's waters have so far been clear of the invasive mollusks, but stewards of local waterways, reservoirs and water recreation areas remain vigilant to the possibility of infestation by these and other non-native organisms.
Now, UC Santa Barbara-based research scientist Carolynn Culver and colleagues at UCSB's Marine Science Institute are adding to this arsenal of prevention measures with a pair of studies that appear in a special edition of the North American Journal of Fisheries Management. They focus on taking an integrated approach to the management of aquatic invasive species as the state works to move beyond its current toxic, water quality-reducing methods.
"With integrated pest management you're looking for multiple ways to manipulate vulnerabilities of a pest, targeting different life stages with different methods in a combined way that can reduce the pest population with minimal harm to people and the environment," said Culver, an extension specialist with California Sea Grant who also holds an academic appointment at Scripps Institution of Oceanography. "Often there is concentrated effort on controlling one part of the life cycle, like removing adults--which are easier to see--without thinking about the larvae that are out there."
Could hungry fish fight invasive mussels?
In one study, Culver and her colleagues explored whether certain species of sunfish could be used as a biological control method to help manage invasive freshwater mussels in Southern California lakes.
The quagga mussel and closely related zebra mussel are two of the most devastating aquatic pests in the United States. The small freshwater mussels grow on hard surfaces such as water pipes, and can cause major problems for water infrastructure. They can also negatively impact ecosystems and fisheries by feeding on microscopic plants and animals that support the food web. First detected in North America in the 1980s, they reached California in 2007. Managing these mussels is estimated to have cost billions of dollars since their introduction into the U.S.
Culver has worked closely with lake and reservoir managers in California to help them prepare for and respond to mussel invasions. This research was needed, she said, because many of the control systems long used elsewhere were developed for facilities and rely on chemical applications or toxic coatings. Those approaches can't readily be used in California bodies of water that serve as sources of drinking water or are home to endangered species that the chemicals could harm, which covers the majority of California locations with mussel infestations. In San Diego, for instance, rapid colonization of the reservoirs by these mussels caused docks and buoys to sink, but conventional, toxic methods of controlling them were a cause for concern.
"Commonly used mussel control methods are problematic for San Diego reservoirs since they are primary water supply reservoirs," said study co-author Dan Daft, a water production superintendent and biologist with the city of San Diego, who found that biocontrol methods were both effective and ecologically sound for sensitive water sources.
The study found that when one species of sunfish, bluegill, was penned up in an area where mussels occur, it could significantly reduce microscopic larvae and newly settled young mussels on surfaces within the pen, and on the pen itself. This method could be one key piece of an integrated pest management strategy, and provides a new, non-chemical method for targeting early life stages of the mussels, which are hard to detect.
"Essentially you can put these fish to work in specific areas where mussels occur," Culver said.
The researchers studied two species of sunfish resident in many infested southern California lakes, bodies of water that are human-built and nearly all serve as water supplies. Although the sunfish are not native to California, they have been stocked into these man-made reservoirs. According to the researchers, the approach could be applied to other predatory species in other places, but no other good candidates were available where they were working.
"It's important to point out that we don't support introducing non-native species," Culver said.
A better way to clean your boat
The other study assessed an integrated management framework that Culver and colleagues had developed to manage biofouling -- the growth of organisms such as algae, barnacles and other aquatic plants and animals that settle on hard surfaces such as piers, pilings and boat hulls -- while balancing both boat operations and ecosystem health. The paper describes how, when applied as part of an integrated framework, a combination of non-toxic methods can help maintain clean boats without the use of toxic paints and coatings that are increasingly regulated due to their environmental impacts.
"Controlling the growth of these organisms is critical for boat maintenance, because they create drag that slows vessels, reduces fuel efficiency, and makes boats harder to steer," said co-author Leigh Johnson, coastal advisor emerita with UC Cooperative Extension and former California Sea Grant Extension advisor. Johnson was instrumental in initiating the research and bringing attention to the need for a balanced biofouling control management approach. "However," she added, "the methods used to control fouling on boats can impact water quality and increase transport of invasive species so it is important to consider all of these issues when deciding how to maintain a clean hull."
The primary method of controlling biofouling around the world has long been toxic antifouling paints. But there are growing concerns about the impacts of currently used copper-based paints on water quality, and many countries and US states, including California and Washington, have set standards to reduce the copper levels and leaching rates of antifouling paints. These actions, however, increase the risks of moving biofouling invasive species from place to place, including vulnerable ecosystems, such as the islands off the coast of California.
In this study, researchers tested a variety of hull coatings, California-based hull cleaning practices, and conditions in various California harbors, to identify methods that could be used in combination to control biofouling.
They found that although copper-based paints were effective when first applied, they lost effectiveness fairly quickly, and that non-native species tended to accumulate first on the toxic coatings -- sometimes within just a few months. The team also showed that frequent, minimally abrasive, in-water hull cleaning was effective and did not cause an increase in fouling as reported for other hull cleaning practices. Their documentation of the time of year when different organisms were attaching to surfaces also helped to illustrate how adjusting the timing and frequency of hull cleaning could help increase its effectiveness.
Results from the study, along with other research findings, informed the development of an integrated pest management framework that boaters can adapt to different regions and specific needs.
"It's not a one-size-fits-all approach -- it's adaptive," Culver said. "Boaters can tailor it to local environments, regulations and boating patterns, and it can be applied in areas where toxic paints have been restricted, as well as where they continue to be used. It can help to keep boat hulls clean, while reducing impacts on water quality and transport of invasive species -- three issues that often are not considered together."
Culver and her colleagues have provided information to boat owners, resource managers, and regulators about applications of this integrated approach. There also has been interest, she said, in using the technique to inform biofouling management guidance and regulations in California and elsewhere.
|
10.1002/nafm.10363
| 2,019 |
North American Journal of Fisheries Management
|
An Integrated Pest Management Tactic for Quagga Mussels: Site‐Specific Application of Fish Biological Control Agents
|
Abstract The quagga mussel Dreissena bugensis is a harmful aquatic pest that invaded the southwestern USA in 2007. Challenges with managing this pest have been encountered because the invaded systems are primarily open‐water sources used for human consumption and/or are connected to freshwater habitats containing threatened and endangered species. Existing chemical and physical control methods are undesirable, and the use of some methods is restricted or prohibited because they pose risks to humans and ecosystems more broadly. To address this problem, we investigated the efficacy of using resident fishes as biocontrol agents for managing different life stages of quagga mussels on different spatial scales in a site‐specific manner. We conducted field experiments to test whether planktivorous Bluegill Lepomis macrochirus reduced mussel infestations on substrates of varying orientations in small and large pens through predation on larval mussels. We also performed an experiment to evaluate whether carnivorous Redear Sunfish L. microlophus reduced mussel infestations established on substrates of varying orientations in small pens through predation on juvenile and adult mussels. Bluegill significantly reduced mussel infestations on all substrates in the pens through predation on larvae and small juvenile mussels. Redear Sunfish reduced existing juvenile and adult mussel populations in some cases, with consumption varying among individuals and substrate orientations. Our results indicate that fishes, specifically Bluegill, may represent effective site‐specific biocontrol agents for quagga mussels, reducing impacts on targeted infrastructure (e.g., water towers, docks, and pipes) and habitats having different surface orientations by controlling more than one life stage of the pest. Development of an integrated pest management strategy that considers application of this tactic in combination with others would undoubtedly improve the management of quagga mussels—and potentially that of congeneric zebra mussels D. polymorpha —within lake and reservoir ecosystems.
|
669179
|
A new accurate computational method designed to enhance drug target stability
|
Scientists from the Moscow Institute of Physics and Technology (MIPT), the Skolkovo Institute of Science and Technology (Skoltech), and the University of Southern California (USC) have developed a new computational method for designing thermally stable G protein-coupled receptors (GPCRs), which are of great help in creating new drugs. The method has already proved useful in obtaining the structures of several principal human receptors. An overview of the new method was published in the journal Current Opinion in Structural Biology.
Receptors are molecules that capture and transmit signals and play a key role in regulating the human body. GPCRs are among the best-known human protein families, involved in vision, olfaction, immune response, and brain processes, which makes them important drug targets. For a receptor to serve as a target, researchers need to understand its structure in great detail, just as a locksmith needs to know a lock's inner structure to make a key that fits. Studying a receptor that becomes unstable when detached from the cell membrane is far more challenging, a task greatly aided by computational methods that predict the receptor's weak spots and the changes that will make it more stable.
"The structural studies of GPCRs are of high scientific and applied value, since these proteins are the target for 30 to 40 percent of drugs. Our method relies on several approaches, including machine learning, molecular modeling, and bioinformatics, that are tailored specifically to GPCRs. These approaches are complementary, which enables effectively predicting the smallest possible changes that can enhance the receptor's stability and make it easier to obtain its molecular structure," explains professor Petr Popov of MIPT's Laboratory of Structural Biology of G Protein-Coupled Receptors and the Skoltech Center for Computational and Data-Intensive Science and Engineering.
The new method developed at MIPT, Skoltech, and USC allowed researchers to obtain the structures of four important human receptors, including the cannabinoid receptor involved in brain signal transmission and pain perception, and the prostaglandin receptor implicated in inflammatory processes in the human body. The results of the study were published in the top international science journals Cell and Nature Chemical Biology.
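As a rough illustration of the machine-learning component of such a pipeline (not the authors' published method), one could train a classifier on previously characterised point mutations and use it to rank candidate mutations for a new receptor. The sketch below assumes a hypothetical table of mutations with a few simple features and a label indicating whether each mutation was thermostabilizing; the files and column names are invented for the example.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical training set: one row per tested GPCR point mutation, with
# simple sequence/structure features and a binary thermostability label.
data = pd.read_csv("gpcr_mutations.csv")
features = ["conservation", "delta_hydrophobicity", "relative_burial"]
X, y = data[features], data["thermostabilizing"]

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Rank candidate mutations of a new receptor so that only the most promising
# handful need to be tested experimentally.
clf.fit(X, y)
candidates = pd.read_csv("candidate_mutations.csv")
candidates["score"] = clf.predict_proba(candidates[features])[:, 1]
print(candidates.sort_values("score", ascending=False).head())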
|
10.1016/j.sbi.2019.02.010
| 2,019 |
Current Opinion in Structural Biology
|
Computational design for thermostabilization of GPCRs
|
GPCR superfamily is the largest clinically relevant family of targets in human genome; however, low thermostability and high conformational plasticity of these integral membrane proteins make them notoriously hard to handle in biochemical, biophysical, and structural experiments. Here, we describe the recent advances in computational approaches to design stabilizing mutations for GPCR that take advantage of the structural and sequence conservation properties of the receptors, and employ machine learning on accumulated mutation data for the superfamily. The fast and effective computational tools can provide a viable alternative to existing experimental mutation screening and are poised for further improvements with expansion of thermostability datasets for training the machine learning models. The rapidly growing practical applications of computational stability design streamline GPCR structure determination and may contribute to more efficient drug discovery.
|
578557
|
SATB1 vital for maintenance of hematopoietic stem cells
|
Blood plays the important role of transporting oxygen and hormones throughout the human body. It contains blood cells, such as erythrocytes, neutrophils, and lymphocytes, which are generated from hematopoietic stem cells (HSCs).
HSCs possess the abilities of multipotency (the ability to differentiate into all functional blood cells) and self-renewal (the ability to give rise to HSCs without differentiation). A group of researchers led by Takafumi Yokota at Osaka University previously showed that Special AT-rich Sequence Binding Protein 1 (SATB1), a nuclear global chromatin organizer that regulates chromatin structure, plays an important role in the differentiation of HSCs into lymphocytic lineages (Satoh et al., Immunity, 2013). However, those findings came from experiments using genetically engineered cells and mouse fetuses, so the corresponding processes in adults were not fully understood. In addition, it remained unclear how differentiation of lymphoid-lineage cells begins.
In the new study, the group showed that differences in SATB1 expression underlie differences both in HSC self-renewal ability and in the ability of HSCs to differentiate into lymphocytic lineages. The results were published in Cell Reports.
The researchers generated genetically modified mice in which SATB1 was deficient only in hematopoietic cells, as well as reporter mice in which a red fluorescent protein was expressed under the control of the endogenous SATB1 promoter. Using these mice, they confirmed that SATB1 is essential for maintaining the function of adult HSCs.
In addition, they showed that (1) HSCs include cells with both high and low SATB1 expression, (2) the level of SATB1 expression changes during HSC self-renewal, and (3) HSCs with higher SATB1 expression have greater lymphoid differentiation ability.
Lead author Yukiko Doi says, "We generated genetically engineered mice to observe a phenomenon in the initial stage of differentiation of HSCs into lymphocytic lineages in adult mice."
These findings deepen understanding of the mechanism behind the differentiation of HSCs into lymphocytic lineages. They also provide a research foundation for treating immune-related conditions, such as infectious diseases and disorders of the hematopoietic system, through applications in regenerative medicine and gene therapy.
|
10.1016/j.celrep.2018.05.042
| 2,018 |
Cell Reports
|
Variable SATB1 Levels Regulate Hematopoietic Stem Cell Heterogeneity with Distinct Lineage Fate
|
Hematopoietic stem cells (HSCs) comprise a heterogeneous population exhibiting self-renewal and differentiation capabilities; however, the mechanisms involved in maintaining this heterogeneity remain unclear. Here, we show that SATB1 is involved in regulating HSC heterogeneity. Results in conditional Satb1-knockout mice revealed that SATB1 was important for the self-renewal and lymphopoiesis of adult HSCs. Additionally, HSCs from Satb1/Tomato-knockin reporter mice were classified based on SATB1/Tomato intensity, with transplantation experiments revealing stronger differentiation toward the lymphocytic lineage along with high SATB1 levels, whereas SATB1− HSCs followed the myeloid lineage in agreement with genome-wide transcription and cell culture studies. Importantly, SATB1− and SATB1+ HSC populations were interconvertible upon transplantation, with SATB1+ HSCs showing higher reconstituting and lymphopoietic potentials in primary recipients relative to SATB1− HSCs, whereas both HSCs exhibited equally efficient reconstituted lympho-hematopoiesis in secondary recipients. These results suggest that SATB1 levels regulate the maintenance of HSC multipotency, with variations contributing to HSC heterogeneity.
|
718651
|
New 'hyper glue' formula developed by UBCO and UVic researchers
|
With many of the products we use every day held together by adhesives, researchers from UBC's Okanagan campus and the University of Victoria hope to make everything from protective clothing to medical implants and residential plumbing stronger and more corrosion resistant thanks to a newly-developed 'hyper glue' formula.
The team of chemists and composite materials researchers discovered a broadly applicable method of bonding plastics and synthetic fibres at the molecular level in a procedure called cross-linking. The cross-linking takes effect when the adhesive is exposed to heat or long-wave UV light, making strong connections that are both impact-resistant and corrosion-resistant. Even with a minimal amount of cross-linking, the materials are tightly bonded.
"It turns out the adhesive is particularly effective in high-density polyethylene, which is an important plastic used in bottles, piping, geomembranes, plastic lumber and many other applications," says Professor Abbas Milani, director of UBC's Materials and Manufacturing Research Institute, and the lead researcher at the Okanagan node of the Composite Research Network. "In fact, commercially available glues didn't work at all on these materials, making our discovery an impressive foundation for a wide range of important uses."
UVic Organic Chemistry Professor Jeremy Wulff, whose team led the design of the new class of cross-linking materials, collaborated with the UBC Survive and Thrive Applied Research (STAR) team to explore how the material performed in real-world applications.
"The UBC STAR team was able to put the material through its paces and test its viability in some incredible applications, including ballistic protection for first responders," says Wulff.
The discovery, he says, is already playing an important role in the Comfort-Optimized Materials for Operational Resilience, Thermal-transport and Survivability (COMFORTS) network, a team of researchers from UBC, UVic and the University of Alberta who are collaborating to create high-performance body armour.
"By using this cross-linking technology, we're better able to strongly fuse together different layers of fabric types to create the next generation of clothing for extreme environments," says Wulff. "At the same time, the cross-linker provides additional material strength to the fabric itself."
Milani is quick to point out that serving as an incredibly strong bonding agent is just the beginning of what the cross-linker can do.
"Imagine paints that never peel or waterproof coatings that never need to be resealed," says Milani. "We're even starting to think about using it as a way to bond lots of different plastic types together, which is a major challenge in the recycling of plastics and their composites."
"There is real potential to make some of our everyday items stronger and less prone to failure, which is what many chemists and composite materials engineers strive for."
|
10.1126/science.aay6230
| 2,019 |
Science
|
A broadly applicable cross-linker for aliphatic polymers containing C–H bonds
|
Addition of molecular cross-links to polymers increases mechanical strength and improves corrosion resistance. However, it remains challenging to install cross-links in low-functionality macromolecules in a well-controlled manner. Typically, high-energy processes are required to generate highly reactive radicals in situ, allowing only limited control over the degree and type of cross-link. We rationally designed a bis-diazirine molecule whose decomposition into carbenes under mild and controllable conditions enables the cross-linking of essentially any organic polymer through double C-H activation. The utility of this molecule as a cross-linker was demonstrated for several diverse polymer substrates (including polypropylene, a low-functionality polymer of long-standing challenge to the field) and in applications including adhesion of low-surface-energy materials and the strengthening of polyethylene fabric.
|
953276
|
Ions and Rydberg-atoms: A bond between David and Goliath
|
When single particles like atoms and ions bond, molecules emerge. Such bonds between two particles can arise if, for example, they carry opposite electrical charges and hence attract each other. The molecule observed at the University of Stuttgart exhibits a special feature: it consists of a positively charged ion and a neutral atom in a so-called Rydberg state. Rydberg atoms are roughly a thousand times larger than typical atoms. Because the charge of the ion deforms the Rydberg atom in a very specific way, a bond between the two particles emerges.
Rubidium cloud cooled down to near absolute zero
To verify and study the molecule, the researchers prepared an ultra-cold rubidium cloud cooled down to near absolute zero, at -273°C. Only at these low temperatures is the force between the particles strong enough to form a molecule. In these ultra-cold atomic ensembles, ionizing single atoms with laser fields prepares the first building block of the molecule – the ion. Additional laser beams excite a second atom into the Rydberg state. The electric field of the ion deforms this gigantic atom. Interestingly, the deformation can be attractive or repulsive depending on the distance between the two particles, letting the binding partners oscillate around an equilibrium distance and inducing the molecular bond. The distance between the binding partners is unusually large and amounts to about one-tenth of the thickness of a human hair.
Microscopy with the aid of electric fields
A special ion microscope made this observation possible. It was developed, built and commissioned by the researchers at the 5th Physical Institute in close collaboration with the workshops of the University of Stuttgart. In contrast to typical microscopes that work with light, the device uses electric fields to steer charged particles, magnifying and imaging them onto a detector. “We could image the free-floating molecule and its constituents with this microscope and directly observe and study the alignment of this molecule in our experiment,” explains Nicolas Zuber, a PhD student at the 5th Physical Institute.
As a next step, the researchers want to study dynamical processes within this unusual molecule. With the help of the microscope, it should be possible to study the molecule's vibrations and rotations. Because of the molecule's gigantic size and weak binding, these dynamical processes are slower than in ordinary molecules. The research group hopes to gain new and more detailed knowledge about the inner structure of the molecule.
|
10.1038/s41586-022-04577-5
| 2,022 |
Nature
|
Observation of a molecular bond between ions and Rydberg atoms
|
Atoms with a highly excited electron, called Rydberg atoms, can form unusual types of molecular bonds1-4. The bonds differ from the well-known ionic and covalent bonds5,6 not only by their binding mechanisms, but also by their bond lengths ranging up to several micrometres. Here we observe a new type of molecular ion based on the interaction between the ionic charge and a flipping-induced dipole of a Rydberg atom with a bond length of several micrometres. We measure the vibrational spectrum and spatially resolve the bond length and the angular alignment of the molecule using a high-resolution ion microscope7. As a consequence of the large bond length, the molecular dynamics is extremely slow. These results pave the way for future studies of spatio-temporal effects in molecular dynamics (for example, beyond Born-Oppenheimer physics).
|
500822
|
A window into the hidden world of colons
|
Biomedical engineers at Duke University have developed a system that allows for real-time observations of individual cells in the colon of a living mouse.
Researchers expect the procedure to allow new investigations into the digestive system's microbiome as well as the causes of diseases such as inflammatory bowel disease and colon cancer and their treatments.
The procedure described online on December 11 in Nature Communications involves surgically implanting a transparent window into a mouse's abdominal skin above the colon. Similar setups are already being used to allow live looks into the detailed inner workings of the brain, spinal cord, liver, lungs and other organs. Imaging a live colon, however, is a slipperier proposition.
"A brain doesn't move around a lot, but the colon does, which makes it difficult to get detailed images down to a single cell," said Xiling Shen, the Hawkins Family Associate Professor of Biomedical Engineering at Duke University. "We've developed a magnetic system that is strong enough to stabilize the colon in place during imaging to obtain this level of resolution, but can quickly be turned off to allow the colon to move freely."
Immobilizing the colon for imaging is a tricky task for traditional methods such as glue or stitches. At best they can cause inflammation that would ruin most experiments. At worst they can cause obstructions, which can quickly kill the mouse being studied.
To skirt this issue, Shen developed a magnetic device that looks much like a tiny metal nasal strip and can be safely attached to the colon. A magnetic field snaps the colon into place and keeps it stable during imaging, but once turned off, leaves the colon free to move and function as normal.
A vital organ that houses much of the digestive system's microbiome, the colon can be afflicted by diseases such as inflammatory bowel disease, functional gastrointestinal disorders and cancer. It also plays a key role in regulating the immune system, and can communicate directly with the brain through sacral nerves.
"There is a great need to better understand the colon, because it can suffer from so many diseases and plays so many roles with significant health implications," Shen said.
In the study, Shen and his colleagues conducted several proof-of-principle experiments that provide starting points for future lines of research.
The researchers first colonized a living mouse colon with E. coli bacteria, derived from Crohn's disease patients, that had been tagged with fluorescent proteins. The researchers then showed they could track the migration, growth and decline of the bacteria for more than three days. This ability could help researchers understand not only how antagonistic bacteria afflict the colon, Shen says, but the positive roles probiotics can play and which strains can best help people with gastrointestinal disorders.
In the next experiment, mice were bred so that several types of their immune cells carried fluorescent labels. The researchers then induced inflammation in the colon and carefully watched the activation of these immune cells. Shen says this approach could be used with various types of immune cells and diseases to gain a better understanding of how the immune system responds to challenges.
Shen and his colleagues then showed that they could tag and track colon epithelial stem cells associated with colorectal cancer throughout radiation treatment. They also demonstrated that they could watch nerves throughout the colon respond to sacral nerve stimulation, an emerging therapy for treating motility and immune disorders such as functional gastrointestinal disorders and irritable bowel disorder.
"While we know electrically stimulating the sacral nerves can alleviate the symptoms of these gastrointestinal disorders, we currently have no idea why or any way to optimize these treatments," Shen said. "Being able to see how the colon's neurons respond to different waveforms, frequencies and amplitudes of stimulation will be invaluable in making this approach a better option for more patients."
|
10.1038/s41467-019-13699-w
| 2,019 |
Nature Communications
|
An intravital window to image the colon in real time
|
Abstract Intravital microscopy is a powerful technique to observe dynamic processes with single-cell resolution in live animals. No intravital window has been developed for imaging the colon due to its anatomic location and motility, although the colon is a key organ where the majority of microbiota reside and common diseases such as inflammatory bowel disease, functional gastrointestinal disorders, and colon cancer occur. Here we describe an intravital murine colonic window with a stabilizing ferromagnetic scaffold for chronic imaging, minimizing motion artifacts while maximizing long-term survival by preventing colonic obstruction. Using this setup, we image fluorescently-labeled stem cells, bacteria, and immune cells in live animal colons. Furthermore, we image nerve activity via calcium imaging in real time to demonstrate that electrical sacral nerve stimulation can activate colonic enteric neurons. The simple implantable apparatus enables visualization of live processes in the colon, which will open the window to a broad range of studies.
|
976306
|
Affordable device for fixing broken bones piloted in Gaza, Sri Lanka and Ukraine
|
Imperial researchers have developed a low-cost, easy-to-manufacture stabiliser for broken bones to help in regions where such devices are expensive or in short supply and people sometimes resort to homemade options.
The stabiliser, known as an external fixator, holds broken bones in place with metal pins or screws attached to a surrounding metal frame.
When soft tissue is severely damaged together with bone, external fixators are the first step in keeping fractures in legs and arms in place before an operation to definitively fix the bones can be carried out.
However, their cost and low availability in many regions mean people resort to homemade or low-quality fixators that may lead to serious complications or improper healing.
The Imperial external fixator is currently being tested in Gaza and Sri Lanka, and since the invasion of Ukraine, more than 500 fixators have been manufactured in Poland to help with the crisis.
This fixator, details of which are published in Frontiers in Medical Technology, is low-cost and has a lightweight design that can be manufactured locally to international standards. The team developed the design and a toolkit to allow repeated precise manufacture of the fixator anywhere in the world, including in the least developed countries.
In Sri Lanka, it is being tested for road traffic accidents, which account for around 70 percent of fractures in low- and middle-income countries (LMICs). In Ukraine and Gaza, both regions with unpredictable demand and supply of such devices, it is being used for gunshot wounds and other conflict trauma.
Lead researcher Dr Mehdi Saeidi, from the Department of Bioengineering at Imperial, said: “We have managed to develop an external fixator that is one-tenth of the cost of commercial devices but with similar performance. This device can provide surge capacity for conflict zones or in response to unpredictable incidents and situations, which was the case with the war in Ukraine.”
The fixator is made up of four clamping systems and a rod. These parts can be manufactured from readily available materials, stainless steel and aluminium, using conventional manufacturing techniques such as milling and turning.
However, because of the precision of the parts, initial tests showed that the fixator would need to be built by highly skilled operators or using advanced machinery. Dr Saeidi therefore developed a manufacturing toolkit with components including drill bits, a saw and cutting guides to make the manufacturing easier, faster and reproducible with high accuracy.
The fixator was then tested in cadaver leg bones, showing it had similar stiffness to commercial devices, as well as undergoing mechanical testing that simulated pressure on the device to show its ability to keep the bones in position over a longer term.
The device is now being trialled in three countries. It was originally conceived in response to a shortage reported by partners in Sri Lanka, where the Imperial external fixator and related designs are being tested with Dr Puji Silva at the University of Moratuwa.
In Gaza, in collaboration with Professor Ghassan Abu-Sittah at the American University of Beirut, the device is largely being trialled on gunshot wounds. This trial is also assessing the ability of the external fixator to be cleaned, sterilised and reused.
Professor Abu-Sittah said: “In previous wars hospitals in Gaza had run out of external fixators, which jeopardised patient care. Developing the capacity to manufacture fixators locally means that this will not happen again.”
A second trial will soon start with the devices fully manufactured in Gaza, in collaboration with the Islamic University of Gaza (IUG). In preparation, Dr Saeidi trained Dr Sadiq Abdelall from IUG on manufacturing the external fixator using the toolkit at Imperial.
At the outbreak of the Ukraine conflict, Imperial’s Professor Anthony Bull was approached by surgeons in Poland who urgently needed such fixators, resulting in more than 500 of the devices being manufactured for use in Ukraine. The drawings provided freely on Imperial’s website were all that was needed by the engineers.
Professor Jonathan Jeffers, one of the study investigators from the Department of Mechanical Engineering at Imperial, said: “This work, conceived years ago based on needs identified by our military and civilian trauma surgeons, shows how basic engineering can mitigate suffering in the most dreadful of situations. The Ukraine situation is exactly why this project was conceived and demonstrates the ability to respond to surge demand.”
The team now expects to roll out the design to more LMICs at a larger scale with the help of partners in the World Health Organization and the United Nations Development Programme. The work was funded by the NIHR (project reference 16/137/45).
|
10.3389/fmedt.2022.1004976
| 2,022 |
Frontiers in Medical Technology
|
Low-cost locally manufacturable unilateral imperial external fixator for low- and middle-income countries
|
Treating open fractures in long bones can be challenging and if not performed properly can lead to poor outcomes such as mal/non-union, deformity, and amputation. One of the most common methods of treating these fracture types is temporary external fixation followed by definitive fixation. The shortage of high-quality affordable external fixators is a long-recognised need, particularly in Low- and Middle-Income Countries (LMICs). This research aimed to develop a low-cost device that can be manufactured locally to international standards. This can provide surge capacity for conflict zones or in response to unpredictable incidents and situations. The fixator presented here and developed by us, the Imperial external fixator, was tested on femur and tibia specimens under 100 cycles of 100 N compression-tension and the results were compared with those of the Stryker Hoffmann 3 frame. The Imperial device was stiffer than the Stryker Hoffmann 3 with a lower median interfragmentary motion (of 0.94 vs. 1.48 mm). The low-cost, easy to use, relatively lightweight, and easy to manufacture (since minimum skillset and basic workshop equipment and materials are needed) device can address a critical shortage and need in LMICs particularly in conflict-affected regions with unpredictable demand and supply. The device is currently being piloted in three countries for road traffic accidents, gunshot wounds and other conflict trauma-including blast cohorts.
|
588967
|
The geological record of mud deposits
|
The nature of the sediments on the Basque continental shelf is very heterogeneous. From the point of view of distribution, two clearly differentiated sectors can be picked out in terms of grain size. "In the area of Bizkaia medium to coarse-sized sands predominate, whereas on the coast of Gipuzkoa there is a predomination of deposits of very fine sand, silts and clays, currently known as the Basque Mud Patch (BMP)," explained Maria Jesus Irabien, researcher in the UPV/EHU's Department of Mineralogy and Petrology.
"This mud patch has an irregular surface area of approximately 680 km2. Metals and contaminants, in general, are more likely to build up in this type of muddy material. So if what we are aiming to do is study anthropogenic, industrial or human influence, it is necessary to explore the mud patch in the area of Gipuzkoa," said the researcher in the Harea: Coastal Geology group of the UPV/EHU.
So, as Irabien pointed out, "we analysed three cores (19-46 cm deep) from a multidisciplinary perspective that includes the analysis of various metals, foraminifera (small organisms characterised by a shell or chalky conch), pollen and various natural and artificial isotopes".
"The results obtained have made it possible to calculate that the sediments build up at an approximate rate of one millimetre per year. An increase in the concentrations of metals from the end of the 19th century onwards can also be observed, showing that the influence of industrialization and human activity taking place in the Basque Country extends to the marine environment. In the case of lead (Pb), for example, the content in the most recent samples is five times higher than in that recorded in the past. However, the foraminifera are not affected by this contamination. Finally, the pollen analysis displays a growing trend in conifers and a reduction in indigenous species (Deciduous Quercus), possibly as a result of reforestation," highlighted the researcher of the Harea: Coastal Geology group of the UPV/EHU.
"The results confirm that the influence of coastal anthropogenic activities extends to the adjacent shelf where muddy deposits are likely to act as a trap for contaminants," said Irabien.
The researcher stresses "the importance of continuing to make interpretations of this type in marine depths to get to know marine evolution from a historical perspective. It is clear that human activity is exerting a significant influence on the coast, too; the only advantage in all this is knowing that we can still stop," concluded María Jesús Irabien.
|
10.1016/j.quaint.2020.03.042
| 2,020 |
Quaternary International
|
Recent coastal anthropogenic impact recorded in the Basque mud patch (southern Bay of Biscay shelf)
|
The historical anthropogenic impact on sediments from the Basque Mud Patch (southern Bay of Biscay) is explored using a multidisciplinary approach including the analysis of natural (210Pb) and artificial (137Cs, 239/240Pu) radiotracers, major elements (Al, Mn), metals (Zn, Pb, Cu, Cr), Pb isotopic ratios, and foraminiferal and pollen contents. The study of three short cores (19–46 cm), despite being hindered by the effects of biomixing, allows the calculation of a sedimentation rate of 1 ± 0.1 mm yr⁻¹. Distribution with depth of Al-normalised concentrations of metals reflects an increasing trend since 1880 CE, related to the industrialization of the Basque coastal area. According to the Sediment Quality Guidelines applied, contents of Zn and Pb appear as a potential cause of concern, given that they exceed the values from which adverse biological effects can be occasionally expected. However, foraminiferal assemblages do not show recognizable changes along the cores following increasing trace metal concentrations. Finally, pollen results reveal an increasing trend of coniferous taxa and a parallel reduction of autochthonous Deciduous Quercus, probably as a consequence of reforestation works. Data obtained confirm that effects of coastal anthropogenic activities extend to the adjacent shelf, where muddy deposits are likely to act as a trap for contaminants. All samples were kindly provided by Dr Ana Pascual (UPV/EHU). Aintzane Goffard (UPV/EHU) prepared samples and produced foraminiferal results from core KI-06. This research was funded by Spanish MINECO (CGL2013-41083-P and RTI2018-095678-B-C21, MCIU/AEI/FEDER, UE), UPV/EHU (UFI11/09) and EJ/GV (IT976-16) projects. Aitor Fernández Martín-Consuegra was supported by a predoctoral grant from the Basque Government (PRE_2017_1_0173). The authors thank technical and human support provided by SGIker (UPV/EHU/ERDT, EU). Two anonymous reviewers improved the original manuscript with their comments and constructive suggestions. This is contribution 53 of the Geo-Q Zentroa Research Unit (Joaquín Gómez de Llarena Laboratory).
|
744314
|
Immunity key to motor neurone disease treatment
|
Customised immune-blocking medication may be the key to treating patients with motor neurone disease (MND), which currently has no cure and limited therapeutic options.
University of Queensland researchers have tested immune cells that circulate in the blood to determine if they're linked with specific characteristics and features of MND.
The team analysed immune cells from 23 healthy people and 48 patients with MND to measure differences in patients' immune profiles.
Research assistant and UQ medical student Raquel McGill said their study showed certain immune cells were associated with distinct MND features, including impaired swallowing, speech and breathing, as well as disease severity and rate of progression.
"A challenging aspect of MND treatment is the diverse nature of the disease; many MND patients present and progress differently," Ms McGill said.
"Prior research has identified the immune system as a possible key factor in the progression of MND, but what drives the different MND types is less clear.
"Our findings show that abnormal immune cells in the blood are linked with a MND patient's clinical characteristics and disease progression."
UQ School of Biomedical Sciences lead author Professor Trent Woodruff said the research also suggested that immune-blocking drugs could be personalised to treat each patient's unique disease symptoms and stages.
"There are several immune-targeted drugs currently progressing to human clinical trials, including UQ's own discoveries," Professor Woodruff said.
"Our findings may help with patient selection for these trials, which could lead to improved outcomes."
|
10.1093/braincomms/fcaa013
| 2,020 |
Brain Communications
|
Monocytes and neutrophils are associated with clinical features in amyotrophic lateral sclerosis
|
Immunity has emerged as a key player in neurodegenerative diseases such as amyotrophic lateral sclerosis, with recent studies documenting aberrant immune changes in patients and animal models. A challenging aspect of amyotrophic lateral sclerosis research is the heterogeneous nature of the disease. In this study, we investigate the associations between peripheral blood myeloid cell populations and clinical features characteristic of amyotrophic lateral sclerosis. Peripheral blood leukocytes from 23 healthy controls and 48 patients with amyotrophic lateral sclerosis were analysed to measure myeloid cell alterations. The proportion of monocytes (classical, intermediates and non-classical subpopulations) and neutrophils, as well as the expression of select surface markers, were quantitated using flow cytometry. Given the heterogeneous nature of amyotrophic lateral sclerosis, multivariable linear analyses were performed to investigate associations between patients' myeloid profile and clinical features, such as the Revised Amyotrophic Lateral Sclerosis Functional Rating Scale, bulbar subscore of the Revised Amyotrophic Lateral Sclerosis Functional Rating Scale, change in Revised Amyotrophic Lateral Sclerosis Functional Rating Scale over disease duration and respiratory function. We demonstrate a shift in monocyte subpopulations in patients with amyotrophic lateral sclerosis, with the ratio of classical to non-classical monocytes increased compared with healthy controls. In line with this, patients with greater disease severity, as determined by a lower Revised Amyotrophic Lateral Sclerosis Functional Rating Scale score, had reduced non-classical monocytes. Interestingly, patients with greater bulbar involvement had a reduction in the proportions of classical, intermediate and non-classical monocyte populations. We also revealed several notable associations between myeloid marker expression and clinical features in amyotrophic lateral sclerosis. CD16 expression on neutrophils was increased in patients with greater disease severity and a faster rate of disease progression, whereas HLA-DR expression on all monocyte populations was elevated in patients with greater respiratory impairment. This study demonstrates that patients with amyotrophic lateral sclerosis with distinct clinical features have differential myeloid cell signatures. Identified cell populations and markers may be candidates for targeted mechanistic studies and immunomodulation therapies in amyotrophic lateral sclerosis.
|
802048
|
Detecting hydrothermal vents in volcanic lakes
|
Geothermal manifestations at Earth's surface can be mapped and characterized by a variety of well-established exploration methods. However, mapping hydrothermal vents in aquatic environments is more challenging, as conventional methods can no longer be applied. The chemical composition of lake water may indicate the inflow of fluids from a volcanic system, but it does not provide spatial information on the location of hydrothermal vents, their abundance or their current state of activity.
Changes in the behaviour of hydrothermal vents may be indicative of changes in the volcanic system underneath, making them a useful precursor signal for the next generation of early warning systems. Increased volcanic activity beneath volcanic lakes could also trigger increased gas input, in particular CO2, which could result in catastrophic gas outbursts like those reported from Lake Nyos or Lake Monoun in Cameroon. New exploration approaches will help improve site-specific risk assessment and monitoring concepts by taking a closer look at hydrothermal vents.
The study describes an integrated approach of (1) bathymetry, (2) thermal mapping of the lake floor, and (3) gas emission measurements at the water surface, which was tested successfully at Lake Ngozi in Tanzania. Multiple hydrothermal feed zones could be identified by hole-like structures and increased lake floor temperatures, in combination with increased CO2 emissions from the lake surface. The developed approach has the advantage that (1) it does not require a complex technical setup, (2) data can be obtained in-situ, and (3) it is transferable to other volcanic lakes for mapping hydrothermal feed sources.
Further research activities at volcanic lakes and in shallow marine environments with hydrothermal activity (e.g., Iceland, Italy) are currently in preparation with partners from the Scientific Diving Centre (SDC) at the Technical University Bergakademie Freiberg, Germany, and the Marine & Freshwater Research Institute in Reykjavík, Iceland. This will also include research related to future offshore geothermal exploration.
|
10.1038/s41598-019-48576-5
| 2,019 |
Scientific Reports
|
Detecting gas-rich hydrothermal vents in Ngozi Crater Lake using integrated exploration tools
|
Gas-rich hydrothermal vents in crater lakes might pose an acute danger to people living nearby due to the risk of limnic eruptions as a result of gas accumulation in the water column. This phenomenon has been reported from several incidents, e.g., the catastrophic Lake Nyos limnic eruption. CO2 accumulation has been determined in a variety of lakes worldwide, and it does not always evolve in the same way as in Lake Nyos, consequently requiring a site-specific hazard assessment. This paper discusses the current state of Lake Ngozi in Tanzania and presents an efficient approach for identifying major gas-rich hydrothermal feed zones based on a multi-disciplinary concept. The concept combines bathymetry, thermal mapping of the lake floor and gas emission studies on the water surface. The approach is fully transferable to other volcanic lakes, and the results will help to identify high-risk areas and develop suitable monitoring and risk mitigation measures. Due to the absence of chemical and thermal stratification in Lake Ngozi, limnic eruptions are rather unlikely at present, but an adapted monitoring concept is strongly advised as sudden CO2 input into the lake could occur as a result of changes in the magmatic system.
|
975911
|
Can algae enhance skin regeneration and wound healing?
|
A product of a freshwater single-celled green alga called Euglena gracilis may enhance skin regeneration to speed up wound healing, according to new research published in Advanced Materials Interfaces.
Investigators developed a system based on microvesicles that bud from the cell surface of Euglena gracilis and contain β-glucan, a carbohydrate with immunoregulatory activity, regeneration ability, and antioxidant properties.
In laboratory experiments, these microvesicles promoted the proliferation and migration of skin cells, increasing both collagen synthesis and the expression of proliferation-associated proteins. A wound healing test also generated promising results.
“This technique is expected to be applied to other cells, thereby enabling the design of new types of extracellular vesicles that are applicable for skin treatments and care in the pharmaceutical and cosmetic industries,” the authors wrote.
URL upon publication: https://onlinelibrary.wiley.com/doi/10.1002/admi.202202255
|
10.1002/admi.202202255
| 2,023 |
Advanced Materials Interfaces
|
Nonanimal <i>Euglena gracilis</i>‐Derived Extracellular Vesicles Enhance Skin‐Regenerative Wound Healing
|
Abstract This study proposes using microalgae‐containing carbohydrate bioactives, an Euglena gracilis ‐derived extracellular microvesicle (EMV EG ) system, for enhanced skin regeneration. The critical deformation ratio, 1.67, during cell extrusion enables the authors to tune the particle size of the EMV EG at ≈1 µm, thus satisfying the encapsulation yield of β‐1,3‐glucan and the cellular delivery performance. In vitro 5‐bromo‐2'‐deoxyuridine and cell scratch assays reveal that the EMV EG promotes the proliferation and migration of skin cells, thereby increasing both collagen synthesis and the expressions of proliferation‐associated proteins. An ex vivo wound‐healing test using both artificial and porcine skin reveals that similar to that seen using β‐1,3‐glucan, the EMV EG can substantially increase the cell population, expressing the proliferation‐related protein, termed proliferating cell nuclear antigen. These results demonstrate that the EMV EG system shows considerable potential in the field of skin regeneration. This technique is expected to design new types of extracellular vesicles that are applicable for skin regeneration in the pharmaceutical and cosmetic industries.
|
892844
|
Nordic seas cooled 500,000 years before global oceans
|
The cooling of the Nordic Seas towards modern temperatures started in the early Pliocene, half a million years before the global oceans cooled. A new study of fossil marine plankton published in Nature Communications today demonstrates this.
In the Pliocene, 5.3 to 2.6 million years ago, the world was generally warmer than today. The cooling of the oceans towards the modern situation started about 4 million years ago, but a new study now shows that the Nordic Seas cooled 500,000 years earlier.
Stijn De Schepper, researcher at Uni Research and the Bjerknes Centre for Climate Research, has, together with colleagues from the University of Bergen, the Alfred Wegener Institute in Germany and the Korea Polar Research Institute, investigated the fossil remains of microscopic marine plankton, especially dinoflagellate cysts, in two sediment cores from the Norwegian Sea and the Iceland Sea.
"We see that the dinoflagellate cyst assemblages underwent fundamental changes around 4.5 million years ago. Together with the simultaneous first occurrence of cool-water Pacific mollusks in Iceland, our results demonstrate that the Nordic Seas cooled significantly", De Schepper says.
Major ocean current changes
This new study and the earlier work on migration of Pacific mollusks into the Nordic Seas suggest that the Bering Strait was open at this time, and that cool water from the Pacific flowed into the Arctic. This cool water flowed southwards along East Greenland and into the Nordic Seas, where we started to see the same temperature and circulation pattern as we have today.
Today, the Nordic Seas surface waters are characterised by an east-west temperature gradient. The southernmost part of Greenland is at the same latitude as Bergen and Oslo in Norway, but the climate in Greenland is much cooler. The warm water near Scandinavia is brought northwards via the Norwegian Atlantic Current, a continuation of the North Atlantic Current, and is today responsible for the mild winter climate along the coast of Norway. Along east Greenland a cold water current known as the East Greenland Current flows southward and transports the major part of all exported Arctic sea ice.
Thermal isolation of Greenland
"Our study shows that a surface water temperature gradient was only established since 4.5 million years ago, when warm waters continued to flow along the Scandinavian coast and cool water entered the Nordic Seas along Greenland's east coast", De Schepper says.
In the early Pliocene, the ice cap on Greenland was restricted to mountain glaciers. The cool surface water that arrived in the western Nordic Seas from 4.5 million years ago isolated Greenland from the warmer water in the eastern Nordic Seas, likely leading to cooler temperatures in Greenland and the expansion of the Greenland ice sheet in the late Pliocene.
###
Timeline: https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?source=1lR3QCd_0yhMBltHHeiJsiJVcNuMMadP1fk2l-X4ZfDg&font=Default&lang=en&initial_zoom=2&height=800
Reference: De Schepper, S., Schreck, M., Beck, K.M., Matthiessen, J., Fahl, K. & Mangerud, G.
Early Pliocene onset of modern Nordic Seas circulation related to ocean gateway changes.
Nature Communications 6:8659, 10.1038/ncomms9659
|
10.1038/ncomms9659
| 2,015 |
Nature Communications
|
Early Pliocene onset of modern Nordic Seas circulation related to ocean gateway changes
|
The globally warm climate of the early Pliocene gradually cooled from 4 million years ago, synchronous with decreasing atmospheric CO2 concentrations. In contrast, palaeoceanographic records indicate that the Nordic Seas cooled during the earliest Pliocene, before global cooling. However, a lack of knowledge regarding the precise timing of Nordic Seas cooling has limited our understanding of the governing mechanisms. Here, using marine palynology, we show that cooling in the Nordic Seas was coincident with the first trans-Arctic migration of cool-water Pacific mollusks around 4.5 million years ago, and followed by the development of a modern-like Nordic Seas surface circulation. Nordic Seas cooling precedes global cooling by 500,000 years; as such, we propose that reconfiguration of the Bering Strait and Central American Seaway triggered the development of a modern circulation in the Nordic Seas, which is essential for North Atlantic Deep Water formation and a precursor for more widespread Greenland glaciation in the late Pliocene.
|
816234
|
What do breast cancer cells feel inside the tumour?
|
Using a new technique, a team of McGill University researchers has found tiny and previously undetectable 'hot spots' of extremely high stiffness inside aggressive and invasive breast cancer tumours. Their findings suggest, for the first time, that only very tiny regions of a tumour need to stiffen for metastasis to take place. Though the technique is still in its infancy, the researchers believe it may prove useful in detecting and mapping the progression of aggressive cancers.
"We are now able to see these features because our approach allows us to take measurements within living, intact, 3D tissues," says Chris Moraes, from McGill University's Department of Chemical Engineering, a Canada Research Chair and senior author on a recent research paper in Nature Communications. "When tissue samples are disrupted in any way, as is normally required with standard techniques, signs of these 'hot spots' are eliminated."
"Smart" hydrogels provide information about cancer progression
The researchers built tiny hydrogel sensors that can expand on demand, much like inflating balloons the size of individual cells, and placed them inside 3D cultures and mouse models of breast cancer. When triggered, the expansion of the hydrogel can be used to measure very local stiffness inside the tumour.
This unusual technique, developed through a collaboration between McGill's Department of Chemical Engineering and the Rosalind and Morris Goodman Cancer Research Centre at McGill, allows the researchers to sense, from the perspective of a cancer cell, what is going on in their surrounding environment.
What cells sense drives their behaviour
"Human cells are not static. They grab and pull on the tissue around them, checking out how rigid or soft their surroundings are. What cells feel around them typically drives their behaviour: immune cells can activate, stem cells can become specialized, and cancer cells can become dangerously aggressive," explains Moraes. "Breast cancer cells usually feel surroundings that are quite soft. However, we found that cancer cells inside aggressive tumours experienced much harder surroundings than previously expected, as hard as really old and dried up gummy bears."
The researchers believe that their findings suggest new ways in which cell mechanics, even at the early stages of breast cancer, might affect disease progression.
"Developing methods to analyze the mechanical profiles in 3D tissues may better predict patient risk and outcome," says Stephanie Mok, the first author on the paper and a PhD candidate in the Department of Chemical Engineering. "Whether these 'hot spots' of stiffness are really causing cancer progression rather than simply being correlated with it remains an open, but critically important question to resolve."
|
10.1038/s41467-020-18469-7
| 2,020 |
Nature Communications
|
Mapping cellular-scale internal mechanics in 3D tissues with thermally responsive hydrogel probes
|
Local tissue mechanics play a critical role in cell function, but measuring these properties at cellular length scales in living 3D tissues can present considerable challenges. Here we present thermoresponsive, smart material microgels that can be dispersed or injected into tissues and optically assayed to measure residual tissue elasticity after creep over several weeks. We first develop and characterize the sensors, and demonstrate that internal mechanical profiles of live multicellular spheroids can be mapped at high resolutions to reveal broad ranges of rigidity within the tissues, which vary with subtle differences in spheroid aggregation method. We then show that small sites of unexpectedly high rigidity develop in invasive breast cancer spheroids, and in an in vivo mouse model of breast cancer progression. These focal sites of increased intratumoral rigidity suggest new possibilities for how early mechanical cues that drive cancer cells towards invasion might arise within the evolving tumor microenvironment.
|
956906
|
Targeting a human protein to squash SARS-CoV-2 and other viruses
|
More than two years into the COVID-19 pandemic, people are realizing that the “new normal” will probably involve learning to co-exist with SARS-CoV-2. Some treatments are available, but with new variants emerging, researchers are looking toward new strategies. In ACS Infectious Diseases, scientists now report that apratoxin S4, an anticancer drug candidate that targets a human protein, can interfere with the replication of many viruses, including SARS-CoV-2 and influenza A, offering a possible pan-viral therapy.
Although COVID-19 vaccines exist, some people who received the shots have still become sick with the disease, and only a fraction of the world’s population is vaccinated. That means treatments are still needed, and a few are now available that target the virus’s RNA polymerase — the enzyme it uses to make more of its own RNA inside human cells. But some of these drugs, such as remdesivir, don’t work unless given at very early stages and can require injections.
In the hunt for new ways to treat COVID-19, various teams have revisited drugs that are already known to fight other diseases, a strategy called “repurposing.” One such preclinical stage compound is apratoxin S4 (Apra S4), which is a molecule based on a natural product that has anti-cancer activity. Previous studies have shown that apratoxins can target a human protein called Sec61, which ensures that certain proteins are properly glycosylated and folded correctly. Since viruses don’t have their own machinery to do this, they hijack the process and force human cells to make functional viral proteins. Sec61 is essential for the influenza A, HIV and dengue viruses to cause infection, so Hendrik Luesch and colleagues wondered if apratoxins could be a broadly effective, pan-viral medication that could also combat SARS-CoV-2.
In tests with monkey and human cells exposed to SARS-CoV-2, the researchers found that treatment with Apra S4 reduced the number of infected cells compared with remdesivir treatment. The molecule was also effective against influenza A, Zika virus, dengue and West Nile virus infections. Further testing revealed that Apra S4 didn’t prevent SARS-CoV-2 from entering cells, but it reduced the amount of viral protein that was produced and transported in cells, especially the spike protein, and it decreased viral RNA replication. With electron microscopy, the team observed that Apra S4 also largely blocked the formation of new viruses, with many vesicles in SARS-CoV-2-exposed monkey cells having no or very few brand-new viral particles in them. The researchers say more studies are needed, but these results suggest that Apra S4 and other inhibitors of the human Sec61 protein are broadly acting antivirals that could help in the fight against future pandemics.
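As background on how "concentration-dependent" inhibition and potency figures such as subnanomolar activity are typically derived, the sketch below fits a standard four-parameter Hill (dose-response) curve and reads off an EC50. The data points, names and values are hypothetical placeholders, not measurements or code from the study.

```python
# Generic dose-response fitting sketch: estimate an EC50 from % infected cells
# versus drug concentration. All data below are invented for illustration and
# are NOT the Apra S4 measurements reported in the paper.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc_nm, ec50, slope, top, bottom):
    """Four-parameter log-logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc_nm / ec50) ** slope)

conc_nm = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])        # hypothetical, nM
infected_pct = np.array([98, 92, 70, 38, 15, 6, 3], dtype=float)  # hypothetical, % infected

params, _ = curve_fit(hill, conc_nm, infected_pct, p0=[0.2, 1.0, 100.0, 0.0])
ec50_nm = params[0]
print(f"Estimated EC50 ≈ {ec50_nm:.2f} nM")  # sub-nanomolar in this toy data set
```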
The authors acknowledge funding from the National Institutes of Health, the Debbie and Sylvia DeSantis Chair professorship, the Department of Defense, the Dengue Human Immunology Project Consortium, philanthropic donations, JPB Foundation, the Open Philanthropy Project and the Swiss National Science Foundation.
ACS Infectious Diseases
10.1021/acsinfecdis.2c00008
Sec61 Inhibitor Apratoxin S4 Potently Inhibits SARS-CoV‑2 and Exhibits Broad-Spectrum Antiviral Activity
29-Jun-2022
|
10.1021/acsinfecdis.2c00008
| 2,022 |
ACS Infectious Diseases
|
Sec61 Inhibitor Apratoxin S4 Potently Inhibits SARS-CoV-2 and Exhibits Broad-Spectrum Antiviral Activity
|
There is a pressing need for host-directed therapeutics that elicit broad-spectrum antiviral activities to potentially address current and future viral pandemics. Apratoxin S4 (Apra S4) is a potent Sec61 inhibitor that prevents cotranslational translocation of secretory proteins into the endoplasmic reticulum (ER), leading to anticancer and antiangiogenic activity both in vitro and in vivo. Since Sec61 has been shown to be an essential host factor for viral proteostasis, we tested Apra S4 in cellular models of viral infection, including SARS-CoV-2, influenza A virus, and flaviviruses (Zika, West Nile, and Dengue virus). Apra S4 inhibited viral replication in a concentration-dependent manner and had high potency particularly against SARS-CoV-2 and influenza A virus, with subnanomolar activity in human cells. Characterization studies focused on SARS-CoV-2 revealed that Apra S4 impacted a post-entry stage of the viral life-cycle. Transmission electron microscopy revealed that Apra S4 blocked formation of stacked double-membrane vesicles, the sites of viral replication. Apra S4 reduced dsRNA formation and prevented viral protein production and trafficking of secretory proteins, especially the spike protein. Given the potent and broad-spectrum activity of Apra S4, further preclinical evaluation of Apra S4 and other Sec61 inhibitors as antivirals is warranted.
|
658538
|
Asthma increases risk of complications during pregnancy and delivery
|
Women with asthma suffer more often from preeclampsia (PE) and run a higher risk of giving birth to underweight babies. These and other complications during pregnancy and delivery cannot be explained by hereditary or environmental factors, according to a study from Karolinska Institutet published in The Journal of Allergy and Clinical Immunology: In Practice.
Asthma is a common disease caused by chronic inflammation in the lungs, with symptoms of coughing and breathlessness, and affects between 8 and 10 percent of women of childbearing age in Sweden.
Using data from the Swedish birth, prescribed drug and patient registers, researchers at Karolinska Institutet have been able to examine the link between asthma in pregnant women and pregnancy/delivery outcomes. Studying more than 1 million births to just over 700,000 women between 2001 and 2013, they found that 10 percent of the babies born had a mother with asthma.
"Four percent of all pregnant women develop preeclampsia. We found that the risk of preeclampsia is 17 percent higher in women with asthma compared to women without asthma", says the study's lead author Dr Gustaf Rejnö, obstetrician and doctoral student at Karolinska Institutet's Department of Medical Epidemiology and Biostatistics.
Additionally, women with asthma were more likely to have underweight babies, instrumental deliveries, caesarean sections and shorter pregnancies.
To ascertain whether the complications could be attributed to hereditary or environmental factors, the researchers also identified the women's asthma-free cousins and sisters who had given birth during the same period. On comparing the groups they found that the correlations between maternal asthma and complications during pregnancy and delivery held.
"It seems to be the asthma per se that causes these complications," says Dr Rejnö. "This means that well-controlled asthma during pregnancy could reduce the relative incidence of complications during pregnancy and childbirth. In an earlier study we saw that this was indeed the case."
|
10.1016/j.jaip.2017.07.036
| 2,017 |
The Journal of Allergy and Clinical Immunology In Practice
|
Adverse Pregnancy Outcomes in Asthmatic Women: A Population-Based Family Design Study
|
Asthma is associated with several adverse pregnancy and perinatal outcomes. Familial factors may confound these associations. To examine the role of measured and unmeasured confounding by conducting a study that compared differentially exposed cousins and siblings from the same families. We retrieved data on adverse pregnancy outcomes, prescribed drugs, and physician-diagnosed asthma from nationwide registers for all women in Sweden with singleton births between 2001 and 2013. Logistic and linear regression estimated the association between maternal asthma and several outcomes in the whole population and within differently exposed pregnant relatives. In total, 1,075,153 eligible pregnancies were included and 10.1% of the study population had asthma. We identified 475,200 cousin and 341,205 sister pregnancies. Women with asthma had increased risks for preeclampsia (adjusted odds ratio [aOR], 1.17; 95% CI, 1.13-1.21), emergency cesarean section (aOR, 1.24; 95% CI, 1.22-1.27), and having a child small for gestational age (aOR, 1.18; 95% CI, 1.12-1.23). In the conditional regression analyses, after adjustment for familial factors, the associations remained: preeclampsia in cousins (aOR, 1.16; 95% CI, 1.07-1.25) and siblings (aOR, 1.23; 95% CI, 1.08-1.38), emergency cesarean section in cousins (aOR, 1.28) and siblings (aOR, 1.21), and small for gestational age in cousins (aOR, 1.17) and siblings (aOR, 1.13). Factors shared by siblings and cousins do not seem to explain the observed association between maternal asthma and adverse pregnancy outcomes. This implies that targeting the asthma disease will continue to be important in reducing risks for adverse outcomes in pregnancy.
|
930049
|
How a committed minority can change society
|
Over the last year, handshakes have been replaced by fist or elbow bumps as a greeting. It shows that age-old social conventions can not only change, but do so suddenly. But how does this happen? Robotic engineers and marketing scientists from the University of Groningen joined forces to study this phenomenon, combining online experiments and statistical analysis into a mathematical model that shows how a committed minority can influence the majority to overturn long-standing practices. The results, which were published in Nature Communications on 29 September, may help to stimulate sustainable behaviour.
How does complex human behaviour take shape? This is studied in many ways, mostly relying on lots of data from observations and experiments. Ming Cao, Professor of Networks and Robotics at the Faculty of Science and Engineering at the University of Groningen, has studied complex group behaviour in robots by using agent-based simulations, among other methods. These agents follow a limited number of simple rules, often inspired by nature, which can lead to realistic complex behaviour. ‘Swarming birds or schools of fish are a good example’, Cao explains, ‘their movements can be reproduced by agents that follow a few simple rules on keeping a certain distance and heading in the same direction as their neighbours.’
Game
In parallel, the Marketing research group at the Faculty of Economics and Business, led by Dr Jan Willem Bolderdijk, Dr Hans Risselada, and Prof. Bob Fennis, has carried out various research projects into human behaviour, but not many using these kinds of agent-based models. After a discussion with Cao and his colleagues, both groups saw possibilities for such models. Consequently, marketing PhD student Zan Mlakar and the two postdoctoral researchers in Cao’s group, Mengbin Ye and Lorenzo Zino, worked together to create an online experiment to gather data on the social diffusion of new behavioural trends.
They developed an online game in which 12 participants act as board members of a company that plans to launch one of two potential products. The participants have to vote on which product to launch. The catch is that the decision has to be taken unanimously. The participants cannot discuss their choice, they vote in 24 consecutive rounds, and they only see the distribution of votes at the end of each round. If unanimity is reached, the participants receive a reward.
Rules
Unknown to the participants, between two and four participants in each group studied were computer bots, programmed to stick to their choice. ‘If the majority voted for product A in the first round, the bots were set to vote for B to try and overturn the majority’, explains Ye, who now works as Senior Research Fellow at Curtin University in Australia. Meanwhile, the votes of the human participants were registered over all the rounds. The vast majority of the more than 20 games played ended in a unanimous vote, with humans eventually siding with the bots to vote for product B. The results of all the games were then analysed to look for patterns in the voting decisions of the human participants.
Ye: ‘In quite a few cases, we saw a delay before the votes started changing, but when they did, the group would reach unanimity in just a few voting rounds.’ The overall voting behaviour could be reproduced in an agent-based model with three simple rules: do as the majority does, stick to your previous decision, and follow the trend. ‘These rules are acknowledged in the literature as group coordination, inertia, and trend-seeking’, explains Ye. ‘They have been separately studied in human behaviour, but never combined in one model; this combination was critical in capturing social change.’
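The paragraph above names the three rules the model combines; the sketch below is a toy agent-based illustration of how they can interact, with 12 voters, a committed minority of bots and 24 voting rounds as in the experiment. The weights, the logistic choice rule and the seed handling are assumptions of this sketch, not the published model.

```python
# Toy agent-based sketch of the voting experiment: 12 voters, a committed
# minority of bots that always vote "B", and human agents mixing the three
# documented tendencies (majority-matching, inertia, trend-seeking).
# The weights and update rule are illustrative assumptions, not the paper's model.
import math
import random

N_VOTERS, N_BOTS, N_ROUNDS = 12, 3, 24
W_MAJORITY, W_INERTIA, W_TREND = 1.0, 0.8, 2.0   # assumed relative strengths

def simulate(seed=0):
    rng = random.Random(seed)
    votes = ["A"] * (N_VOTERS - N_BOTS) + ["B"] * N_BOTS  # bots sit at the end
    prev_share_b = votes.count("B") / N_VOTERS
    history = [votes.count("B")]
    for _ in range(N_ROUNDS - 1):
        share_b = votes.count("B") / N_VOTERS
        trend = share_b - prev_share_b                     # population-level change
        new_votes = list(votes)
        for i in range(N_VOTERS - N_BOTS):                 # only humans reconsider
            score_b = (W_MAJORITY * (share_b - 0.5)                      # follow the majority
                       + W_INERTIA * (0.5 if votes[i] == "B" else -0.5)  # stick with own choice
                       + W_TREND * trend)                                # follow the trend
            p_b = 1.0 / (1.0 + math.exp(-4.0 * score_b))   # noisy logistic choice
            new_votes[i] = "B" if rng.random() < p_b else "A"
        prev_share_b, votes = share_b, new_votes
        history.append(votes.count("B"))
    return history

print(simulate())  # B-votes per round: runs where a few early flips occur tend to snowball
```

Changing N_BOTS or W_TREND in this toy model gives a feel for how a committed minority plus trend-seeking can produce the delay-then-cascade pattern described in the next paragraph.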
The results of the experiments and the simulations show that new conventions can suddenly arise when the influence of a committed minority reaches a threshold. A small group of ‘activists’ can therefore change social conventions. Cao: ‘However, this only happens if the minority is also able to influence others in their network. And this depends on the amount of risk-taking present among the other voters.’ The team are now interested in exploring what might enhance or inhibit this risk-taking behaviour. ‘We now have a solid framework and a model, which can be used to examine environmental factors that might make people have greater inertia, or be more susceptible to trends’, says Ye.
The three basic rules could help in steering the behaviour of large groups. ‘Of course, we can’t control people’, stresses Cao. ‘But we can provide guidelines, for example on how to nudge people to change their behaviour.’ This could be useful in the energy transition, or in getting people to reduce their meat consumption. ‘Governments already spend money to convince people to adopt more sustainable behaviour. Our research can help them to spend it in a more effective way.’
Reference: Mengbin Ye, Lorenzo Zino, Zan Mlakar, Jan Willem Bolderdijk, Hans Risselada, Bob M. Fennis and Ming Cao: Collective patterns of social diffusion are shaped by individual inertia and trend-seeking. Nature Communications, 29 September 2021
Nature Communications
10.1038/s41467-021-25953-1
Experimental study
People
Collective patterns of social diffusion are shaped by individual inertia and trend-seeking.
29-Sep-2021
none
|
10.1038/s41467-021-25953-1
| 2,021 |
Nature Communications
|
Collective patterns of social diffusion are shaped by individual inertia and trend-seeking
|
Abstract Social conventions change when individuals collectively adopt an alternative over the status quo, in a process known as social diffusion. Our repeated trials of a multi-round experiment provided data that helped motivate the proposal of an agent-based model of social diffusion that incorporates inertia and trend-seeking, two behavioural mechanisms that are well documented in the social psychology literature. The former causes people to stick with their current decision, the latter creates sensitivity to population-level changes. We show that such inclusion resolves the contradictions of existing models, allowing the model to reproduce patterns of social diffusion which are consistent with our data and existing empirical observations at both the individual and population level. The model reveals how the emergent population-level diffusion pattern is critically shaped by the two individual-level mechanisms; trend-seeking guarantees the diffusion is explosive after the diffusion process takes off, but inertia can greatly delay the time to take-off.
|
855356
|
New magnetically controlled thrombolytic successfully passed preclinical testing
|
A new anti-thrombosis drug based on magnetite nanoparticles, developed at ITMO University, has been successfully tested on animals. Preclinical studies conducted as part of the "PHARMA 2020" project showed the drug's high efficacy and absence of side effects. The new drug dissolves clots 20 times faster than traditional medications. The range of permissible concentrations is very wide, and the minimum dose of the active substance required to achieve the effect was a hundred times smaller than usual. The results are published in ACS Applied Materials & Interfaces.
Thrombosis-related conditions currently remain the leading cause of death. There are two conventional treatments: surgery, which requires a complex, high-risk operation, or thrombolytic drugs. Although this class of drugs appeared about 40 years ago, it has so far failed to become widespread because of the side effects that occur during systemic use. To avoid these effects, the action of thrombolytics should be localized, meaning that the drug must be delivered directly to the clot. To do this, scientists use magnetic nanoparticles.
Researchers at ITMO University developed thrombolytics based on magnetite nanoparticles coated with heparin and urokinase. Magnetite is a biocompatible iron oxide with pronounced magnetic properties, so the movement of magnetite particles can be controlled by a magnetic field. Urokinase is a first-generation thrombolytic: a simple molecule that is affordable and almost as effective as newer drugs. As soon as the nanoparticles with urokinase are injected into the blood, they can be directed to the site of clot formation using a magnetic field. Once the clot is destroyed, the magnetic field is turned off, and the nanoparticles are redistributed to the liver and spleen and gradually removed.
"We initially focused on simple and inexpensive substances to make the final product affordable. Since urokinase and magnetite are equally charged, we had to use a linker. We choose heparin, an anticoagulant that often comes with thrombolytics in order to thin the blood. Typically, heparin inhibits urokinase, but we managed to avoid this effect. Preclinical trials showed that we also managed to achieve high efficiency and minimize side effects," comments Arthur Prilepskii, member of SCAMT Laboratory of ITMO University.
The new drug successfully passed preclinical studies including toxicity, allergenicity, mutagenicity, and immunotoxicity tests. No side effects were identified during the animal experiments. At the same time, the range of permissible drug concentrations turned out to be very wide, while the minimum dose of urokinase needed to achieve a therapeutic effect was approximately two orders of magnitude lower than with conventional urokinase administration. Moreover, the clot dissolution time was 20 times shorter.
"Preclinical trials were conducted as part of the Pharma 2020 project. The project included 3 stages for 2 years, during which the synthesis of the drug was optimized. Moreover, we carefully studied chemical characteristics efficacy and safety of the new medication," notes Anna Fakhardo, researcher at SCAMT Laboratory of ITMO University.
###
Reference:
Urokinase-Conjugated Magnetite Nanoparticles as a Promising Drug Delivery System for Targeted Thrombolysis: Synthesis and Preclinical Evaluation
Artur Prilepskii et al. ACS Appl. Mater. Interfaces
https://pubs.acs.org/doi/10.1021/acsami.8b14790
|
10.1021/acsami.8b14790
| 2,018 |
ACS Applied Materials & Interfaces
|
Urokinase-Conjugated Magnetite Nanoparticles as a Promising Drug Delivery System for Targeted Thrombolysis: Synthesis and Preclinical Evaluation
|
Mortality and disabilities as outcomes of cardiovascular diseases are primarily related to blood clotting. Optimization of thrombolytic drugs is aimed at the prevention of side effects (in particular, bleeding) associated with a disbalance between coagulation and anticoagulation caused by systemically administered agents. Minimally invasive and efficient approaches to deliver the thrombolytic agent to the site of clot formation are needed. Herein, we report a novel nanocomposite prepared by heparin-mediated cross-linking of urokinase with magnetite nanoparticles (MNPs@uPA). We showed that heparin within the composition evoked no inhibitory effects on urokinase activity. Importantly, the magneto-control further increased the thrombolytic efficacy of the composition. Using our nanocomposition, we demonstrated efficient lysis of experimental clots in vitro and in animal vessels followed by complete restoration of blood flow. No sustained toxicity or hemorrhagic complications were registered in rats and rabbits after single bolus i.v. injection of therapeutic doses of MNPs@uPA. We conclude that MNPs@uPA is a prototype of easy-to-prepare, inexpensive, biocompatible, and noninvasive thrombolytic nanomedicines potentially useful in the treatment of blood clotting.
|
466781
|
Statement advising caution on interpretation of recent paper on cancer risk & hyperthyroidism issued
|
Caution is advised in interpreting the findings of the recent JAMA Internal Medicine publication on radioactive iodine treatment for hyperthyroid patients and cancer mortality. The paper's conclusion that "in RAI-treated patients with hyperthyroidism, greater organ-absorbed doses appeared to be modestly positively associated with risk of death from solid cancer, including breast cancer", has raised concerns among patients and clinicians.
To help address the concerns of patients and clinicians, the Society for Endocrinology and British Thyroid Association have issued a statement indicating that caution is needed in interpreting these findings. Although this retrospective analysis of data from the large multicentre Cooperative Thyrotoxicosis Therapy Follow-up study does suggest a modest increase in potential risk of death from cancer in people who receive radioiodine therapy for hyperthyroidism, there are some limitations to take into account. Our statement highlights these caveats and advises that more research is needed.
"Radioiodine is a very effective treatment for hyperthyroidism and has been used successfully for more than 70 years. The recent JAMA Internal Medicine article has raised concerns for health care practitioners and patients. We felt that the findings of this paper need to be interpreted in the right context and that continued surveillance of patients who have been treated with radioiodine is required." The Society for Endocrinology Clinical Committee & the British Thyroid Association Executive Committee
|
10.1111/cen.14136
| 2,019 |
Clinical Endocrinology
|
Joint statement from the Society for Endocrinology and the British Thyroid Association regarding ‘Association of Radioactive Iodine Treatment with cancer mortality in patients with hyperthyroidism’
|
Recent observations have shown the importance of achieving good control of hyperthyroidism in a timely fashion to improve long-term cardiovascular and mortality outcomes [15, 16]. In this context, it would be unfortunate if patients were deprived of the option of rapid, effective control of their hyperthyroidism with radioiodine, due to concerns of cancer risk. Overall, on the basis that current evidence shows no excess cancer risk, it would be reasonable to continue with current approaches to the management of hyperthyroidism, whilst further, appropriately controlled studies are undertaken. We believe that long-term monitoring of outcomes, including cancer mortality risk, is essential for patients who have undergone radioiodine therapy. We endorse and would actively support efforts to construct large national databases of radioiodine-treated hyperthyroid patients to assess such outcomes.
|
814587
|
Marmosets serve as an effective model for non-motor symptoms of Parkinson's disease
|
San Antonio, Texas (September 5, 2018) - Small, New World monkeys called marmosets can mimic the sleep disturbances, changes in circadian rhythm, and cognitive impairment people with Parkinson's disease develop, according to a new study by scientists at Texas Biomedical Research Institute.
By developing an effective animal model that can emulate both the motor and non-motor symptoms of Parkinson's disease, scientists have a better chance of understanding the molecular mechanisms of the neuro-circuitry responsible for changes in the brain during the course of the disease. Scans like magnetic resonance imaging (MRIs) and analysis after dissections may lead to potential targets for new therapies for patients.
Associate Scientist Marcel Daadi, Ph.D., leader of the Regenerative Medicine and Aging Unit at the Southwest National Primate Research Center on the Texas Biomed campus, is the lead author of the study that tracked marmosets using devices around their necks similar to Fitbits humans use to track their activity and sleep. The study was published in a recent edition of the journal PLOS ONE. In the case of the tiny monkeys, investigators wanted to see if the marmosets with induced classic Parkinson's motor symptoms - like tremors - could also serve as an effective model for non-motor symptoms. In addition, scientists videotaped the animals to monitor their ability to perform certain tasks and how those abilities were impacted over time by the disease.
"Most of the early studies in Parkinson's have been conducted with rodents," Dr. Daadi explained, "but there are some complex aspects of this disease you simply cannot investigate using rodents in a way that is relevant to human patients. Nonhuman primates are critical in his aspect because we can see these symptoms clearly whether it is the dyskinesia (abnormality or impairment of voluntary movements), or the sleep disturbances that you can monitor or the fine motors skills."
Parkinson's disease affects a million people in the United States and 10 million people worldwide. With the aging population, the incidence of the neurodegenerative disorder is on the rise; some 60,000 people are diagnosed with Parkinson's each year in the U.S. alone. The hallmark symptoms of Parkinson's include tremors, slow movements, balance problems and rigid or stiff muscles. However, non-motor symptoms, including disorders of the sleep-wake cycle and problems thinking clearly, can be just as difficult for patients to handle.
"This study is a great first step," Dr. Daadi stated. "More studies are needed to expand on these non-motor symptoms in marmosets in the longer-term, and perhaps, include other nonhuman primates at the SNPRC like macaques and baboons."
###
Dr. Daadi's work on this study was supported by the Worth Family Fund, The Perry & Ruby Stevens Charitable Foundation, The Robert J. Kleberg, Jr., and Helen C. Kleberg Foundation, by the Southwest National Primate Research Center grant P51 OD011133 from the Office of Research Infrastructure Programs, National Institutes of Health, and the National Center for Advancing Translational Sciences, National Institutes of Health, through Grant UL1 TR001120 (M.M.D.).
|
10.1371/journal.pone.0202770
| 2,018 |
PLoS ONE
|
Charting the onset of Parkinson-like motor and non-motor symptoms in nonhuman primate model of Parkinson’s disease
|
Parkinson's disease is a progressive neurodegenerative disease increasingly affecting our aging population. Remarkable advances have been made in developing novel therapies to control symptoms, halt or cure the disease, ranging from physiotherapy and small molecules to cell and gene therapy. This progress was enabled by the existence of reliable animal models. The nonhuman primate model of Parkinson's disease emulates the cardinal symptoms of the disease, including tremor, rigidity, bradykinesia, postural instability, freezing and cognitive impairment. However, this model is established through the specific loss of midbrain dopaminergic neurons, while our current knowledge reflects the reality of Parkinson's disease as a multisystem disease. Parkinson's disease involves both motor and non-motor symptoms, such as sleep disturbance, olfaction, gastrointestinal dysfunctions, depression and cognitive deficits. Some of the non-motor symptoms emerge earlier at the prodromal phase and worsen with disease progression, yet in basic and translational studies, they are rarely considered as endpoints. In this study, we set to characterize an ensemble of less described motor and non-motor dysfunctions in the marmoset MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine) model. We provide evidence that this animal model expresses postural head tremor and a progressive worsening of fine motor skills, movement coordination and cognitive abilities over a 6-month period. We report for the first time a non-invasive approach showing detailed analysis of daytime and nighttime sleep and circadian rhythm disturbance remarkably similar to Parkinson's disease patients. This study describes the incidence of tremors, motor and non-motor dysfunctions in a preclinical model and highlights the need for their consideration in translating effective new therapeutic approaches for Parkinson's disease.
|
941281
|
Study shows surprising breadth of plant viruses that hitchhike on pollen
|
We rely on pollinators like honeybees for all sorts of different crops. But that same flexibility could put plants at risk of disease, according to new Pitt research.
In the first study to take a broad look at virus hitchhikers on pollen grains, Pitt biologists show that a variety of viruses travel on pollen — especially in areas close to agriculture and human development where honeybees dominate.
“Our understanding of viruses on pollen at large was nonexistent before this study,” said Department of Biological Sciences Distinguished Professor Tia-Lynn Ashman in the Kenneth P. Dietrich School of Arts & Sciences. “Most of what we know about plant viruses comes from agricultural species that are obviously sick. We just didn’t really have any idea what was out there."
Since most prior research focused on just a small handful of viruses, the team didn’t know what to expect on their search, or even whether to expect much at all.
“That was one of our questions,” Ashman said. “Do we not know much about these viruses because there aren’t many out there, or because we just don’t know how to look at them?"
By sequencing the genetic material present on the pollen grains of 24 plant species across the U.S., the group found signs of many of the plant viruses already shown to travel on pollen — along with six new species, three new variants of known species and the incomplete traces of more than 200 more that have never before been identified.
The team, including Pitt biologist James Pipas, former Ph.D. student Andrea Fetters (A&S ‘21G) and Ph.D. student Amber Stanley, published their research in the journal Nature Communications on Jan. 26.
For viruses, the tiny, spiky vehicles for plant genetic material we know as pollen represent a convenient way to travel from host to host. Pollen is also a direct path to a plant’s reproductive organs, the one part of a plant where cells aren’t covered by a hard outer surface. In that way, it’s similar to how viruses invade our own bodies through our less-protected noses and mouths.
Ashman offered another analogy: “Pollinators are essentially the go-betweens for plant sex — since plants can’t get up and move to another plant, they rely on an intermediate,” she said. “So you can relate this to a sexually transmitted disease.”
Driving that point home, the researchers found that pollen produced by plants with more flowers that help them attract pollinators also harbored more kinds of viruses. The team also saw a wider variety of pollen-borne viruses in areas close to human habitation and agriculture. Ashman suspects one reason for this pattern may be honeybees: Since they visit a wide variety of flowers over a big area, they meet all the criteria to spread viruses. Native pollinators are far more specialized.
It’s a lesson not just for how we perform agriculture, but also for backyard beekeepers.
“Honeybees have superspreader potential,” Ashman said. “People think that doing beekeeping at home is helping pollinators. But when we do an activity like bringing honeybees into the city, we’re bringing everything that comes with them.”
Including, perhaps, all the viruses they pick up in their travels. As for what those viruses are doing — whether they’re harming pollinators and plants or paradoxically helping them — it’ll be up to future studies to determine. Regardless, the work shows yet another way humans can throw a wrench in the gears when we engineer ecosystems for our own benefit.
“It’s a cautionary story about how when we alter our environment, we’re potentially changing those viral-host interactions,” Ashman said. “All of these things are interconnected.”
|
10.1038/s41467-022-28143-9
| 2,022 |
Nature Communications
|
The pollen virome of wild plants and its association with variation in floral traits and land use
|
Abstract Pollen is a unique vehicle for viral spread. Pollen-associated viruses hitchhike on or within pollen grains and are transported to other plants by pollinators. They are deposited on flowers and have a direct pathway into the plant and next generation via seeds. To discover the diversity of pollen-associated viruses and identify contributing landscape and floral features, we perform a species-level metagenomic survey of pollen from wild, visually asymptomatic plants, located in one of four regions in the United States of America varying in land use. We identify many known and novel pollen-associated viruses, half belonging to the Bromoviridae, Partitiviridae, and Secoviridae viral families, but many families are represented. Across the regions, species harbor more viruses when surrounded by less natural and more human-modified environments than the reverse, but we note that other region-level differences may also covary with this. When examining the novel connection between virus richness and floral traits, we find that species with multiple, bilaterally symmetric flowers and smaller, spikier pollen harbored more viruses than those with opposite traits. The association of viral diversity with floral traits highlights the need to incorporate plant-pollinator interactions as a driver of pollen-associated virus transport into the study of plant-viral interactions.
|
852242
|
Searching for new bridge forms that can span further
|
Newly identified bridge forms could enable significantly longer bridge spans to be achieved in the future, potentially making a crossing over the Strait of Gibraltar, from the Iberian Peninsula to Morocco, feasible.
The bridge forms were identified using a new mathematical modelling technique for finding optimal forms for very long-span bridges. The research was published on 19 September 2018 in Proceedings of the Royal Society A.
A bridge's span is the distance of suspended roadway between towers, with the current world record standing at just under 2km. The most popular form for long spans is the suspension bridge form, as used for the Humber Bridge, though the cable-stayed bridge form, where cables directly connect the tower to the roadway - such as used in the recently constructed Queensferry Crossing in Scotland - is becoming increasingly popular.
As bridge spans become longer, a rapidly growing proportion of the structure is needed just to carry the bridge's own weight, rather than the traffic crossing it. This can create a vicious cycle: a relatively small increase in span requires use of significantly more material, leading to a heavier structure that requires yet more material to support it. This also sets a limit on how long a bridge span can be; beyond this limit a bridge simply cannot carry its own weight.
One option is to use stronger, lighter materials. However, steel remains the preferred choice because it is tough, readily available and relatively cheap. So the only other way to increase span is to change the bridge's design.
Professor Matthew Gilbert from the University of Sheffield, who led the research, said: "The suspension bridge has been around for hundreds of years and while we've been able to build longer spans through incremental improvements, we've never stopped to look to see if it's actually the best form to use. Our research has shown that more structurally efficient forms do exist, which might open the door to significantly longer bridge spans in the future."
The technique devised by the team draws on theory developed by Professor Gilbert's namesake, Davies Gilbert, who in the early 19th Century used mathematical theory to persuade Thomas Telford that the suspension cables in his original design for the Menai Strait bridge in North Wales followed too shallow a curve. He also proposed a 'catenary of equal stress' showing the optimal shape of a cable accounting for the presence of gravity loads.
By incorporating this early 19th century theory into a modern mathematical optimisation model, the team have identified bridge concepts that require the minimum possible volume of material, potentially making significantly longer spans feasible.
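To make the optimisation idea concrete, the classic "ground structure" approach can be written as a linear program: lay out a grid of candidate joints, connect every pair with a potential member, and ask a solver for the member forces that carry a given load with the least total material. The sketch below (Python with NumPy/SciPy) is a generic textbook version of that idea under a single point load; unlike the Sheffield model it ignores the structure's self-weight, and the grid, load and stress limit are arbitrary illustrative choices.
```python
# Generic ground-structure layout optimisation sketch (plastic design).
# This is NOT the authors' self-weight-aware formulation; the grid, load
# and stress limit are illustrative assumptions.
import itertools
import numpy as np
from scipy.optimize import linprog

nodes = np.array([[x, y] for y in range(2) for x in range(3)], dtype=float)
supports = {0, 2}                            # bottom corners fully fixed (assumed)
load_node, load = 4, np.array([0.0, -1.0])   # unit downward load at the top centre
sigma = 1.0                                  # limiting stress in tension/compression

# Candidate members: every pair of nodes ("ground structure").
members = list(itertools.combinations(range(len(nodes)), 2))
lengths = np.array([np.linalg.norm(nodes[j] - nodes[i]) for i, j in members])

# Nodal equilibrium matrix: two rows (x, y) per free node, one column per member.
free = [n for n in range(len(nodes)) if n not in supports]
row = {n: 2 * k for k, n in enumerate(free)}
B = np.zeros((2 * len(free), len(members)))
for m, (i, j) in enumerate(members):
    d = (nodes[j] - nodes[i]) / lengths[m]
    if i in row:
        B[row[i]:row[i] + 2, m] = d
    if j in row:
        B[row[j]:row[j] + 2, m] = -d

f = np.zeros(2 * len(free))
f[row[load_node]:row[load_node] + 2] = load

# Split each member force into tensile (q+) and compressive (q-) parts >= 0.
# Member volume is length * force / sigma, so minimise c @ [q+, q-] subject to
# equilibrium B (q+ - q-) = f.  (The sign convention only swaps which members
# end up in tension vs. compression; the optimal volume is unchanged.)
c = np.concatenate([lengths, lengths]) / sigma
res = linprog(c, A_eq=np.hstack([B, -B]), b_eq=f, bounds=(0, None), method="highs")
print("minimum material volume:", res.fun)
```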
The mathematically optimal designs contain regions which resemble a bicycle wheel, with multiple 'spokes' in place of a single tower. But these would be very difficult to build in practice at large scale. The team therefore replaced these with split towers comprising just two or three 'spokes' as a compromise that retains most of the benefit of the optimal designs, while being a little easier to construct.
For a 5 km span, which is likely to be required to build the 14 km Strait of Gibraltar crossing, a traditional suspension bridge design would require far more material, making it at least 73 per cent heavier than the optimal design. In contrast, the proposed two- and three-spoke designs would be just 12 and 6 per cent heavier, making them potentially much more economical to build.
The new bridge forms require less material principally because the forces from the deck are transmitted more efficiently through the bridge superstructure to the foundations. This is achieved by keeping the load paths short, and avoiding sharp corners between tensile and compressive elements.
The team emphasise that their research is just the first step, and that the ideas cannot be developed immediately for construction of a mega span bridge. The current model considers only gravity loads and does not yet consider dynamic forces arising from traffic or wind loading. Further work is also required to address construction and maintenance issues.
Co-author, Ian Firth, from COWI, said: "This is an interesting development in the search for greater material efficiency in the design of super-long span bridges. There is much more work to do, notably in devising effective and economic construction methods, but maybe one day we will see these new forms taking shape across some wide estuary or sea crossing."
|
10.1098/rspa.2017.0726
| 2,018 |
Proceedings of the Royal Society A Mathematical Physical and Engineering Sciences
|
Theoretically optimal forms for very long-span bridges under gravity loading
|
Long-span bridges have traditionally employed suspension or cable-stayed forms, comprising vertical pylons and networks of cables supporting a bridge deck. However, the optimality of such forms over very long spans appears never to have been rigorously assessed, and the theoretically optimal form for a given span carrying gravity loading has remained unknown. To address this we here describe a new numerical layout optimization procedure capable of intrinsically modelling the self-weight of the constituent structural elements, and use this to identify the form requiring the minimum volume of material for a given span. The bridge forms identified are complex and differ markedly to traditional suspension and cable-stayed bridge forms. Simplified variants incorporating split pylons are also presented. Although these would still be challenging to construct in practice, a benefit is that they are capable of spanning much greater distances for a given volume of material than traditional suspension and cable-stayed forms employing vertical pylons, particularly when very long spans (e.g. over 2 km) are involved.
|
570839
|
'Ideal biomarker' detects Alzheimer's disease before the onset of symptoms
|
Croatia, New Mexico (October, 2017): Absence of prefrontal activation during sensory gating of simple tones can detect Alzheimer's disease (AD) before the first symptoms occur. Sanja Josef Golubic, Ph.D., a physicist at the Department of Physics, Faculty of Science, University of Zagreb, describes this highly promising, completely non-invasive biomarker of AD pathology in a new study published in the journal Human Brain Mapping. Josef Golubic found a discrete, individual biomarker of AD with "ideal" properties.
Highlights of the new biomarker:
Absolutely non-invasive
Detects the illness before the occurrence of the first symptoms (preclinical)
Discrete: a prefrontal generator is either localized or not localized
Does not require estimation of uniform cut-off levels and standardization processes
Low sensitivity to individual heterogeneity and variability
Can follow the evolution of the pathophysiological process of AD
Individual
Topographic
The worldwide spread of Alzheimer's disease, a chronic and debilitating form of dementia, is one of the biggest global public health challenges facing this generation. A wealth of evidence accumulated over more than 110 years of research suggests that the pathological changes associated with AD start decades before the onset of clinical symptoms. This long progression of neurodegeneration, which is irreversible by the stage of symptomatic disease, may account for the failure to develop successful disease-modifying therapies. Currently, there is a pressing worldwide search for a marker of very early, possibly reversible, pathological changes related to AD in still cognitively intact individuals, before the first symptoms occur.
Reisa Sperling, chairman of the National Institute on Aging/Alzheimer's Association Workgroup on Preclinical AD and director of the Neuroimaging Program at Harvard Medical School, reviewing the extensive search for the biomarker of preclinical AD, emphasises: "An active line of research is the relationship of intrinsic neural networks and the 'topographic' evolution of the pathophysiological process of AD. It is possible, just as in real estate, that 'location, location, location' is key."*
Sanja Josef Golubic found the location of the key: it was hidden in the topography of the auditory sensory gating network. She uncovered a topological biomarker of preclinical and clinical AD pathology at the individual level that shows a large effect size (0.98) and high accuracy, sensitivity and specificity (100%) in identifying symptomatic AD patients within a research sample. The new biomarker does not require estimation of cut-off levels or standardization processes, which is the main problem with previously proposed AD markers. It is completely non-invasive, is not based on group means, and does not rely on statistically significant changes in a continuous variable. Its strength lies in the simplicity of using a binary value, i.e., whether a neural generator is activated or not. The low sensitivity to individual heterogeneity and variability that follows from this binary nature is probably the most important property of the proposed biomarker.
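For readers unfamiliar with the metrics quoted above, the arithmetic behind sensitivity and specificity for a binary (activated / not-activated) marker is straightforward; the sketch below uses invented labels, not data from the study.
```python
# Sensitivity and specificity for a binary biomarker; labels are invented.
def sensitivity_specificity(truth, marker):
    """truth: 1 = symptomatic AD, 0 = control; marker: 1 = generator absent."""
    tp = sum(t == 1 and m == 1 for t, m in zip(truth, marker))
    tn = sum(t == 0 and m == 0 for t, m in zip(truth, marker))
    fn = sum(t == 1 and m == 0 for t, m in zip(truth, marker))
    fp = sum(t == 0 and m == 1 for t, m in zip(truth, marker))
    return tp / (tp + fn), tn / (tn + fp)

truth  = [1, 1, 1, 0, 0, 0, 0, 0]     # invented example cohort
marker = [1, 1, 1, 0, 0, 0, 0, 0]     # marker matches diagnosis perfectly
print(sensitivity_specificity(truth, marker))   # -> (1.0, 1.0)
```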
"Three years ago we discovered the novel, third fast sensory processing pathway-gating loop, which directly links primary sensory areas to medial prefrontal cortex within first 80ms after auditory stimulation**. We provided strong evidences of the modulatory role of the medial prefrontal generator on the dynamics of generators in primary auditory cortices. We have also noticed the high sensitivity of the gating generators dynamic on AD pathology. It was inspiration to focus our AD biomarker search in the direction of prefrontal sensory gating generator activation", says Sanja Josef Golubic, who together with Cheryl Aine, Selma Supek, Julia Stephen, John Adair and Janice Knoefel form the international research team. The team was formed by the University of Zagreb, New Mexico University, Mind Research Network and New Mexico VA Healthcare System.
"In the present study, we demonstrate the use of the localization of neural sources underlying neuromagnetic fields measured outside a head to detect AD even before the onset of symptoms. The healthy controls activated a prefrontal generator in response to both the deviant and repeating tones of an oddball paradigm. To the contrary, the symptomatic AD group was lacking any medial prefrontal gating generator activation to either the deviant or repeating tones. However, we detected a sub-group of controls characterized by the absence of prefrontal gating generator activation for the repeating tone only and significantly lower scores on a mini mental status exam and delayed visual memory test - Rey-Osterreith Complex Figure Test. It is highly probable that these individuals were captured in a preclinical AD phase since they show both neuropsychological and neurophysiological impairments characteristic of an AD type of dementia, although they did not yet meet clinical criteria for the early phase of symptomatic AD", emphasises Josef Golubic.
The localization of a discrete prefrontal gating activation is a highly promising biomarker of Alzheimer's disease at the individual level, with the potential to follow the evolution of the pathophysiological process of the disease. The next steps in developing the biomarker include testing in large independent samples and assessment in longitudinal clinical studies. The large effect size, complete non-invasiveness and statistical independence, properties of an "ideal" biomarker, could speed this AD biomarker's path into clinical use.
###
Original publication:
"MEG biomarker of Alzheimer's disease: Absence of a prefrontal generator during auditory sensory gating",Sanja Josef Golubic, Cheryl J Aine, Julia M Stephen, John C Adair, Janice E Knoefel, Selma Supek. Hum Brain Mapp 38:5180-5194, (2017).
This work is funded by:
National Institutes of Health (NIH). Grant Numbers: R01 AG029495, R01 AG020302
Department of Energy. Grant Number: DE-FG02-99ER62764
National Center for Research Resources. Grant Number: 5P20RR021938
National Institute of General Medical Sciences. Grant Number: 8P20GM103472
Croatian Ministry of Science, Education and Sport. Grant Number: 199-1081870-1252
References:
*"The Evolution of Preclinical Alzheimer's Disease: Implications for Prevention Trials",
Reisa Sperling, Elizabeth Mormino, Keith Johnson. Neuron, 84(3): 608-622 (2014).
** "Modulatory role of the prefrontal generator within the auditory M50 network", Sanja Josef Golubic, Cheryl J Aine, Julia M Stephen, John C Adair, Janice E Knoefel, Selma Supek.Neuroimage 9: 120-131 (2014).
|
10.1002/hbm.23724
| 2,017 |
Human Brain Mapping
|
MEG biomarker of Alzheimer's disease: Absence of a prefrontal generator during auditory sensory gating
|
Magnetoencephalography (MEG), a direct measure of neuronal activity, is an underexplored tool in the search for biomarkers of Alzheimer's disease (AD). In this study, we used MEG source estimates of auditory gating generators, nonlinear correlations with neuropsychological results, and multivariate analyses to examine the sensitivity and specificity of gating topology modulation to detect AD. Our results demonstrated the use of MEG localization of a medial prefrontal (mPFC) gating generator as a discrete (binary) detector of AD at the individual level and resulted in recategorizing the participant categories in: (1) controls with mPFC generator localized in response to both the standard and deviant tones; (2) a possible preclinical stage of AD participants (a lower functioning group of controls) in which mPFC activation was localized to the deviant tone only; and (3) symptomatic AD in which mPFC activation was not localized to either the deviant or standard tones. This approach showed a large effect size (0.9) and high accuracy, sensitivity, and specificity (100%) in identifying symptomatic AD patients within a limited research sample. The present results demonstrate high potential of mPFC activation as a noninvasive biomarker of AD pathology during putative preclinical and clinical stages. Hum Brain Mapp 38:5180-5194, 2017. © 2017 Wiley Periodicals, Inc.
|
777390
|
Wrangling proteins gone wild
|
Proteins sometimes run amok. All the good stuff (the useful genetic and biological material) they contain can get distorted. Mutations in specific amino acids can cause long strands of proteins to curl in on themselves (like a ball of wool a cat has played with) and refuse to break apart. These strands, known as amyloid fibrils, are usually harmful and can be extremely toxic. They attach to organs like the brain and pancreas, preventing them from functioning as they should. They are responsible for diseases as seemingly different as diabetes and Alzheimer's, to name just a couple. Developing effective medications that treat these diseases and cause the fibrils to dissolve typically involves biochemists in a lengthy and expensive process of trial and error.
Billions of choices
But now McGill researchers, led by Prof. Jérôme Waldispühl of the School of Computer Science, have created a suite of computer programs that should speed up the process of drug discovery for diseases of this kind. The programs are designed to scan the fibrils (or misfolded proteins) looking for weak spots. The idea is to then design helpful genetic mutations to dissolve the bonds that hold the fibrils together - a bit like finding the right strand of wool to tug on to unravel a whole knotted ball. It's potentially a gargantuan task, because looking for the mutations that will prove useful in drug development involves exploring millions of possible structural combinations of genetic material.
But for the Fibrilizer, as McGill has dubbed its suite of computer tools, a name that hints at the superheroic nature of the programs, the task is of a very different order. "Within the space of a week, by using our programs and a supercomputer, we were able to look at billions of possible ways to weaken the bonds within these toxic protein strands. We narrowed it down to just 30 to 50 possibilities that can now be explored further," says Mohamed Smaoui, a McGill postdoctoral fellow and the first author on three recent papers on the research. "Typically biochemists can spend months or years in the lab trying to pinpoint these promising mutations."
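The scale of that search is easier to appreciate with a toy enumeration. The sketch below scans all substitutions at four positions of the 37-residue human amylin (IAPP) sequence, the same four sites mutated in the published analog (T9, L12, S28, T30), and ranks the variants with a placeholder score; the scoring function is a stand-in for illustration only and is not the Fibrilizer/SEMBA affinity scoring.
```python
# Toy combinatorial mutation scan over four sites of human amylin (IAPP).
# The scoring function is a placeholder, NOT the Fibrilizer/SEMBA scoring.
import itertools

PEPTIDE = "KCNTATCATQRLANFLVHSSNNFGAILSSTNVGSNTY"   # human amylin, 37 residues
SITES = [8, 11, 27, 29]          # 0-based indices of residues T9, L12, S28, T30
RESIDUES = "ACDEFGHIKLMNPQRSTVWY"

def toy_score(seq):
    """Placeholder score: penalise hydrophobic residues at the mutated sites."""
    hydrophobic = set("AVILMFW")
    return sum(seq[p] in hydrophobic for p in SITES)

ranked = []
for combo in itertools.product(RESIDUES, repeat=len(SITES)):   # 20**4 variants
    variant = list(PEPTIDE)
    for pos, aa in zip(SITES, combo):
        variant[pos] = aa
    variant = "".join(variant)
    ranked.append((toy_score(variant), variant))

ranked.sort()                    # lowest (best) placeholder score first
for score, seq in ranked[:5]:
    print(score, seq)
```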
Supercomputing to the rescue
The researchers tested their program on a medical compound that scientists have been trying to improve for the last couple of decades. The compound is administered as part of a drug that is used by diabetes patients to boost the performance of insulin and is sold under the name Symlin. The synthetic compound is based on a version of the protein amylin, yet is known to be toxic to the pancreas over the long term, creating amyloid fibrils. The McGill team were able to use Fibrilizer to pinpoint a limited number of possible genetic modifications to the compound that would act to reduce its toxicity.
Jérôme Waldispühl, the lead researcher on the papers, believes that computational research of this kind will play an increasingly important role in drug discovery in the future. "Computers are transforming the way that drugs are being developed," says Waldispühl. "Amyloid research has accelerated in the last 10 years. But computers may prove to be the key to finding better medications for a whole range of systemic and neurodegenerative diseases, from arthritis to Parkinson's. Without supercomputers and programs of this kind, it would take much longer and be much more expensive to do this kind of research and come up with these possible solutions to the problem."
|
10.1093/bioinformatics/btv143
| 2,015 |
Bioinformatics
|
Probing the binding affinity of amyloids to reduce toxicity of oligomers in diabetes
|
Abstract Motivation: Amyloids play a role in the degradation of β-cells in diabetes patients. In particular, short amyloid oligomers inject themselves into the membranes of these cells and create pores that disrupt the strictly controlled flow of ions through the membranes. This leads to cell death. Getting rid of the short oligomers either by a deconstruction process or by elongating them into longer fibrils will reduce this toxicity and allow the β-cells to live longer. Results: We develop a computational method to probe the binding affinity of amyloid structures and produce an amylin analog that binds to oligomers and extends their length. The binding and extension lower toxicity and β-cell death. The amylin analog is designed through a parsimonious selection of mutations and is to be administered with the pramlintide drug, but not to interact with it. The mutations (T9K L12K S28H T30K) produce a stable native structure, strong binding affinity to oligomers, and long fibrils. We present an extended mathematical model for the insulin–glucose relationship and demonstrate how affecting the concentration of oligomers with such analog is strictly coupled with insulin release and β-cell fitness. Availability and implementation: SEMBA, the tool to probe the binding affinity of amyloid proteins and generate the binding affinity scoring matrices and R-scores is available at: http://amyloid.cs.mcgill.ca Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
|
959707
|
The gut patrol
|
LA JOLLA, CA—Cells in the gut send secret messages to the immune system. Thanks to new research from La Jolla Institute for Immunology (LJI) scientists, we can finally get a look at what they're saying.
A new study in Science Immunology reveals how the barrier cells that line the intestines send messages to the patrolling T cells that reside there. These cells communicate by expressing a protein called HVEM, which prompts T cells to survive longer and move more to stop potential infections.
"The research shows how barrier cells in the intestine, structural elements of the tissue, and resident immune cells communicate to provide host defense," says LJI Professor and Chief Scientific Officer Mitchell Kronenberg, Ph.D., senior author of the new study.
Barrier cells, or "epithelial" cells, form a one-cell thick layer that lines the gut. One can picture these cells lining up like a busy queue outside a nightclub. The epithelial cells squish together. They jostle each other and chat. Meanwhile, T cell security guards circulate around the line, looking up and down the block for signs of trouble. "These T cells move around the epithelial cells as if they are truly patrolling," says Kronenberg.
But what keeps these T cells in the epithelium to do their job?
"We've got some insight on what gets T cells to the gut, but we need to understand what keeps them there," says Kronenberg. In fact, a lot of immune cells reside long-term in specific tissues. By understanding the signals that keep T cells in certain tissues, Kronenberg hopes to shed light on conditions like inflammatory bowel disease, where far too many inflammatory T cells gather in the bowel.
In the new study, the researchers found that important signals in the gut are sent through the basement membrane, a thin layer of proteins beneath the epithelium. In our nightclub scene, the basement membrane would be the sidewalk where everyone stands.
Their experiments show that epithelial cells receive signals through HVEM proteins on their surface that stimulate synthesis of basement membrane proteins. The team found that without HVEM, the epithelial cells couldn't do their job because they produced less collagen and other structural components needed to maintain a healthy basement membrane.
T cells detect the basement membrane via adhesion molecules they express on their surface, called integrins. The interaction of the T cell integrins with the basement membrane proteins promotes messages that allow the T cells to survive and patrol in the epithelium. It is as if the epithelial cells have written messages on the sidewalk: "Stay here," "Patrol here," "Do your job." Without a sufficient basement membrane, T cells could not survive as well or go on patrol.
Using a mouse model, the researchers then showed that removing HVEM expression—only in the gut epithelial cells—was a major blow to gut health. Patrolling T cells could not survive as well and they didn't move as much. These T cells made lousy security guards. When challenged with Salmonella typhimurium, an invasive bacterium that causes gastroenteritis, the T cells allowed the infection to take over the intestines and spread to the liver and spleen. Therefore, HVEM from epithelial cells laid the groundwork for T cells to guard the gut—it was the very reason they survived in the epithelium—communicating with the T cells indirectly through the basement membrane.
These insights came from a series of experiments spearheaded by study first authors Goo-Young Seo, Ph.D., Instructor at LJI, and Daisuke Takahashi, Ph.D., formerly of LJI and now at Keio University in Tokyo. The team worked closely with the laboratory of LJI Professor Hilde Cheroutre, Ph.D., the LJI Microscopy Core, and the LJI Flow Cytometry Core, and employed intravital imaging and RNA sequencing techniques to investigate HVEM's role in the gut.
Going forward, Kronenberg and his colleagues are interested in investigating the role of HVEM in maintaining a healthy population of gut microbes. Kronenberg says there are signs that a lack of HVEM can sway the composition of the gut microbiome even in the absence of pathogenic bacteria.
Additional authors of the study, "Epithelial HVEM maintains intraepithelial T cell survival and contributes to host protection," include Qingyang Wang, Zbigniew Mikulski, Angeline Chen, Ting-Fang Chou, Paola Marcovecchio, Sara McArdle, Ashu Sethi, Jr-Wen Shui, Masumi Takahashi, Charles D. Surh and Hilde Cheroutre.
This research was supported by the National Institutes of Health (grants P01 DK46763, R01 AI61516, MIST U01 AI125955, MIST U01 AI125957, S10RR027366, and S10OD021831), the Crohn’s and Colitis Foundation of America (grant CCFA-254582), a Uehara Foundation grant, and a Chan-Zuckerberg Initiative Imaging Scientist Grant.
DOI: 10.1126/sciimmunol.abm6931
|
10.1126/sciimmunol.abm6931
| 2,022 |
Science Immunology
|
Epithelial HVEM maintains intraepithelial T cell survival and contributes to host protection
|
Intraepithelial T cells (IETs) are in close contact with intestinal epithelial cells and the underlying basement membrane, and they detect invasive pathogens. How intestinal epithelial cells and basement membrane influence IET survival and function, at steady state or after infection, is unclear. The herpes virus entry mediator (HVEM), a member of the TNF receptor superfamily, is constitutively expressed by intestinal epithelial cells and is important for protection from pathogenic bacteria. Here, we showed that at steady state, binding of LIGHT, an HVEM ligand, to epithelial HVEM promoted the survival of small intestine IETs. RNA-seq and addition of HVEM ligands to epithelial organoids indicated that HVEM increased epithelial synthesis of basement membrane proteins, including collagen IV, which bound to β1 integrins expressed by IETs. Therefore, we proposed that IET survival depended on β1 integrin binding to collagen IV and showed that β1 integrin–collagen IV interactions supported IET survival in vitro. Moreover, the absence of β1 integrin expression by T lymphocytes decreased TCR αβ+ IETs in vivo. Intravital microscopy showed that the patrolling movement of IETs was reduced without epithelial HVEM. As likely consequences of decreased number and movement, protective responses to Salmonella enterica were reduced in mice lacking either epithelial HVEM, HVEM ligands, or β1 integrins. Therefore, IETs, at steady state and after infection, depended on HVEM expressed by epithelial cells for the synthesis of collagen IV by epithelial cells. Collagen IV engaged β1 integrins on IETs that were important for their maintenance and for their protective function in mucosal immunity.
|
725481
|
Physicists have developed a sensor that can be used in both industry and biomedicine
|
Magnetic field sensors are largely used in industry, medicine, as well as in applied and fundamental physics. For example, it is impossible to assemble a car without magnetic sensors. Viktor Belyaev and Valeria Rodionova, researchers at the Laboratory of Novel Magnetic Materials at the Immanuel Kant Baltic Federal University, together with colleagues at the Laboratory of Nano-Optics and Metamaterials at the Department of Physics at the Lomonosov Moscow State University, have developed a sensor that combines advances in the fields of magnetism, optics, and solid-state physics. The sensor can be applied in both industry and biomedicine. It is worth noting that the sensor was patented last year.
In April 2020, the article "Magnetic field sensor based on magnetoplasmonic crystal" was published in the journal Scientific Reports. Work on the subject of the article had been going on with colleagues from Lomonosov Moscow State University for several years. The article describes the principles of creating a highly localized and highly sensitive magnetic field sensor by enhancing magneto-optical effects through concentration of the electromagnetic field of the light wave in the near-surface region of the sensor. In other words, the researchers found optimal parameters for fabricating nanostructures that significantly enhance the interaction of the magnetic material with light.
The sensor developed by researchers allows mapping magnetic fields from different objects, which is potentially important for flaw detection and biomedical applications.
Viktor Belyaev:
"A recent article from our laboratory in a special issue of Sensors magazine describes the principles of most currently developed magnetic field measurement techniques for biomedical applications, and we are confident that soon our sensor will also be added to such reviews due to its unique advantages. This cycle of research was conducted jointly with the Laboratory of Nano-Optics and Metamaterials of the Faculty of Physics at the Lomonosov Moscow State University. Our colleagues at Toyohashi University of Technology (Japan) have made an important contribution to our research by providing equipment for the manufacturing of nanomaterials. The published work is an important step in the joint research of magnetic and magneto-optical properties of magnetoplasmonic crystals, but it is for from nearing completion. It is always a pleasure to realize that there are many new discoveries yet to be made".
|
10.1038/s41598-020-63535-1
| 2,020 |
Scientific Reports
|
Magnetic field sensor based on magnetoplasmonic crystal
|
Abstract Here we report on designing a magnetic field sensor based on a magnetoplasmonic crystal made of noble and ferromagnetic metals deposited on a one-dimensional subwavelength grating. The experimental data demonstrate a resonant transverse magneto-optical Kerr effect (TMOKE) in a narrow spectral region of 50 nm corresponding to surface plasmon-polariton excitation, with a maximum modulation of the reflected light intensity of 4.5% in a modulating magnetic field with a magnitude of 16 Oe. The dependences of TMOKE on external alternating current (AC) and direct current (DC) magnetic fields demonstrate the possibility of using the magnetoplasmonic crystal as a highly sensitive sensing probe. The achieved sensitivity to a DC magnetic field is up to 10⁻⁶ Oe over a local area of 1 mm².
|
962849
|
Glowing tags reveal split-second activity of pathogenic circuitry
|
HOUSTON – (Aug. 25, 2022) – Synthetic biologists at Rice University have developed the first technology for observing the real-time activity of some of the most common signal-processing circuits in bacteria, including deadly pathogens that use the circuits to increase their virulence as well as to develop antibiotic drug resistance.
Two-component systems are sensory circuits bacteria use to react to their surroundings and survive. Bacteria use the circuits, which are also known as signal transduction pathways, to sense an “unrivaled range of stimuli” from light and metal ions to pH and even messages from their friends and neighbors, said Rice bioengineering professor Jeffrey Tabor.
Tabor and postdoctoral researcher Ryan Butcher’s new optical tool for observing real-time phosphorylation reactions in two-component systems is described in a study published this week in the Proceedings of the National Academy of Sciences.
“Bacteria use two-component systems to activate virulence and antibiotic resistance, colonize human and plant hosts, form biofilms and foul medical devices,” said Tabor, a professor of both bioengineering and biosciences.
Tabor’s laboratory has studied two-component systems for years. In 2019, his team unveiled a biohacking toolkit that synthetic biologists could use to mix and match tens of thousands of sensory inputs and genetic outputs from the circuits.
One of the most important uses of that toolkit was unlocking the dual mystery of two-component systems. As their name implies, the circuits have two functions: sensing a stimulus outside the cell, and changing the cell’s behavior in response to that stimulus.
The first component, known as a sensor kinase, typically protrudes through the cell’s outer wall and can only be activated by a specific chemical signal. Once triggered, it sets off a biochemical cascade, a chain reaction inside the cell that ends with the cell changing its behavior in response to the stimuli.
The first step in the cascade is a process called phosphorylation, which ultimately results in activation of the second component of the system, the response regulator.
Though phosphorylation reactions are key in the tens of thousands of two-component systems employed in bacteria, it has been very difficult to directly observe them in live bacteria. That is partly because response regulators must typically join to form pairs to carry on the biological cascade that leads to stimulus response.
“Experimental analysis of phosphorylation often requires purification of proteins from bacteria and analysis using laborious in vitro methods like gel electrophoresis,” Butcher said.
Butcher created a much simpler method that uses fluorescent protein tags and polarized fluorescent light. He engineered strains of E. coli to produce mNeonGreen fluorescent protein probes that depolarize light from an excitation laser, but only if they interact as pairs. In a variety of tests, Butcher and Tabor showed their method could be used to monitor the magnitude and speed of response regulator activation under a variety of environmental conditions.
The method is called “homotypic fluorescence resonance energy transfer,” or homo-FRET for short. Tabor said researchers can use it to follow the activation of two-component systems with much higher time resolution than previously possible.
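In practice, the depolarisation readout boils down to the standard steady-state fluorescence anisotropy calculation: emission from probe pairs undergoing homo-FRET is more depolarised, so the measured anisotropy drops. The intensities in the sketch below are invented numbers for illustration, not measurements from the study.
```python
# Steady-state fluorescence anisotropy from polarised intensity channels.
# A drop in anisotropy indicates depolarisation, e.g. from homo-FRET between
# paired fluorophores. Intensities are invented illustrative values.
def anisotropy(i_vv, i_vh, g_factor=1.0):
    """r = (I_VV - G*I_VH) / (I_VV + 2*G*I_VH); G corrects detector bias."""
    return (i_vv - g_factor * i_vh) / (i_vv + 2.0 * g_factor * i_vh)

print(anisotropy(1000.0, 400.0))   # ~0.33: mostly monomeric probes
print(anisotropy(1000.0, 570.0))   # ~0.20: paired probes, more depolarised
```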
In the study, he and Butcher demonstrated the utility of homo-FRET by observing a nitrate-activated two-component system that’s known to play a role in gastrointestinal colonization by E. coli, Salmonella and other pathogens.
“Microbiologists have known for some time that this genetic circuit is used by a number of pathogens, but we still don’t fully understand how it works,” Tabor said.
Using their method, Tabor and Butcher discovered a previously unreported pulse of activity in the circuit in response to adding nitrate. The pulse appears to arise due to rapid activation of the two-component system followed by consumption of nitrate by the bacteria and corresponding deactivation.
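A minimal toy model reproduces the shape of such a pulse: the circuit is activated by nitrate while the cells simultaneously deplete it, so the active response-regulator level rises and then falls. The rate constants and functional forms below are illustrative assumptions, not values fitted in the paper.
```python
# Toy ODE sketch of a nitrate-driven activity pulse: nitrate activates the
# circuit and is consumed by the cells, so RR~P rises, then falls.
# Rate constants are illustrative assumptions, not fitted parameters.
import numpy as np
from scipy.integrate import solve_ivp

k_act, k_deact, k_consume = 5.0, 1.0, 0.5    # assumed rates (per minute)

def rhs(t, y):
    nitrate, rr_p = y                        # nitrate level, active RR fraction
    return [-k_consume * nitrate,            # nitrate depletion by the cells
            k_act * nitrate * (1 - rr_p) - k_deact * rr_p]   # net RR~P change

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0], dense_output=True)
for t in np.linspace(0.0, 20.0, 9):
    nitrate, rr_p = sol.sol(t)
    print(f"t={t:5.1f} min  nitrate={nitrate:0.3f}  RR~P={rr_p:0.3f}")
```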
“That’s a window into how this circuit works, and it’s the kind of thing that would have been much more difficult to pin down using previous methods,” Tabor said. “With homo-FRET we can watch the circuit respond to changing nitrate levels as it’s happening”.
“We think homo-FRET can be used to engineer biosensors that respond 10 times faster than current alternatives, and that we and others will be able to use it to make new discoveries in a range of other bacterial pathways,” he said.
The research was supported by the Office of Naval Research (N00014-17-1-2642), the National Institutes of Health (R01AI155586) and the Welch Foundation (C-1856).
-30-
Peer-reviewed study:
“Real-time detection of response regulator phosphorylation dynamics in live bacteria” | Proceedings of the National Academy of Sciences | DOI: 10.1073/pnas.2201204119
Ryan J. Butcher and Jeffrey J. Tabor
https://doi.org/10.1073/pnas.2201204119
Image downloads:
https://news-network.rice.edu/news/files/2022/08/0829_2FACTOR-fig-lg.jpg
CAPTION: Illustration of Rice University’s “homo-FRET” method for observing real-time phosphorylation reactions in two-component sensory systems in live bacteria. Specific stimuli outside the cell (top) initiate phosphorylation (middle), which activate response regulator proteins that form pairs (bottom right) to produce a biochemical cascade that ultimately changes the cell’s behavior. To observe phosphorylation in real-time, Rice researchers engineered strains of E. coli to produce green fluorescent tags that depolarize light from an excitation laser only when they interact as pairs (bottom right). (Figure courtesy of Ryan Butcher/Rice University)
https://news-network.rice.edu/news/files/2022/08/0829_2FACTOR-jt22-lg.jpg
CAPTION: Jeffrey Tabor (Photo courtesy of Jeffrey Tabor)
https://news-network.rice.edu/news/files/2022/08/0829_2FACTOR-rb-lg.jpg
CAPTION: Ryan Butcher (Photo courtesy of Ryan Butcher)
Related stories:
Engineered organism could diagnose Crohn’s disease flareups - May 17, 2021
https://news.rice.edu/news/2021/engineered-organism-could-diagnose-crohns-disease-flareups
Light flips genetic switch in bacteria inside transparent worms - Dec. 22, 2020
https://news.rice.edu/2020/12/22/light-flips-genetic-switch-in-bacteria-inside-transparent-worms/
Synthetic biologists hack bacterial sensors - May 20, 2019
https://news2.rice.edu/2019/05/20/synthetic-biologists-hack-bacterial-sensors-2/
Rice U. unveils dual-channel biological function generator - May 8, 2017
https://news2.rice.edu/2017/05/08/rice-u-unveils-dual-channel-biological-function-generator/
This release can be found online at news.rice.edu.
Follow Rice News and Media Relations via Twitter @RiceUNews.
Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation’s top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 4,240 undergraduates and 3,972 graduate students, Rice’s undergraduate student-to-faculty ratio is just under 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for lots of race/class interaction and No. 1 for quality of life by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger’s Personal Finance.
|
10.1073/pnas.2201204119
| 2,022 |
Proceedings of the National Academy of Sciences
|
Real-time detection of response regulator phosphorylation dynamics in live bacteria
|
Bacteria utilize two-component system (TCS) signal transduction pathways to sense and adapt to changing environments. In a typical TCS, a stimulus induces a sensor histidine kinase (SHK) to phosphorylate a response regulator (RR), which then dimerizes and activates a transcriptional response. Here, we demonstrate that oligomerization-dependent depolarization of excitation light by fused mNeonGreen fluorescent protein probes enables real-time monitoring of RR dimerization dynamics in live bacteria. Using inducible promoters to independently express SHKs and RRs, we detect RR dimerization within seconds of stimulus addition in several model pathways. We go on to combine experiments with mathematical modeling to reveal that TCS phosphosignaling accelerates with SHK expression but decelerates with RR expression and SHK phosphatase activity. We further observe pulsatile activation of the SHK NarX in response to addition and depletion of the extracellular electron acceptor nitrate when the corresponding TCS is expressed from both inducible systems and the native chromosomal operon. Finally, we combine our method with polarized light microscopy to enable single-cell measurements of RR dimerization under changing stimulus conditions. Direct in vivo characterization of RR oligomerization dynamics should enable insights into the regulation of bacterial physiology.
|
778088
|
Women with preeclampsia may be at greater risk for cardiac conditions later in life
|
BOSTON - Research published online today in the Journal of the American College of Cardiology confirms that women who have gestational hypertension or preeclampsia in at least one pregnancy have higher cardiovascular risk than women without such a history, and that this elevated risk persists at least into their 60s.
"Research over the past decade has shown there are sex-specific risk factors for cardiovascular disease among women," said lead author Michael C. Honigberg, MD, MPP, of Massachusetts General Hospital's (MGH) Cardiology Division. "But there were still some significant gaps in our understanding of those risks, and one gap is whether the elevated risk persists long-term after a hypertensive pregnancy, or whether other women 'catch up' as cardiovascular risk increases with age in the population overall."
The study looked at an average of seven years of follow-up data on more than 220,000 women who were recruited between 2006 and 2010 by the UK Biobank, a large research cohort in the United Kingdom. The study made three significant findings.
First, women with a history of hypertensive pregnancy had stiffer arteries and two to five times the rate of chronic hypertension later in life across age groups, compared to control subjects. Second, they were more likely to develop cardiovascular conditions over time, including coronary artery disease (a link suggested by prior research), heart failure, and two kinds of valvular heart disease -- aortic stenosis and mitral regurgitation -- that had not previously been associated with hypertensive pregnancy. Third, the study found that roughly half to two-thirds of the risk of coronary disease and heart failure was driven by chronic hypertension, which, said Honigberg, "implies that treating high blood pressure may be especially important in this population." Future studies, he said, may look at new approaches for treating hypertension or simply treating the condition more aggressively in women who have had at least one hypertensive pregnancy.
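The abstract below notes that Cox proportional-hazards models were used to relate hypertensive pregnancy to later cardiovascular events. As a rough illustration of that kind of time-to-event analysis, here is a minimal sketch on synthetic data using the lifelines library; the cohort, effect sizes and variable names are invented and chosen only to echo the study's setup.
```python
# Minimal Cox proportional-hazards sketch on synthetic data (lifelines).
# Cohort, effect sizes and censoring are invented for illustration only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
hdp = rng.binomial(1, 0.013, n)            # ~1.3% with prior hypertensive pregnancy
age = rng.normal(57.4, 7.8, n)             # baseline age, years

# Exponential event times whose hazard rises with HDP and age (assumed effects).
hazard = 0.005 * np.exp(0.3 * hdp + 0.03 * (age - 57.4))
event_time = rng.exponential(1.0 / hazard)
observed = np.minimum(event_time, 7.0)     # administrative censoring at 7 years
event = (event_time <= 7.0).astype(int)

df = pd.DataFrame({"hdp": hdp, "age": age, "T": observed, "E": event})
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
print(np.exp(cph.params_))                 # estimated hazard ratios for hdp, age
```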
"We're still figuring out how to predict and prevent hypertensive disorders in pregnancy," said Honigberg. "But what we can do is look ahead and try to mitigate the risk of these women developing cardiovascular disease later in life." That includes common-sense heart-healthy modifications such as exercising, eating healthy, not smoking, and controlling weight. Some may additionally benefit from preventive medications.
"You'd be shocked at how few physicians who aren't obstetrician/gynecologists -- including cardiologists -- ask their female patients if they've had a hypertensive disorder of pregnancy," Honigberg said. "This research really underscores the importance of clinicians asking about this history and of women sharing it."
|
10.1016/j.jacc.2019.09.052
| 2,019 |
Journal of the American College of Cardiology
|
Long-Term Cardiovascular Risk in Women With Hypertension During Pregnancy
|
History of a hypertensive disorder of pregnancy (HDP) among women may be useful to refine atherosclerotic cardiovascular disease risk assessments. However, future risk of diverse cardiovascular conditions in asymptomatic middle-aged women with prior HDP remains unknown.The purpose of this study was to examine the long-term incidence of diverse cardiovascular conditions among middle-aged women with and without prior HDP.Women in the prospective, observational UK Biobank age 40 to 69 years who reported ≥1 live birth were included. Noninvasive arterial stiffness measurement was performed in a subset of women. Cox models were fitted to associate HDP with incident cardiovascular diseases. Causal mediation analyses estimated the contribution of conventional risk factors to observed associations.Of 220,024 women included, 2,808 (1.3%) had prior HDP. The mean age at baseline was 57.4 ± 7.8 years, and women were followed for median 7 years (interquartile range: 6.3 to 7.7 years). Women with HDP had elevated arterial stiffness indexes and greater prevalence of chronic hypertension compared with women without HDP. Overall, 7.0 versus 5.3 age-adjusted incident cardiovascular conditions occurred per 1,000 women-years for women with versus without prior HDP, respectively (p = 0.001). In analysis of time-to-first incident cardiovascular diagnosis, prior HDP was associated with a hazard ratio (HR) of 1.3 (95% CI: 1.04 to 1.60; p = 0.02). HDP was associated with greater incidence of CAD (HR: 1.8; 95% CI: 1.3 to 2.6; p < 0.001), heart failure (HR: 1.7; 95% CI: 1.04 to 2.60; p = 0.03), aortic stenosis (HR: 2.9; 95% CI: 1.5 to 5.4; p < 0.001), and mitral regurgitation (HR: 5.0; 95% CI: 1.5 to 17.1; p = 0.01). In causal mediation analyses, chronic hypertension explained 64% of HDP's association with CAD and 49% of HDP's association with heart failure.Hypertensive disorders of pregnancy are associated with accelerated cardiovascular aging and more diverse cardiovascular conditions than previously appreciated, including valvular heart disease. Cardiovascular risk after HDP is largely but incompletely mediated by development of chronic hypertension.
|
771085
|
The world's first heat-driven transistor
|
"We are the first in the world to present a logic circuit, in this case a transistor, that is controlled by a heat signal instead of an electrical signal," states Professor Xavier Crispin of the Laboratory of Organic Electronics, Linköping University.
The heat-driven transistor opens the possibility of many new applications such as detecting small temperature differences, and using functional medical dressings in which the healing process can be monitored.
It is also possible to produce circuits controlled by the heat present in infrared light, for use in heat cameras and other applications. The high sensitivity to heat, 100 times greater than that of traditional thermoelectric materials, means that a single connector from the heat-sensitive electrolyte, which acts as the sensor, to the transistor circuit is sufficient. One sensor can be combined with one transistor to create a "smart pixel".
A matrix of smart pixels can then be used, for example, instead of the sensors that are currently used to detect infrared radiation in heat cameras. With more developments, the new technology can potentially enable a new heat camera in your mobile phone at a low cost, since the materials required are neither expensive, rare nor hazardous.
The heat-driven transistor builds on research that led to a supercapacitor being produced a year ago, charged by the sun's rays. In the capacitor, heat is converted to electricity, which can then be stored in the capacitor until it is needed.
The researchers at the Laboratory of Organic Electronics had searched among conducting polymers and produced a liquid electrolyte with a 100 times greater ability to convert a temperature gradient to electric voltage than the electrolytes previously used. The liquid electrolyte consists of ions and conducting polymer molecules. The positively charged ions are small and move rapidly, while the negatively charged polymer molecules are large and heavy. When one side is heated, the small ions move rapidly towards the cold side and a voltage difference arises.
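The numbers behind that claim follow from the simple relation for open-circuit thermovoltage, V = S * dT. The back-of-the-envelope comparison below uses the roughly 10 mV/K ionic Seebeck coefficient quoted in the abstract against the roughly 0.1 mV/K typical of a traditional thermoelectric material; the 2 K temperature difference is an arbitrary example.
```python
# Open-circuit thermovoltage V = S * dT: ionic electrolyte vs. a traditional
# thermoelectric leg. Coefficients follow the abstract; dT is an arbitrary example.
S_ionic = 10e-3         # V/K, polymer electrolyte (~10,000 uV/K)
S_traditional = 0.1e-3  # V/K, conventional thermoelectric material (~100 uV/K)
dT = 2.0                # K, small temperature difference to be sensed

print(f"ionic electrolyte: {S_ionic * dT * 1e3:.1f} mV")        # 20.0 mV
print(f"traditional leg:   {S_traditional * dT * 1e3:.2f} mV")  # 0.20 mV
```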
"When we had shown that the capacitor worked, we started to look for other applications of the new electrolyte," says Xavier Crispin.
Dan Zhao, principal research engineer, and Simone Fabiano, senior lecturer, have shown, after many hours in the laboratory, that it is fully possible to build electronic circuits that are controlled by a heat signal.
|
10.1038/ncomms14214
| 2,017 |
Nature Communications
|
Ionic thermoelectric gating organic transistors
|
Abstract Temperature is one of the most important environmental stimuli to record and amplify. While traditional thermoelectric materials are attractive for temperature/heat flow sensing applications, their sensitivity is limited by their low Seebeck coefficient (∼100 μV K −1 ). Here we take advantage of the large ionic thermoelectric Seebeck coefficient found in polymer electrolytes (∼10,000 μV K −1 ) to introduce the concept of ionic thermoelectric gating a low-voltage organic transistor. The temperature sensing amplification of such ionic thermoelectric-gated devices is thousands of times superior to that of a single thermoelectric leg in traditional thermopiles. This suggests that ionic thermoelectric sensors offer a way to go beyond the limitations of traditional thermopiles and pyroelectric detectors. These findings pave the way for new infrared-gated electronic circuits with potential applications in photonics, thermography and electronic-skins.
|
766327
|
Giant electronic conductivity change driven by artificial switch of crystal dimensionality
|
The electronic properties of solid materials are highly dependent on crystal structures and their dimensionalities (i.e., whether the crystals have predominantly 2D or 3D structures). As Professor Takayoshi Katase of Tokyo Institute of Technology notes, this fact has an important corollary: "If the crystal structure dimensionality can be switched reversibly in the same material, a drastic property change may be controllable." This insight led Prof. Katase and his research team at Tokyo Institute of Technology, in partnership with collaborators at Osaka University and National Institute for Materials Science, to embark on research into the possibility of switching the crystal structure dimensionality of a lead-tin-selenide alloy semiconductor. Their results appear in a paper published in a recent issue of the peer-reviewed journal Science Advances.
The lead-tin-selenide alloy (Pb1-xSnx)Se is an appropriate focus for such research because the lead ions (Pb2+) and tin ions (Sn2+) favor distinct crystal dimensionalities. Specifically, pure lead selenide (PbSe) has a 3D crystal structure, whereas pure tin selenide (SnSe) has a 2D crystal structure. SnSe has a bandgap of 1.1 eV, similar to that of the conventional semiconductor Si, while PbSe has a narrow bandgap of 0.3 eV and shows an order of magnitude higher carrier mobility than SnSe. In particular, 3D (Pb1-xSnx)Se has attracted much attention as a topological insulator: substituting Sn for Pb in 3D PbSe reduces the band gap and eventually produces a gap-less, Dirac-like state. Therefore, if the crystal structure dimensionality could be switched by an external stimulus such as temperature, it would lead to a giant functional phase transition, such as a large change in electronic conductivity or a topological state transition, enhanced by the distinct changes in electronic structure.
Alloying PbSe and SnSe is a way to manipulate this drastic structural transition, and such a (Pb1-xSnx)Se alloy should experience strong frustration near the phase boundary. However, there is no direct phase boundary between the 3D PbSe and 2D SnSe phases under thermal equilibrium. Through their experiments, Prof. Katase and his research team successfully developed a method for growing nonequilibrium lead-tin-selenide alloy crystals with equal amounts of Pb2+ and Sn2+ ions (i.e., (Pb0.5Sn0.5)Se) that underwent a direct structural phase transition between 2D and 3D forms depending on temperature. At lower temperatures, the 2D crystal structure predominated, whereas at higher temperatures, the 3D structure predominated. The low-temperature 2D crystal had a much higher electrical resistivity than the high-temperature 3D crystal, and as the alloy was heated, its resistivity dropped sharply around the temperature at which the dimensionality phase transition occurred. This strategy of creating an artificial phase boundary opens a route to switching structural dimensionality, and with it functional properties, in semiconductors.
In sum, the research team developed a form of the semiconductor alloy (Pb1-xSnx)Se that undergoes temperature-dependent crystal dimensionality phase transitions, and these transitions have major implications for the alloy's electronic properties. When asked about the importance of his team's work, Prof. Katase notes that this form of the (Pb1-xSnx)Se alloy can "serve as a platform for fundamental scientific studies as well as the development of novel function in semiconductor technologies." This specialized alloy may, therefore, lead to exciting new semiconductor technologies with myriad benefits for humanity.
|
10.1126/sciadv.abf2725
| 2,021 |
Science Advances
|
Reversible 3D-2D structural phase transition and giant electronic modulation in nonequilibrium alloy semiconductor, lead-tin-selenide
|
3D-2D structural phase transition is artificially induced to invoke giant electronic modulation in nonequilibrium (Pb1−xSnx)Se.
|
944913
|
Tufts University researchers investigate how opioid use affects offspring in rats
|
New research from scientists at Cummings School of Veterinary Medicine at Tufts University suggests opioid use before pregnancy—even if not used during pregnancy itself—could result in a higher likelihood that a mother’s male offspring will develop type 2 diabetes and metabolic syndrome, conditions that increase the risk of heart disease and stroke.
The current studies were conducted in rats exposed to opioids over a 10-day period several weeks before mating; the effects have yet to be studied in humans. The results suggest, however, that even if moms stop opioid use before becoming pregnant, the effects on future generations could lead to significant health problems.
More than 142 million opioid prescriptions were dispensed in the U.S. in 2020, with an estimated 1 in 3 Americans using prescription opioids and 11.5 million misusing them. In 3.6 percent of U.S. counties, enough opioid prescriptions were dispensed in 2020 for every person to have one, according to the Centers for Disease Control and Prevention.
In many circumstances prescription opioids such as oxycodone, hydrocodone, and morphine can be an important component of overall pain management. But the current epidemic misuse of opioids leading to addiction has resulted in a crisis in the U.S., destroying lives and families regardless of income level, race, age, or gender.
“The percentage of the population exposed to prescription opioids has exploded in recent years,” says Cummings School professor and neuroscientist Elizabeth Byrnes. “Addiction and overdose deaths have been the biggest and most important focus of our public health efforts to combat the misuse of opioids thus far. Yet perhaps what hasn’t been as widely appreciated is that opioids may also have significant effects on the immune and neuroendocrine systems and can also affect metabolism in those who take these drugs.”
“What our research suggests is that these effects may also be passed down to future generations, even if the mother stops taking the drugs before pregnancy,” Byrnes says. “We as a society may not fully understand all the potential consequences of widespread prescription opioid use and misuse,” she says.
This recent research from Byrnes and colleagues at Cummings School and the Department of Computer Science at Tufts University was published in the journal Scientific Reports. Byrnes and computer science professor Donna Slonim were co-principal investigators, and it builds on an earlier study published in Addiction Biology in 2019. The lead author on both studies is Anika Toorie, who was a postdoctoral researcher under Byrnes and is now an assistant professor at Rhode Island College.
What Rats Tell Us About Human Biology
Researchers examine the effects of opioid use on addiction and other health consequences in rats as a gateway to better understanding the drugs' effects in humans. They do so because rats have biological and reward systems similar to those of humans, and because researchers can look in the laboratory, under tightly controlled conditions, for changes in the rat brain, in metabolism, and in organ systems in response to opioid use, misuse, addiction, and withdrawal. Findings deemed worthy of further exploration can then be studied in humans.
In the studies published in Addiction Biology and Scientific Reports, the researchers looked at male rats born to mothers who were exposed to morphine (opioids) for 10 days as adolescents but who were drug free for at least three weeks prior to mating, so their male offspring were not exposed in utero. The control group of offspring were born to mothers who received a saline solution, rather than morphine.
For the Addiction Biology paper, male offspring rats from both groups of mothers were fed a high fat-sugar diet for six weeks. The males born to mothers who had been exposed to morphine consumed more food, gained more weight, and developed fasting-induced hyperglycemia (high blood sugars) and hyperinsulinemia (high circulating levels of insulin). This indicates the rats were becoming less able to regulate how their body converts food into energy, which can lead to obesity, type 2 diabetes, and a variety of other health problems that type 2 diabetes causes.
“What we essentially saw is that the limited morphine exposure in female rats prior to conception increased the risk of metabolic disorders, including type 2 diabetes in the males in the next generation,” said Byrnes.
For the paper in Scientific Reports, researchers compared both eight- and twelve-week administration of a high fat-sugar diet or a control diet in males whose mothers had been exposed to morphine and in those whose mothers had received only saline. The results for males whose mothers had been exposed to morphine and were given the high fat-sugar diet supported previous findings—they gained more weight and displayed higher levels of fasting blood sugars and higher levels of circulating insulin when compared to males whose mothers had received saline and were fed the high fat-sugar diet.
By extending the feeding regimen even longer, the researchers found that male rats in both the control and high fat-sugar diet groups whose mothers had been exposed to morphine pre-conception also developed impaired glucose tolerance, an early sign of type 2 diabetes. They also had liver and other abnormalities.
“Even if the offspring are not exposed to a high fat-sugar diet, the risks for developing diabetes and other health problems are there, though they may take longer to emerge,” Byrnes said.
The researchers plan to next examine what the effects of morphine were on the female offspring, to see if the effects are similar or different than those on the male offspring.
Obesity, metabolic syndrome, and type 2 diabetes are linked to increased risks of heart disease, stroke, kidney disease, and other ailments. "With such widespread use of opioids, we need to think about all the ways that these drugs are affecting not only the current generation, but how they will impact future generations," says Byrnes.
Scientific Reports
10.1038/s41598-022-05528-w
Experimental study
Animals
Intergenerational effects of preconception opioids on glucose homeostasis and hepatic transcription in adult male rats
31-Jan-2022
The authors declare no competing interests.
|
10.1038/s41598-022-05528-w
| 2,022 |
Scientific Reports
|
Intergenerational effects of preconception opioids on glucose homeostasis and hepatic transcription in adult male rats
|
Adolescence represents a period of significant neurodevelopment during which adverse experiences can lead to prolonged effects on disease vulnerability, including effects that can impact future offspring. Adolescence is a common period for the initiation of drug use, including the use of opioids. Beyond effects on central reward, opioids also impact glucose metabolism, which can impact the risk of diabetes. Moreover, recent animal models suggest that the effects of adolescent opioids can affect glucose metabolism in future offspring. Indeed, we demonstrated that the adult male offspring of females exposed to morphine for 10 days during adolescence (referred to as MORF1 males) are predisposed to the adverse effects of an obesogenic diet. As adults, MORF1 males fed a high fat moderate sucrose diet (FSD) for just 6 weeks had increased fasting glucose and insulin levels when compared to age-matched offspring of females exposed to saline during adolescence (SALF1 males). Clinically, a similar profile of impaired fasting glucose has been associated with hepatic insulin resistance and an increased risk of non-alcoholic fatty liver disease. Thus, in the current study, we used RNA sequencing to determine whether adult MORF1 males demonstrate significant alterations in the hepatic transcriptome suggestive of alterations in metabolism. Age-matched SALF1 and MORF1 males were fed either FSD or control diet (CD) for 8 weeks. Similar to our previous observations, FSD-maintained MORF1 males gained more weight and displayed both fasting hyperglycemia and hyperinsulinemia when compared to FSD-maintained SALF1 males, with no significant effect on glucagon. No differences in bodyweight or fasting-induced glucose were observed in control diet (CD)-maintained F1 males, although there was a trend for CD MORF1 males to display elevated levels of fasting insulin. Unexpectedly, transcriptional analyses revealed profound differences in the hepatic transcriptome of CD-maintained MORF1 and SALF1 (1686 differentially expressed genes) with no significant differences between FSD-maintained MORF1 and SALF1 males. As changes in the hepatic transcriptome were not revealed under 8 weeks FSD conditions, we extended the feeding paradigm and conducted a glucose tolerance test to determine whether impaired fasting glucose observed in FSD MORF1 males was due to peripheral insulin resistance. Impaired glucose tolerance was observed in both CD and FSD MORF1 males, and to a more limited extent in FSD SALF1 males. These findings implicate intergenerational effects of adolescent morphine exposure on the risk of developing insulin resistance and associated comorbidities, even in the absence of an obesogenic diet.
|
942280
|
Small study finds Alzheimer's-like changes in some COVID patients' brains
|
10.1002/alz.12558
| 2,022 |
Alzheimer's & Dementia
|
Alzheimer's‐like signaling in brains of COVID‐19 patients
|
The mechanisms that lead to cognitive impairment associated with COVID-19 are not well understood. Brain lysates from control and COVID-19 patients were analyzed for oxidative stress and inflammatory signaling pathway markers, and measurements of Alzheimer's disease (AD)-linked signaling biochemistry. Post-translational modifications of the ryanodine receptor/calcium (Ca2+) release channels (RyR) on the endoplasmic reticuli (ER), known to be linked to AD, were also measured by co-immunoprecipitation/immunoblotting of the brain lysates. We provide evidence linking SARS-CoV-2 infection to activation of TGF-β signaling and oxidative overload. The neuropathological pathways causing tau hyperphosphorylation typically associated with AD were also shown to be activated in COVID-19 patients. RyR2 in COVID-19 brains demonstrated a "leaky" phenotype, which can promote cognitive and behavioral defects. COVID-19 neuropathology includes AD-like features and leaky RyR2 channels could be a therapeutic target for amelioration of some cognitive defects associated with SARS-CoV-2 infection and long COVID.
|
|
931937
|
Cancer cells mobilizing the nervous system? Let's use them to inhibit the tumor
|
Researchers at the Technion - Israel Institute of Technology have developed an innovative treatment for breast cancer, based on analgesic nanoparticles that target the nervous system. The study, published in Science Advances, was led by Professor Avi Schroeder and Ph.D. student Maya Kaduri of the Wolfson Faculty of Chemical Engineering.
Breast cancer is one of the most common cancers in women, and despite breakthroughs in diagnosis and treatment, about one thousand women in Israel die of the disease per year. Around 15% of them are under the age of 50. Worldwide, some 685,000 women die each year from breast cancer.
Prof. Schroeder has years of experience in developing innovative cancer treatments, including ones for breast cancer and specifically triple-negative breast cancer – an aggressive cancer characterized by rapid cell division with a higher risk of metastasis. Technologies developed in his lab include novel methods for encapsulating drug molecules in nanoparticles that transport the drug to the tumor and release it inside, without damaging healthy tissue.
The researchers found that cancer cells have a reciprocal relationship with the nerve cells around them: the cancer cells stimulate infiltration of nerve cells into the tumor, and this infiltration stimulates cancer cell proliferation, growth, and migration. In other words, the cancer cells recruit the nerve cells for their purposes.
Based on these findings, the researchers developed a treatment that targets the tumor through the nerve cells. This treatment is based on injecting nanoparticles containing anesthetic into the bloodstream. The nanoparticles travel through the bloodstream toward the tumor, accumulate around the nerve cells in the cancerous tissue, and paralyze the local nerves and communication between the nerve cells and the cancer cells. The result: significant inhibition of tumor development and of metastasis to the lungs, brain, and bone marrow.
The nanoparticles simulate the cell membrane and are coated with special polymers that disguise them from the immune system and enable a long circulation time in the bloodstream. Each such particle, which is around 100 nm in diameter, contains the anesthetic.
According to Maya Kaduri: "We know how to create the exact size of particles needed, and that is critical because it’s the key to penetrating the tumor. Tumors stimulate increased formation of new blood vessels around them, so that they receive oxygen and nutrients, but the structure of these blood vessels is damaged and contains nano-sized holes that enable penetration of nanoparticles. The cancerous tissue is characterized by poor lymphatic drainage, which further increases accumulation of the particles in the tissue.
"Therefore, the anesthetizing particles we developed move through the bloodstream without penetrating healthy tissue. Only when they reach the damaged blood vessels of the tumor do they leak out, accumulate around the nerve cells of the cancerous tissue, and disconnect them from the cancer cells. The fact that this is a very focused and precise treatment enables us to insert significant amounts of anesthetic into the body because there is no fear that it will harm healthy and vital areas of the nervous system."
In experiments on cancer cell cultures and in treatment of mice, the new technology inhibited not only tumor development but also metastasis. The researchers estimate these findings may be relevant for treatment of breast cancer in humans.
The research is supported by the Rappaport Technion Integrated Cancer Center (RTICC) as part of the Steven & Beverly Rubenstein Charitable Foundation Fellowship Fund for Cancer Research, and by Teva, as part of its National Forum for BioInnovators. The research was conducted in cooperation with the Faculty of Medicine at Hebrew University of Jerusalem and the Institute of Pathology at the Tel Aviv Sourasky Medical Center.
Prof. Avi Schroeder is head of the Louis Family Laboratory for Targeted Drug Delivery & Personalized Medicine Technologies at the Wolfson Faculty of Chemical Engineering. Maya Kaduri, who has a B.Sc. from the Faculty of Biotechnology and Food Engineering at the Technion, began researching under the guidance of Prof. Avi Schroeder during her bachelor's degree, and this year she is expected to complete her Ph.D. (direct track).
Science Advances
10.1126/sciadv.abj5435
Experimental study
Targeting neurons in the tumor microenvironment with bupivacaine nanoparticles reduces breast cancer progression and metastases
6-Oct-2021
|
10.1126/sciadv.abj5435
| 2,021 |
Science Advances
|
Targeting neurons in the tumor microenvironment with bupivacaine nanoparticles reduces breast cancer progression and metastases
|
Targeting neurons in breast cancer with anesthetic nanoparticles inhibits nerve-cancer stimulation and tumor progression.
|
923045
|
Study identifies new target to prevent, treat alcoholism
|
New research conducted at OHSU in Portland, Oregon, identifies a gene that could provide a new target for developing medication to prevent and treat alcoholism.
Scientists at the Oregon National Primate Research Center at OHSU discovered a gene that had lower expression in the brains of nonhuman primates that voluntarily consumed heavy amounts of alcohol compared with those that drank less.
Furthermore, the research team unraveled a link between alcohol and how it modulates the activity of this particular gene. Researchers discovered that when they increased the levels of the protein encoded by the gene in mice, they reduced alcohol consumption by almost 50 percent without affecting the total amount of fluid consumed or the animals' overall well-being.
The study was recently published online in the journal Neuropsychopharmacology.
The study modified the levels of the protein encoded by a single gene - GPR39 - which is a zinc-binding receptor previously associated with depression. The prevalence rates of co-occurring mood and alcohol use disorders are high, with individuals with alcohol use disorder being 3.7 times more likely to have major depression than those who do not abuse alcohol. Using a commercially available substance that mimics the activity of the GPR39 protein, the researchers found that targeting this gene dramatically reduced alcohol consumption in mice.
"The study highlights the importance of using cross-species approaches to identify and test relevant drugs for the treatment of alcohol use disorder," said senior author Rita Cervera-Juanes, Ph.D., a research assistant professor in the divisions of Neuroscience and Genetics at ONPRC.
To determine whether the same mechanism affects people, this team of researchers is now examining postmortem tissue samples from the brains of people who suffered from alcoholism.
Currently, there are only a handful of treatments for alcoholism approved by the Food and Drug Administration. Given the substance's effect in reducing ethanol consumption in mice - in addition to its previously reported role in reducing depression-like symptoms - the findings may point the way toward developing a drug that both prevents and treats chronic alcoholism and mood disorders in people.
"We are finding novel targets for which there are drugs already available, and they can be repurposed to treat other ailments," Cervera-Juanes said. "For alcoholism, this is huge because there are currently only a handful of FDA-approved drugs."
|
10.1038/s41386-018-0308-1
| 2,019 |
Neuropsychopharmacology
|
Modulation of Gpr39, a G-protein coupled receptor associated with alcohol use in non-human primates, curbs ethanol intake in mice
|
Alcohol use disorder (AUD) is a chronic condition with devastating health and socioeconomic effects. Still, pharmacotherapies to treat AUD are scarce. In a prior study aimed at identifying novel AUD therapeutic targets, we investigated the DNA methylome of the nucleus accumbens core (NAcc) of rhesus macaques after chronic alcohol use. The G-protein coupled receptor 39 (GPR39) gene was hypermethylated and its expression downregulated in heavy alcohol drinking macaques. GPR39 encodes a Zn2+-binding metabotropic receptor known to modulate excitatory and inhibitory neurotransmission, the balance of which is altered in AUD. These prior findings suggest that a GPR39 agonist would reduce alcohol intake. Using a drinking-in-the-dark two bottle choice (DID-2BC) model, we showed that an acute 7.5 mg/kg dose of the GPR39 agonist, TC-G 1008, reduced ethanol intake in mice without affecting total fluid intake, locomotor activity or saccharin preference. Furthermore, repeated doses of the agonist prevented ethanol escalation in an intermittent access 2BC paradigm (IA-2BC). This effect was reversible, as ethanol escalation followed agonist "wash out". As observed during the DID-2BC study, a subsequent acute agonist challenge during the IA-2BC procedure reduced ethanol intake by ~47%. Finally, Gpr39 activation was associated with changes in Gpr39 and Bdnf expression, and in glutamate release in the NAcc. Together, our findings suggest that GPR39 is a promising target for the development of prevention and treatment therapies for AUD.
|
506685
|
Study sets ambitious new goals for nutrition science
|
How can nutrition science help to achieve healthy nutrition for everyone? An urgent question in a world where 795 million people are chronically undernourished (FAO) while 1.9 billion people are overweight or obese (WHO).
"To deliver successfully, nutrition research needs a bold dose of innovation," writes an international team of researchers from across the Life Sciences in the open-access journal Frontiers in Nutrition. In their study - aptly termed a "Field Grand Challenge" - they reach out to their peers with an ambitious set of research goals for nutrition science for the period 2015-2020.
"This initiative by the Field Chief Editor of Frontiers in Nutrition deals with a long-overdue issue: to bring researchers from all the scientific disciplines working on nutrition-related questions together, to think and work on trans- and interdisciplinary topics," says Professor Dietrich Knorr from the Department of Food Biotechnology and Food Process Engineering at the Technical University Berlin.
The experts identify questions that need to be answered, methods that need to be developed, and foundational data that need to be collected within the next five years along eight axes of research: (1) Sustainability in food and nutrition; (2) Identifying and mitigating methodological errors in nutrition science, to increase rigor, objectivity, reproducibility, and transparency; (3) Generation and analysis of highly dimensional "Big Data", for example in nutrigenomics; (4) Authenticity and safety of foods; (5) Food-related human behavior; (6) The molecular and physiological link between nutrition and brain health; (7) The human microbiome; and (8) Nourishing the immune system and preventing disease, for example through medical nutrition and nutraceuticals.
"We feel the topics described represent the key opportunities, but also the biggest challenges in our field," says Dr Johannes le Coutre, Senior Research Scientist and Head of Perception Physiology at the Nestlé Research Center, Lausanne, Switzerland, and Field Chief Editor of Frontiers in Nutrition. "Five years seemed long enough for a scientific program to bear measurable fruit -- yet with a clear scope and focus."
The authors stress the need for a transdisciplinary systems-science approach to nutrition research, generating and integrating data at all levels of complexity and from all relevant disciplines, including genomics, medical science, physiology, bioengineering, food science and technology, analytics, and biomathematics.
"Nutrition science is evolving from reductionist approaches centered around the study of single molecules and pathways to in-depth, quantitative, systems-wide analyses of massively interacting systems (i.e., nutrition, microbiome, immunological and metabolic networks) that delineate health outcomes. This article articulates the Grand Challenges in 21st Century Nutrition Research and Discovery and provides paradigm-shifting solutions such as informatics, data analytics and modeling approaches in combination with pre-clinical and clinical validation studies," says Prof Bassaganya-Riera, Director of the Virginia Bioinformatics Institute at Virginia Tech.
The authors hope their Grand Challenge will provoke a lively discussion among their peers about how to improve nutrition as a science, allowing it to fulfil its potential and make meaningful, sustainable contributions to global nutrition.
"At Frontiers in Nutrition, we are excited to develop and share an open-science platform for this discussion. Healthy nutrition for all is an ambition too important to be handled by detached interest groups," concludes le Coutre.
|
10.3389/fnut.2015.00026
| 2,015 |
Frontiers in Nutrition
|
Goals in Nutrition Science 2015–2020
|
With the definition of goals in Nutrition Science, we are taking a brave step and a leap of faith with regard to predicting the scope and direction of nutrition science over the next five years. The content of this editorial has been discussed, refined and evaluated with great care by the Frontiers in Nutrition editorial board. We feel the topics described represent the key opportunities, but also the biggest challenges in our field. We took a clean-slate, bottom-up approach to identify and address these topics and present them in eight categories. For each category the authors listed take responsibility, and deliberately therefore this document is a collection of thoughts from active minds, rather than a complete integration or consensus.At Frontiers in Nutrition, we are excited to develop and share a platform for this discussion. Healthy Nutrition for all – an ambition too important to be handled by detached interest groups and behind closed doors.
|
936418
|
Decode genetics publishes the largest ever study of the plasma proteome
|
Reykjavik, Iceland, 2-Dec-2021. In a study published today in Nature Genetics, scientists at deCODE genetics, a subsidiary of the pharmaceutical company Amgen, demonstrate how measuring the levels of a large number of proteins in plasma at population scale, when combined with data on sequence diversity and RNA expression, dramatically increases insights into human diseases and other phenotypes.
Scientists at deCODE genetics have used the plasma levels of five thousand proteins, measured on a multiplex platform at population scale, to unravel their genetic determinants and their relationships with human diseases and other traits. Previous studies of the genetics of protein levels either included far fewer individuals or tested far fewer proteins than the one published today.
Using protein levels in plasma measured with the Somascan proteomics assay, scientists at deCODE genetics tested the association of 27 million sequence variants with plasma levels of 4,719 proteins in 35,559 Icelanders. They found 18,084 associations between sequence variants and levels of proteins, of which 19% are with rare variants identified by whole-genome sequencing. Overall, 93% of the associations are novel. Additionally, they replicated 83% and 64% of the reported associations from the largest existing plasma proteomic studies, based on the Somascan method and the antibody-based Olink assay, respectively.
The levels of proteins in plasma were tested for associations with 373 diseases and other traits and yielded 257,490 such associations. They integrated associations of sequence variants with protein levels and diseases and other traits, and found that 12% of around fifty thousand variants reported to associate with diseases and other traits also associate with protein levels.
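At its core, a pQTL scan of this kind regresses each protein's (typically rank-inverse-normal-transformed) plasma level on genotype dosage at each variant, with covariates such as age and sex. The sketch below is only a rough illustration of that per-variant test on simulated data, not deCODE's actual pipeline, which works from whole-genome sequence data and models relatedness; the variable names and the simple ordinary-least-squares model are assumptions.

```python
# Minimal sketch of a single protein-vs-variant association test (pQTL-style).
# Assumptions: 'dosage' is genotype dosage (0-2) per person, 'protein' is a
# plasma protein level, 'covars' holds age and sex. Illustrative OLS only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000
dosage = rng.binomial(2, 0.3, size=n).astype(float)       # simulated genotypes
covars = np.column_stack([rng.normal(50, 15, n),           # age
                          rng.integers(0, 2, n)])          # sex
protein = 0.1 * dosage + 0.01 * covars[:, 0] + rng.normal(size=n)

def rank_inverse_normal(x):
    """Rank-based inverse-normal transform, a common step for protein levels."""
    ranks = stats.rankdata(x)
    return stats.norm.ppf((ranks - 0.5) / len(x))

y = rank_inverse_normal(protein)
X = np.column_stack([np.ones(n), dosage, covars])           # intercept, SNP, covariates

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])         # SE of the dosage effect
t = beta[1] / se
p = 2 * stats.t.sf(abs(t), dof)
print(f"effect={beta[1]:.3f}  se={se:.3f}  p={p:.2e}")
```

In practice the same regression is repeated across millions of variants and thousands of proteins, with correspondingly stringent significance thresholds.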
“Proteomics can assist in solving one of the major challenges in genetic studies: to determine what gene is responsible for the effect of a sequence variant on a disease. In addition the proteome provides some measure of time because levels of proteins in blood rise and they fall as a function of time to and from events,” said Kari Stefansson CEO of deCODE genetics and one of the senior authors on the paper.
Media contact:
Thora Kristin Asgeirsdottir
Decode genetics
+354 894 1909
Nature Genetics
10.1038/s41588-021-00978-w
Case study
Human tissue samples
2-Dec-2021
|
10.1038/s41588-021-00978-w
| 2,021 |
Nature Genetics
|
Large-scale integration of the plasma proteome with genetics and disease
|
The plasma proteome can help bridge the gap between the genome and diseases. Here we describe genome-wide association studies (GWASs) of plasma protein levels measured with 4,907 aptamers in 35,559 Icelanders. We found 18,084 associations between sequence variants and levels of proteins in plasma (protein quantitative trait loci; pQTL), of which 19% were with rare variants (minor allele frequency (MAF) < 1%). We tested plasma protein levels for association with 373 diseases and other traits and identified 257,490 associations. We integrated pQTL and genetic associations with diseases and other traits and found that 12% of 45,334 lead associations in the GWAS Catalog are with variants in high linkage disequilibrium with pQTL. We identified 938 genes encoding potential drug targets with variants that influence levels of possible biomarkers. Combining proteomics, genomics and transcriptomics, we provide a valuable resource that can be used to improve understanding of disease pathogenesis and to assist with drug discovery and development.
|
925968
|
New insights on mechanism that could help treat muscle-related diseases
|
BOSTON – Investigators who previously developed a recipe for turning skin cells into primitive muscle-like cells that can be maintained indefinitely in the lab without losing the potential to become mature muscle have now uncovered how this recipe works and what molecular changes it triggers within cells. The research, which was led by scientists at Massachusetts General Hospital (MGH) and is published in Genes & Development, could allow clinicians to generate patient-matched muscle cells to help treat muscle injuries, aging-related muscle degeneration, or conditions such as muscular dystrophy.
It’s known that expression of a muscle regulatory gene called MyoD is sufficient to directly convert skin cells into mature muscle cells; however, mature muscle cells do not divide and self-renew, and therefore they cannot be propagated for clinical purposes. “To address this shortcoming, we developed a system several years ago to convert skin cells into self-renewing muscle stem-like cells we coined induced myogenic progenitor cells, or iMPCs. Our system uses MyoD in combination with three chemicals we previously identified as facilitators of cell plasticity in other contexts,” explains senior author Konrad Hochedlinger, PhD, a principal investigator at the Center for Regenerative Medicine at MGH and a professor of medicine at Harvard Medical School.
In this latest study, Hochedlinger and his colleagues uncovered the details behind how this combination converts skin cells into iMPCs. They found that while MyoD expression alone causes skin cells to take on the identity of mature muscle cells, adding the three chemicals causes the skin cells to instead acquire a more primitive stem cell–like state. Importantly, iMPCs are molecularly highly similar to muscle tissue stem cells, and muscle cells derived from iMPCs are more stable and mature than muscle cells produced with MyoD expression alone. “Mechanistically, we showed that MyoD and the chemicals aid in the removal of certain marks on DNA called DNA methylation,” says lead author Masaki Yagi, PhD, a research fellow at MGH. “DNA methylation typically maintains the identity of specialized cells, and we showed that its removal is key for acquiring a muscle stem cell identity.”
Hochedlinger notes that the findings may be applicable to other tissue types besides muscle that involve different regulatory genes. Combining the expression of these genes with the three chemicals used in this study could help researchers generate different stem cell types that closely resemble a variety of tissues in the body.
|
10.1101/gad.348678.121
| 2,021 |
Genes & Development
|
Dissecting dual roles of MyoD during lineage conversion to mature myocytes and myogenic stem cells
|
The generation of myotubes from fibroblasts upon forced MyoD expression is a classic example of transcription factor-induced reprogramming. We recently discovered that additional modulation of signaling pathways with small molecules facilitates reprogramming to more primitive induced myogenic progenitor cells (iMPCs). Here, we dissected the transcriptional and epigenetic dynamics of mouse fibroblasts undergoing reprogramming to either myotubes or iMPCs using a MyoD-inducible transgenic model. Induction of MyoD in fibroblasts combined with small molecules generated Pax7 + iMPCs with high similarity to primary muscle stem cells. Analysis of intermediate stages of iMPC induction revealed that extinction of the fibroblast program preceded induction of the stem cell program. Moreover, key stem cell genes gained chromatin accessibility prior to their transcriptional activation, and these regions exhibited a marked loss of DNA methylation dependent on the Tet enzymes. In contrast, myotube generation was associated with few methylation changes, incomplete and unstable reprogramming, and an insensitivity to Tet depletion. Finally, we showed that MyoD's ability to bind to unique bHLH targets was crucial for generating iMPCs but dispensable for generating myotubes. Collectively, our analyses elucidate the role of MyoD in myogenic reprogramming and derive general principles by which transcription factors and signaling pathways cooperate to rewire cell identity.
|
896002
|
Significant increase in self-harm attempts following ankylosing spondylitis diagnosis
|
The results of a population study presented today at the Annual European Congress of Rheumatology (EULAR 2018) demonstrate a significantly increased rate of self-harm attempts in inflammatory arthritis (IA), particularly following a diagnosis of Ankylosing Spondylitis (AS).1
Results of the study showed that individuals with AS were almost twice as likely to self-harm as their comparators (adjusted hazard ratio 1.59, 95% CI 1.16 to 2.21). Deliberate self-harm was also increased in individuals with rheumatoid arthritis (RA) but only before adjustment for baseline characteristics. The most frequent method of self-harm was poisoning (64% of attempts in AS, 81% in RA) or self-mutilation (36% in AS, 18% in RA).
"Our study is one of the first to document the risk of serious mental health outcomes following a RA or AS diagnosis and highlights the need for routine evaluation of self-harm behaviour as part of the management of patients," said Dr. Nigil Haroon, senior study author, University of Toronto.
Physical aspects of AS include pain, joint stiffness, and a gradual loss of spinal mobility; however, there is also considerable impact on mental health.2 Although a higher prevalence of psychiatric comorbidities, including depressive disorder, has been demonstrated in patients with AS,3 there has until now been limited data on the risk of serious mental health outcomes following diagnosis.
"This study is important because understanding the mechanisms that contribute to deliberate self-harm attempts will help tailor future preventative strategies to reduce morbidity associated with this serious mental health outcome," said Professor Thomas Dörner, Chairperson of the Abstract Selection Committee, EULAR.
The study evaluated population-based cohorts of RA (N=53,240) and AS (N=13,964), each matched 1:4 by age, sex, and calendar year (at diagnosis) with non-IA comparator cohorts in Ontario, Canada. Individuals with a history of mental illness or prior episode of deliberate self-harm were excluded. The outcome was a first emergency room presentation for deliberate self-harm, subsequent to RA or AS diagnosis, between April 1, 2002 and March 31, 2016. Hazard ratios were adjusted for demographic, clinical and health service utilisation variables.
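Adjusted hazard ratios like those quoted here are typically estimated with a Cox proportional hazards model fit to the matched cohort, with the arthritis diagnosis as the exposure and the adjustment variables entered alongside it. The following is only an illustrative sketch on simulated data using the lifelines package; the column names and simulated covariates are hypothetical, not the study's actual variables or code.

```python
# Minimal sketch of estimating an adjusted hazard ratio for self-harm.
# Hypothetical columns: 'followup_years' (time to self-harm or censoring),
# 'self_harm' (1 = event), 'has_AS' (exposure), plus example covariates.
# This illustrates the general method, not the study's own model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "has_AS": rng.integers(0, 2, n),
    "age": rng.normal(45, 12, n),
    "sex": rng.integers(0, 2, n),
})
# Simulate event times where AS roughly doubles the hazard of self-harm.
baseline = rng.exponential(20, n)
df["followup_years"] = baseline / np.exp(0.6 * df["has_AS"])
df["self_harm"] = (df["followup_years"] < 14).astype(int)
df.loc[df["self_harm"] == 0, "followup_years"] = 14   # administrative censoring

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="self_harm")
print(np.exp(cph.params_))   # adjusted hazard ratios for has_AS, age, sex
```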
This study suggests there is a link between inflammatory arthritis and the development of serious mental health consequences. These findings highlight the need for routine evaluation of self-harm behaviour as part of the management of chronic inflammatory arthritis. Understanding the mechanisms contributing to deliberate self-harm attempts will help inform risk-reduction strategies among individuals living with inflammatory arthritis.
Abstract number: OP0296
###
NOTES TO EDITORS
For further information on this study, or to request an interview with the study lead, please do not hesitate to contact the EULAR Press Office:
Email: [email protected]
Telephone: +44 (0) 20 7438 3084
Twitter: @EULAR_Press
YouTube: Eular Press Office
About Rheumatic and Musculoskeletal Diseases
Rheumatic and musculoskeletal diseases (RMDs) are a diverse group of diseases that commonly affect the joints, but can also affect the muscles, other tissues and internal
organs. There are more than 200 different RMDs, affecting both children and adults. They are usually caused by problems of the immune system, inflammation, infections or gradual deterioration of joints, muscle and bones. Many of these diseases are long term and worsen over time. They are typically painful and limit function. In severe cases, RMDs can result in significant disability, having a major impact on both quality of life and life expectancy.4
About 'Don't Delay, Connect Today!'
'Don't Delay, Connect Today!' is a EULAR initiative that unites the voices of its three pillars, patient (PARE) organisations, scientific member societies and health professional
associations - as well as its international network - with the goal of highlighting the importance of early diagnosis and access to treatment. In the European Union alone, over
120 million people are currently living with a rheumatic disease (RMD), with many cases undetected.5 The 'Don't Delay, Connect Today!' campaign aims to highlight that early
diagnosis of RMDs and access to treatment can prevent further damage, and also reduce the burden on individual life and society as a whole.
About EULAR
The European League against Rheumatism (EULAR) is the European umbrella organisation representing scientific societies, health professional associations and organisations for people with RMDs. EULAR aims to reduce the burden of RMDs on individuals and society
and to improve the treatment, prevention and rehabilitation of RMDs. To this end, EULAR fosters excellence in education and research in the field of rheumatology. It promotes the
translation of research advances into daily care and fights for the recognition of the needs of people with RMDs by the EU institutions through advocacy action.
To find out more about the activities of EULAR, visit: http://www.eular.org.
References
1 B Kuriya, J Widdifield, J Luo, et al. The risk of deliberate self-harm in rheumatoid arthritis and ankylosing
spondylitis: A population-based cohort study. EULAR 2018; Amsterdam: Abstract OP0296.
2 Dagfinrud H, Mengshoel AM, Hagen KB, et al. Health status of patients with ankylosing spondylitis: a
comparison with the general population. Ann Rheum Dis. 2004;63(12):1605-10.
3 Shen CC, Hu LY, Yang AC, et al. Risk of psychiatric disorders following ankylosing spondylitis: A nationwide
population-based retrospective cohort study. J Rheumatol. 2016;43(3):625-31.
4 van der Heijde D, et al. Common language description of the term rheumatic and musculoskeletal diseases (RMDs) for use in communication with the lay public, healthcare providers and other stakeholders endorsed by
the European League Against Rheumatism (EULAR) and the American College of Rheumatology (ACR). Annals of the Rheumatic Diseases. 2018;doi:10.1136/annrheumdis-2017-212565. [Epub ahead of print].
5 EULAR. 10 things you should know about rheumatic diseases fact sheet. Available at: https://www.eular.org/myUploadData/files/10%20things%20on%20RD.pdf [Last accessed April 2018].
|
10.1136/annrheumdis-2018-eular.3004
| 2,018 |
Annals of the Rheumatic Diseases
|
OP0296 The risk of deliberate self-harm in rheumatoid arthritis and ankylosing spondylitis: a population-based cohort study
|
Background: Inflammatory arthritis is associated with the development of mental health disorders. However, there is limited data on the risk of serious mental health outcomes following a rheumatoid arthritis (RA) or ankylosing spondylitis (AS) diagnosis. Objectives: To estimate the risk of deliberate self-harm in patients with ankylosing spondylitis or rheumatoid arthritis compared with the general population. Methods: We evaluated population-based cohorts of RA (n=53,240) and AS (n=13,964), each matched 1:4 by age, sex, and calendar year (at diagnosis) with non-IA comparator cohorts in Ontario, Canada. Individuals with a history of mental illness or prior episode of deliberate self-harm were excluded. The outcome was a first emergency room presentation for deliberate self-harm, subsequent to RA or AS diagnosis, between April 1, 2002 and March 31, 2016. We estimated hazard ratios (HR) and 95% confidence intervals (95% CI) for RA and AS, separately, versus the comparator groups, adjusting for demographic, clinical and health service utilisation variables. Results: Individuals with AS were more likely to deliberately self-harm (incidence rate [IR] of 6.79/10,000 person years [PY] compared to 3.19/10,000 PY in comparators), with an adjusted HR of 1.82 (95% CI: 1.26 to 2.62). Deliberate self-harm was also increased for individuals with RA (IR 3.51/10,000 PY) compared to comparators (IR 2.45/10,000 PY) only before (HR 1.43, 95% CI: 1.16 to 1.75), but not after covariate adjustment (HR 1.09, 95% CI: 0.88 to 1.36). The most frequent method of self-harm was poisoning (64% of attempts in AS, 81% in RA) or self-mutilation (36% in AS, 18% in RA). Conclusions: There is a significantly increased rate of self-harm attempt in inflammatory arthritis and the risk is particularly elevated following a diagnosis of AS. These findings highlight the need for routine evaluation of self-harm behaviour as part of the management of chronic inflammatory arthritis. Understanding the mechanisms contributing to deliberate self-harm attempts will help tailor preventive strategies to reduce morbidity associated with this serious mental health outcome. Acknowledgements: This work was funded by the Division of Rheumatology Pfizer Research Chair, University of Toronto. Disclosure of Interest: None declared.
|
558255
|
Is the agile wallaby man's new best friend?
|
Looking for a new pet? If so, consider the Agile Wallaby or the Asian Palm Civet.
Responding to the growing trend in keeping exotic animals as pets, a team led by Dr. Paul Koene has developed a methodology to assess the suitability of mammals to be kept domestically, in a new study published in Frontiers in Veterinary Science.
The top five animals were: the Sika Deer, Agile Wallaby, Tammar Wallaby, Llama, and Asian Palm Civet, which were all judged to be suitable pets by the scientists from the Wageningen University and Research Centre, in the Netherlands.
So, will the Sika Deer challenge the common canine for the title of man's best friend?
"The main influence of this work is methodological. In the Netherlands many mammal species are kept and for a long time the government wanted to guarantee the welfare of animals," said Dr. Koene; "Therefore the Dutch Animal Act was made stating that mammals should not be kept unless they are production animals, or are species that are suitable to be kept by anyone without special knowledge or skills."
In order to determine whether this is the case for a given animal species, a list of suitable candidates had to be created. Then a method was devised to place each mammal species in a rank order, ranging from suitable to unsuitable.
The team began by conducting a web-based survey to discover which animals were most frequently kept as pets in the Netherlands. Other mammals were then added to the list based on data from veterinarians and rescue centers.
In the first instance the 90 most common species were selected. Animals classed as 'production animals' such as rabbits, guinea pigs and hamsters are allowed to be kept by anyone and so were not analyzed.
A wide range of bibliographic data was sourced in order to create the one-line criteria statements that the mammals chosen for analysis were graded against. These one-liners were then assigned a score related to behavioral needs or welfare risks.
The risks were assessed based on the reported one-liners for each species in both captivity and the wild. Animals with high scores had high behavioral needs and high health, welfare, and human-relationship risks.
Three teams worked together to produce the final pet suitability rank order. The first team selected one-line statements for each animal. The second team assessed the strength of one-line statements about behavior, health, welfare and human-animal relationship in both captivity and the wild. A third team assessed the suitability based on all assessed strengths for that animal to be a pet.
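The press release describes the ranking only at a high level: assessors score one-line statements per species across criteria such as behaviour, health, welfare, and the human-animal relationship, and the scores are then combined into a single suitability rank order. A toy aggregation along those lines is sketched below; the example species entries, the scores, the equal weighting, and the "lower aggregate score means more suitable" convention are assumptions for illustration rather than the paper's actual scheme.

```python
# Toy aggregation of assessor scores into a pet-suitability rank order.
# Assumptions (not from the paper): equal weights across criteria and
# assessors, and lower aggregate need/risk scores mean "more suitable".
from statistics import mean

# species -> criterion -> scores from independent assessors
# (1 = low behavioural need / risk, 5 = high). Values are made up for the sketch.
scores = {
    "Agile Wallaby": {"behaviour": [2, 1], "health": [1, 2],
                      "welfare": [2, 2], "human-animal": [1, 1]},
    "Asian Palm Civet": {"behaviour": [2, 2], "health": [2, 1],
                         "welfare": [2, 3], "human-animal": [2, 2]},
    "Brown Bear": {"behaviour": [5, 5], "health": [3, 4],
                   "welfare": [5, 5], "human-animal": [5, 5]},
}

def aggregate(species_scores):
    """Mean of per-criterion means: one suitability number per species."""
    return mean(mean(v) for v in species_scores.values())

ranking = sorted(scores, key=lambda sp: aggregate(scores[sp]))
for rank, sp in enumerate(ranking, start=1):
    print(rank, sp, round(aggregate(scores[sp]), 2))
```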
Dr. Koene explained: "A team is now completing the full list, analyzing the other 270 mammals. They are also looking at how to determine the suitability of birds and reptiles in future.
"So, the impact of the study is that there is a framework and shared database that could be further developed in a more widely used context, for instance across the EU, the US or even worldwide."
However, Dr. Koene does not envisage that Agile Wallabies will replace dogs and cats in man's affections anytime soon.
"Dogs and cats are a special kind of pets, because of their way of housing (free roaming), of variation in breeds, the vast amount of literature and of the delicacy of the subject and so were not analyzed, and wallabies will certainly not replace them."
###
|
10.3389/fvets.2016.00035
| 2,016 |
Frontiers in Veterinary Science
|
Behavioral Ecology of Captive Species: Using Bibliographic Information to Assess Pet Suitability of Mammal Species
|
Which mammal species are suitable to be kept as pet? For answering this question many factors have to be considered. Animals have many adaptations to their natural environment in which they have evolved that may cause adaptation problems and/or risks in captivity. Problems may be visible in behavior, welfare, health, and/or human-animal interaction, resulting, for example, in stereotypies, disease, and fear. A framework is developed in which bibliographic information of mammal species from the wild and captive environment is collected and assessed by three teams of animal scientists. Oneliners from literature about behavioral ecology, health, and welfare and human-animal relationship of 90 mammal species are collected by team 1 in a database and strength of behavioral needs and risks is assessed by team 2. Based on summaries of those strengths the suitability of the mammal species is assessed by team 3. Involvement of stakeholders for supplying bibliographic information and assessments was propagated. Combining the individual and subjective assessments of the scientists using statistical methods makes the final assessment of a rank order of suitability as pet of those species less biased and more objective. The framework is dynamic and produces an initial rank ordered list of the pet suitability of 90 mammal species, methods to add new mammal species to the list or remove animals from the list and a method to incorporate stakeholder assessments. A model is developed that allows for provisional classification of pet suitability. Periodical update of the pet suitability framework is expected to produce an updated list with increased reliability and accuracy. Furthermore, the framework could be further developed to assess the pet suitability of additional species of other animal groups, e.g., birds, reptiles, and amphibians.
|
966114
|
Ancient 'shark' from China is humans’ oldest jawed ancestor
|
Living sharks are often portrayed as the apex predators of the marine realm. Paleontologists have been able to identify fossils of their extinct ancestors that date back hundreds of millions of years to a time known as the Palaeozoic era. These early "sharks," known as acanthodians, bristled with spines. In contrast to modern sharks, they developed bony "armor" around their paired fins.
A recent discovery of a new species of acanthodian from China surprised scientists with its antiquity. The find predates by about 15 million years the earliest acanthodian body fossils and is the oldest undisputed jawed fish.
These findings were published in Nature on Sept. 28.
Reconstructed from thousands of tiny skeletal fragments, Fanjingshania, named after the famous UNESCO World Heritage Site Fanjingshan, is a bizarre fish with an external bony "armor" and multiple pairs of fin spines that set it apart from living jawed fish, cartilaginous sharks and rays, and bony ray- and lobe-finned fish.
Examination of Fanjingshania by a team of researchers from the Chinese Academy of Sciences, Qujing Normal University, and the University of Birmingham revealed that the species is anatomically close to groups of extinct spiny "sharks" collectively known as acanthodians. Unlike modern sharks, acanthodians have skin ossifications of the shoulder region that occur primitively in jawed fish.
The fossil remains of Fanjingshania were recovered from bone bed samples of the Rongxi Formation at a site in Shiqian County of Guizhou Province, South China.
These findings present tangible evidence of a diversification of major vertebrate groups tens of millions of years before the beginning of the so-called "Age of Fishes" some 420 million years ago.
The researchers identified features that set apart Fanjingshania from any known vertebrate. It has dermal shoulder girdle plates that fuse as a unit to a number of spines—pectoral, prepectoral and prepelvic. Additionally, it was discovered that the ventral and lateral portions of the shoulder plates extend to the posterior edge of the pectoral fin spines. The species has distinct trunk scales with crowns composed of a row of tooth-like elements (odontodes) adorned by discontinuous nodose ridges. Peculiarly, dentine development is recorded in the scales but is missing in other components of the dermal skeleton such as the fin spines.
"This is the oldest jawed fish with known anatomy," said Prof. ZHU Min from the Institute of Vertebrate Paleontology and Paleoanthropology (IVPP) of the Chinese Academy of Sciences. "The new data allowed us to place Fanjingshania in the phylogenetic tree of early vertebrates and gain much needed information about the evolutionary steps leading to the origin of important vertebrate adaptations such as jaws, sensory systems, and paired appendages."
From the outset, it was clear to the scientists that Fanjingshania's shoulder girdle, with its array of fin spines, is key to pinpointing the new species' position in the evolutionary tree of early vertebrates. They found that a group of acanthodians, known as climatiids, possess the full complement of shoulder spines recognized in Fanjingshania. What is more, in contrast to normal dermal plate development, the pectoral ossifications of Fanjingshania and the climatiids are fused to modified trunk scales. This is seen as a specialization from the primitive condition of jawed vertebrates where the bony plates grow from a single ossification center.
Unexpectedly, the fossil bones of Fanjingshania show evidence of extensive resorption and remodelling that are typically associated with skeletal development in bony fish, including humans.
"This level of hard tissue modification is unprecedented in chondrichthyans, a group that includes modern cartilaginous fish and their extinct ancestors," said lead author Dr. Plamen Andreev, a researcher at Qujing Normal University. "It speaks about greater than currently understood developmental plasticity of the mineralized skeleton at the onset of jawed fish diversification."
The resorption features of Fanjingshania are most apparent in isolated trunk scales that show evidence of tooth-like shedding of crown elements and removal of dermal bone from the scale base. Thin-sectioned specimens and tomography slices show that this resorptive stage was followed by deposition of replacement crown elements. Surprisingly, the closest examples of this skeletal remodelling are found in the dentition and skin teeth (denticles) of extinct and living bony fish. In Fanjingshania, however, the resorption did not target individual teeth or denticles, as occurred in bony fish, but instead removed an area that included multiple scale denticles. This peculiar replacement mechanism more closely resembles skeletal repair than the typical tooth/denticle substitution of jawed vertebrates.
A phylogenetic hypothesis for Fanjingshania that uses a numeric matrix derived from observable characters confirmed the researchers’ initial hypothesis that the species represents an early evolutionary branch of primitive chondrichthyans. These results have profound implications for our understanding of when jawed fish originated since they align with morphological clock estimates for the age of the common ancestor of cartilaginous and bony fish, dating it to around 455 million years ago, during a period known as the Ordovician.
These results tell us that the absence of undisputed remains of jawed fish of Ordovician age might be explained by undersampling of sediment sequences of comparable antiquity. They also point towards a strong preservation bias against teeth, jaws, and articulated vertebrate fossils in strata coeval with Fanjingshania.
"The new discovery puts into question existing models of vertebrate evolution by significantly condensing the timeframe for the emergence of jawed fish from their closest jawless ancestors. This will have profound impact on how we assess evolutionary rates in early vertebrates and the relationship between morphological and molecular change in these groups," said Dr. Ivan J. Sansom from the University of Birmingham.
Nature
10.1038/s41586-022-05233-8
Spiny chondrichthyan from the lower Silurian of South China
28-Sep-2022
|
10.1038/s41586-022-05233-8
| 2,022 |
Nature
|
Spiny chondrichthyan from the lower Silurian of South China
|
Modern representatives of chondrichthyans (cartilaginous fishes) and osteichthyans (bony fishes and tetrapods) have contrasting skeletal anatomies and developmental trajectories1-4 that underscore the distant evolutionary split5-7 of the two clades. Recent work on upper Silurian and Devonian jawed vertebrates7-10 has revealed similar skeletal conditions that blur the conventional distinctions between osteichthyans, chondrichthyans and their jawed gnathostome ancestors. Here we describe the remains (dermal plates, scales and fin spines) of a chondrichthyan, Fanjingshania renovata gen. et sp. nov., from the lower Silurian of China that pre-date the earliest articulated fossils of jawed vertebrates10-12. Fanjingshania possesses dermal shoulder girdle plates and a complement of fin spines that have a striking anatomical similarity to those recorded in a subset of stem chondrichthyans5,7,13 (climatiid 'acanthodians'14). Uniquely among chondrichthyans, however, it demonstrates osteichthyan-like resorptive shedding of scale odontodes (dermal teeth) and an absence of odontogenic tissues in its spines. Our results identify independent acquisition of these conditions in the chondrichthyan stem group, adding Fanjingshania to an increasing number of taxa7,15 nested within conventionally defined acanthodians16. The discovery of Fanjingshania provides the strongest support yet for a proposed7 early Silurian radiation of jawed vertebrates before their widespread appearance5 in the fossil record in the Lower Devonian series.
|
559199
|
Cover crops in nitrogen's circle of life
|
A circle of life - and nitrogen - is playing out on farms across the United States. And researchers are trying to get the timing right.
Some cover crops, such as hairy vetch or cereal rye, are not grown to be eaten. Instead, they capture nutrients, including nitrogen, from previous crops, the air, and the soil. When cover crops decompose, these nutrients are released. Cash crops, such as corn or soybean, planted afterward can use these nutrients to grow and thrive.
But cash crops need different amounts of nutrients at different stages of growth. A new study assesses how quickly nutrients are released from two different cover crops. The goal, according to study co-author Rachel Cook, is to time nutrient release from cover crops to better match the nutrient needs of specific cash crops.
"It's like trying to time a meal to come out of the oven exactly when all the hungry dinner guests arrive," says Cook, currently a researcher at North Carolina State University.
The researchers focused on nitrogen because it "is typically the most limiting nutrient in crop production, but has the most potential for environmental impact from losses." The two cover crops, hairy vetch and cereal rye, are two of the most commonly planted cover crops in the Midwest.
They found that hairy vetch and cereal rye had significantly different nitrogen release dynamics.
"We now better understand the rate and quantity of nitrogen release from two of the more popular cover crops currently in use," says Cook. "This information can help farmers estimate how much nitrogen they might expect to get from their cover crop and when it will be available."
The study showed that hairy vetch released more nitrogen overall compared to cereal rye. Nitrogen release was also quicker from hairy vetch plants whose growth had been halted.
"Hairy vetch releases almost all available nitrogen in the first four weeks after it's terminated," says Cook. That's before the major time of nitrogen uptake by corn, which is around week eight after planting. "So, terminating hairy vetch too early could cause losses of nitrogen before the corn crop can get to it."
Cereal rye, on the other hand, released nitrogen slowly over multiple weeks. "This would be beneficial before a cash crop with low nitrogen needs," says Cook.
The study was carried out in field test sites at the Agricultural Research Center at Carbondale, Illinois. Study plots were planted with either cereal rye or hairy vetch. After terminating the cover crops with herbicide, researchers planted soybean or corn, respectively.
The researchers measured the growth of the two cover crops, how quickly they decomposed once terminated, and the ensuing quantity and rate of nitrogen they released.
Overall, hairy vetch plants released almost three times as much nitrogen compared to cereal rye plants. More than 70% of the total nitrogen released by hairy vetch occurred within the first two weeks after termination. In contrast, nitrogen release from cereal rye occurred later, with almost no net nitrogen release in the first four weeks after termination.
Cook hopes that more information on how different cover crops release nutrients will help farmers make more informed decisions. "They will be able to choose which cover crop works best for their farm and the specific cash crops they are planting," she says. "They will also know when to terminate the cover crop prior to planting the cash crop."
Cover crops also do more than release nutrients after they are terminated. They can help manage soil quality and erosion, for example.
"Long-term studies with cover crops will be really important," says Cook. "These studies can help us understand how cover crops can improve soil properties over time and how that might improve cash crop yields."
###
Read more about Cook's research in Soil Science Society of America Journal. The study was carried out by Taylor Sievers. It was funded by the Illinois Nutrient Research and Education Council.
Soil Science Society of America
|
10.2136/sssaj2017.05.0139
| 2,018 |
Soil Science Society of America Journal
|
Aboveground and Root Decomposition of Cereal Rye and Hairy Vetch Cover Crops
|
Core Ideas: Hairy vetch decomposed faster than cereal rye. Hairy vetch released N within two weeks of termination. Biomass belowground decomposed more quickly than aboveground. Cover crop N release occurred earlier than maximum crop uptake due to late planting. Synchronizing cover crop decomposition and nutrient release with cash crop uptake can provide benefits to agroecosystems but can be difficult to implement. The objectives of this study were to quantify the aboveground and belowground decomposition and nutrient release of two cover crops, hairy vetch (Vicia villosa Roth) and cereal rye (Secale cereale L.), after termination with herbicides through a 16-wk period during the cash crop growing season using litterbags and intact root cores. Plant Root Simulator probes monitored mineral N in the soil. Hairy vetch aboveground (k = 0.4505) and root (k = 0.6821) biomass decomposed at a faster rate than aboveground (k = 0.1368) and root (k = 0.1866) biomass of cereal rye. Hairy vetch had higher initial N content in aboveground (41.9 g kg⁻¹) and root (16.5 g kg⁻¹) biomass than cereal rye (11.5 and 8.3 g kg⁻¹, respectively). Hairy vetch had a lower C to N ratio than cereal rye in both aboveground (9.52 vs. 34.72) and root biomass (17.31 vs. 40.31), contributing to decomposition differences. Hairy vetch rapidly decomposed after cover crop termination in the spring; therefore, growers should consider delaying termination of this cover crop until close to cash crop planting to decrease the risk of N loss. Cereal rye residues decompose much slower and may also immobilize N because of their high C to N ratio. A better understanding of how aboveground and belowground cover crop characteristics influence decomposition will help to optimize cover crop nutrient release with cash crop uptake.
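The decay constants quoted above come from the single-pool exponential model routinely fitted to litterbag data, X(t)/X0 = exp(-k·t). A minimal sketch of what those k values imply, assuming the time unit is weeks (consistent with the 16-wk study period, but an assumption on my part):

```python
# Minimal sketch of the single-pool exponential decay model, X(t)/X0 = exp(-k*t),
# using the decay constants reported in the abstract; treating k as "per week" is
# an assumption, not something stated explicitly in this text.
import math

k_per_week = {
    "hairy vetch, aboveground": 0.4505,
    "hairy vetch, root":        0.6821,
    "cereal rye, aboveground":  0.1368,
    "cereal rye, root":         0.1866,
}

def fraction_lost(k, weeks):
    """Fraction of the initial biomass decomposed after `weeks`."""
    return 1.0 - math.exp(-k * weeks)

for pool, k in k_per_week.items():
    print(f"{pool}: {fraction_lost(k, 2):.0%} lost by week 2, "
          f"{fraction_lost(k, 16):.0%} by week 16")
```

Under these assumptions the vetch pools lose most of their mass within a few weeks while the rye pools linger, matching the contrast in nitrogen release described above.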
|
939637
|
Growing liver from spleen
|
A team from Nanjing University and University of Macau has transformed the spleen to perform liver functions in mice, without transplanting cells or tissue from another body. The research is published online on 7 Jan in Gut, the official journal of the British Society of Gastroenterology.
At least two million people die each year due to liver diseases. For many patients, liver transplantation is their last hope, but the dearth of donor organs is a critical challenge worldwide. These researchers demonstrate that the abundant cells in another organ – the spleen – can take over the roles of the liver tissue, after a few steps of “transformation” (Figure 1). First, they injected silica particles into the mouse spleen, which stimulated the growth of a specific group of cells called fibroblasts. Then, they used gene vectors to overexpress three genes, namely Foxa3, Gata4 and Hnf1a, which converted about 2 × 10⁶ fibroblasts into hepatocytes (iHeps) in one spleen. Next, they increased the amount of three cytokines, namely TNF-α, EGF and HGF, which further expanded the iHeps fourfold (Figure 2). The spleen-transformed liver tissue exerted sufficient function to save the lives of mice in which 90% of the liver had been surgically removed (Figure 3).
For patients with end-stage liver diseases, the liver tissue is severely damaged. It is therefore difficult to restore liver function at the original site, and a more promising option is to grow liver cells somewhere else in the body (“ectopic regeneration”). But where to grow a “new” liver is an unsolved question. Some groups have proposed growing liver cells in lymph nodes, which is interesting but of little clinical significance: because the liver is a large organ and needs a huge number of cells to perform even its most fundamental functions, anything grown at the size of a lymph node is less than a drop in the ocean. To address this challenge, for the first time, this team started their serial work by considering “transforming” an existing, large organ to perform the functions of the liver. One lead author of the paper, Professor Lei Dong of Nanjing University, is optimistic about the clinical translation of this technology. “Our method is direct, efficient, and different from all existing ones in that it does not involve cells or tissue from the outside,” he said, “so it avoids many safety issues and immune rejection. You can imagine that it directly works on the patient’s own cells.” The team has already started to test the technology in pigs and monkeys.
Full-text link of the paper: https://gut.bmj.com/content/early/2022/01/06/gutjnl-2021-325018; Doi: 10.1136/gutjnl-2021-325018
Gut
10.1136/gutjnl-2021-325018
Experimental study
Animals
Reprogramming the spleen into a functioning ‘liver’ in vivo
7-Jan-2022
|
10.1136/gutjnl-2021-325018
| 2,022 |
Gut
|
Reprogramming the spleen into a functioning ‘liver’ in vivo
|
Objective: Liver regeneration remains one of the biggest clinical challenges. Here, we aim to transform the spleen into a liver-like organ via directly reprogramming the splenic fibroblasts into hepatocytes in vivo. Design: In the mouse spleen, the number of fibroblasts was increased through silica particle (SiO₂) stimulation, the expanded fibroblasts were converted to hepatocytes (iHeps) by lentiviral transfection of three key transcriptional factors (Foxa3, Gata4 and Hnf1a), and the iHeps were further expanded with tumour necrosis factor-α (TNF-α) and lentivirus-mediated expression of epidermal growth factor (EGF) and hepatocyte growth factor (HGF). Results: SiO₂ stimulation tripled the number of activated fibroblasts. Foxa3, Gata4 and Hnf1a converted SiO₂-remodelled spleen fibroblasts into 2×10⁶ functional iHeps in one spleen. TNF-α protein and lentivirus-mediated expression of EGF and HGF further enabled the total hepatocytes to expand to 8×10⁶ per spleen. iHeps possessed hepatic functions—such as glycogen storage, lipid accumulation and drug metabolism—and performed fundamental liver functions to improve the survival rate of mice with 90% hepatectomy. Conclusion: Direct conversion of the spleen into a liver-like organ, without cell or tissue transplantation, establishes fundamental hepatic functions in mice, suggesting its potential value for the treatment of end-stage liver diseases.
|
517033
|
Sussex research reveals brain mechanism involved in language learning
|
Learning a new language may be more of a science than an art, a University of Sussex study finds.
Psychologists found that when we learn the names of unfamiliar objects, brain regions involved in learning actively predict the objects the names correspond to. The brain tests these predictions just as scientists would test a scientific theory.
The team found that the hippocampus - a brain region that is affected in Alzheimer's disease and some developmental language disorders - plays a key role in learning the names of objects via a "propose-but-verify" strategy. Using this strategy, learners actively predict which of the words they hear correspond to each of the objects they see.
Twenty-three adults looked at scenes with multiple objects whilst listening to words in an MRI scanner. The MRI scanner allows psychologists to see which brain regions are active while participants carry out tests of memory and attention. The words and objects were made-up so that they were completely new to the study participants. Because multiple unknown words and objects were presented simultaneously, it was never immediately obvious which words corresponded to which object. The correspondences could only be learned across several minutes. However, by covertly proposing name-object correspondences and testing those proposals across many scenes, all of the adults were able to learn the words for all 18 previously unfamiliar objects. The MRI scans revealed that the hippocampus was central to this propose-but-verify mechanism. Specifically, it helped adults remember the word-object correspondences over time.
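For readers who want the propose-but-verify strategy in concrete terms, here is a minimal sketch, assuming toy scenes supplied as (word, objects-present) pairs; it illustrates the learning rule the study supports, not the authors' actual stimulus or analysis code.

```python
# Minimal sketch of a propose-but-verify learner: one hypothesised referent is
# stored per word, kept if it reappears with the word and replaced otherwise.
# The scenes below are invented for illustration.
import random

def propose_but_verify(trials, seed=0):
    """trials: list of (word, set_of_objects_present). Returns word -> guessed object."""
    rng = random.Random(seed)
    hypothesis = {}                          # a single stored referent per word
    for word, objects in trials:
        guess = hypothesis.get(word)
        if guess is None or guess not in objects:
            # no hypothesis yet, or the old one failed verification: propose anew
            hypothesis[word] = rng.choice(sorted(objects))
        # if the stored guess is present in the scene, it is verified and kept
    return hypothesis

# Across these scenes only "ball" consistently co-occurs with the word "dax";
# only a "ball" hypothesis can survive repeated verification, wrong guesses keep
# being replaced.
trials = [("dax", {"ball", "cup"}), ("dax", {"ball", "shoe"}), ("dax", {"ball", "hat"}),
          ("dax", {"ball", "cup"}), ("dax", {"ball", "shoe"}), ("dax", {"ball", "hat"})]
print(propose_but_verify(trials))
```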
The findings, published today, Thursday 15 March 2018, in the journal Current Biology, shed light on how the brain supports language acquisition. The findings have implications for both language education and our understanding of what is happening in language disorders.
Lead researcher Dr Sam Berens said: "Children have a remarkable ability to learn new languages and it is hotly debated whether they use a propose-but-verify strategy during early language development. Our experiment shows that the hippocampus can support propose-but-verify learning in adults and that this learning mechanism is favoured over other strategies."
"A logical next step would be to apply the research techniques that we used in this study to investigate language impairments in children and adults. Children are able to learn new languages effortlessly, but it is still unclear if they learn words in the same way as adults. This research technique would give us more insight into studying language development and will leave us better equipped to help those with developmental language difficulties."
Senior researcher Dr Chris Bird, who oversaw the research project, is continuing to investigate the ways in which language learning is affected in Alzheimer's disease and whether some learning strategies are less affected by the condition than others. This programme of research is funded by the European Research Council.
###
'Cross-situational learning is supported by propose-but-verify hypothesis testing', by Sam C Berens, Jessica S Horst and Chris M Bird, is published in the journal Current Biology. The work is funded by the European Research Council and the Economic and Social Research Council.
|
10.1016/j.cub.2018.02.042
| 2,018 |
Current Biology
|
Cross-Situational Learning Is Supported by Propose-but-Verify Hypothesis Testing
|
When we encounter a new word, there are often multiple objects that the word might refer to [1]. Nonetheless, because names for concrete nouns are constant, we are able to learn them across successive encounters [2, 3]. This form of "cross-situational" learning may result from either associative mechanisms that gradually accumulate evidence for each word-object association [4, 5] or rapid propose-but-verify (PbV) mechanisms where only one hypothesized referent is stored for each word, which is either subsequently verified or rejected [6, 7]. Using model-based representation similarity analyses of fMRI data acquired during learning, we find evidence for learning mediated by a PbV mechanism. This learning may be underpinned by rapid pattern-separation processes in the hippocampus. Our findings shed light on the psychological and neural processes that support word learning, suggesting that adults rely on their episodic memory to track a limited number of word-object associations.
|
694295
|
Does insulin resistance cause fibromyalgia?
|
GALVESTON, Texas - Researchers led by a team from The University of Texas Medical Branch at Galveston were able to dramatically reduce the pain of fibromyalgia patients with medication that targeted insulin resistance.
This discovery could dramatically alter the way that chronic pain is identified and managed. Dr. Miguel Pappolla, UTMB professor of neurology, said that although the discovery is very preliminary, it may lead to a revolutionary shift in how fibromyalgia and related forms of chronic pain are treated. The new approach has the potential to save the health care system billions of dollars and decrease many people's dependence on opiates for pain management.
The UTMB team of researchers, along with collaborators from across the U.S., including the National Institutes of Health, were able for the first time to separate patients with fibromyalgia from normal individuals using a common blood test for insulin resistance, or pre-diabetes. They then treated the fibromyalgia patients with a medication targeting insulin resistance, which dramatically reduced their pain levels. The study was recently published in PLoS ONE.
Fibromyalgia is one of the most common conditions causing chronic pain and disability. The global economic impact of fibromyalgia is enormous - in the U.S. alone, related health care costs are about $100 billion each year. Despite extensive research, the cause of fibromyalgia is unknown, so there are no specific diagnostics or therapies for this condition other than pain-reducing drugs.
"Earlier studies discovered that insulin resistance causes dysfunction within the brain's small blood vessels. Since this issue is also present in fibromyalgia, we investigated whether insulin resistance is the missing link in this disorder," Pappolla said. "We showed that most - if not all - patients with fibromyalgia can be identified by their A1c levels, which reflects average blood sugar levels over the past two to three months."
"Pre-diabetics with slightly elevated A1c values carry a higher risk of developing central (brain) pain, a hallmark of fibromyalgia and other chronic pain disorders."
The researchers identified patients who were referred to a subspecialty pain medicine clinic to be treated for widespread muscular/connective tissue pain. All patients who met the criteria for fibromyalgia were separated into smaller groups by age. When compared with age-matched controls, the A1c levels of the fibromyalgia patients were significantly higher.
"Considering the extensive research on fibromyalgia, we were puzzled that prior studies had overlooked this simple connection," said Pappolla. "The main reason for this oversight is that about half of fibromyalgia patients have A1c values currently considered within the normal range. However, this is the first study to analyze these levels normalized for the person's age, as optimal A1c levels do vary throughout life. Adjustment for the patients' age was critical in highlighting the differences between patients and control subjects."
For the fibromyalgia patients, metformin, a drug developed to combat insulin resistance, was added to their current medications. They showed dramatic reductions in their pain levels.
|
10.1371/journal.pone.0216079
| 2,019 |
PLoS ONE
|
Is insulin resistance the cause of fibromyalgia? A preliminary report
|
Fibromyalgia (FM) is one of the most frequent generalized pain disorders with poorly understood neurobiological mechanisms. This condition accounts for an enormous proportion of healthcare costs. Despite extensive research, the etiology of FM is unknown and thus, there is no disease modifying therapy available for this condition. We show that most (if not all) patients with FM belong to a distinct population that can be segregated from a control group by their glycated hemoglobin A1c (HbA1c) levels, a surrogate marker of insulin resistance (IR). This was demonstrated by analyzing the data after introducing an age stratification correction into a linear regression model. This strategy showed highly significant differences between FM patients and control subjects (p < 0.0001 and p = 0.0002, for two separate control populations, respectively). A subgroup of patients meeting criteria for pre-diabetes or diabetes (patients with HbA1c values of 5.7% or greater) who had undergone treatment with metformin showed dramatic improvements of their widespread myofascial pain, as shown by their scores using a pre and post-treatment numerical pain rating scale (NPRS) for evaluation. Although preliminary, these findings suggest a pathogenetic relationship between FM and IR, which may lead to a radical paradigm shift in the management of this disorder.
|
936387
|
Reshaping the plastic lifecycle into a circle
|
In 1950, 2 million metric tonnes of new plastic was produced globally. In 2018, the world produced 360 million metric tonnes of plastics. Because of their low cost, durability and versatility, plastics are everywhere–including in the environment–and only 9 percent of the plastic ever generated has been recycled. The vast majority ends up in landfills, where its slow degradation allows it to accumulate, while pervasive microplastics have been found everywhere, from inside living bodies to the bottom of the ocean.
“At our current rate of plastic waste generation, increasing waste management capacity will not be sufficient to reach plastic pollution goals alone,” said Vikas Khanna, associate professor of civil and environmental engineering at the University of Pittsburgh Swanson School of Engineering. “There is an urgent need to take actions like limiting global virgin plastic production from fossil fuels and designing products and packaging for recyclability.”
New research led by Khanna gives a bird’s-eye view of the scale of plastic creation globally, tracing where it’s produced, where it ends up, and its environmental impact.
The researchers found the greenhouse gas emissions associated with plastic production staggering: 170 million metric tonnes of primary plastics were traded globally in 2018, with associated greenhouse gas emissions accounting for 350 million metric tonnes of CO2 equivalent–about the same amount produced in a year by nations like Italy and France.
“And if anything, our estimation is on the lower end. Converting primary plastic resins into end use products will result in additional greenhouse gases and other emissions,” warned Khanna.
The work was recently published in the journal ACS Sustainable Chemistry & Engineering.
“We know plastics are a problem, and we know keeping materials in a circular economy instead of the take-make-waste model we’re used to is a great solution,” said Khanna. “But if we don’t have an understanding of the current state of the system, then it’s hard to put numbers to it and understand the scale. We wanted to understand how plastics are mobilized across geographical boundaries.”
Since international trade plays such a critical role in making material goods available, including plastics, the researchers applied network theory to data from the UN Comtrade Database to understand the role of individual countries, trade relationships between countries, and structural characteristics that governed these interactions. The global primary plastic trade network (GPPTN) that they created designated each country as a “node” in the network and a trade relationship between two countries as an “edge,” allowing them to determine the critical actors (countries) and who is making the biggest impact.
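A minimal sketch of that node-and-edge construction, using the networkx library on a few hypothetical trade flows (the study's actual inputs are UN Comtrade records for 11 resins), shows how export-weighted node strength can be used to flag influential countries:

```python
# Minimal sketch of building a directed, weighted trade network and ranking
# exporters; the flows below are invented for illustration, not Comtrade data.
import networkx as nx

flows = [("Saudi Arabia", "China", 12.0), ("USA", "China", 9.5),
         ("South Korea", "China", 7.0), ("Germany", "Italy", 4.2),
         ("Belgium", "Germany", 3.1)]          # million tonnes, illustrative only

G = nx.DiGraph()
for exporter, importer, tonnes in flows:
    G.add_edge(exporter, importer, weight=tonnes)   # countries = nodes, trades = edges

# Rank countries by weighted out-strength (total export volume)
out_strength = dict(G.out_degree(weight="weight"))
print(sorted(out_strength.items(), key=lambda kv: -kv[1]))
```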
The researchers examined 11 primary thermoplastic resins that make up the majority of plastic products. They found that a majority of the most influential nodes in the model are exporting more plastics than they import: Saudi Arabia is the leading exporter, followed by the U.S., South Korea, Germany and Belgium. The top five importers of primary plastic resins are China, Germany, the U.S., Italy and India.
In addition to the greenhouse gas emissions, the energy expended in the GPPTN is estimated to be the equivalent of 1.5 trillion barrels of crude oil, 230 billion cubic meters of natural gas, or 407 million metric tonnes of coal. The carbon embedded in the model is estimated to be the carbon equivalent of 118 million metric tonnes of natural gas or 109 million metric tonnes of petroleum.
“The results are particularly important and timely, especially in light of the recent discussions during Conference of the Parties (COP26) in Glasgow and the importance of understanding where emissions are coming from in key sectors,” said co-author Melissa Bilec, Co-director of Mascaro Center for Sustainable Innovation and William Kepler Whiteford Professor of Civil and Environmental Engineering. “The collaboration with Dr. Khanna and his lab allows us to learn new systems-level modeling techniques as we converge towards understanding solutions to our complex challenges.”
This paper, “Quantifying Energy and Greenhouse Gas Emissions Embodied in Global Primary Plastic Trade Network,” (DOI: 10.1021/acssuschemeng.1c05236) is supported by the NSF convergence research project on the circular economy, which is led by Bilec.
Using more recycled plastics instead of creating new resins that eventually make their way to landfills would be substantially better for the environment; however, financial and behavioral barriers both need to be addressed before a true circular economy for plastics can become a reality.
“Even though emerging chemical recycling techniques promise to recover more material in an economically and environmentally sound way, we need to make it so that using recycled materials is as cost-effective as using virgin plastic resins,” said Khanna. “Our next step is to understand the interaction between the GPPTN and the plastic waste trade network to identify the opportunities where investment could encourage a circular plastics economy.”
ACS Sustainable Chemistry & Engineering
10.1021/acssuschemeng.1c05236
Quantifying Energy and Greenhouse Gas Emissions Embodied in Global Primary Plastic Trade Network
28-Oct-2021
|
10.1021/acssuschemeng.1c05236
| 2,021 |
ACS Sustainable Chemistry & Engineering
|
Quantifying Energy and Greenhouse Gas Emissions Embodied in Global Primary Plastic Trade Network
|
We present a model of the global primary plastic trade network (GPPTN) and report estimates of embodied impacts including greenhouse gas (GHG) emissions, cumulative fossil energy demand, and embedded carbon. The network is constructed for 11 thermoplastic resins that account for the majority of global primary plastic trade. A total of 170 million metric tonnes (Mt) of primary plastics were traded in 2018, responsible for 350 Mt of embodied GHG emissions, 8.9 exajoules (EJ) of cumulative fossil energy demand and 95 Mt of embedded carbon. In 2018, embodied GHG emissions for GPPTN were comparable to annual carbon dioxide emissions of developed nations like Italy and France. The cumulative fossil energy demand of GPPTN was equivalent to 1.5 trillion barrels of crude oil and the carbon embedded in GPPTN was equivalent to carbon in 118 Mt of natural gas or 109 Mt of petroleum. Statistical inference and network measures provide evidence that a few key trade relationships account for a majority of plastic flows and subsequent embodied impacts through the network. The significant embodied impacts and materials in GPPTN must be considered going forward as policies are developed to improve the circularity and environmental sustainability of the plastics industry.
|
651087
|
Football strengthens the bones of men with prostate cancer
|
Men with prostate cancer run the risk of brittle bones as a side-effect of their treatment. But one hour's football training a few times a week counters many of the negative effects of the treatment, according to University of Copenhagen scientists.
Football training is not just good for the heart and the muscles. Running around the pitch, jumping, accelerating, braking and kicking the ball also strengthen the bones.
Even older men being treated for prostate cancer get stronger bones from playing football, according to two new articles published in Osteoporosis International and European Journal of Applied Physiology. The articles were part of a recently defended PhD thesis by Jacob Uth, a physiotherapist at the University Hospitals Centre for Health Research (UCSF) at Copenhagen University in Denmark.
This is remarkable, because men with prostate cancer normally have weaker bones as a consequence of the disease and especially because of the anti-hormone treatment given to patients to lower the level of testosterone in the body.
One side-effect of this treatment is that the bones become decalcified, so the men have an increased risk of osteoporosis, just like women going through the menopause.
"Football training counters many side-effects of the treatment. It is impressive to see such big improvements in both muscular strength and bone density, despite the anti-androgen treatment," says Peter Krustrup, who is Jacob Uth's supervisor and Professor of Team Sport and Health in the Department of Nutrition, Exercise and Sports at Copenhagen University.
"Our so-called FC Prostate study showed that just 12 weeks of football training increased leg bone mass and elevated the blood-borne bone formation markers osteocalcin and P1NP by 35 and 50%, respectively. After 32 weeks of training we observed a systematic 1-2% increase in bone mineral density at the hip and upper part of the thigh bone in the football players compared to the control group, equivalent to bones 2-4 years younger, specifies Professor Krustrup.
Acceleration and braking make football effective
During the training, the players' movements were tracked precisely with GPS. The measurements show that the players' average speed was relatively low, but they performed 300 decelerations, 200 accelerations and 100 running bouts per hour of football training. This is believed to be the reason why football is better for the bones than jumping on and off a step bench, for example.
"The changes in bone mass in the legs of the football group show a significant correlation with the number of times they accelerate and brake. This gives an indication that the effect is linked to the specific activity that we see in football, where there is interval running with a lot of accelerating and braking which place great stress on the bone tissue, and that is what makes them stronger," says Uth and continues.
"The more the bones are affected from different angles during exercise, the more complete the stimulation. When you change direction, kick and block the ball, and when you are challenged by an opponent as you are in football, there is a wide range of powerful stimuli to the bone tissue," he explains.
About the FC Prostate study
In all, 57 men aged between 43 and 76, with an average age of 67, took part. They were receiving treatment for prostate cancer. After drawing lots, the participants were divided into a football training group and a control group.
The football group trained 2-3 times a week for 32 weeks, 45 to 60 minutes at a time. Before starting and after 12 and 32 weeks' training, both groups were tested with functional tests, blood sampling and DXA scanning.
Although it is now two years since the FC Prostate trial finished, many of the men are still playing football. They meet twice a week in the Copenhagen football club Østerbro IF organised under the Danish Football Association (DBU).
|
10.1007/s00198-015-3399-0
| 2,015 |
Osteoporosis International
|
Efficacy of recreational football on bone health, body composition, and physical functioning in men with prostate cancer undergoing androgen deprivation therapy: 32-week follow-up of the FC prostate randomised controlled trial
|
Androgen deprivation therapy (ADT) for prostate cancer (PCa) impairs musculoskeletal health. We evaluated the efficacy of 32-week football training on bone mineral density (BMD) and physical functioning in men undergoing ADT for PCa. Football training improved the femoral shaft and total hip BMD and physical functioning parameters compared to control. ADT is a mainstay in PCa management. Side effects include decreased bone and muscle strength and increased fracture rates. The purpose of the present study was to evaluate the effects of 32 weeks of football training on BMD, bone turnover markers (BTMs), body composition, and physical functioning in men with PCa undergoing ADT. Men receiving ADT >6 months (n = 57) were randomly allocated to a football training group (FTG) (n = 29) practising 2-3 times per week for 45-60 min or to a standard care control group (CON) (n = 28) for 32 weeks. Outcomes were total hip, femoral shaft, femoral neck and lumbar spine (L2-L4) BMD and systemic BTMs (procollagen type 1 amino-terminal propeptide, osteocalcin, C-terminal telopeptide of type 1 collagen). Additionally, physical functioning (postural balance, jump height, repeated chair rise, stair climbing) was evaluated. Thirty-two-week follow-up measures were obtained for FTG (n = 21) and for CON (n = 20), respectively. Analysis of mean changes from baseline to 32 weeks showed significant differences between FTG and CON in right (0.015 g/cm(2)) and left (0.017 g/cm(2)) total hip and in right (0.018 g/cm(2)) and left (0.024 g/cm(2)) femoral shaft BMD, jump height (1.7 cm) and stair climbing (-0.21 s) all in favour of FTG (p < 0.05). No other significant between-group differences were observed. Compared to standard care, 32 weeks of football training improved BMD at clinically important femoral sites and parameters of physical functioning in men undergoing ADT for PCa.
|
602501
|
Supercontinuum lasers can lead to better bread and beer
|
Technologically, the supercontinuum laser has undergone extensive development since the turn of the century, driven by advances in the photonic crystal fibres on which the laser is based. The project Light & Food (see the facts below) investigates, among other things, how to use this super-powerful laser to analyse food.
"The supercontinuum laser has made it possible to measure very small objects rapidly and with high energy. A supercontinuum instrument can therefore potentially be used to measure whole grains and thus find grains with, for example, fungal or insect attacks, or to sort grains by baking, health or quality parameters," says Tine Ringsted, a postdoc at the Department of Food Science at the University of Copenhagen.
By measuring each grain you can more accurately observe the variation that naturally exists among grains from the same field and even from the same straw. The non-destructive and rapid measurement of individual grains can therefore be used in plant breeding to find desirable properties or in industrial grain sorting to increase quality. A possible industrial application could be to measure the content of the dietary fibre beta-glucan. Beta-glucan in barley and especially oats has health-promoting properties such as lowering of serum cholesterol, increased satiety and stabilisation of blood sugar and insulin levels after meals. Conversely, the brewing industry is not interested in high concentrations of beta-glucan, as it can clog filters and create a cloudy precipitate in the finished beer known as "grandmother's cough".
Measurements on barley flour and barley discs have previously identified some information-rich wavelengths, but it has not been possible to measure through whole barley grains at these long near-infrared wavelengths because a traditional spectrometer lamp does not provide enough energy.
"The supercontinuum laser's collimated light beam with high energy meant that we could measure through the entire barley grain at the information-rich wavelengths. By using multivariate data analysis (chemometrics) we could generate a mathematical regression model that could predict beta-glucan content from 3.0-16.8 % in barley grains with a margin of error of 1.3 % beta-glucan," explains Tine Ringsted.
Laser-based seed sorting increases the value of beer and bread
"A seed sorting will mean that you can obtain some grains that have health-promoting properties for use in bread, for example, and some grains that are extra good for beer. This will give both products a higher value without doing anything, but sorting the grains," says Tine Ringsted, who believes that food analysis with supercontinuum lasers will become a new breakthrough in the food industry, but that it will take some years because the development is based to a high degree on interdisciplinary research, where needs and technology has to fit together.
"It is one thing, for example, to have an instrument that can measure very rapidly and provide accurate answers, but in order for it to be practical, you must also have a sample holder that allows you to measure a large number of grains in a short time," explains Tine Ringsted, adding that there is already a Swedish company (BoMill), which has developed a sample holder that can handle three tons of grains per hour, but they measure the grains at shorter and less informative wavelengths.
Good future perspectives
Measurement of beta-glucan in barley grains is just one example of how a supercontinuum laser can be used. In addition to single grain measurements, the Light & Food project has also examined the supercontinuum laser used in a new robust spectrometer that can potentially measure at many places in a food production system. For example, this could be used in the dairy or brewing industry to follow a product from start to finish. In addition, there is a theoretical potential for using the supercontinuum laser for rapid measurements of gases - for example, aroma compounds or ethylene, which acts as a gaseous plant hormone released by ripening fruits.
Overall, near-infrared spectroscopy allows for measuring more often and non-destructively compared to traditional wet chemical analyses.
"A supercontinuum laser provides even more options for food measurements, so it offers great potential for improving the quality of our food in the future," says Tine Ringsted.
###
Facts
About Light & Food:
Near-infrared spectroscopy is a fast and non-destructive analysis method, making it a very useful technology for measuring the chemical composition of food. The Department of Food Science at the University of Copenhagen, Denmark, is participating in the Innovation Fund Denmark project entitled Light & Food, which aims to develop a new type of light source for near-infrared spectroscopy, the so-called supercontinuum laser. The new light source combines the collimated beam seen in a laser with a wide range of wavelengths typical for a lamp. The supercontinuum laser can also be connected to fibres and it is far brighter than a traditional light source. It therefore has the potential to shine through all kinds of foods and provide more accurate information about the content - or tell something about the content that you have not previously been able to analyse.
The Department of Food Science is working in the Light & Food research project in order to investigate which foods the supercontinuum laser can advantageously be used to analyse. In her PhD project, Tine Ringsted has used the focused supercontinuum laser to measure beta-glucan through intact barley grains with long near-infrared wavelengths. In addition, the project has also shown a theoretical potential for gas measurements of, for example, aroma compounds or ethylene, which regulates the maturation of many fruits. Light & Food has also developed a new robust spectrometer that has potential for online measurements in the food industry. The title of the PhD project is: "Near infrared spectroscopy of food systems using a supercontinuum laser," and Professor Søren Balling Engelsen is the principal supervisor.
Partners in Light & Food:
Department of Food Science (FOOD), University of Copenhagen
Aarhus University
DTU
NKT Photonics, which produces the supercontinuum laser
FOSS, which develops analytical instruments
Funded by:
Innovation Fund Denmark
|
10.1016/j.aca.2017.07.008
| 2,017 |
Analytica Chimica Acta
|
Long wavelength near-infrared transmission spectroscopy of barley seeds using a supercontinuum laser: Prediction of mixed-linkage beta-glucan content
|
A supercontinuum laser was used to perform the first transmission measurements on intact seeds with long wavelength near-infrared spectroscopy. A total of 105 barley seeds from five different barley genotypes (Bomi, lys5.f, lys5.g, lys16 and lys95) were measured from 2275 to 2375 nm. The mixed-linkage (1→3,1→4)-β-D-glucan (BG) and protein content was measured with wet chemical analysis for each single seed. A partial least squares model correlated the BG % (w/w) with the spectral measurements with a R2CV and R2PRED of 0.83 and 0.90, respectively. The predictive model for BG could be improved by averaging spectra from the same seed and by replacing the individual seed BG content with the average BG of each barley genotype.
|
580844
|
Wind farm and sleep disruption
|
As wind power generation becomes more important, experts in Australia are examining whether background noise from wind farm turbines in the environment can affect the sleep and wellbeing of nearby residents.
In a review of existing literature on wind turbine noise effects on sleep, the Flinders sleep researchers have weighed up the results of five prior studies. While previous studies showed no systematic effects on common sleep markers such as time taken to fall asleep and total sleep time, they did reveal some more subtle effects on sleep, such as shifts in sleep stages and less time in deep sleep. "Comparing wind turbine noise to quiet background noise conditions showed no systematic effects on the most widely used objective markers of sleep, including time taken to fall asleep, total sleep time, time spent awake during the night and time spent asleep relative to overall time in bed," lead author Tessa Liebich says of a new review paper published in the international Journal of Sleep Research.
"However, some more subtle effects on sleep in some objective studies were established including shifts in sleep stages, less time spent in deep sleep and more time spent in light sleep.
With Australian NHMRC funding, the Adelaide Institute for Sleep Health at Flinders is studying sleep patterns in more than 70 volunteers in a carefully controlled in-laboratory experimental study to investigate potential wind turbine noise impacts on sleep and daytime outcomes. The final results are expected to be available around mid-2021.
Senior author Dr Gorica Micic says limited knowledge and data in this area emphasises a need for further well-controlled experimental studies to provide more conclusive evidence regarding wind turbine noise effects on sleep.
"Environmental noises, such as traffic noise, are well known to impact sleep," she says. "Given wind power generation is connected with low frequency noise that can travel long distances and more readily into buildings, it is important to better understand the potential impacts of wind turbine noise on sleep."
This study aimed to comprehensively review published evidence regarding the impact of wind turbine noise on the most widely accepted objective and subjective measures of sleep time and quality.
Subjective sleep outcomes were not sufficiently uniform for combining data or comparisons between studies, researchers explain.
"Nevertheless, the available self-report data appeared to support that insomnia severity, sleep quality and daytime sleepiness can be impacted by wind turbine noise exposure in comparison to quiet background noise.
"However, firm conclusions were difficult to draw from the available studies given inconsistent study methods, variable outcome measures and limited sample sizes," researchers conclude. ###
The new research paper, A systematic review and meta-analysis of wind turbine noise effects on sleep using validated objective and subjective sleep assessments (2020) by T Liebich, L Lack, K Hansen, B Zajamšek, N Lovato, P Catcheside and G Micic (Flinders University), has been published in the Journal of Sleep Research. DOI: 10.1111/jsr.13228
About the Flinders University Wind Farm Noise Study:
The Flinders University NHMRC five-year collaborative study is looking into how the noise from wind turbines affects people's health. It also tests how much wind farm noise disturbs sleep compared to traffic noise.
|
10.1111/jsr.13228
| 2,020 |
Journal of Sleep Research
|
A systematic review and meta‐analysis of wind turbine noise effects on sleep using validated objective and subjective sleep assessments
|
Abstract Little is known about the potential impacts of wind turbine noise (WTN) on sleep. Previous research is limited to cross‐sectional studies reporting anecdotal impacts on sleep using inconsistent sleep metrics. This meta‐analysis sought to comprehensively review studies evaluating the impact of WTN using widely accepted and validated objective and subjective sleep assessments. Search terms included: “wind farm noise”, “wind turbine noise”, “wind turbine sound”, “wind turbine noise exposure” AND “sleep”. Only original articles published in English published after the year 2000 and reporting sleep outcomes in the presence of WTN using polysomnography, actigraphy or psychometrically validated sleep questionnaires were included. Uniform outcomes of the retrieved studies were meta‐analysed to examine WTN effects on objective and subjective sleep outcomes. Nine studies were eligible for review and five studies were meta‐analysed. Meta‐analyses (Hedges’ g ; 95% confidence interval [CI]) revealed no significant differences in objective sleep onset latency (0.03, 95% CI −0.34 to 0.41), total sleep time (−0.05, 95% CI −0.77 to 0.67), sleep efficiency (−0.25, 95% CI −0.71 to 0.22) or wake after sleep onset (1.25, 95% CI −2.00 to 4.50) in the presence versus absence of WTN (all p > .05). Subjective sleep estimates were not meta‐analysed because measurement outcomes were not sufficiently uniform for comparisons between studies. This systematic review and meta‐analysis suggests that WTN does not significantly impact key indicators of objective sleep. Cautious interpretation remains warranted given variable measurement methodologies, WTN interventions, limited sample sizes, and cross‐sectional study designs, where cause‐and‐effect relationships are uncertain. Well‐controlled experimental studies using ecologically valid WTN, objective and psychometrically validated sleep assessments are needed to provide conclusive evidence regarding WTN impacts on sleep.
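For reference, the Hedges' g reported in this meta-analysis is a standardised mean difference with a small-sample correction; a minimal sketch using placeholder summary statistics rather than data from the reviewed studies:

```python
# Minimal sketch of Hedges' g from two-group summary statistics; the example
# numbers are placeholders, not values from the reviewed sleep studies.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference with the approximate small-sample correction."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd                  # Cohen's d
    correction = 1 - 3 / (4 * (n1 + n2) - 9)   # Hedges' correction factor J
    return d * correction

# e.g. sleep onset latency (minutes) with versus without wind turbine noise
print(round(hedges_g(18.0, 6.0, 20, 17.5, 6.5, 20), 3))
```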
|
636581
|
Rare deep sea Bigfin Squid sighted in Australian waters for first time
|
Rare deep sea Bigfin Squid sighted in Australian waters for first time.
|
10.1371/journal.pone.0241066
| 2,020 |
PLoS ONE
|
Multiple observations of Bigfin Squid (Magnapinna sp.) in the Great Australian Bight reveal distribution patterns, morphological characteristics, and rarely seen behaviour
|
One of the most remarkable groups of deep-sea squids is the Magnapinnidae, known for their large fins and strikingly long arm and tentacle filaments. Little is known of their biology and ecology as most specimens are damaged and juvenile, and in-situ sightings are sparse, numbering around a dozen globally. As part of a recent large-scale research programme in the Great Australian Bight, Remotely Operated Vehicles and a towed camera system were deployed in depths of 946-3258 m resulting in five Magnapinna sp. sightings. These represent the first records of Bigfin Squid in Australian waters, and more than double the known records from the southern hemisphere, bolstering a hypothesis of cosmopolitan distribution. As most previous observations have been of single Magnapinna squid these multiple sightings have been quite revealing, being found in close spatial and temporal proximity of each other. Morphological differences indicate each sighting is of an individual rather than multiple sightings of the same squid. In terms of morphology, previous in-situ measurements have been roughly based on nearby objects of known size, but this study used paired lasers visible on the body of a Magnapinna squid, providing a more accurate scaling of size. Squid of a juvenile size were also recorded and are confirmed to possess the long distal filaments which have thus far been mostly missing from specimens due to damage. We have described fine-scale habitat, in-situ colouration, and behavioural components including a horizontal example of the 'elbow' pose, and coiling of distal filaments: a behaviour not previously seen in squid. These sightings add to our knowledge of this elusive and intriguing genus, and reinforce the value of imagery as a tool in deep-sea squid research.
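The paired-laser scaling mentioned in the abstract reduces to a simple pixel-to-length conversion; a minimal sketch in which the 10 cm laser spacing and the pixel measurements are assumptions chosen purely for illustration:

```python
# Minimal sketch of laser-based scaling: the known physical spacing of two laser
# dots calibrates pixel measurements in the same video frame. All values are
# hypothetical, not measurements from the study.
LASER_SPACING_CM = 10.0    # assumed physical distance between the paired laser dots
laser_px = 140.0           # measured pixel distance between the dots in a frame
squid_px = 2400.0          # measured pixel extent of the squid in the same frame

cm_per_px = LASER_SPACING_CM / laser_px
print(f"estimated size: {squid_px * cm_per_px / 100:.2f} m")
```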
|
624960
|
No-till practices in vulnerable areas significantly reduce soil erosion
|
URBANA, Ill. - Soil erosion is a major challenge in agricultural production. It affects soil quality and carries nutrient sediments that pollute waterways. While soil erosion is a naturally occurring process, agricultural activities such as conventional tilling exacerbate it. Farmers implementing no-till practices can significantly reduce soil erosion rates, a new University of Illinois study shows.
Completely shifting to no-till would reduce soil loss and sediment yield by more than 70%, says Sanghyun Lee, doctoral student in the Department of Agricultural and Biological Engineering at U of I and lead author on the study, published in Journal of Environmental Management.
But even a partial change in tilling practices could have significant results, he adds.
"If we focus on the most vulnerable area in terms of soil erosion, then only 40% no-till shows almost the same reduction as 100% no-till implementation," Lee says.
The study used physical data and computer modeling to estimate soil erosion in the Drummer Creek watershed, which is part of the Upper Sangamon River watershed in Central Illinois. The area's main crops are corn and soybeans, and tillage is a predominant agricultural practice.
"The rate of soil erosion is increased and accelerated by unsustainable agricultural production. One of the main reasons is conventional tillage in the field," Lee says. "Our model provides a tool to estimate the impacts of tilling on soil erosion across the watershed."
Lee and co-authors Maria Chu, Jorge Guzman, and Alejandra Botero-Acosta developed the modeling framework, coupling a hydrological model (MIKE SHE) with the Water Erosion Prediction Project (WEPP) to examine the impacts of no-till practice in the watershed. The WEPP model provided the sediment sources from the agricultural fields under different tillage practices and the hydrologic model simulated sediment transport across the watershed.
The researchers included historical data on climate, soil properties, sediment sample data, and other relevant measures, then used the coupled model to predict how different management practices affect soil erosion rates.
"Farmers may prefer tilling because wet climate conditions cause compacted soil," Lee says. "However, soil erosion removes topsoil, which contains lots of nutrients, and this may reduce yield in the long term. Soil erosion also affects water quality, both locally over time and at a distance.
"Therefore, farmers need to weigh the benefits of tilling with the consequences of soil erosion and choose the best management strategies."
The modeling framework can help identify the most vulnerable areas, so producers can implement sustainable management practices where it matters most, Lee notes.
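A minimal sketch of that targeting idea, ranking fields by simulated soil loss and applying no-till to the most vulnerable fraction of the area; the per-field values below are hypothetical, whereas the study derived them from the coupled WEPP and MIKE SHE simulations.

```python
# Minimal sketch: rank fields by simulated soil loss and target no-till at the
# worst 40% of the area. Field values are invented for illustration.
import numpy as np

soil_loss = np.array([9.2, 0.4, 6.8, 1.1, 4.5, 0.9, 3.7, 0.6])  # t/ha/yr, made up
area      = np.full_like(soil_loss, 1.0)                         # equal field areas

order = np.argsort(soil_loss)[::-1]                # most erosive fields first
cum_area = np.cumsum(area[order]) / area.sum()
target = order[cum_area <= 0.40]                   # fields covering ~40% of the area

captured = soil_loss[target].sum() / soil_loss.sum()
print(f"no-till on 40% of the area addresses {captured:.0%} of simulated soil loss")
```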
|
10.1016/j.jenvman.2020.111631
| 2,020 |
Journal of Environmental Management
|
A comprehensive modeling framework to evaluate soil erosion by water and tillage
|
Soil erosion is significantly increased and accelerated by unsustainable agricultural activities, resulting in one of the major threats to soil health and water quality worldwide. Quantifying soil erosion under different conservation practices is important for watershed management and a framework that can capture the spatio-temporal dynamics of soil erosion by water is required. In this paper, a modeling framework that coupled physically based models, Water Erosion Prediction Project (WEPP) and MIKE SHE/MIKE 11, was presented. Daily soil loss at a grid-scale resolution was determined using WEPP and the transport processes were simulated using a generic advection dispersion equation in MIKE SHE/MIKE 11 models. The framework facilitated the physical simulation of sediment production at the field scale and transport processes across the watershed. The coupled model was tested using an intensively managed agricultural watershed in Illinois. The impacts of no-till practice on both sediment production and sediment yield were evaluated using scenario-based simulations with different fractions of no-till and conventional tillage combinations. The results showed that if no-till were implemented for all fields throughout the watershed, 76% and 72% reductions in total soil loss and sediment yield, respectively, can be achieved. In addition, if no-till practice were implemented in the most vulnerable areas to sediment production across the watershed, a 40% no-till implementation can achieve almost the same reduction as 100% no-till implementation. Based on the simulation results, the impacts of no-till practice are more prominent if implemented where it is most needed.
|
647836
|
How the Humboldt squid's genetic past and present can secure its future
|
A group of marine biologists is pushing for more international collaboration to manage the Humboldt squid population after their study to identify its genetic stocks revealed its vulnerability to overfishing by fleets trying to feed the world’s hunger for squids.
Hiroshima University marine biologist Gustavo Sanchez led a team of researchers to find out the genetic structure of the Humboldt squid population in the Eastern Pacific Ocean using two types of DNA markers — the mitochondrial ND2 gene and nuclear microsatellite loci.
The team found that Humboldt squids could trace their population back to three historical matrilineages that spread out during the late Pleistocene, and that the species has at least two contemporary genetic stocks homogeneously co-distributed in the northern and southern hemispheres.
Different genetic stocks within a species are usually defined by where they feed and breed. But in Humboldt squids, DNA markers showed no north-south divide. The equator doesn’t serve as a natural barrier separating the different genetic stocks of these fast swimmers, which risk capture by different fishery fleets along their migration route.
“In our study, we identify at least two genetic stocks co-distributed in the north and southern hemisphere of the Eastern Pacific Ocean. Our results suggest that rather than independent marine policies from each country, the sustainability of this squid requires an international marine policy,” Sanchez said.
To ensure sustainable fishing, countries in South America where the squid is traditionally found have established yearly catch quotas. But the study found this approach to be ineffective, especially as catch restrictions are absent in international waters on the squid’s migration path.
“Countries fishing this squid have established catch quotas with no consideration that the total amount varies from year to year, and that the amount of squid caught influences the number of squids next year. By doing so, the genetic contribution of the offspring every year will also clearly fluctuate. In such a situation, there is a risk of having a genetic erosion with a smaller number of squids which are also less likely to adapt rapidly to the changing environment,” he remarked.
“From our study, it is also clear that the squids caught by different countries belong to at least two different populations, with likely different genetic contributions to the next generation. Catching these squids without knowing that their genetic contribution is different is also very risky.”
A grim warning
Both warm tropical waters and the cooler Humboldt current, which runs from Tierra del Fuego at the southernmost tip of the South American mainland upwards to the northern coast of Peru, play a role in the Humboldt squid’s life cycle.
The squid seeks warm waters near the equator to spawn its clusters of neutrally buoyant eggs. But it needs nutrient-rich cool waters, where it goes on a feeding frenzy to grow from a one-millimeter paralarval speck into an enormous predator over 1.2 meters long.
These squids typically spawn only once during their one-year lifespan and then die, making their future volatile if fishing goes unchecked. And such fears are not farfetched.
Its eastern relative, the Japanese flying squid, has suffered the same fate. Years of overfishing, poor regulatory oversight, and the changing climate have depleted the population at such an alarming rate that the yearly catch of Japanese fishermen dropped by over 70%, from more than 200,000 tons in 2011 to 53,000 tons in 2017. The shortage worries the fishing town of Hakodate, whose identity and economy are intertwined with the squid.
“The population of the Japanese flying squids has decreased, and this is because along the distribution of this squid you have a lot of fleets from Japan, China, Korea, and Taiwan, some with high capacity for catching this squid. Countries like China with massive distant-water fishing fleets can move anywhere outside their national jurisdiction to catch this squid. If you have the technology you can go to international waters and catch anything,” Sanchez said.
He said Hakodate’s experience could be a grim warning of things to come for his country Peru.
“The Humboldt squid is the second most important economical species in Peru. That means that when we have less squid, that will affect also the economy of the country, particularly the economy of the fisherman that depends on this squid,” he said.
Historical clues
Over 90 percent of the warming on Earth in the past 50 years has happened in the ocean, and the speed at which it is heating up is accelerating. Warming oceans due to climate change have driven sea creatures toward the poles.
The Humboldt squid population itself has expanded its migratory path. It has recently stretched its route farther north to Alaska and south to the tip of Chile, exposing these cephalopods, which hunt in packs of up to 1,200, to fishing boats in each territory on its path as well as to technologically advanced vessels waiting in international waters.
Sanchez’s team found a similar pattern of historical population expansion under extreme climate conditions when they looked at the mitochondrial DNA of the squid. They found that warming global temperatures 30,000 years ago, which thawed Ice Age glaciers, contributed to a sea-level rise favorable for the Humboldt squid population to spread out. The event, which coincided with a decrease in the population of sperm whales, their natural predators, led to a population expansion for the squids.
Although the squid is quick to adapt, warmer temperatures mean less food, smaller size at maturity, and fewer eggs to replenish its population.
Securing Humboldt squids’ future
Much about this large squid species, including its conservation status, is still unknown. But given its economic significance to fishing communities and its important role in the marine ecosystem as food for diverse species, the new knowledge of its genetic stocks can help inform future marine policies to manage its population.
“The Humboldt squid is the largest squid fishery in the world and is heavily caught in the Eastern Pacific Ocean by several countries, including countries from Asia such as Japan, Korea, China, and Taiwan. This squid is one of the most commercially important squids in the world, and it sustains the economy of many countries,” Sanchez said.
“Identifying genetic stocks, also known as genetically different groups, through population genetics is very important for implementing marine policies that control the total catch of this squid. The high migratory capacity of this squid is the main challenge in identifying the exact number of genetic stocks, and more genetic resources and sampling are required to clearly reveal this number,” he added.
###
About Hiroshima University
Since its foundation in 1949, Hiroshima University has striven to become one of the most prominent and comprehensive universities in Japan for the promotion and development of scholarship and education. Consisting of 12 undergraduate schools and 4 graduate schools, ranging from the natural sciences to the humanities and social sciences, the university has grown into one of the most distinguished comprehensive research universities in Japan. English website: https://www.hiroshima-u.ac.jp/en
|
10.1007/s11160-020-09609-9
| 2020 |
Reviews in Fish Biology and Fisheries
|
Patterns of mitochondrial and microsatellite DNA markers describe historical and contemporary dynamics of the Humboldt squid Dosidicus gigas in the Eastern Pacific Ocean
|
Dosidicus gigas is an economically important species distributed in the Eastern Pacific Ocean. Unraveling the genetic population structure of this species is crucial to ensure its fishery sustainability and management. Mitochondrial DNA sequences and nuclear neutral loci are useful to understand how historical and contemporary factors drive the genetic population structure of species. However, most studies investigating genetic structuring of D. gigas from its northern and southern populations rely on patterns identified using mitochondrial genes. The use of both types of DNA markers is especially relevant for marine species with high dispersal capabilities such as D. gigas. Here, we describe the genetic structure of D. gigas using partial sequences of the mitochondrial gene NADH dehydrogenase subunit 2 and nuclear microsatellite loci in populations of the northern hemisphere from the Costa Rica Thermal Dome and off Ecuador, and of the southern hemisphere from the South Equatorial Current and off Peru. Statistical parsimony network and Bayesian analyses from mitochondrial sequences revealed three historical maternal lineages in both hemispheres, with high levels of genetic differentiation and signatures of population expansion during the late Pleistocene. Use of Discriminant Analysis of Principal Components (DAPC) with microsatellite loci of mature and immature individuals showed the presence of at least two contemporary genetic stocks homogeneously co-distributed in both northern and southern hemispheres, which can be explained by the biological characteristics of D. gigas and the variable oceanographic conditions of the Eastern Pacific Ocean. Overall, our findings indicate that cooperation between countries with intensive fishing will benefit the sustainability of D. gigas.
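The DAPC step named in this abstract amounts to a principal component reduction of the genotype matrix followed by a discriminant analysis on the retained components. Below is a rough, hypothetical Python analogue using scikit-learn on a synthetic genotype matrix; the study itself would have worked on real microsatellite genotypes with a dedicated implementation (DAPC is usually run with adegenet in R), so every variable name, sample size, and allele frequency here is a placeholder.

```python
# Conceptual sketch of Discriminant Analysis of Principal Components (DAPC)
# using scikit-learn. This is NOT the authors' pipeline; the genotype matrix
# and stock labels below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical data: 120 squid scored at 40 loci, coded as allele dosages 0/1/2.
n_per_group, n_loci = 60, 40
stock_1 = rng.binomial(2, 0.30, size=(n_per_group, n_loci))
stock_2 = rng.binomial(2, 0.45, size=(n_per_group, n_loci))
genotypes = np.vstack([stock_1, stock_2]).astype(float)
labels = np.array(["stock_1"] * n_per_group + ["stock_2"] * n_per_group)

# Step 1: reduce the genotype matrix to a few principal components,
# removing collinearity among alleles.
pcs = PCA(n_components=10).fit_transform(genotypes)

# Step 2: run a linear discriminant analysis on the retained PCs so that
# between-group variation is maximised relative to within-group variation.
dapc = LinearDiscriminantAnalysis(n_components=1).fit(pcs, labels)
scores = dapc.transform(pcs)

print("first discriminant scores:", np.round(scores[:3].ravel(), 2))
print("mean correct assignment:", dapc.score(pcs, labels))
```

How cleanly individuals separate along the discriminant axis, and how confidently they are reassigned to their groups, is the kind of evidence that lets an analysis like this argue for "at least two contemporary genetic stocks."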
|
469474
|
General patient infections transferred similarly to hospital-acquired infections
|
A new study shows that the networks formed by patterns of patient transfers between hospitals in France are very similar among three patient populations: those diagnosed with hospital-acquired infections (HAIs), those with suspected HAIs, and the general patient population. The research, published in PLOS Computational Biology, could help inform efforts to reduce the spread of HAIs.
Previous research has revealed the importance of studying HAIs not just in individual hospitals but also in the context of larger networks formed by transfers of the general patient population among healthcare institutions. Such networks can help predict HAI spread. However, it was unclear whether more focused networks formed by transfers of patients diagnosed with HAIs could provide new information.
In the new study, researchers from Conservatoire National des Arts et Métiers and from the École des Hautes Études en Santé Publique addressed this question. The scientists assembled data on all patient discharges from hospitals in France for the year 2014. They used social network analysis methods to analyze the structure of the networks formed by transfers of patients with HAIs, those with suspected HAIs, and the general patient population.
The team found that the only major difference between the three types of networks was their size; about 1 million transfers occurred in 2014, with about 130,000 involving patients suspected to have HAIs and 14,000 diagnosed with HAIs. Otherwise, the team was surprised to find, all three networks showed similar patient transfer patterns and similar underlying structure, often centered around university hospitals.
The analysis echoed the findings of previous studies showing that most transfers occur in regional clusters, with only a small percentage of inter-regional transfers. The researchers identified key transfer patterns within regions; these insights could potentially help detect and control outbreaks in their early stages, before they reach highly centralized university hospitals, where an outbreak would risk accelerated spread of multi-drug resistant bacteria throughout the network.
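As a rough illustration of the social network analysis described above, the sketch below builds a toy directed transfer network with networkx, ranks hospitals by weighted in-degree as a simple stand-in for centrality, and extracts community clusters. The edge list is invented, and the specific centrality measures and community detection algorithms used by the study may differ.

```python
# Toy patient-transfer network: edges are (origin, destination, transfer count).
# Illustrative only; not derived from the French discharge data.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

transfers = [
    ("clinic_1", "university_hosp_A", 120),
    ("clinic_2", "university_hosp_A", 95),
    ("clinic_1", "clinic_2", 10),
    ("clinic_3", "university_hosp_B", 80),
    ("clinic_4", "university_hosp_B", 60),
    ("clinic_3", "clinic_4", 8),
    ("university_hosp_A", "university_hosp_B", 15),
]

G = nx.DiGraph()
G.add_weighted_edges_from(transfers)

# Weighted in-degree highlights hub hospitals that concentrate incoming
# transfers, analogous to the central university hospitals in the study.
in_strength = dict(G.in_degree(weight="weight"))
print("most central node:", max(in_strength, key=in_strength.get))

# Community detection on the undirected projection approximates regional
# clusters of hospitals that exchange most of their patients.
communities = greedy_modularity_communities(G.to_undirected(), weight="weight")
for i, cluster in enumerate(communities, 1):
    print(f"cluster {i}:", sorted(cluster))
```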
"Researchers should consider our work along with the growing literature that describes the specific nature of healthcare networks in which patient transfers are centralized toward hub healthcare centers," says study corresponding author Narimane Nekkab. Such work should "consider directionality of patient movement to construct sub-regional communities to better understand patient transfer patterns at the local level."
###
In your coverage please use this URL to provide access to the freely available article in PLOS Computational Biology: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005666
Citation: Nekkab N, Astagneau P, Temime L, Crépey P (2017) Spread of hospital-acquired infections: A comparison of healthcare networks. PLoS Comput Biol 13(8): e1005666. https://doi.org/10.1371/journal.pcbi.1005666
Funding: This work was supported by the Interdisciplinary research program on health crisis and health protection (PRINCEPS) of Sorbonne Paris Cité University, within the program "Investissements d'Avenir" launched by the French State. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing Interests: The authors have declared that no competing interests exist.
|
10.1371/journal.pcbi.1005666
| 2017 |
PLoS Computational Biology
|
Spread of hospital-acquired infections: A comparison of healthcare networks
|
Hospital-acquired infections (HAIs), including emerging multi-drug resistant organisms, threaten healthcare systems worldwide. Efficient containment measures of HAIs must mobilize the entire healthcare network. Thus, to best understand how to reduce the potential scale of HAI epidemic spread, we explore patient transfer patterns in the French healthcare system. Using an exhaustive database of all hospital discharge summaries in France in 2014, we construct and analyze three patient networks based on the following: transfers of patients with HAI (HAI-specific network); patients with suspected HAI (suspected-HAI network); and all patients (general network). All three networks have heterogeneous patient flow and demonstrate small-world and scale-free characteristics. Patient populations that comprise these networks are also heterogeneous in their movement patterns. Ranking of hospitals by centrality measures and comparing community clustering using community detection algorithms shows that despite the differences in patient population, the HAI-specific and suspected-HAI networks rely on the same underlying structure as that of the general network. As a result, the general network may be more reliable in studying potential spread of HAIs. Finally, we identify transfer patterns at both the French regional and departmental (county) levels that are important in the identification of key hospital centers, patient flow trajectories, and regional clusters that may serve as a basis for novel wide-scale infection control strategies.
|
849912
|
Air pollution may be linked to heightened mouth cancer risk
|
High levels of air pollutants, especially fine particulate matter (PM2.5) and to a lesser extent, ozone, may be linked to a heightened risk of developing mouth cancer, suggests the first study of its kind, published online in the Journal of Investigative Medicine.
The number of new cases, and deaths from, mouth cancer is increasing in many parts of the world. Known risk factors include smoking, drinking, human papilloma virus, and in parts of South East Asia, the chewing of betel quid ('paan'), a mix of ingredients wrapped in betel leaf.
Exposure to heavy metals and emissions from petrochemical plants are also thought to be implicated in the development of the disease, while air pollution, especially PM2.5, is known to be harmful to respiratory and cardiovascular health.
To find out if air pollutants might have a role in the development of mouth cancer, the researchers mined national cancer, health, insurance, and air quality databases.
They drew on average levels of air pollutants (sulphur dioxide, carbon monoxide, ozone, nitrogen monoxide, nitrogen dioxide, and varying sizes of fine particulate matter), measured in 2009 at 66 air quality monitoring stations across Taiwan.
In 2012-13, they checked the health records of 482,659 men aged 40 and older who had attended preventive health services, and had provided information on smoking/betel quid chewing.
Diagnoses of mouth cancer were then linked to local area readings for air pollutants taken in 2009.
In 2012-13, 1617 cases of mouth cancer were diagnosed among the men. Unsurprisingly, smoking and frequent betel quid chewing were significantly associated with heightened risk of a diagnosis.
But so too were high levels of PM2.5. After taking account of potentially influential factors, increasing levels of PM2.5 were associated with an increasing risk of mouth cancer.
When compared with levels below 26.74 ug/m3, those above 40.37 ug/m3 were associated with a 43 per cent heightened risk of a mouth cancer diagnosis.
A significant association was also observed for ozone levels below 28.69-30.97 parts per billion.
This is an observational study, and as such, can't establish cause. And there are certain caveats to consider, say the researchers. These include the lack of data on how much PM2.5 enters the mouth, or on long term exposure to this pollutant.
Nor is it clear how air pollutants might contribute to mouth cancer, they acknowledge, and further research would be needed to delve further into this.
But some of the components of PM2.5 include heavy metals, as well as compounds such as polycyclic aromatic hydrocarbons, which are known cancer-causing agents, they say.
And the smaller diameter, but larger surface area, of PM2.5 means that it can be relatively easily absorbed while at the same time potentially wreaking greater havoc on the body, they suggest.
"This study, with a large sample size, is the first to associate oral cancer with PM2.5...These findings add to the growing evidence on the adverse effects of PM2.5 on human health," they conclude.
|
10.1136/jim-2016-000263
| 2018 |
Journal of Investigative Medicine
|
Association between Fine Particulate Matter and Oral Cancer among Taiwanese Men
|
The aim of this study was to investigate the association between fine particulate matter 2.5 (PM2.5) and oral cancer among Taiwanese men. Four linked data sources including the Taiwan Cancer Registry, Adult Preventive Medical Services Database, National Health Insurance Research Database, and Air Quality Monitoring Database were used. Concentrations of sulfur dioxide, carbon monoxide, ozone, NOx (nitrogen monoxide and nitrogen dioxide), coarse particulate matter (PM10-2.5) and PM2.5 in 2009 were assessed in quartiles. A total of 482,659 men aged 40 years and above were included in the analysis. Logistic regression was used to examine the association between PM2.5 and oral cancer diagnosed from 2012 to 2013. After adjusting for potential confounders, the ORs of oral cancer were 0.91 (95% CI 0.75 to 1.11) for 26.74 ≤ PM2.5 < 32.37 μg/m3, 1.01 (95% CI 0.84 to 1.20) for 32.37 ≤ PM2.5 < 40.37 μg/m3 and 1.43 (95% CI 1.17 to 1.74) for PM2.5 ≥ 40.37 μg/m3, compared with PM2.5 < 26.74 μg/m3. In this study, there was an increased risk of oral cancer among Taiwanese men who were exposed to higher concentrations of PM2.5.
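To make the modelling pattern in the abstract concrete, here is a minimal sketch of a quartile-based logistic regression in Python with statsmodels, fitted to simulated data. The exposure cut points are taken from the abstract, but the simulated outcome, the single adjustment covariate, and the printed estimates are placeholders, not the study's results.

```python
# Sketch: exposure categories -> adjusted odds ratios with 95% CIs (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 20_000

df = pd.DataFrame({
    "pm25": rng.uniform(15, 55, n),      # annual PM2.5 exposure, ug/m3 (simulated)
    "smoker": rng.binomial(1, 0.3, n),   # one stand-in adjustment covariate
})

# Bin exposure into the abstract's categories; the lowest is the reference group.
df["pm25_cat"] = pd.cut(
    df["pm25"], bins=[-np.inf, 26.74, 32.37, 40.37, np.inf],
    labels=["Q1_ref", "Q2", "Q3", "Q4"],
)

# Simulate an outcome whose odds rise only in the top exposure category.
linpred = -5 + 0.36 * (df["pm25_cat"] == "Q4") + 0.9 * df["smoker"]
df["oral_cancer"] = rng.binomial(1, (1 / (1 + np.exp(-linpred))).to_numpy())

model = smf.logit("oral_cancer ~ C(pm25_cat) + smoker", data=df).fit(disp=False)

# Exponentiated coefficients are the adjusted odds ratios relative to Q1.
print(pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1))
```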
|
798647
|
Gene study boosts bid to keep British bees safe from disease
|
Efforts to protect the UK's native honey bees could be helped by research that maps their entire genetic make-up.
Experts also analysed the genetic profile of bacteria and other organisms that live inside bees, to shed new light on emerging diseases that threaten bee colonies.
Researchers say their findings could help to safeguard native bee populations from the effects of infectious diseases through improved health monitoring.
Bees play a vital role in helping to pollinate crops and wild plants, so minimising risks to them is crucial.
A team led by the University of Edinburgh analysed the entire genetic makeup of bee colonies from across the UK and compared them with recently imported bees.
They found that bees from some hives in Scotland were genetically very similar to the UK's native dark honey bee, even though southern European strains have been imported for many years.
The researchers from the University's Roslin Institute say this is good news as native bees were thought to be endangered in the UK. They suggest this could mean that native bees survive better in cooler climates than their relatives from southern Europe.
The team also analysed the genetic makeup of bacteria and other organisms that live inside bees - the so-called metagenome.
The findings uncovered organisms that had not been seen before in honey bees and that may cause disease. Hives that are infected with these organisms may also be more susceptible to other infections.
|
10.1038/s41467-018-07426-0
| 2018 |
Nature Communications
|
Characterisation of the British honey bee metagenome
|
The European honey bee (Apis mellifera) plays a major role in pollination and food production. Honey bee health is a complex product of the environment, host genetics and associated microbes (commensal, opportunistic and pathogenic). Improved understanding of these factors will help manage modern challenges to bee health. Here we used DNA sequencing to characterise the genomes and metagenomes of 19 honey bee colonies from across Britain. Low heterozygosity was observed in many Scottish colonies which had high similarity to the native dark bee. Colonies exhibited high diversity in composition and relative abundance of individual microbiome taxa. Most non-bee sequences were derived from known honey bee commensal bacteria or pathogens. However, DNA was also detected from additional fungal, protozoan and metazoan species. To classify cobionts lacking genomic information, we developed a novel network analysis approach for clustering orphan DNA contigs. Our analyses shed light on microbial communities associated with honey bees and demonstrate the power of high-throughput, directed metagenomics for identifying novel biological threats in agroecosystems.
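The paper's network approach for clustering orphan contigs is not detailed here, so the sketch below shows only the general idea behind network-based contig binning: contigs whose coverage profiles are strongly correlated across colonies are linked, and each connected component is treated as a candidate cobiont. The data, correlation threshold, and graph construction are illustrative assumptions, not the authors' method.

```python
# Generic illustration of network-based binning of orphan contigs.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)

# Hypothetical coverage table: 8 orphan contigs x 19 colonies, built so that
# two groups of four contigs share an abundance pattern.
base_a = rng.gamma(5, 2, size=19)
base_b = rng.gamma(5, 2, size=19)
coverage = np.array(
    [base_a * rng.uniform(0.8, 1.2) + rng.normal(0, 0.5, 19) for _ in range(4)]
    + [base_b * rng.uniform(0.8, 1.2) + rng.normal(0, 0.5, 19) for _ in range(4)]
)
contigs = [f"contig_{i}" for i in range(len(coverage))]

# Link contigs whose coverage profiles are highly correlated across colonies.
corr = np.corrcoef(coverage)
G = nx.Graph()
G.add_nodes_from(contigs)
for i in range(len(contigs)):
    for j in range(i + 1, len(contigs)):
        if corr[i, j] > 0.9:
            G.add_edge(contigs[i], contigs[j])

# Each connected component is a candidate bin of contigs from one cobiont.
for k, component in enumerate(nx.connected_components(G), 1):
    print(f"bin {k}:", sorted(component))
```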
|
611713
|
Data analysis finds lower risk of infection with LASIK than with contacts over time
|
Memphis, Tenn. - Which causes fewer eye infections - contact lens wear or LASIK surgery? While traditionally contacts were thought to be safer than a surgical procedure, an analysis by ophthalmologists from the Hamilton Eye Institute at the University of Tennessee Health Science Center indicates otherwise.
A meta-analysis comparing the incidence of microbial keratitis, an infection of the cornea caused by bacteria or a virus, in contact lens wearers versus post-LASIK (laser-assisted in situ keratomileusis) patients indicates that over time the infection rate for contact lens wearers was higher than for those who had LASIK to correct their vision. An article on the findings was published in the Journal of Cataract & Refractive Surgery, a high-impact, peer-reviewed scientific journal.
"Microbial keratitis is a relatively rare complication associated with contact lens use and LASIK postoperatively," the article said. The authors were Jordan Masters, MD; Mehmet Kocak, PhD; and Aaron Waite, MD. "The risk for microbial keratitis was similar between patients using contact lenses at one year, compared with LASIK. Over time, the risk for microbial keratitis was higher for contact lens use than for LASIK, specifically with extended-wear lenses."
Literature in the PubMed database between December 2014 and July 2015 was analyzed. The results showed that after one year of daily soft-contact lens wear, there were fewer microbial keratitis cases than after LASIK, approximately two fewer cases per 10,000. If the surgery is assumed to have essentially a one-time risk for infection, after five years of extrapolation, contact lens wearers would show 11 more cases per 10,000 than those with surgery.
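The extrapolation rests on simple arithmetic: a surgical infection risk is incurred once, while a contact-lens risk accumulates with every additional year of wear. The toy calculation below uses deliberately round placeholder rates, not the study's estimates, to show how the comparison flips after several years.

```python
# Placeholder rates per 10,000 people; the reasoning, not the numbers, matters.
LASIK_ONE_TIME_CASES_PER_10K = 5.0     # assumed one-time surgical risk
CONTACTS_ANNUAL_CASES_PER_10K = 3.0    # assumed yearly risk, daily soft lenses

def cumulative_cases(years: int) -> tuple[float, float]:
    """Expected microbial keratitis cases per 10,000 people after `years`."""
    lasik = LASIK_ONE_TIME_CASES_PER_10K              # does not grow with time
    contacts = CONTACTS_ANNUAL_CASES_PER_10K * years  # accrues with each year of wear
    return lasik, contacts

for years in (1, 5, 10, 20):
    lasik, contacts = cumulative_cases(years)
    print(f"{years:>2} yr: LASIK {lasik:5.1f}  contacts {contacts:5.1f}  "
          f"difference {contacts - lasik:+6.1f} per 10,000")
```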
"Most contact lens wearers use them for decades, which means they have a much higher risk of corneal infection compared to the risk with LASIK," said Dr. Waite, director of the Cornea, Cataract, and Refractive Surgery Program at the Hamilton Eye Institute and associate professor in the Department of Ophthalmology at UT Health Science Center.
Microbial keratitis can be devastating, since it can lead to vision loss. It can also be expensive. Contact lens wear has been associated as a risk factor in the development of the condition. Factors, including hygiene, lens type, and history of use, contribute to the risk. According to the analysis, the approximately 38 million contact lens wearers in the United States accounted for an estimated 1 million clinical visits related to microbial keratitis at a cost of about $174.9 million in 2010.
"We did this analysis to directly compare the rate for corneal infections between contact lens use and LASIK," Dr. Waite said. "Contact lenses carry a real risk of infection. In our experience with contact lens infections, some patients have lost vision and have needed a corneal transplant, or even lost the eye. There are cases where LASIK could have prevented this vision loss. LASIK does carry a rare risk of infection, however, it is a one-time risk compared to a continuous risk for infection in contact lens users. We wanted to compare the rates to get hard numbers."
This is believed to be the first meta-analysis comparing the rates of microbial keratitis in contact lens wearers to those who have had LASIK surgery. "It is difficult to compare complications from contact lens use to LASIK, because the complication rate of both is so rare, but our analysis definitely shows that the infection rate is higher with contact lens use compared to LASIK," Dr. Waite said.
More studies are needed to focus on other complications, such as vision loss and dry eye, to further explore the safety and risk of complications.
###
As Tennessee's only public, statewide, academic health system, the mission of the University of Tennessee Health Science Center (UTHSC) is to bring the benefits of the health sciences to the achievement and maintenance of human health, with a focus on the citizens of Tennessee and the region, by pursuing an integrated program of education, research, clinical care, and public service. Offering a broad range of postgraduate and selected baccalaureate training opportunities, the main UTHSC campus is located in Memphis and includes six colleges: Dentistry, Graduate Health Sciences, Health Professions, Medicine, Nursing and Pharmacy. UTHSC also educates and trains cohorts of medicine, pharmacy and/or health professions students -- in addition to medical residents and fellows -- at its major sites in Knoxville, Chattanooga and Nashville. Founded in 1911, during its more than 100 years, UT Health Science Center has educated and trained more than 57,000 health care professionals in academic settings and health care facilities across the state. For more information, visit http://www.uthsc.edu. Follow us on Facebook: facebook.com/uthsc, on Twitter: twitter.com/uthsc and on Instagram: instagram.com/uthsc.
References:
Cope JR, Collier SA, Srinivasan K, et al. Contact Lens-Related Corneal Infections -- United States, 2005-2015. MMWR Morb Mortal Wkly Rep 2016;65:817-820. DOI: http://dx.doi.org/10.15585/mmwr.mm6532a2
|
10.1016/j.jcrs.2016.10.022
| 2017 |
Journal of Cataract & Refractive Surgery
|
Risk for microbial keratitis: Comparative metaanalysis of contact lens wearers and post-laser in situ keratomileusis patients
|
To compare the risk for microbial keratitis in contact lens wearers stratified by wear schedule with the risk after laser in situ keratomileusis (LASIK). Hamilton Eye Institute and Department of Ophthalmology, University of Tennessee Health Science Center, Memphis, Tennessee, USA. Comparative metaanalysis and literature review. An extensive literature search was performed in the PubMed database between December 2014 and July 2015. This was followed by a metaanalysis using a mixed-effects modeling approach. After 1 year of daily soft contact lens wear, there were fewer microbial keratitis cases than after LASIK, or approximately 2 fewer cases per 10 000 (P = .0609). If LASIK were assumed to have essentially a 1-time risk for microbial keratitis, 5 years of extrapolation would yield 11 more cases per 10 000 with daily soft contact lens wear than with LASIK, or approximately 3 times as many cases (P < .0001). The extended use of soft contact lenses led to 12 more cases at 1 year than LASIK, or approximately 3 times as many cases (P < .0001), and 81 more cases at 5 years (P < .0001). When incorporating an estimated 10% retreatment rate for LASIK, these results changed very little. Microbial keratitis is a relatively rare complication associated with contact lens use and LASIK postoperatively. The risk for microbial keratitis was similar between patients using contact lenses for 1 year compared with LASIK. Over time, the risk for microbial keratitis was higher for contact lens use than for LASIK, specifically with extended-wear lenses.
|