Dataset fields (name: type, min–max length or value):
eurekaalert_id: string, length 6–6
eurekaalert_title: string, length 0–254
eurekaalert_text: string, length 0–37.9k
doi: string, length 12–42
publication_year: int64, values 1.99k–2.02k
publication_source: string, length 3–123
publication_title: string, length 4–702
publication_abstract: string, length 1–50.7k
955763
Ideal nodal rings of one-dimensional photonic crystals in the visible region
A photonic Dirac cone is a special kind of degenerate state with linear dispersion that is ubiquitous in two-dimensional photonic crystals. When spatial inversion symmetry is broken, the photonic Dirac cone transforms into valley states, and such photonic crystals are called valley photonic crystals. Within the band gap of a valley photonic crystal there are valley-protected edge states, which have been verified in silicon photonic crystal slabs. Based on these edge states, many micro-nano integrated photonic devices, such as sharp-bend waveguides and microcavity lasers, have been realized.

Correspondingly, a photonic nodal ring is a type of band degeneracy that exists in three-dimensional structures and appears as a closed ring in the band structure. By introducing symmetry breaking, the nodal ring can transform into ridge states. Compared with valley states, ridge states show richer optical behaviors, such as negative refraction and a surface-dependent Goos–Hänchen shift. However, the complex three-dimensional structures required have so far prevented these behaviors from being realized in the optical region, let alone being turned into novel functional photonic devices. How to realize ridge states with a simple optical structure has therefore become an urgent problem in this field.

In a new paper published in Light Science & Applications, a team led by Professor Jianwen Dong from the School of Physics and the State Key Laboratory of Optoelectronic Materials and Technologies, Sun Yat-sen University, China, has realized an ideal nodal ring and ridge states in the visible region using simple 1D photonic crystals. They also experimentally observed the surface states in the bandgap of the ridge photonic crystal. The research highlights of this work include the following three aspects:

1. One-dimensional nodal ring photonic crystal. It is generally believed that one-dimensional photonic crystals can only realize topological states in one-dimensional momentum space, whereas a nodal ring is a degenerate state in three-dimensional momentum space, so achieving a nodal ring with a one-dimensional photonic crystal seems impossible. The research team studied a one-dimensional photonic crystal composed of silicon dioxide and silicon nitride (Fig. 1(a)). By taking into account the momenta along the aperiodic directions and exploiting rotational symmetry, they realized an ideal nodal ring (Fig. 1(b)). The team prepared samples using inductively coupled plasma chemical vapor deposition (ICP-CVD). By measuring angle-resolved reflectance spectra in the wavelength range of 500-1100 nm, they confirmed the existence of the nodal ring (Fig. 1(c)).

2. One-dimensional ridge photonic crystal. The research team then replaced a layer of silicon nitride (n = 2) with silicon-rich nitride (n = 3) in the unit cell of the nodal ring photonic crystal, thereby breaking the spatial inversion symmetry of the structure (Fig. 2(a)). In this case, the nodal ring degeneracy opens into a ridge state with a bandgap; they call this kind of photonic crystal a ridge photonic crystal. Similar to the valley photonic crystal case, calculations show that a toroidal-shaped Berry flux forms near the position of the original nodal ring, implying the existence of topologically protected interface states in this bandgap (Fig. 2(c)).

Using a low-loss silicon-rich nitride film growth process developed earlier, the research team fabricated a one-dimensional ridge photonic crystal and observed the interface states by measuring angle-resolved reflectance spectra in the wavelength range of 600-1100 nm (Fig. 2(d)).

3. Intrinsic relationship between the optical Tamm state and the nodal ring. At the beginning of this century, it was discovered that surface states can exist at the interface between a one-dimensional photonic crystal and a metal; these are called optical Tamm states. In this work, the research team found that the nodal ring lies exactly at a singularity of the photonic crystal's reflection phase, so the existence condition for the optical Tamm state is always satisfied. Therefore, optical Tamm states protected by phase singularities must exist at the interface between a metal and the nodal ring photonic crystal, which provides theoretical guidance for the deterministic design of optical Tamm states. The research team deposited a silver film on the surface of the one-dimensional nodal ring photonic crystal by electron beam evaporation, and the existence of the optical Tamm states was confirmed by measuring angle-resolved reflectance spectra in the wavelength range of 600-1000 nm.

"Our work paves the way for realizing optical phenomena such as negative refraction of surface states and the surface-dependent Goos–Hänchen shift in the optical region. Furthermore, the nodal ring can also transform into Weyl point degeneracies by introducing other types of symmetry breaking. Therefore, the one-dimensional nodal ring photonic crystal proposed in this work provides the possibility to explore applications of Weyl points and their associated topological surface states in micro-nano optics," the scientists forecast.
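The angle-resolved reflectance measurements described above can be modeled with a standard transfer-matrix calculation for a 1D multilayer. The following is a minimal sketch, not the authors' code: the refractive indices, layer thicknesses, substrate, and TE polarization choice are illustrative assumptions only.

```python
import numpy as np

def reflectance_TE(wavelength_nm, angle_deg, layers, n_in=1.0, n_sub=1.46):
    """Reflectance of a 1D multilayer stack for TE (s) polarization.

    layers: list of (refractive_index, thickness_nm) tuples, ordered from the
    incidence side. Uses the standard characteristic-matrix (transfer-matrix) method.
    """
    k0 = 2 * np.pi / wavelength_nm             # vacuum wavenumber (1/nm)
    kx = n_in * np.sin(np.radians(angle_deg))   # conserved in-plane momentum component
    q_in = np.sqrt(n_in**2 - kx**2 + 0j)        # n*cos(theta) in the incidence medium
    q_sub = np.sqrt(n_sub**2 - kx**2 + 0j)      # n*cos(theta) in the substrate
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        q = np.sqrt(n**2 - kx**2 + 0j)
        delta = k0 * q * d                      # phase thickness of this layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / q],
                          [1j * q * np.sin(delta), np.cos(delta)]])
    num = q_in * (M[0, 0] + M[0, 1] * q_sub) - (M[1, 0] + M[1, 1] * q_sub)
    den = q_in * (M[0, 0] + M[0, 1] * q_sub) + (M[1, 0] + M[1, 1] * q_sub)
    return abs(num / den) ** 2

# Hypothetical SiO2/SiN Bragg stack (indices and thicknesses are illustrative only)
unit_cell = [(1.46, 120.0), (2.0, 90.0)]
stack = unit_cell * 10
for wl in (500, 700, 900, 1100):
    print(wl, "nm ->", round(reflectance_TE(wl, angle_deg=30.0, layers=stack), 3))
```

Sweeping the wavelength and incidence angle with such a routine produces the kind of angle-resolved reflectance map in which band gaps and degeneracies of a 1D photonic crystal can be read off.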
10.1038/s41377-022-00821-9
2022
Light Science & Applications
Ideal nodal rings of one-dimensional photonic crystals in the visible region
Three-dimensional (3D) artificial metacrystals host rich topological phases, such as Weyl points, nodal rings, and 3D photonic topological insulators. These topological states enable a wide range of applications, including 3D robust waveguides, one-way fiber, and negative refraction of the surface wave. However, these carefully designed metacrystals are usually very complex, hindering their extension to nanoscale photonic systems. Here, we theoretically proposed and experimentally realized an ideal nodal ring in the visible region using a simple 1D photonic crystal. The π-Berry phase around the ring is manifested by a 2π reflection phase's winding and the resultant drumhead surface states. By breaking the inversion symmetry, the nodal ring can be gapped and the π-Berry phase would diffuse into a toroidal-shaped Berry flux, resulting in photonic ridge states (the 3D extension of quantum valley Hall states). Our results provide a simple and feasible platform for exploring 3D topological physics and its potential applications in nanophotonics.
843676
Identification of RNA editing profiles and their clinical relevance in lung adenocarcinoma
The incidence of lung adenocarcinoma (LUAD) is increasing gradually and its mortality remains high. Recent advances in genomic profiling of LUAD have identified a number of driver alterations in specific genes, enabling molecular classification and corresponding targeted therapy. However, only a fraction of LUAD patients with those driver mutations can benefit from targeted therapy, and the remaining large number of patients are left unclassified. RNA editing events are nucleotide changes made at the RNA level. The role of RNA editing events in tumorigenesis and their potential clinical utility have been reported in a series of studies, but the profile of RNA editing events and their clinical relevance in LUAD remained largely unknown.

"We describe a comprehensive landscape of RNA editing events in LUAD by integrating transcriptomic and genomic data from our NJLCC project and the TCGA project. We find that the global RNA editing level is significantly increased in tumor tissues and is highly heterogeneous across LUAD patients. The high RNA editing level in tumors can be attributed to both RNA and DNA alterations," said Dr. Cheng Wang, the first author of this work. The results indicated that the pattern of RNA editing events could represent the global characteristics of lung adenocarcinoma.

"We then define a new molecular subtype, EC3, based on the most variable RNA editing sites. Patients of this subtype show the poorest prognosis. Importantly, the subtype is independent of classic molecular subtypes based on gene expression or DNA methylation. We further propose a simplified prediction model including eight RNA editing sites to accurately distinguish the EC3 subtype," said Dr. Wang. Molecular typing based on a few RNA editing sites may have enormous potential in the clinic. "By applying the simplified model, we find that the EC3 subtype is associated with sensitivity to specific chemotherapy drugs," said Dr. Wang.

"Our study comprehensively describes the general pattern of RNA editing in LUAD. More importantly, we propose a novel molecular subtyping strategy of LUAD based on RNA editing that could predict the prognosis of patients. A simplified model with a few editing sites makes the strategy potentially applicable in the clinic," said Professor Hongbing Shen, the corresponding author.
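A simplified classifier of this kind can be pictured as a logistic regression over the editing levels of eight sites. The snippet below is a hypothetical illustration on synthetic data, not the authors' model: the site values, coefficients, and resulting AUC are made up for the sketch and do not reproduce the published results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic editing levels (rows: patients, columns: 8 hypothetical A-to-I sites in [0, 1])
n_patients, n_sites = 400, 8
X = rng.beta(2, 5, size=(n_patients, n_sites))
# Synthetic labels: 1 = "EC3-like" subtype, driven here by a few sites plus noise
logit = 4 * X[:, 0] + 3 * X[:, 1] - 2 * X[:, 2] - 1.5
y = (logit + rng.normal(0, 0.5, n_patients) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"held-out AUC on synthetic data: {auc:.2f}")
```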
10.1007/s11427-020-1928-0
2021
Science China Life Sciences
Identification of A-to-I RNA editing profiles and their clinical relevance in lung adenocarcinoma
Adenosine-to-inosine (A-to-I) RNA editing is a widespread posttranscriptional modification that has been shown to play an important role in tumorigenesis. Here, we evaluated a total of 19,316 RNA editing sites in the tissues of 80 lung adenocarcinoma (LUAD) patients from our Nanjing Lung Cancer Cohort (NJLCC) and 486 LUAD patients from the TCGA database. The global RNA editing level was significantly increased in tumor tissues and was highly heterogeneous across patients. The high RNA editing level in tumors was attributed to both RNA (ADAR1 expression) and DNA alterations (mutation load). Consensus clustering on RNA editing sites revealed a new molecular subtype (EC3) that was associated with the poorest prognosis of LUAD patients. Importantly, the new classification was independent of classic molecular subtypes based on gene expression or DNA methylation. We further proposed a simplified model including eight RNA editing sites to accurately distinguish the EC3 subtype in our patients. The model was further validated in the TCGA dataset and had an area under the curve (AUC) of the receiver operating characteristic curve of 0.93 (95%CI: 0.91-0.95). In addition, we found that LUAD cell lines with the EC3 subtype were sensitive to four chemotherapy drugs. These findings highlighted the importance of RNA editing events in the tumorigenesis of LUAD and provided insight into the application of RNA editing in the molecular subtyping and clinical treatment of cancer.
832779
Focus on context diminishes memory of negative events, researchers report
In a new study, researchers report they can manipulate how the brain encodes and retains emotional memories. The scientists found that focusing on the neutral details of a disturbing scene can weaken a person's later memories - and negative impressions - of that scene. The findings, reported in the journal Neuropsychologia, could lead to the development of methods to increase psychological resilience in people who are likely to experience traumatic events - like soldiers, police officers or firefighters. Those plagued by depression or anxiety might also benefit from this kind of strategy, the researchers said.

"We were interested in different properties of memories that are typically enhanced by emotion," said Florin Dolcos, a professor of psychology at the University of Illinois at Urbana-Champaign who led the study with psychology professor Sanda Dolcos. "The idea was to see whether by engaging in an emotional-regulation strategy we can influence those types of memory properties."

There are two categories of memory retrieval. A person may recall a lot of details about an event or experience, a process the researchers call "recollection." Or an individual may have a sense of familiarity with the subject matter but retain no specifics. "We and others showed a while ago that emotion tends to boost our memories," Florin Dolcos said. "We have also known that emotion specifically boosts recollected memories." This memory-enhancing quality of emotion is useful, but it can be problematic for those who recall - again and again - the details of a disturbing or traumatic event, he said. "Negative memories could lead to clinical conditions such as post-traumatic stress disorder, where something that is really traumatic stays with specific details in people's minds," he said.

In the study, 19 participants had their brains scanned while they looked at photos with negative or neutral content - a bloody face or a tree, for example - superimposed on a neutral background. Functional MRI signaled which brain areas were activated during the task. An eye-tracker recorded where participants looked. Before each photo, participants were asked to focus their attention either on the foreground or on the background of the image. After viewing it for four seconds, they rated how negatively the photo made them feel (not at all, very, or somewhere in between). Participants returned to the lab three to five days later to view the same photos - and a few new ones. They were asked to indicate whether the images were entirely new; familiar, but with no remembered specifics; or recollected in more detail.

Not surprisingly, when they focused on the foreground of photos with negative content, participants rated the photos as more negative. When they focused instead on the neutral backgrounds of photos with negative content, they still evaluated the photos as negative, but rated them less negatively. They also retained fewer detailed memories of the negative photos a few days later, the team found. "This is the first example that we know of that focusing on the context of an emotional event while it is occurring can directly influence memory formation in the moment - and one's recall of the event a few days later," Sanda Dolcos said.

The fMRI scans revealed that brain regions known to be associated with emotional memory formation were most active when participants focused on the foregrounds of negative photos. Brain activity differed, however, when participants focused on the backgrounds of negative images. "The background-focus condition was associated with decreased activity in the amygdala, hippocampus and anterior parahippocampal gyrus," the researchers wrote. These brain regions play a role in encoding memory and processing emotional information. A statistical analysis "also showed that reduced activity in these regions predicted greater reduction in emotional recollection."

"It might seem counterintuitive that we are looking for ways to reduce people's memories," Sanda Dolcos said. "Usually, people are interested in improving their memories. But we are finding that strategies like this, that can be employed when we are exposed to certain distressing situations, can help a lot." Florin and Sanda Dolcos are affiliates of the Beckman Institute for Advanced Science and Technology at the U. of I. The paper "The impact of focused attention on subsequent emotional recollection: A functional MRI investigation" is available online and from the U. of I. News Bureau.
10.1016/j.neuropsychologia.2020.107338
2020
Neuropsychologia
The impact of focused attention on subsequent emotional recollection: A functional MRI investigation
In his seminal works, Endel Tulving argued that functionally distinct memory systems give rise to subjective experiences of remembering and knowing (i.e., recollection- vs. familiarity-based memory, respectively). Evidence shows that emotion specifically enhances recollection, and this effect is subserved by a synergistic mechanism involving the amygdala (AMY) and hippocampus (HC). In extreme circumstances, however, uncontrolled recollection of highly distressing memories may lead to symptoms of affective disorders. Therefore, it is important to understand the factors that can diminish such detrimental effects. Here, we investigated the effects of Focused Attention (FA) on emotional recollection. FA is an emotion regulation strategy that has been proven quite effective in reducing the impact of emotional responses associated with the recollection of distressing autobiographical memories, but its impact during emotional memory encoding is not known. Functional MRI and eye-tracking data were recorded while participants viewed a series of composite negative and neutral images with distinguishable foreground (FG) and background (BG) areas. Participants were instructed to focus either on the FG or BG content of the images and to rate their emotional responses. About 4 days later, participants' memory was assessed using the R/K procedure, to indicate whether they Recollected specific contextual details about the encoded images or the images were just familiar to them - i.e., participants only Knew that they saw the pictures without being able to remember specific contextual details. First, results revealed that FA was successful in decreasing memory for emotional pictures viewed in BG Focus condition, and this effect was driven by recollection-based retrieval. Second, the BG Focus condition was associated with decreased activity in the AMY, HC, and anterior parahippocampal gyrus for subsequently recollected emotional items. Moreover, correlation analyses also showed that reduced activity in these regions predicted greater reduction in emotional recollection following FA. These results demonstrate the effectiveness of FA in mitigating emotional experiences and emotional recollection associated with unpleasant emotional events.
667182
New childhood dementia insight
Is the eye a window to the brain in Sanfilippo syndrome, an untreatable form of childhood-onset dementia? Australian researchers ask this question in a new publication. The findings of the NHMRC-funded project, just published in the international journal Acta Neuropathologica Communications, highlight the potential for using widely available retinal imaging techniques to learn more about brain disease and monitor treatment efficacy.

Sanfilippo syndrome is one of a group of about 70 inherited conditions that collectively affect 1 in 2,800 children in Australia and is more common than cystic fibrosis and other better-known diseases. Around the world, 700,000 children and young people are living with childhood dementia.

Researchers from Flinders University, with collaborators at the South Australian Health and Medical Research Institute (SAHMRI) and The University of Adelaide, studied Sanfilippo syndrome in mouse models, discovering for the first time that the advancement of retinal disease parallels that occurring in the brain. "This means the retina may provide an easily accessible neural tissue via which brain disease development and its amelioration with treatment can be monitored," says Associate Professor Kim Hemsley, who leads the Childhood Dementia Research Group at the Flinders Health and Medical Research Institute (FHMRI) at Flinders University.

First author Helen Beard, from the Childhood Dementia Research Group at Flinders University, says there is an urgent need to find treatments and methods to monitor disease progression. Disorders that cause childhood dementia are neurodegenerative (debilitating and progressive) and impair mental function, according to the Childhood Dementia Initiative. "This study offers new hope of using the progression of lesions in the retina - which is part of the central nervous system - as a 'window to the brain'," says senior research officer Ms Beard. "We were able to show that disease lesions appear in the retina very early in the disease course, in fact much earlier than previously thought," she says. This means that in addition to ensuring potential treatments reach the brain, researchers must also confirm that they reach the retina to give patients maximum quality of life.

"Our findings suggest that retinal imaging may provide a strategy for monitoring therapeutic efficacy, given that some treatments currently being trialled in children with Sanfilippo syndrome are able to access both brain and retina," Associate Professor Hemsley says. Therapeutic strategies currently being evaluated in human clinical trials include intravenous delivery of an AAV9-based gene therapy, and a non-invasive, quantitative measure of neurodegeneration would support the development of effective treatments, she says.
10.1186/s40478-020-01070-w
2020
Acta Neuropathologica Communications
Is the eye a window to the brain in Sanfilippo syndrome?
Sanfilippo syndrome is an untreatable form of childhood-onset dementia. Whilst several therapeutic strategies are being evaluated in human clinical trials including i.v. delivery of AAV9-based gene therapy, an urgent unmet need is the availability of non-invasive, quantitative measures of neurodegeneration. We hypothesise that as part of the central nervous system, the retina may provide a window through which to ‘visualise’ degenerative lesions in brain and amelioration of them following treatment. This is reliant on the age of onset and the rate of disease progression being equivalent in retina and brain. For the first time we have assessed in parallel, the nature, age of onset and rate of retinal and brain degeneration in a mouse model of Sanfilippo syndrome. Significant accumulation of heparan sulphate and expansion of the endo/lysosomal system was observed in both retina and brain pre-symptomatically (by 3 weeks of age). Robust and early activation of micro- and macroglia was also observed in both tissues. There was substantial thinning of retina and loss of rod and cone photoreceptors by ~ 12 weeks of age, a time at which cognitive symptoms are noted. Intravenous delivery of a clinically relevant AAV9-human sulphamidase vector to neonatal mice prevented disease lesion appearance in retina and most areas of brain when assessed 6 weeks later. Collectively, the findings highlight the previously unrecognised early and significant involvement of retina in the Sanfilippo disease process, lesions that are preventable by neonatal treatment with AAV9-sulphamidase. Critically, our data demonstrate for the first time that the advancement of retinal disease parallels that occurring in brain in Sanfilippo syndrome, thus retina may provide an easily accessible neural tissue via which brain disease development and its amelioration with treatment can be monitored.
917471
Feeding 10 billion people by 2050 within planetary limits may be achievable
A global shift towards healthy and more plant-based diets, halving food loss and waste, and improving farming practices and technologies are required to feed 10 billion people sustainably by 2050, a new study finds. Adopting these options reduces the risk of crossing global environmental limits related to climate change, the use of agricultural land, the extraction of freshwater resources, and the pollution of ecosystems through the overapplication of fertilizers, according to the researchers.

The study, published in the journal Nature, is the first to quantify how food production and consumption affect the planetary boundaries that describe a safe operating space for humanity, beyond which Earth's vital systems could become unstable.

"No single solution is enough to avoid crossing planetary boundaries. But when the solutions are implemented together, our research indicates that it may be possible to feed the growing population sustainably," says Dr Marco Springmann of the Oxford Martin Programme on the Future of Food and the Nuffield Department of Population Health at the University of Oxford, who led the study. "Without concerted action, we found that the environmental impacts of the food system could increase by 50-90% by 2050 as a result of population growth and the rise of diets high in fats, sugars and meat. In that case, all planetary boundaries related to food production would be surpassed, some of them by more than twofold."

The study, funded by EAT as part of the EAT-Lancet Commission for Food, Planet and Health and by Wellcome's "Our Planet, Our Health" partnership on Livestock Environment and People, combined detailed environmental accounts with a model of the global food system that tracks the production and consumption of food across the world. With this model, the researchers analysed several options that could keep the food system within environmental limits. They found:

- Climate change cannot be sufficiently mitigated without dietary changes towards more plant-based diets. Adopting more plant-based "flexitarian" diets globally could reduce greenhouse gas emissions by more than half, and also reduce other environmental impacts, such as fertilizer application and the use of cropland and freshwater, by a tenth to a quarter.
- In addition to dietary changes, improving management practices and technologies in agriculture is required to limit pressures on agricultural land, freshwater extraction, and fertilizer use. Increasing agricultural yields from existing cropland, balancing the application and recycling of fertilizers, and improving water management could, along with other measures, reduce those impacts by around half.
- Finally, halving food loss and waste is needed to keep the food system within environmental limits. Halving food loss and waste could, if globally achieved, reduce environmental impacts by up to a sixth (16%).

"Many of the solutions we analysed are being implemented in some parts of the world, but it will need strong global co-ordination and rapid upscaling to make their effects felt," says Springmann.

"Improving farming technologies and management practices will require increasing investment in research and public infrastructure, the right incentive schemes for farmers, including support mechanisms to adopt best available practices, and better regulation, for example of fertilizer use and water quality," says Line Gordon, executive director of the Stockholm Resilience Centre and an author on the report.

Fabrice de Clerck, director of science at EAT, says, "Tackling food loss and waste will require measures across the entire food chain, from storage and transport, through food packaging and labelling, to changes in legislation and business behaviour that promote zero-waste supply chains."

"When it comes to diets, comprehensive policy and business approaches are essential to make dietary changes towards healthy and more plant-based diets possible and attractive for a large number of people. Important aspects include school and workplace programmes, economic incentives and labelling, and aligning national dietary guidelines with the current scientific evidence on healthy eating and the environmental impacts of our diet," adds Springmann.

The paper, "Options for keeping the food system within environmental limits", was published by Nature on 10 October 2018 at http://dx.doi.org/10.1038/s41586-018-0594-0.

Notes to the editor: EAT is a non-profit science-based global platform for food system transformation founded by the Stordalen Foundation, Stockholm Resilience Centre and Wellcome. The EAT-Lancet report will be published in January 2019. Wellcome is a global charitable foundation, both politically and financially independent, that supports scientists and researchers, takes on big problems, fuels imaginations, and sparks debate. Wellcome's "Our Planet, Our Health" partnership on Livestock Environment and People (LEAP) is a research programme based at the Oxford Martin School, University of Oxford, that aims to understand the health, environmental, social and economic effects of meat and dairy consumption, to provide evidence and tools for decision makers to promote healthy and sustainable diets. The Oxford Martin School at the University of Oxford is a world-leading centre of pioneering research that addresses global challenges. It invests in research that cuts across disciplines to tackle a wide range of issues such as climate change, disease and inequality. The School supports novel, high-risk and multidisciplinary projects that may not fit within conventional funding channels, because breaking boundaries can produce results that could dramatically improve the wellbeing of this and future generations. Underpinning all its research is the need to translate academic excellence into impact - from innovations in science, medicine and technology, through to providing expert advice and policy recommendations.
10.1038/s41586-018-0594-0
2018
Nature
Options for keeping the food system within environmental limits
The food system is a major driver of climate change, changes in land use, depletion of freshwater resources, and pollution of aquatic and terrestrial ecosystems through excessive nitrogen and phosphorus inputs. Here we show that between 2010 and 2050, as a result of expected changes in population and income levels, the environmental effects of the food system could increase by 50-90% in the absence of technological changes and dedicated mitigation measures, reaching levels that are beyond the planetary boundaries that define a safe operating space for humanity. We analyse several options for reducing the environmental effects of the food system, including dietary changes towards healthier, more plant-based diets, improvements in technologies and management, and reductions in food loss and waste. We find that no single measure is enough to keep these effects within all planetary boundaries simultaneously, and that a synergistic combination of measures will be needed to sufficiently mitigate the projected increase in environmental pressures.
515982
Systems pharmacology modelers accelerate drug discovery in Alzheimer's
Alzheimer's is a chronic neurodegenerative disease that leads to cognitive impairment and memory loss; roughly one in three people older than 70 suffers from it. These changes are caused by functional disorders and the subsequent death of neurons, but the triggers of the processes that result in brain cell death remain unknown, which is why there is still no effective therapy for Alzheimer's disease. At the moment, the most common hypothesis is the theory of the toxic effect of the beta-amyloid protein, which accumulates in the brain with age and aggregates into insoluble amyloid plaques. The presence of these plaques in the brain is the main marker of Alzheimer's disease (unfortunately, often found post mortem). Soluble forms of the protein (those not aggregated into plaques) are considered toxic as well. Modern therapies act in one of three ways: they block the production of soluble beta-amyloid, destroy the protein before it transforms into the insoluble form, or stimulate plaque degradation.

"Clinical trials for Alzheimer's therapies have one significant feature: their short duration. They last no more than five years, whereas the disease can progress for decades, and early Phase I-II tests last only a few weeks. With such an experimental design, one can only affect the processes of distribution and degradation of the soluble beta-amyloid forms. We therefore developed this part of our model to analyze and predict the dynamics of the new generation of drugs, for instance inhibitors of amyloid production," says Tatiana Karelina, head of the neurodegenerative disease modeling group at InSysBio LLC.

The first difficulty encountered by drug developers is the interpretation of results obtained in animal tests. Most studies of the distribution of amyloid are carried out in mice: scientists inject a labeled protein into the mouse brain and observe the distribution of the radioactive label, or study the dynamics of amyloid in the presence of drugs. Based on the data obtained, researchers can calculate the "therapeutic window" for the medication - the range of doses from the minimum effective to the maximum non-toxic. Doses for humans or monkeys are then calculated using mass or volume scaling (the parameters are changed by the same factor by which the body's mass or volume exceeds that of the mouse).

The project team collected data from the literature and derived a system of equations that fully described the existing results. The model was first calibrated (i.e., the missing parameters were estimated) for the mouse, and then for the human and the monkey. It turned out that the scaling method cannot be used to transfer results from rodents to primates (as is often done). The mathematical equations showed that not only does the rate of production of beta-amyloid (reflecting the activity of the corresponding genes) differ, but the blood-brain barrier is also different in rodents and higher primates. At the same time, there was no significant difference between human and monkey, so standard scaling can be used to translate predictions between them.

The next big question in Alzheimer's clinical trials is how to tell whether a drug engages its specific target in the short term. It is impossible to observe the processes occurring in the human brain directly, so a cerebrospinal fluid sample is usually taken to analyze the change in the concentration of beta-amyloid. In fact, these data differ strongly from the amyloid concentrations in the brain, since the cerebrospinal fluid is strongly influenced by processes taking place in the blood plasma, and amyloid there shows different dynamics.

"With a large structural model calibrated on a large amount of data, one can readily relate the results of cerebrospinal fluid sample analysis to the real processes in the patient's brain. This will greatly accelerate the development of new drugs and improve the accuracy of therapy selection," explains Tatiana Karelina. The scientists report that their model also predicts how these new drugs should be administered: the total daily dose can be reduced, but it should be split into several parts during the day to provide optimal efficacy in the brain. The InSysBio team is confident that systems-pharmacology modeling can greatly improve the development of drugs for Alzheimer's disease and is already negotiating the introduction of the technology with partners in the pharmaceutical industry.
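The kind of compartmental kinetics described here can be illustrated with a toy ordinary-differential-equation model of Aβ production, transfer between brain, CSF and plasma, and clearance. This is a minimal sketch with made-up rate constants, not the InSysBio model; the compartments, parameter values, and the production-inhibition term are all illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order rate constants (1/h); not fitted to any data
k_prod = 1.0     # Abeta production in brain
k_bc   = 0.2     # brain -> CSF transfer
k_cp   = 0.5     # CSF -> plasma transfer
k_clr  = 1.5     # clearance from plasma
inhibition = 0.5 # fractional inhibition of production by a hypothetical drug

def abeta_kinetics(t, y, inhib):
    brain, csf, plasma = y
    d_brain  = k_prod * (1 - inhib) - k_bc * brain
    d_csf    = k_bc * brain - k_cp * csf
    d_plasma = k_cp * csf - k_clr * plasma
    return [d_brain, d_csf, d_plasma]

y0 = [k_prod / k_bc, k_prod / k_cp, k_prod / k_clr]   # pre-dose steady state
sol = solve_ivp(abeta_kinetics, (0, 48), y0, args=(inhibition,), dense_output=True)

t = np.linspace(0, 48, 7)
brain, csf, _ = sol.sol(t)
for ti, b, c in zip(t, brain, csf):
    print(f"t={ti:5.1f} h  brain Abeta={b:5.2f}  CSF Abeta={c:5.2f} (arbitrary units)")
```

Even in this toy version, the brain and CSF compartments relax on different timescales, which is the qualitative point made above: CSF measurements do not track brain concentrations one-to-one.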
10.1002/psp4.12211
2017
CPT Pharmacometrics & Systems Pharmacology
A Translational Systems Pharmacology Model for Aβ Kinetics in Mouse, Monkey, and Human
A mechanistic model of amyloid beta production, degradation, and distribution was constructed for mouse, monkey, and human, calibrated and externally verified across multiple datasets. Simulations of single-dose avagacestat treatment demonstrate that the Aβ42 brain inhibition may exceed that in cerebrospinal fluid (CSF). The dose that achieves 50% CSF Aβ40 inhibition for humans (both healthy and with Alzheimer's disease (AD)) is about 1 mpk, one order of magnitude lower than for mouse (10 mpk), mainly because of differences in pharmacokinetics. The predicted maximal percent of brain Aβ42 inhibition after single-dose avagacestat is higher for AD subjects (about 60%) than for healthy individuals (about 45%). The probability of achieving a normal physiological level for Aβ42 in brain (1 nM) during multiple avagacestat dosing can be increased by using a dosing regimen that achieves higher exposure. The proposed model allows prediction of brain pharmacodynamics for different species given differing dosing regimens.
975218
New study models the transmission of foreshock waves towards Earth
An international team of scientists led by Lucile Turc, an Academy Research Fellow at the University of Helsinki, and supported by the International Space Science Institute in Bern has spent three years studying the propagation of electromagnetic waves in near-Earth space. The team studied the waves in the region where the solar wind collides with Earth's magnetic field, called the foreshock, and how the waves are transmitted to the other side of the shock. The results of the study are now published in Nature Physics.

"How the waves would survive passing through the shock has remained a mystery since the waves were first discovered in the 1970s. No evidence of those waves has ever been found on the other side of the shock," says Turc.

The team used a cutting-edge computer model, Vlasiator, developed at the University of Helsinki by a group led by Professor Minna Palmroth, to recreate and understand the physical processes at play in the wave transmission. A careful analysis of the simulation revealed the presence of waves on the other side of the shock with almost identical properties to those in the foreshock. "Once it was known what and where to look for, clear signatures of the waves were found in satellite data, confirming the numerical results," says Lucile Turc.

Around our planet is a magnetic bubble, the magnetosphere, which shields us from the solar wind, a stream of charged particles coming from the Sun. Electromagnetic waves, appearing as small oscillations of the Earth's magnetic field, are frequently recorded by scientific observatories in space and on the ground. These waves can be caused by the impact of the changing solar wind or come from outside the magnetosphere. Electromagnetic waves play an important role in creating adverse space weather around our planet: for example, they can accelerate particles to high energies, which can then damage spacecraft electronics, and cause these particles to fall into the atmosphere.

On the side of Earth facing the Sun, scientific observatories frequently record oscillations at the same period as the waves that form ahead of the Earth's magnetosphere, singing a clear magnetic song in a region of space called the foreshock. This has led space scientists to think that there is a connection between the two, and that the waves in the foreshock can enter the Earth's magnetosphere and travel all the way to the Earth's surface. However, one major obstacle lies in their way: the waves must cross the shock before reaching the magnetosphere.

"At first, we thought that the initial theory proposed in the 1970s was correct: the waves could cross the shock unchanged. But there was an inconsistency in the wave properties that this theory could not reconcile, so we investigated further," says Turc. "Eventually, it became clear that things were much more complicated than it seemed. The waves we saw behind the shock were not the same as those in the foreshock, but new waves created at the shock by the periodic impact of foreshock waves."

When the solar wind flows through the shock, it is compressed and heated, and the shock strength determines how much compression and heating take place. Turc and her colleagues showed that foreshock waves are able to tune the shock, making it alternately stronger or weaker as wave troughs or crests arrive at the shock. As a result, the solar wind behind the shock changes periodically and creates new waves, in concert with the foreshock waves.

The numerical model also showed that these waves can only be detected in a narrow region behind the shock and that they can easily be hidden by the turbulence in this region, which likely explains why they had not been observed before. While the waves originating from the foreshock play only a limited role in space weather at Earth, they are of great importance for understanding the fundamental physics of our universe.
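The idea that the shock's compression depends on its strength, and is therefore modulated by incoming wave crests and troughs, can be pictured with the gas-dynamic Rankine-Hugoniot jump condition. The sketch below ignores magnetic fields entirely and is not the Vlasiator model; the upstream Mach number and the modulation amplitude are arbitrary illustrative values.

```python
import numpy as np

GAMMA = 5.0 / 3.0   # adiabatic index of a monatomic plasma

def compression_ratio(mach):
    """Gas-dynamic Rankine-Hugoniot density compression across a shock."""
    return ((GAMMA + 1.0) * mach**2) / ((GAMMA - 1.0) * mach**2 + 2.0)

# Upstream Mach number modulated by a foreshock wave (30 s period, arbitrary amplitude)
t = np.linspace(0.0, 90.0, 7)                  # seconds
mach = 6.0 + 1.0 * np.sin(2.0 * np.pi * t / 30.0)

for ti, m in zip(t, mach):
    print(f"t = {ti:4.0f} s  upstream Mach = {m:4.2f}  compression = {compression_ratio(m):4.2f}")
```

The downstream compression rises and falls in step with the incoming wave, which is the periodic "tuning" of the shock described above.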
10.1038/s41567-022-01837-z
2022
Nature Physics
Transmission of foreshock waves through Earth’s bow shock
The Earth's magnetosphere and its bow shock, which is formed by the interaction of the supersonic solar wind with the terrestrial magnetic field, constitute a rich natural laboratory enabling in situ investigations of universal plasma processes. Under suitable interplanetary magnetic field conditions, a foreshock with intense wave activity forms upstream of the bow shock. So-called 30 s waves, named after their typical period at Earth, are the dominant wave mode in the foreshock and play an important role in modulating the shape of the shock front and affect particle reflection at the shock. These waves are also observed inside the magnetosphere and down to the Earth's surface, but how they are transmitted through the bow shock remains unknown. By combining state-of-the-art global numerical simulations and spacecraft observations, we demonstrate that the interaction of foreshock waves with the shock generates earthward-propagating, fast-mode waves, which reach the magnetosphere. These findings give crucial insight into the interaction of waves with collisionless shocks in general and their impact on the downstream medium.
912619
Tungsten offers nano-interconnects a path of least resistance
As microchips become ever smaller and therefore faster, the shrinking size of their copper interconnects leads to increased electrical resistivity at the nanoscale. Finding a solution to this impending technical bottleneck is a major problem for the semiconductor industry. One promising possibility involves reducing the resistivity size effect by altering the crystalline orientation of interconnect materials.

A pair of researchers from Rensselaer Polytechnic Institute conducted electron transport measurements in epitaxial single-crystal layers of tungsten (W) as one such potential interconnect solution. They performed first-principles simulations, finding a definite orientation-dependent effect. The anisotropic resistivity effect they found was most marked between layers with two particular orientations of the lattice structure, namely W(001) and W(110). The work is published this week in the Journal of Applied Physics, from AIP Publishing.

Author Pengyuan Zheng noted that both the 2013 and 2015 International Technology Roadmap for Semiconductors (ITRS) called for new materials to replace copper as interconnect material, to limit the resistance increase at reduced scale and minimize both power consumption and signal delay. In their study, Zheng and co-author Daniel Gall chose tungsten because of its asymmetric Fermi surface - its electron energy structure. This made it a good candidate to demonstrate the anisotropic resistivity effect at the small scales of interest. "The bulk material is completely isotropic, so the resistivity is the same in all directions," Gall said. "But if we have thin films, then the resistivity varies considerably."

To test the most promising orientations, the researchers grew epitaxial W(001) and W(110) films on substrates and conducted resistivity measurements of both while immersed in liquid nitrogen at 77 Kelvin (about -196 degrees Celsius) and at room temperature, 295 Kelvin. "We had roughly a factor of 2 difference in the resistivity between the 001 oriented tungsten and 110 oriented tungsten," Gall said; the resistivity was considerably smaller in the W(110) layers.

Although the measured anisotropic resistance effect was in good agreement with what they expected from calculations, the effective mean free path - the average distance electrons can move before scattering against a boundary - in the thin-film experiments was much larger than the theoretical value for bulk tungsten. "An electron travels through a wire on a diagonal; it hits a surface, gets scattered, and then continues traveling until it hits something else, maybe the other side of the wire or a lattice vibration," Gall said. "But this model looks wrong for small wires."

The experimenters believe this may be explained by quantum mechanical processes of the electrons that arise at these limited scales. Electrons may be simultaneously touching both sides of the wire or experiencing increased electron-phonon (lattice vibration) coupling as the layer thickness decreases, phenomena that could affect the search for another metal to replace copper interconnects. "The envisioned conductivity advantages of rhodium, iridium, and nickel may be smaller than predicted," said Zheng.

Findings like these will prove increasingly important as quantum mechanical scales become more commonplace for the demands of interconnects. The research team is continuing to explore the anisotropic size effect in other metals with nonspherical Fermi surfaces, such as molybdenum. They found that the orientation of the surface relative to the layer orientation and transport direction is vital, as it determines the actual increase in resistivity at these reduced dimensions. "The results presented in this paper clearly demonstrate that the correct choice of crystalline orientation has the potential to reduce nanowire resistance," said Zheng.

The importance of the work extends beyond current nanoelectronics to new and developing technologies, including transparent flexible conductors, thermoelectrics and memristors that can potentially store information. "It's the problem that defines what you can do in the next technology," Gall said.

The article, "The anisotropic size effect of the electrical resistivity of metal thin films: Tungsten," is authored by Pengyuan Zheng and Daniel Gall. It appeared in the Journal of Applied Physics on Oct. 3, 2017 (DOI: 10.1063/1.5004118) and can be accessed at http://aip.scitation.org/doi/full/10.1063/1.5004118.
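The size effect discussed here is often summarized with the approximate Fuchs-Sondheimer expression rho(t) = rho_bulk * [1 + 3*(1 - p)*lambda/(8*t)], valid when the thickness t is not much smaller than the mean free path lambda. The snippet below is a back-of-the-envelope illustration using the effective mean free paths quoted in the abstract (18.8 nm for the 110-oriented and 33 nm for the 001-oriented layers at 295 K); the bulk resistivity value and the assumption of fully diffuse surface scattering (p = 0) are illustrative, not taken from the paper.

```python
RHO_BULK_W = 5.3     # assumed bulk resistivity of W at 295 K, micro-ohm cm (approximate)
P_SPECULAR = 0.0     # assumed fully diffuse surface scattering

def fs_resistivity(thickness_nm, lambda_nm, rho_bulk=RHO_BULK_W, p=P_SPECULAR):
    """Approximate Fuchs-Sondheimer thin-film resistivity (thick-film expansion)."""
    return rho_bulk * (1.0 + 3.0 * (1.0 - p) * lambda_nm / (8.0 * thickness_nm))

# Effective mean free paths at 295 K quoted in the abstract
lambdas = {"W(110)": 18.8, "W(001)": 33.0}

for t in (320.0, 50.0, 10.0, 4.5):
    row = ", ".join(f"{name}: {fs_resistivity(t, lam):.2f}" for name, lam in lambdas.items())
    print(f"t = {t:6.1f} nm  ->  rho ({row})  micro-ohm cm")
```

The larger effective mean free path of the 001-oriented layers translates directly into a larger resistivity penalty as the film gets thinner, which is the anisotropy the measurements report.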
10.1063/1.5004118
2017
Journal of Applied Physics
The anisotropic size effect of the electrical resistivity of metal thin films: Tungsten
The resistivity of nanoscale metallic conductors is orientation dependent, even if the bulk resistivity is isotropic and electron scattering cross-sections are independent of momentum, surface orientation, and transport direction. This is demonstrated using a combination of electron transport measurements on epitaxial tungsten layers in combination with transport simulations based on the ab initio predicted electronic structure, showing that the primary reason for the anisotropic size effect is the non-spherical Fermi surface. Electron surface scattering causes the resistivity of epitaxial W(110) and W(001) layers measured at 295 and 77 K to increase as the layer thickness decreases from 320 to 4.5 nm. However, the resistivity is larger for W(001) than W(110) which, if describing the data with the classical Fuchs-Sondheimer model, yields an effective electron mean free path λ* for bulk electron-phonon scattering that is nearly a factor of two smaller for the 110 vs the 001-oriented layers, with λ(011)*= 18.8 ± 0.3 nm vs λ(001)* = 33 ± 0.4 nm at 295 K. Boltzmann transport simulations are done by integration over real and reciprocal space of the thin film and the Brillouin zone, respectively, describing electron-phonon scattering by momentum-independent constant relaxation-time or mean-free-path approximations, and electron-surface scattering as a boundary condition which is independent of electron momentum and surface orientation. The simulations quantify the resistivity increase at the reduced film thickness and predict a smaller resistivity for W(110) than W(001) layers with a simulated ratio λ(011)*/λ(001)* = 0.59 ± 0.01, in excellent agreement with 0.57 ± 0.01 from the experiment. This agreement suggests that the resistivity anisotropy in thin films of metals with isotropic bulk electron transport is fully explained by the non-spherical Fermi surface and velocity distribution, while electron scattering at phonons and surfaces can be kept isotropic and independent of the surface orientation. The simulations correctly predict the anisotropy of the resistivity size effect, but underestimate its absolute magnitude. Quantitative analyses suggest that this may be due to (i) a two-fold increase in the electron-phonon scattering cross-section as the layer thickness is reduced to 5 nm or (ii) a variable wave-vector dependent relaxation time for electron-phonon scattering.
504278
Sunfleck use research needs appropriate experimental leaves
"All the roads of learning begin in the darkness and go out into the light." This quote is often attributed to Hippocrates and exhibits a double level of relevance in photosynthesis research. The use of light by plant leaves to drive photosynthesis is often studied in steady state environments, but most plant leaves are required to adjust to fluctuations in incident light every day. The research into use of fluctuating light by plant leaves has expanded in recent decades. A study from the Western Pacific Tropical Research Center at the University of Guam has shown that accurate results in this subdiscipline of plant physiology can only be obtained when methods employ leaves that were grown in fluctuating light prior to experimental methods. The results have been published in a recent issue of the journal Plants (doi: 10.3390/plants9070905). The experimental results confirmed that leaves which were constructed under homogeneous shade such as commercial shade fabric did not respond to fluctuating light in a manner that was similar to leaves which were constructed under fluctuating light. To expand the applicability of the results, three model species were employed for this study. Soybean represented eudicot angiosperms, corn represented monocot angiosperms, and the native cycad species in Guam represented gymnosperms. The experimental approach called on traditional response variables to ensure applicability of the results to the established literature. One response variable was the speed of increase in photosynthesis when a leaf that is acclimated to deep shade is suddenly challenged with saturating incident light, a response that physiologists call induction. A second response variable was the influence of a short sunfleck on photosynthetic induction during a subsequent sunfleck, a response that physiologists call priming. "As expected, the leaves that developed under fluctuating light exhibited more rapid photosynthetic induction and more successful priming than the leaves that developed in homogeneous shade," said Thomas Marler, author of the paper. This new knowledge indicates a substantial percentage of the established leaf physiology literature concerning use of sunflecks includes results that are dubious because the sunfleck methods used experimental leaves that were grown under shadecloth. The study also reveals the value of off-site conservation germplasm collections. "Ubiquitous invasive insect herbivores in Guam create difficulties for research on the native cycad species," said Marler. "The ex situ germplasm collections in several countries allow scientists to sustain relevant research on this important cycad species." This study, for example, was conducted in one of these managed gardens in the Philippines where the plants are not threatened by the insects. When new knowledge illuminates a fallacy in established experimental methods, a search for an empirical approach for salvaging the published information is appropriate. If a universal conversion factor could be identified, for example, then the published data could be corrected with that conversion. Unfortunately, there were quantitative differences among the three model species with regard to how the homogeneous shade leaves behaved compared to the heterogeneous shade leaves. Therefore, the published sunfleck use literature based on methods that employed homogeneous shade-grown leaves should be interpreted with caution.
10.3390/plants9070905
2020
Plants
Artifleck: The Study of Artifactual Responses to Light Flecks with Inappropriate Leaves
Methods in sunfleck research commonly employ the use of experimental leaves which were constructed in homogeneous light. These experimental organs may behave unnaturally when they are challenged with fluctuating light. Photosynthetic responses to heterogeneous light and leaf macronutrient relations were determined for Cycas micronesica, Glycine max, and Zea mays leaves that were grown in homogeneous shade, heterogeneous shade, or full sun. The speed of priming where one light fleck increased the photosynthesis during a subsequent light fleck was greatest for the leaves grown in heterogeneous shade. The rate of induction and the ultimate steady-state photosynthesis were greater for the leaves that were grown in heterogeneous shade versus the leaves grown in homogeneous shade. The leaf mass per area, macronutrient concentration, and macronutrient stoichiometry were also influenced by the shade treatments. The amplitude and direction in which the three developmental light treatments influenced the response variables were not universal among the three model species. The results indicate that the historical practice of using experimental leaves which were constructed under homogeneous light to study leaf responses to fluctuating light may produce artifacts that generate dubious interpretations.
891834
If cancer were easy, every cell would do it
A new Scientific Reports paper puts an evolutionary twist on a classic question. Instead of asking why we get cancer, Leonardo Oña of Osnabrück University and Michael Lachmann of the Santa Fe Institute use signaling theory to explore how our bodies have evolved to keep us from getting more cancer.

It isn't obvious why, when any cancer arises, it doesn't very quickly learn to take advantage of the body's own signaling mechanisms for quick growth. After all, unlike an infection, a cancer can easily use the body's own chemical language. "Any signal that the body uses, an infection has to evolve to make," says Lachmann. "If a thief wants to unlock your house, they have to figure out how to pick the lock on the door. But cancer cells have the keys to your house. How do you protect against that? How do you protect against an intruder who knows everything you know, and has all the tools and keys you have?" Their answer: you make the keys very costly to use.

Oña and Lachmann's evolutionary model reveals two factors in our cellular architecture that thwart cancer: the expense of manufacturing growth factors ("keys") and the range of benefits delivered to cells nearby. Individual cancer cells are kept in check when there is a high energetic cost for creating growth factors that signal cell growth. To understand the evolutionary dynamics in the model, the authors emphasize the importance of thinking about the competition between a mutant cancerous cell and the surrounding cells. When a mutant cell arises and puts out a signal for growth, that signal also provides resources to adjacent, non-mutated cells. Thus, when the benefits are distributed over a radius around the signaling cell, the mutant cells have a hard time out-competing their neighbors and cannot get established. The cancer loses the ability to give the signal.

The work represents a novel application of evolutionary biology toward a big-picture understanding of cancer. Oña and Lachmann draw from the late biologist Amos Zahavi's handicap principle, which explains how evolutionary systems are stabilized against "cheaters" when dishonest signals are costlier to produce than the benefit they provide. The male peacock's elaborate tail is the classic example of a costly signal - an unhealthy bird would not have the energetic resources to grow an elaborate tail, and thus could not "fake" a signal of its evolutionary fitness. By the handicap principle, a cancer cell would be analogous to the unhealthy peacock that can't afford to signal for attention.

So how do some cancer cells overcome these evolutionary constraints? The authors point out that their model only addresses the scenario of an individual cancer trying to invade a healthy population. Once a cancer has overcome the odds of extinction and reached a certain critical size, other dynamics prevail. "Many mechanisms seem to have evolved to prevent cancer -- from immune system control, cell death, limits on cell proliferation, to tissue architecture," the authors write. "Our model only studies the reduced chance for invasion."

"Cancer is incredibly complex," Lachmann says, "and our model is relatively simple. Still, we believe it's an important step toward understanding cancer and cancer prevention in evolutionary terms."
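The abstract below mentions calculating how costly signalling changes the fixation probability of a signalling mutant. A minimal way to picture this is the standard Moran-process fixation probability for a single mutant of relative fitness r, phi = (1 - 1/r) / (1 - 1/r^N). The snippet is an illustrative toy, not the authors' model: the way the benefit, cost, and sharing fraction enter the mutant's fitness is an assumption made up for this sketch.

```python
def fixation_probability(r, N):
    """Moran-process fixation probability of a single mutant with relative fitness r."""
    if abs(r - 1.0) < 1e-12:
        return 1.0 / N          # neutral limit
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** N)

def mutant_fitness(benefit, cost, shared_fraction):
    """Toy fitness of a signalling mutant relative to non-signalling neighbours.

    The mutant pays the full signalling cost, but a fraction of the growth benefit
    leaks to neighbouring cells, eroding the mutant's relative advantage.
    """
    exclusive_benefit = benefit * (1.0 - shared_fraction)
    return 1.0 + exclusive_benefit - cost

N = 1000  # cells in the local competing population (hypothetical)
for shared in (0.0, 0.5, 0.9):
    r = mutant_fitness(benefit=0.10, cost=0.05, shared_fraction=shared)
    print(f"shared fraction {shared:.1f}: r = {r:.3f}, "
          f"fixation probability = {fixation_probability(r, N):.2e}")
```

As more of the benefit is shared with neighbours while the cost stays with the signaller, the mutant's relative fitness drops below one and its fixation probability collapses, which is the qualitative argument of the paper.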
10.1038/s41598-020-57494-w
2020
Scientific Reports
Signalling architectures can prevent cancer evolution
Abstract Cooperation between cells in multicellular organisms is preserved by an active regulation of growth through the control of cell division. Molecular signals used by cells for tissue growth are usually present during developmental stages, angiogenesis, wound healing and other processes. In this context, the use of molecular signals triggering cell division is a puzzle, because any molecule inducing and aiding growth can be exploited by a cancer cell, disrupting cellular cooperation. A significant difference is that normal cells in a multicellular organism have evolved in competition between high-level organisms to be altruistic, being able to send signals even if it is to their detriment. Conversely, cancer cells evolve their abuse over the cancer’s lifespan by out-competing their neighbours. A successful mutation leading to cancer must evolve to be adaptive, enabling a cancer cell to send a signal that results in higher chances to be selected. Using a mathematical model of such a molecular signalling mechanism, this paper argues that a signal mechanism would be effective against abuse by cancer if it affects the cell that generates the signal as well as neighbouring cells that would receive a benefit without any cost, resulting in a selective disadvantage for a cancer signalling cell. We find that such molecular signalling mechanisms normally operate in cells as exemplified by growth factors. In scenarios of global and local competition between cells, we calculate how this process affects the fixation probability of a mutant cell generating such a signal, and find that this process can play a key role in limiting the emergence of cancer.
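The abstract's fixation-probability argument can be made concrete with a toy simulation. The sketch below is our own minimal Moran-style model, not the authors' implementation: a single mutant pays a cost c to emit a growth signal worth a benefit b, and we compare the case where that benefit is shared with all competitors against the case where the mutant keeps it to itself. All parameter values are illustrative assumptions.

```python
import random

def fixation_probability(N=30, b=0.3, c=0.1, shared=True, trials=5000, seed=1):
    """Estimate the fixation probability of a single signalling mutant in a
    well-mixed Moran process (illustrative toy model, not the paper's code).

    The mutant pays cost c to emit a growth signal worth b.  If shared=True,
    every cell receives the benefit b at no cost, erasing the mutant's
    relative advantage; if shared=False, only the mutant benefits."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = 1  # current number of mutant cells
        while 0 < i < N:
            f_mut = 1.0 + b - c                      # mutant fitness
            f_res = 1.0 + (b if shared else 0.0)     # resident fitness
            # birth is fitness-proportional, death is uniform at random
            p_birth_mut = i * f_mut / (i * f_mut + (N - i) * f_res)
            born_mut = rng.random() < p_birth_mut
            died_mut = rng.random() < i / N
            i += (1 if born_mut else 0) - (1 if died_mut else 0)
        fixed += (i == N)
    return fixed / trials

if __name__ == "__main__":
    print("benefit shared with neighbours:", fixation_probability(shared=True))
    print("benefit private to the mutant :", fixation_probability(shared=False))
    print("neutral baseline 1/N          :", 1 / 30)
```

With the benefit shared, the signalling mutant is strictly worse off than its competitors (it pays the cost, they do not), so its fixation probability falls below the neutral 1/N baseline; with a private benefit it readily invades. That contrast is the qualitative point of the paper, here reproduced only in schematic form.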
954216
New light-powered catalysts could aid in manufacturing
Chemical reactions that are driven by light offer a powerful tool for chemists who are designing new ways to manufacture pharmaceuticals and other useful compounds. Harnessing this light energy requires photoredox catalysts, which can absorb light and transfer the energy to a chemical reaction. MIT chemists have now designed a new type of photoredox catalyst that could make it easier to incorporate light-driven reactions into manufacturing processes. Unlike most existing photoredox catalysts, the new class of materials is insoluble, so it can be used over and over again. Such catalysts could be used to coat tubing and perform chemical transformations on reactants as they flow through the tube. “Being able to recycle the catalyst is one of the biggest challenges to overcome in terms of being able to use photoredox catalysis in manufacturing. We hope that by being able to do flow chemistry with an immobilized catalyst, we can provide a new way to do photoredox catalysis on larger scales,” says Richard Liu, an MIT postdoc and the joint lead author of the new study. The new catalysts, which can be tuned to perform many different types of reactions, could also be incorporated into other materials including textiles or particles. Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT, is the senior author of the paper, which appears today in Nature Communications. Sheng Guo, an MIT research scientist, and Shao-Xiong Lennon Luo, an MIT graduate student, are also authors of the paper. Hybrid materials Photoredox catalysts work by absorbing photons and then using that light energy to power a chemical reaction, analogous to how chlorophyll in plant cells absorbs energy from the sun and uses it to build sugar molecules. Chemists have developed two main classes of photoredox catalysts, which are known as homogeneous and heterogeneous catalysts. Homogeneous catalysts usually consist of organic dyes or light-absorbing metal complexes. These catalysts are easy to tune to perform a specific reaction, but the downside is that they dissolve in the solution where the reaction takes place. This means they can’t be easily removed and used again. Heterogeneous catalysts, on the other hand, are solid minerals or crystalline materials that form sheets or 3D structures. These materials do not dissolve, so they can be used more than once. However, these catalysts are more difficult to tune to achieve a desired reaction. To combine the benefits of both of these types of catalysts, the researchers decided to embed the dyes that make up homogeneous catalysts into a solid polymer. For this application, the researchers adapted a plastic-like polymer with tiny pores that they had previously developed for performing gas separations. In this study, the researchers demonstrated that they could incorporate about a dozen different homogeneous catalysts into their new hybrid material, but they believe it could work with many more. “These hybrid catalysts have the recyclability and durability of heterogeneous catalysts, but also the precise tunability of homogeneous catalysts,” Liu says. “You can incorporate the dye without losing its chemical activity, so you can more or less pick from the tens of thousands of photoredox reactions that are already known and get an insoluble equivalent of the catalyst you need.” The researchers found that incorporating the catalysts into polymers also helped them to become more efficient. One reason is that reactant molecules can be held in the polymer’s pores, ready to react. 
Additionally, light energy can easily travel along the polymer to find the waiting reactants. “The new polymers bind molecules from solution and effectively preconcentrate them for reaction,” Swager says. “Also, the excited states can rapidly migrate throughout the polymer. The combined mobility of the excited state and partitioning of the reactants in the polymer make for faster and more efficient reactions than are possible in pure solution processes.” Higher efficiency The researchers also showed that they could tune the physical properties of the polymer backbone, including its thickness and porosity, based on what application they want to use the catalyst for. As one example, they showed that they could make fluorinated polymers that would stick to fluorinated tubing, which is often used for continuous flow manufacturing. During this type of manufacturing, chemical reactants flow through a series of tubes while new ingredients are added, or other steps such as purification or separation are performed. Currently, it is challenging to incorporate photoredox reactions into continuous flow processes because the catalysts are used up quickly, so they have to be continuously added to the solution. Incorporating the new MIT-designed catalysts into the tubing used for this kind of manufacturing could allow photoredox reactions to be performed during continuous flow. The tubing is clear, allowing light from an LED to reach the catalysts and activate them. “The idea is to have the catalyst coating a tube, so you can flow your reaction through the tube while the catalyst stays put. In that way, you never get the catalyst ending up in the product, and you can also get a lot higher efficiency,” Liu says. The catalysts could also be used to coat magnetic beads, making them easier to pull out of a solution once the reaction is finished, or to coat reaction vials or textiles. The researchers are now working on incorporating a wider variety of catalysts into their polymers, and on engineering the polymers to optimize them for different possible applications.
10.1038/s41467-022-29811-6
2022
Nature Communications
Solution-processable microporous polymer platform for heterogenization of diverse photoredox catalysts
In contemporary organic synthesis, substances that access strongly oxidizing and/or reducing states upon irradiation have been exploited to facilitate powerful and unprecedented transformations. However, the implementation of light-driven reactions in large-scale processes remains uncommon, limited by the lack of general technologies for the immobilization, separation, and reuse of these diverse catalysts. Here, we report a new class of photoactive organic polymers that combine the flexibility of small-molecule dyes with the operational advantages and recyclability of solid-phase catalysts. The solubility of these polymers in select non-polar organic solvents supports their facile processing into a wide range of heterogeneous modalities. The active sites, embedded within porous microstructures, display elevated reactivity, further enhanced by the mobility of excited states and charged species within the polymers. The independent tunability of the physical and photochemical properties of these materials affords a convenient, generalizable platform for the metamorphosis of modern photoredox catalysts into active heterogeneous equivalents.
726114
New genetic knowledge on the causes of severe COVID-19
Worldwide, otherwise healthy adolescents and young people without underlying conditions are sometimes severely affected by COVID-19, with the viral infection in the worst cases quickly becoming life-threatening. But why is this happening? A worldwide consortium of researchers is determined to investigate this - and they have now made so much progress that Science has just published two scientific articles describing some of their results. Professor Trine Mogensen from the Department of Biomedicine at Aarhus University is co-author on the two research articles in Science. She conducts research into rare immunodeficiencies that lead to increased susceptibility to viral infections and, together with her research group, participates in the steering committee of the research consortium COVID Human Genetic Effort (covidhge) as the only Danish representative. She explains that in the vast majority of people, infection with the COVID-19-causing coronavirus leads to an anti-viral response in which interferon plays a crucial role. Interferon is an important immune signaling hormone that slows the replication of the virus and prevents it from penetrating the surrounding cells. In the event of a viral infection, the body normally quickly begins producing interferon, and the virus can be brought under control within a few hours. In popular terms, interferon is our first safeguard against an infection. "However, if there are defects in the interferon signalling pathways, there is nothing to inhibit the virus from replicating, and while the coronavirus usually remains in the cells in the throat, it can in this case also infect other parts of the body such as the lungs, kidneys and perhaps even the brain," explains Trine Mogensen, who is also a medical specialist at the Department of Infectious Diseases, Aarhus University Hospital, Denmark. Genetic and immunological analyses of blood samples from 650 patients from all over the world with severe COVID-19 show that some of these patients have an inherited immunodeficiency which leads to the anti-viral interferon either not being produced or not working on the body's cells. Blood samples from 1,226 healthy individuals have functioned as a control group - with all of the samples being taken prior to the COVID-19 pandemic. The researchers have obtained consent to collect blood samples and carry out a genetic analysis from hospitalized and severely ill COVID-19 patients. From the blood samples, the researchers have purified immune cells from the 650 patients and subsequently infected these immune cells with coronavirus, which enabled them to ascertain that the immune system was not properly activated. In addition, genetic sequencing of DNA from the 650 patients has been carried out, with some of this work performed at Aarhus University Hospital. "Our DNA consists of approximately 20,000 genes, and we have found defects in thirteen different genes. This means that the proteins which the genes encode become defective and therefore cannot perform their role in the immune system. We're already aware of some of these genetic defects from patients affected by severe influenza, but some are new and specific to COVID-19," says Trine Mogensen. The next task for the international research consortium is to translate - i.e. transfer - the basic immunological findings to the treatment of patients, and the first clinical trials are on the way. 
Medical doctors will be able to measure whether the patients have autoantibodies in their blood, as these are relatively easy to measure, and if they are present, these can be filtered from the blood. It will also be possible to screen for the thirteen critical genes identified and in this way identify particularly vulnerable individuals. This group will then be able to receive preventative medical treatment and a vaccine once this is available. "The goal is to prevent the very severe cases of COVID-19 with high mortality rates," summarizes Trine Mogensen, who is optimistic and hopes that the clinical trials will demonstrate positive results - perhaps already within a year. Her optimism is based not least on the unique international collaboration in the COVID Human Genetic Effort, as the international research consortium is named. "I've never experienced anything like it before in my field of immunology and infectious diseases. We share knowledge and work together in a very altruistic spirit," she adds. The consortium comprises more than 250 researchers under the overall leadership of Professor Jean-Laurent Casanova from The Rockefeller University in the United States - with the professor also serving as an Honorary Skou professor at Aarhus University since 2019. ###
10.1126/science.abd4570
2020
Science
Inborn errors of type I IFN immunity in patients with life-threatening COVID-19
The genetics underlying severe COVID-19 The immune system is complex and involves many genes, including those that encode cytokines known as interferons (IFNs). Individuals that lack specific IFNs can be more susceptible to infectious diseases. Furthermore, the autoantibody system dampens IFN response to prevent damage from pathogen-induced inflammation. Two studies now examine the likelihood that genetics affects the risk of severe coronavirus disease 2019 (COVID-19) through components of this system (see the Perspective by Beck and Aksentijevich). Q. Zhang et al. used a candidate gene approach and identified patients with severe COVID-19 who have mutations in genes involved in the regulation of type I and III IFN immunity. They found enrichment of these genes in patients and conclude that genetics may determine the clinical course of the infection. Bastard et al. identified individuals with high titers of neutralizing autoantibodies against type I IFN-α2 and IFN-ω in about 10% of patients with severe COVID-19 pneumonia. These autoantibodies were not found either in infected people who were asymptomatic or had milder phenotype or in healthy individuals. Together, these studies identify a means by which individuals at highest risk of life-threatening COVID-19 can be identified. Science, this issue p. eabd4570, p. eabd4585; see also p. 404
864005
RUDN mathematicians confirmed the possibility of data transfer via gravitational waves
RUDN mathematicians analyzed the properties of gravitational waves in a generalized affine-metric space (an algebraic construction operating with the notions of a vector and a point) in analogy with the properties of electromagnetic waves in Minkowski space-time. It turned out that nonmetricity waves can carry information and transfer it through space without distortion. The discovery could help scientists develop new means of data transfer in space, e.g. between space stations. The article was published in the journal Classical and Quantum Gravity. The recently discovered gravitational waves are waves of curvature of space-time, which according to Einstein's general relativity theory is completely determined by the space-time itself. However, there are currently reasons to consider space-time as a more complex structure with additional geometrical characteristics such as torsion and nonmetricity. In this case, geometrically speaking, space-time turns from the Riemannian space envisaged by general relativity (GR) into a generalized affine-metric space. The corresponding gravitational field equations that generalize Einstein's equations show that torsion and nonmetricity can also propagate in the form of waves (in particular, plane waves at a great distance from the wave sources). To describe gravitational waves, the RUDN researchers used a mathematical abstraction - an affine space, i.e. a vector space without an origin of coordinates. They proved that in this mathematical representation of gravitational waves there are functions that remain invariant as the wave propagates. Such a function can be chosen arbitrarily, so it can encode any information - in roughly the same way that electromagnetic waves carry a radio signal. This means that if a way is found to set these functions at the wave source, the encoded information will reach any point of space without changes. Thus, gravitational waves could be used for data transfer. The study consisted of three stages. In the first, the RUDN mathematicians calculated the Lie derivative - a function that relates the properties of bodies in two different spaces: an affine space and a Minkowski space. It allowed them to pass from the description of waves in real space to their mathematical interpretation. In the second stage, the researchers determined five arbitrary functions of time, i.e. quantities that do not change as the wave propagates. With their help, the characteristics of a wave can be set at the source, thereby encoding any information, and decoded at any other point of space. Together, these results establish the possibility of information transfer. Finally, in the third stage, the researchers proved a theorem on the structure of plane nonmetricity gravitational waves. It turned out that of the four dimensions of the wave (three spatial ones and one time dimension), three can be used to encode an information signal using only one function each, and the fourth dimension with the use of two functions. "We found out that waves of this type (nonmetricity waves) are able to transmit data, similarly to the recently discovered curvature waves, because their description contains arbitrary functions of delayed time which can be encoded in the source of such waves (in a perfect analogy to electromagnetic waves). 
A possible practical prospect of our research is connected with this circumstance; however, it can be realized only if nonmetricity is discovered as a physical phenomenon, and not just as a mathematical generalization of Einstein's theory of relativity," says Nina V. Markova, a co-author of the work, candidate of physical and mathematical sciences, assistant professor at the C.M. Nikolsky Mathematical Institute, and a staff member of RUDN. ### The work was carried out in collaboration with scientists from MCPU and MARCU.
10.1088/1361-6382/aace79
2018
Classical and Quantum Gravity
Structure of plane gravitational waves of nonmetricity in affine-metric space
A definition of an affine-metric space of the plane wave type is given using the analogy with the properties of plane electromagnetic waves in Minkowski space. The action of the Lie derivative on the 40 components of the nonmetricity 1-form in the 4-dimensional affine-metric space leads to the conclusion that the nonmetricity of a plane wave type is determined by five arbitrary functions of delayed time. A theorem is proved that parts of the nonmetricity 1-form irreducible with respect to the Lorentz transformations of the tangent space, such as the Weyl 1-form, the trace 1-form, and the spin 3 1-form, are defined by one arbitrary function each, and the spin 2 1-form is defined by two arbitrary functions. This proves the possibility of transmitting information with the help of nonmetricity waves.
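A schematic way to picture the encoding claim (our own notation, not the paper's): for a plane wave travelling along the z-axis, every characteristic of the wave depends only on the retarded (delayed) time,
\[
\sigma = t - \frac{z}{c},
\]
so the five arbitrary functions identified in the abstract can be written $f_1(\sigma),\dots,f_5(\sigma)$. Because a plane wave translates rigidly, each $f_i(t - z/c)$ is reproduced unchanged at every distance $z$; a message $s(t)$ impressed on one of them at the source, for example
\[
f_1(\sigma) = A\bigl[\,1 + m\,s(\sigma)\,\bigr],
\]
could in principle be read off by a distant receiver, in direct analogy with amplitude modulation of an electromagnetic carrier. This is only an illustrative sketch of the idea; the actual wave ansatz and invariants are those constructed in the paper.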
781128
Compression garments reduce strength loss after training
Regular training enhances your strength, but recovery is equally important. Elastic bandages and compression garments are widely used in sports to facilitate recovery and prevent injuries. Now, a research team from Tohoku University has determined that compression garments also reduce strength loss after strenuous exercise. Their research findings were published in the European Journal of Applied Physiology. The team - led by assistant professor János Négyesi and professor Ryoichi Nagatomi from the Graduate School of Biomedical Engineering - used a computerized dynamometer to train healthy subjects until they became fatigued. The same equipment was used to detect changes in the maximal strength and knee joint position sense straight after, 24 hours after and one week after the training. Their results revealed that using a below-knee compression garment during training compensates for fatigue effects on maximal strength immediately following the exercise and once 24 hours has elapsed. In other words, one can begin the next maximal intensity strength training earlier if one has used a below-knee compression garment in the previous workout. Although compression garments reduce strength loss, their findings reaffirmed that they afford no protection against knee joint position sense errors. "Our previous studies focused only on the effects of compression garments on joint position sense," said Dr. Négyesi. "The present study found such garments to have the potential to reduce strength loss after a fatiguing exercise, which may help us better understand how applying a compression garment during exercise can decrease the risk of musculoskeletal injuries during sports activities." The researchers believe wearing a below-knee compression garment during regular workouts is beneficial because of the mechanical support and tissue compression it provides. Looking ahead, the team aims to detect whether maximal intensity programs that last for weeks produce different outcomes than the current findings to determine the longitudinal effects of compression garments.
10.1007/s00421-020-04507-1
2020
European Journal of Applied Physiology
A below-knee compression garment reduces fatigue-induced strength loss but not knee joint position sense errors
We examined the possibility that wearing a below-knee compression garment (CG) reduces fatigue-induced strength loss and joint position sense (JPS) errors in healthy adults. Subjects (n = 24, age = 25.5 ± 4 years) were allocated to either one of the treatment groups that performed 100 maximal isokinetic eccentric contractions at 30° s−1 with the right-dominant knee extensors: (1) with (EXPCG) or (2) without CG (EXP), or to (3) a control group (CONCG: CG, no exercise). Changes in JPS errors and maximal voluntary isometric contraction (MVIC) torque were measured immediately post-, 24 h post-, and 1 week post-intervention in each leg. All testing was done without the CG. CG afforded no protection against JPS errors. Mixed analysis of variance (ANOVA) revealed that absolute JPS errors increased post-intervention in EXPCG and EXP not only in the right-exercised (52%, p = 0.013; 57%, p = 0.007, respectively) but also in the left non-exercised (55%, p = 0.001; 58%, p = 0.040, respectively) leg. Subjects tended to underestimate the target position more in the flexed vs. extended knee positions (75–61°: −4.6 ± 3.6°, 60–50°: −4.2 ± 4.3°, 50–25°: −2.9 ± 4.2°), irrespective of group and time. Moreover, MVIC decreased in EXP but not in EXPCG and CONCG immediately post-intervention (p = 0.026, d = 0.52) and 24 h post-intervention (p = 0.013, d = 0.45) compared to baseline. Altogether, a below-knee CG reduced fatigue-induced strength loss at the 80° knee joint position but not JPS errors in healthy younger adults.
941664
Simpler and reliable ALS diagnosis with blood tests
Blood tests may enable more accurate diagnosis of ALS at an earlier stage of the disease. As described in a study by researchers at the University of Gothenburg and Umeå University, it involves measuring the blood level of a substance that, as they have also shown, varies in concentration depending on which variant of ALS the patient has. The study, published in Scientific Reports, includes Fani Pujol-Calderón, postdoctoral fellow at Sahlgrenska Academy, University of Gothenburg, and Arvin Behzadi, doctoral student at Umeå University and medical intern at Örnsköldsvik Hospital, as shared first authors. Currently, it is difficult to diagnose amyotrophic lateral sclerosis (ALS), the most common form of motor neuron disease, early in the course of the disease. Even after a prolonged investigation, there is a risk of misdiagnosis due to other diseases that may resemble ALS in its early stages. Much would be gained from an earlier correct diagnosis and, according to the researchers, the current findings look promising. Neurofilaments — proteins with a special role in the cells and fibers of nerves — are the substances of interest. When the nervous system is damaged, neurofilaments leak into the cerebrospinal fluid (CSF) and, at lower concentrations than in CSF, into the blood. In their study, scientists at Umeå University and the University Hospital of Umeå, as well as at the University of Gothenburg and Sahlgrenska University Hospital in Gothenburg, demonstrated that CSF and blood levels of neurofilaments can differentiate ALS from other diseases that may resemble early ALS. More sensitive methods of analysis Compared with several other neurological diseases, previous studies have shown higher concentrations of neurofilaments in CSF in ALS. Measuring neurofilament levels in the blood has previously been difficult since they occur at much lower concentrations than in CSF. In recent years, however, new and more sensitive analytical methods have generated new scope for doing so. The current study shows a strong association, in patients with ALS, between the quantity of neurofilaments in the blood and in CSF. The study is based on blood and CSF samples collected from 287 patients who had been referred to the Department of Neurology at the University Hospital of Umeå for investigation of possible motor neuron disease. After extensive investigation, 234 of these patients were diagnosed with ALS. These patients had significantly higher levels of neurofilaments in CSF and blood compared to patients who were not diagnosed with ALS. Higher concentrations Differences among various subgroups of ALS were also investigated and detected. Patients whose pathological symptoms started in the head and neck region had higher neurofilament concentrations in the blood and worse survival than patients whose disease onset began in an arm or a leg. The study also succeeded in quantifying differences in blood levels of neurofilaments and survival for the two most common mutations associated with ALS. “Finding suspected cases of ALS through a blood test opens up completely new opportunities for screening, and measuring neurofilaments in blood collected longitudinally enables easier quantification of treatment effects in clinical drug trials compared to longitudinal collection of CSF. Finding ALS early in the disease course may facilitate earlier administration of pharmaceutical treatment, before the muscles have atrophied,” Arvin Behzadi says. 
ALS is a neurodegenerative syndrome that leads to loss of nerve cells in both the brain and the spinal cord, resulting in muscle weakness and atrophy. Most of these patients die within two to four years after symptom onset, but roughly one in ten survives more than ten years after the symptoms first appeared. Several genetic mutations have been associated with ALS. At present, there is no curative treatment. Nevertheless, the drug currently available has been shown to prolong survival in some ALS patients if it is administered in time.
10.1038/s41598-021-01499-6
2021
Scientific Reports
Neurofilaments can differentiate ALS subgroups and ALS from common diagnostic mimics
Abstract Delayed diagnosis and misdiagnosis are frequent in people with amyotrophic lateral sclerosis (ALS), the most common form of motor neuron disease (MND). Neurofilament light chain (NFL) and phosphorylated neurofilament heavy chain (pNFH) are elevated in ALS patients. We retrospectively quantified cerebrospinal fluid (CSF) NFL, CSF pNFH and plasma NFL in stored samples that were collected at the diagnostic work-up of ALS patients (n = 234), ALS mimics (n = 44) and controls (n = 9). We assessed the diagnostic performance, prognostication value and relationship to the site of onset and genotype. CSF NFL, CSF pNFH and plasma NFL levels were significantly increased in ALS patients compared to patients with neuropathies & myelopathies, patients with myopathies and controls. Furthermore, CSF pNFH and plasma NFL levels were significantly higher in ALS patients than in patients with other MNDs. Bulbar onset ALS patients had significantly higher plasma NFL levels than spinal onset ALS patients. ALS patients with C9orf72HRE mutations had significantly higher plasma NFL levels than patients with SOD1 mutations. Survival was negatively correlated with all three biomarkers. Receiver operating characteristics showed the highest area under the curve for CSF pNFH for differentiating ALS from ALS mimics and for plasma NFL for estimating ALS short and long survival. All three biomarkers have diagnostic value in differentiating ALS from clinically relevant ALS mimics. Plasma NFL levels can be used to differentiate between clinical and genetic ALS subgroups.
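As a rough illustration of the receiver-operating-characteristic analysis referred to above, the sketch below computes an AUC and a Youden-optimal cut-off for a biomarker separating two groups. The log-normal values are synthetic stand-ins, not the study's data; only the group sizes (234 ALS, 44 mimics) are taken from the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Synthetic "plasma NFL-like" values (arbitrary units); purely illustrative.
als    = rng.lognormal(mean=4.5, sigma=0.5, size=234)   # higher levels in ALS
mimics = rng.lognormal(mean=3.5, sigma=0.5, size=44)    # lower levels in mimics

values = np.concatenate([als, mimics])
labels = np.concatenate([np.ones(als.size), np.zeros(mimics.size)])  # 1 = ALS

auc = roc_auc_score(labels, values)
fpr, tpr, thresholds = roc_curve(labels, values)

# Choose the cut-off that maximises Youden's J = sensitivity + specificity - 1.
j = tpr - fpr
best = thresholds[np.argmax(j)]
print(f"AUC = {auc:.2f}, example cut-off ≈ {best:.1f} (arbitrary units)")
```

The real study reports AUCs for CSF NFL, CSF pNFH and plasma NFL against clinically relevant mimics; the code only shows the mechanics of such a comparison.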
951771
Quantum mechanics could explain why DNA can spontaneously mutate
The molecules of life, DNA, replicate with astounding precision, yet this process is not immune to mistakes and can lead to mutations. Using sophisticated computer modelling, a team of physicists and chemists at the University of Surrey have shown that such errors in copying can arise due to the strange rules of the quantum world.  The two strands of the famous DNA double helix are linked together by subatomic particles called protons – the nuclei of atoms of hydrogen – which provide the glue that bonds molecules called bases together. These so-called hydrogen bonds are like the rungs of a twisted ladder that makes up the double helix structure discovered in 1953 by James Watson and Francis Crick based on the work of Rosalind Franklin and Maurice Wilkins.   Normally, these DNA bases (called A, C, T and G) follow strict rules on how they bond together: A always bonds to T and C always to G. This strict pairing is determined by the molecules' shape, fitting them together like pieces in a jigsaw, but if the nature of the hydrogen bonds changes slightly, this can cause the pairing rule to break down, leading to the wrong bases being linked and hence a mutation. Although predicted by Crick and Watson, it is only now that sophisticated computational modelling has been able to quantify the process accurately.  The team, part of Surrey's research programme in the exciting new field of quantum biology, have shown that this modification in the bonds between the DNA strands is far more prevalent than has hitherto been thought. The protons can easily jump from their usual site on one side of an energy barrier to land on the other side. If this happens just before the two strands are unzipped in the first step of the copying process, then the error can pass through the replication machinery in the cell, leading to what is called a DNA mismatch and, potentially, a mutation.   In a paper published this week in the journal Communications Physics, the Surrey team based in the Leverhulme Quantum Biology Doctoral Training Centre used an approach called open quantum systems to determine the physical mechanisms that might cause the protons to jump across between the DNA strands. But, most intriguingly, it is thanks to a well-known yet almost magical quantum mechanism called tunnelling – akin to a phantom passing through a solid wall – that they manage to get across.   It had previously been thought that such quantum behaviour could not occur inside a living cell's warm, wet and complex environment. However, the Austrian physicist Erwin Schrödinger had suggested in his 1944 book What is Life? that quantum mechanics can play a role in living systems since they behave rather differently from inanimate matter. This latest work seems to confirm Schrödinger's theory.    In their study, the authors determine that the local cellular environment causes the protons, which behave like spread-out waves, to be thermally activated and encouraged through the energy barrier. In fact, the protons are found to be continuously and very rapidly tunnelling back and forth between the two strands. Then, when the DNA is cleaved into its separate strands, some of the protons are caught on the wrong side, leading to an error.  Dr Louie Slocombe, who performed these calculations during his PhD, explains: “The protons in the DNA can tunnel along the hydrogen bonds in DNA and modify the bases which encode the genetic information. 
The modified bases are called "tautomers" and can survive the DNA cleavage and replication processes, causing "transcription errors" or mutations.” Dr Slocombe's work at the Surrey's Leverhulme Quantum Biology Doctoral Training Centre was supervised by Prof Jim Al-Khalili (Physics, Surrey) and Dr Marco Sacchi (Chemistry, Surrey) and published in Communications Physics. Prof Al-Khalili comments: “Watson and Crick speculated about the existence and importance of quantum mechanical effects in DNA well over 50 years ago, however, the mechanism has been largely overlooked.” Dr Sacchi continues: “Biologists would typically expect tunnelling to play a significant role only at low temperatures and in relatively simple systems. Therefore, they tended to discount quantum effects in DNA. With our study, we believe we have proved that these assumptions do not hold.”
10.1038/s42005-022-00881-8
2022
Communications Physics
An open quantum systems approach to proton tunnelling in DNA
Abstract One of the most important topics in molecular biology is the genetic stability of DNA. One threat to this stability is proton transfer along the hydrogen bonds of DNA that could lead to tautomerisation, hence creating point mutations. We present a theoretical analysis of the hydrogen bonds between the Guanine-Cytosine (G-C) nucleotide, which includes an accurate model of the structure of the base pairs, the quantum dynamics of the hydrogen bond proton, and the influence of the decoherent and dissipative cellular environment. We determine that the quantum tunnelling contribution to the proton transfer rate is several orders of magnitude larger than the classical over-the-barrier hopping. Due to the significance of the quantum tunnelling even at biological temperatures, we find that the canonical and tautomeric forms of G-C inter-convert over timescales far shorter than biological ones and hence thermal equilibrium is rapidly reached. Furthermore, we find a large tautomeric occupation probability of 1.73 × 10−4, suggesting that such proton transfer may well play a far more important role in DNA mutation than has hitherto been suggested. Our results could have far-reaching consequences for current models of genetic mutations.
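As a back-of-the-envelope reading of the reported occupation probability (our own estimate, assuming a simple two-state Boltzmann equilibrium between the canonical and tautomeric wells, which the abstract says is rapidly reached):
\[
P_{\mathrm{taut}} \approx e^{-\Delta E / k_B T}
\;\Longrightarrow\;
\Delta E \approx k_B T \,\ln\!\frac{1}{P_{\mathrm{taut}}}
\approx 0.027\,\mathrm{eV} \times \ln\!\frac{1}{1.73\times 10^{-4}}
\approx 0.23\,\mathrm{eV},
\]
using $k_B T \approx 0.027\,\mathrm{eV}$ at $T \approx 310\,\mathrm{K}$. On this rough reading, the tautomeric G-C form sits only a few tenths of an electronvolt above the canonical one, so once rapid tunnelling equilibrates the two wells a small but non-negligible population remains on the "wrong" side when the strands separate.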
691436
Dynamic model helps understand healthy lakes to heal sick ones
Development of a dynamic model for microbial populations in healthy lakes could help scientists understand what's wrong with sick lakes, prescribe cures and predict what may happen as environmental conditions change. Those are among the benefits expected from an ambitious project to model the interactions of some 18,000 species in a well-studied Wisconsin lake. The research produced what is believed to be the largest dynamic model of microbial species interactions ever created. Analyzing long-term data from Lake Mendota near Madison, Wisconsin, a Georgia Tech research team identified and modeled interactions among 14 sub-communities, that is, collections of different species that become dominant at specific times of the year. Key environmental factors affecting these sub-communities included water temperature and the levels of two nutrient classes: ammonia/phosphorus and nitrates/nitrites. The effects of these factors on the individual species were, in general, more pronounced than those of species-species interactions. Beyond understanding what's happening in aquatic microbial environments, the model might also be used to study other microbial populations - perhaps even human microbiomes. The research was reported on March 24 in the journal Systems Biology and Applications, a Nature partner journal. The work was sponsored by the National Science Foundation's Dimensions of Biodiversity program. "Ultimately, we want to understand why some microbial populations are declining and why some are increasing at certain times of the year," said Eberhard Voit, the paper's corresponding author and The David D. Flanagan Chair Professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University. "We want to know why these populations are changing - whether it is because of environmental conditions alone, or interactions between the different species. Importantly, we also look at the temporal development: how interactions change over time." Because of the large number of different microorganisms involved, creating such a model was a monumental task. To make it more manageable, the researchers segmented the most abundant species into groups that had significant interactions at specific times of the year. Georgia Tech Research Scientist Phuongan Dam created 14 such categories or sub-communities - corresponding to roughly one per month - and mapped the relationships between them during different times of the year. Two of the 14 groups had two population peaks per year. "The exciting part about this work is that we are now able to model hundreds of species," said Kostas Konstantinidis, a co-author on the paper and the Carlton S. Wilder associate professor in Georgia Tech's School of Civil and Environmental Engineering. "The ability to dynamically model microbial communities containing hundreds or even thousands of species as those interactions change over time or after environmental perturbations will have numerous implications and applications for other research areas." In the past, researchers have created static models of interactions between large numbers of microorganisms, but those provided only snapshots in time and couldn't be used to model interactions as they change throughout the year. Scientists might want to know, for example, what would happen if a community lost one species, if a flood of nutrients hit the lake or if the temperature rose. 
As with many communities, the lake includes organisms from different species and families that are highly interconnected, playing a variety of interrelated roles, such as fixing nitrogen, carrying out photosynthesis, degrading pollutants and providing metabolic services used by other organisms. Information about the microbes came from a long-term data set compiled by other scientists who study the lake on a regular basis. Voit, a bio-mathematician, said the model, although itself nonlinear, uses algorithms based on linear regression, which can be analyzed using standard computer clusters. Using their 14 sub-communities, the researchers found 196 interactions that could describe the species interactions - a far easier task than analyzing the 300 million potential interactions between the full 18,642 species in the lake. Reducing the number of potential interactions was possible only due to the strategy of defining sub-communities and a clever modeling approach. The researchers initially tried to organize the microbes into genetically related organisms, but that strategy failed. "At any time of the year, the lake needs species that can do certain tasks," said Voit. "Closely-related species tend to play essentially the same roles, so that putting them all together into the same group results in having many organisms doing the same things - but not executing other tasks that are needed at a specific time. By looking at the 14 sub-communities, we were able to get a smorgasbord of every task that needed to be done using different combinations of the microorganisms at each time." By looking at sub-communities present at specific times of the year, the research team was able to study interactions that occurred naturally - and avoided having to study interactions that rarely took place. The model examines interactions at two levels: among the 14 sub-communities, and between the sub-communities and individual species. The research depended heavily on metagenomics, the use of genomic analysis to identify the microorganisms present. Only 1 percent of microbial species can be cultured in the laboratory, but metagenomics allows scientists to obtain the complete inventory of species present by identifying specific sections of their DNA. Because they are not fully characterized species, the components of genomic data are termed "operational taxonomic units" (OTUs), which the team used as a "proxy" for species. The next step in the research will be to complete a similar study of Lake Lanier, located north of Atlanta. In addition to the information studied for Lake Mendota, that study will gather data about the enzymatic and metabolic activities of the microorganism communities. Lake Lanier feeds the Chattahoochee River and a series of other lakes, and the researchers hope to study the entire river system to assess how different environments and human activities affect the microbial populations. The work could lead to a better understanding of what interactions are necessary for a healthy lake, which may help scientists determine what might be needed to address problems in sick lakes. The modeling technique might also help scientists with other complex microbial systems. "Our work right now is with the lake community, but the methods could be applicable to other microbial communities, including the human microbiome," said Konstantinidis. 
"As with sick lakes, understanding what is healthy might one day allow scientists to diagnose microbiome-related disease conditions and address them by adjusting the populations of different microorganism sub-communities." ### This material is based upon work supported by the National Science Foundation under Grant No. DEB-1241046. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. CITATION: Phuongan Dam, Luis L. Fonseca, Konstantinos T. Konstantinidis and Eberhard O. Voit, "Dynamic models of the complex microbial metapopulation of Lake Mendota," (Nature Partner Journal Systems Biology and Applications, 2016). http://dx.doi.org/10.1038/npjsba.2016.7
10.1038/npjsba.2016.7
2016
npj Systems Biology and Applications
Dynamic models of the complex microbial metapopulation of Lake Mendota
Like many other environments, Lake Mendota, WI, USA, is populated by many thousand microbial species. Only about 1,000 of these constitute between 80 and 99% of the total microbial community, depending on the season, whereas the remaining species are rare. The functioning and resilience of the lake ecosystem depend on these microorganisms, and it is therefore important to understand their dynamics throughout the year. We propose a two-layered set of dynamic mathematical models that capture and interpret the yearly abundance patterns of the species within the metapopulation. The first layer analyzes the interactions between 14 subcommunities (SCs) that peak at different times of the year and together contain all species whereas the second layer focuses on interactions between individual species and SCs. Each SC contains species from numerous families, genera, and phyla in strikingly different abundances. The dynamic models quantify the importance of environmental factors in shaping the dynamics of the lake's metapopulation and reveal positive or negative interactions between species and SCs. Three environmental factors, namely temperature, ammonia/phosphorus, and nitrate+nitrite, positively affect almost all SCs, whereas by far the most interactions between SCs are inhibitory. As far as the interactions can be independently validated, they are supported by literature information. The models are quite robust and permit predictions of species abundances over many years, both under the assumption that conditions do not change drastically and in response to environmental perturbations. A lake microbe population model developed by US researchers reveals how environmental factors affect community dynamics. Metagenomic sequencing now provides huge datasets on the abundances of species in an ecosystem, but computational models are needed to understand how all the species interact. Eberhard Voit and co-workers at Georgia Institute of Technology used 11 years' worth of data collected from Lake Mendota, Wisconsin, to inform a new model based on the famous Lotka-Volterra equations for predator-prey interactions. To make their task manageable, they grouped the 1140 most abundant microbe species into 14 sub-communities. Their model can predict the effects of annual cycles of temperature and nutrients on community dynamics, as well as quantify interactions between sub-communities. It will be a useful tool for assessing the health of lake ecosystems now and in the future.
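The modelling framework is described only qualitatively here; to show the general shape of such a model, the sketch below integrates a small generalized Lotka-Volterra system with a seasonal temperature driver. The three "sub-communities", their coefficients, and the forcing are invented for illustration and are not the fitted Lake Mendota parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy generalized Lotka-Volterra model with one environmental driver
# (seasonal temperature).  Three "sub-communities" stand in for the paper's
# fourteen; every coefficient below is made up for illustration.
r = np.array([0.30, 0.20, 0.25])           # intrinsic growth rates (1/day)
A = np.array([[-0.10, -0.04,  0.00],        # interaction matrix (mostly
              [-0.03, -0.12, -0.05],        #  inhibitory, as reported for
              [ 0.00, -0.02, -0.08]])       #  the SC-SC layer)
e = np.array([0.04, 0.01, 0.02])            # sensitivity to temperature

def temperature(t):
    """Seasonal driver: degrees C above the annual mean (period = 365 days)."""
    return 10.0 * np.sin(2 * np.pi * t / 365.0)

def dxdt(t, x):
    # dx_i/dt = x_i * (r_i + sum_j A_ij x_j + e_i * T(t))
    return x * (r + A @ x + e * temperature(t))

sol = solve_ivp(dxdt, t_span=(0, 2 * 365), y0=[1.0, 0.5, 0.8],
                t_eval=np.linspace(0, 2 * 365, 200), rtol=1e-8)

for name, series in zip(["SC1", "SC2", "SC3"], sol.y):
    print(f"{name}: peak abundance {series.max():.2f} on day "
          f"{sol.t[series.argmax()]:.0f}")
```

The actual study additionally fits a second layer linking individual species to sub-communities and estimates its coefficients from eleven years of observations; the sketch only shows how a seasonally forced Lotka-Volterra system produces recurring abundance peaks.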
589555
Quality of life of those with advanced cancer improved through walking
Walking for just 30 minutes three times per week could improve the quality of life for those with advanced cancer, a new study published in the BMJ Open journal has found. Researchers from the University of Surrey collaborated with those from the Florence Nightingale Faculty of Nursing & Midwifery at King's College London to explore the impact of walking on the quality of life and symptom severity in patients with advanced cancer. Despite growing evidence of significant health benefits of exercise to cancer patients, physical activity commonly declines considerably during treatment and remains low afterwards. Initiatives in place to promote physical activity for those suffering with cancer are normally supervised and require travel to specialist facilities, placing an additional burden on patients. During this study 42 cancer patients were split into two groups. Group one received coaching from an initiative by Macmillan Cancer which included a short motivational interview, the recommendation to walk for at least 30 minutes on alternate days and attend a volunteer-led group walk weekly. The health benefits of walking are well documented, with improved cardiovascular strength and increased energy levels. Group two were encouraged to maintain their current level of activity. Researchers found that those in group one reported an improvement in physical, emotional and psychological wellbeing having completed the programme. Many participants noted that walking provided an improved positive attitude towards their illness and spoke of the social benefits of participating in group walks. One of the participants commented: "The impact has been immense! It gave me the motivation to not only increase walking activity from minutes to 3-4 hours per week but also to reduce weight by altering diet, reducing sweets/sugars. Great boost to morale. No longer dwell on being terminal - I'm just on getting on with making life as enjoyable as possible, greatly helped by friends made on regular 'walks for life'." Professor Emma Ream, co-author of the paper and Professor of Supportive Cancer Care and Director of Research in the School of Health Sciences at the University of Surrey, said: "The importance of exercise in preventing cancer recurrence and managing other chronic illnesses is becoming clear. "Findings from this important study show that exercise is valued by, suitable for, and beneficial to people with advanced cancer. "Rather than shying away from exercise people with advanced disease should be encouraged to be more active and incorporate exercise into their daily lives where possible." Dr Jo Armes, lead researcher and Senior Lecturer at the Florence Nightingale Faculty of Nursing & Midwifery, King's College London, said: "This study is a first step towards exploring how walking can help people living with advanced cancer. Walking is a free and accessible form of physical activity, and patients reported that it made a real difference to their quality of life. "Further research is needed with a larger number of people to provide definitive evidence that walking improves both health outcomes and social and emotional wellbeing in this group of people." ### This study was funded by Dimbleby Cancer Care.
10.1136/bmjopen-2016-013719
2017
BMJ Open
CanWalk: a feasibility study with embedded randomised controlled trial pilot of a walking intervention for people with recurrent or metastatic cancer
Objectives Walking is an adaptable, inexpensive and accessible form of physical activity. However, its impact on quality of life (QoL) and symptom severity in people with advanced cancer is unknown. This study aimed to assess the feasibility and acceptability of a randomised controlled trial (RCT) of a community-based walking intervention to enhance QoL in people with recurrent/metastatic cancer. Design We used a mixed-methods design comprising a 2-centre RCT and nested qualitative interviews. Participants Patients with advanced breast, prostate, gynaecological or haematological cancers randomised 1:1 between intervention and usual care. Intervention The intervention comprised Macmillan's ‘Move More’ information, a short motivational interview with a recommendation to walk for at least 30 min on alternate days and attend a volunteer-led group walk weekly. Outcomes We assessed feasibility and acceptability of the intervention and RCT by evaluating study processes (rates of recruitment, consent, retention, adherence and adverse events), and using end-of-study questionnaires and qualitative interviews. Patient-reported outcome measures (PROMs) assessing QoL, activity, fatigue, mood and self-efficacy were completed at baseline and 6, 12 and 24 weeks. Results We recruited 42 (38%) eligible participants. Recruitment was lower than anticipated (goal n=60), the most commonly reported reason being unable to commit to walking groups (n=19). Randomisation procedures worked well with groups evenly matched for age, sex and activity. By week 24, there was a 45% attrition rate. Most PROMs while acceptable were not sensitive to change and did not capture key benefits. Conclusions The intervention was acceptable, well tolerated and the study design was judged acceptable and feasible. Results are encouraging and demonstrate that exercise was popular and conveyed benefit to participants. Consequently, an effectiveness RCT is warranted, with some modifications to the intervention to include greater tailoring and more appropriate PROMs selected. Trial registration number ISRCTN42072606 .
646613
New research shows effectiveness of laws for protecting imperiled species, remaining gaps
New research from the Center for Conservation Innovation (CCI) at Defenders of Wildlife, published in the journal Nature Communications, shows for the first time the importance of expert agencies to protecting imperiled species. This paper, "Data Indicate the Importance of Expert Agencies in Conservation Policy," empirically supports the need for strong oversight of federal activities. It also suggests data-driven ways to improve efficiency without sacrificing protections. This is critical at a time when conservation laws and policies are under attack: understanding what works in conservation is essential in combatting the global biodiversity crisis. The data analyzed by Defenders of Wildlife included every Endangered Species Act section 7 consultation between federal agencies and the National Marine Fisheries Service (NMFS) from 2000 through 2017. The analysis showed that agencies and NMFS agreed on how proposed federal projects would affect listed species most of the time, and that the consultation process rarely stops projects. Importantly, however, federal agencies underestimated the effects of their actions on listed species in 15% of consultations, relative to what species experts at NMFS concluded. This included 22 extreme cases where NMFS concluded the action would jeopardize the very existence of 14 species after the agency had determined its action would do no harm. In 6% of cases, agencies overestimated the effects of their actions, which meant additional resources may have been unnecessarily spent in analyses. "This study emphasizes the critical role that the expert biologists at the Services play in assessing the impacts of proposed federal actions on threatened and endangered species," said Michael Evans, CCI Senior Conservation Data Scientist and lead author on the study. "Our findings show that limiting or removing the Services from the consultation process could have disastrous consequences for imperiled species. And at the same time, we were able to identify areas where the consultation process could be made more efficient, without sacrificing protections to listed species." "Recent proposals to 'streamline' consultations by removing the species experts in the National Marine Fisheries Service from the process could be devastating to the species who need protection the most," said Jacob Malcom, Director of the Center for Conservation Innovation at Defenders of Wildlife and an author on the study. "Rather than try to cut protections, Congress should be strengthening and fully funding the expert agencies--National Marine Fisheries Service and the U.S. Fish and Wildlife Service--who ensure the protections for threatened and endangered species." ### The data and analyses can be explored using an interactive web app hosted on the CCI webpage: https://defenders-cci.org/shiny/open/NMFS_s7/ Background The U.S. Endangered Species Act (ESA), passed with overwhelming bipartisan support under the Nixon administration in 1973, is widely considered the strongest wildlife protection law in the world. The law is incredibly successful: more than 95% of listed species are still with us today and hundreds are on the path to recovery. Section 7 of the ESA requires federal agencies to conserve listed species by not taking, funding, or authorizing any actions that would jeopardize their existence. They consult with either the U.S. Fish and Wildlife Service or National Marine Fisheries Services on any proposed actions that may affect listed species to fulfill this obligation. 
If the Services determine an action may jeopardize a listed species, the Services must suggest "reasonable and prudent alternatives" that agencies can implement to reduce or offset harm caused by the proposed action. If these alternatives are adopted, the agencies may legally proceed with the action. By asking "How often do federal agencies overestimate or underestimate the effects of their actions on listed species?" this research evaluates whether proposals to reduce the role of the Services in consultations are justified. Future research measuring the outcomes of consultation in terms of actions taken and species status would help determine the effectiveness of the program. The 14 species for which NMFS issued jeopardy determinations after federal agencies determined their actions would not detrimentally affect the species were: boulder star coral, elkhorn coral, lobed star coral, mountainous star coral, pillar coral, rough cactus coral, staghorn coral, Nassau grouper, Chinook salmon, chum salmon, coho salmon, sockeye salmon, steelhead and southern resident killer whale.
10.1038/s41467-019-11462-9
2019
Nature Communications
Novel data show expert wildlife agencies are important to endangered species protection
Abstract To protect biodiversity, conservation laws should be evaluated and improved using data. We provide a comprehensive assessment of how a key provision of the U.S. Endangered Species Act (ESA) is implemented: consultation to ensure federal actions do not jeopardize the existence of listed species. Data from all 24,893 consultations recorded by the National Marine Fisheries Service (NMFS) from 2000–2017 show federal agencies and NMFS frequently agreed (79%) on how federal actions would affect listed species. In cases of disagreement, agencies most often (71%) underestimated effects relative to the conclusions of species experts at NMFS. Such instances can have deleterious consequences for imperiled species. In 22 consultations covering 14 species, agencies concluded that an action would not harm species while NMFS determined the action would jeopardize species’ existence. These results affirm the importance of the role of NMFS in preventing federal actions from jeopardizing listed species. Excluding expert agencies from consultation compromises biodiversity conservation, but we identify approaches that improve consultation efficiency without sacrificing species protections.
968908
Study shows inexpensive, readily available chemical may limit impact of COVID-19
Preclinical studies in mice that model human COVID-19 suggest that an inexpensive, readily available amino acid might limit the effects of the disease and provide a new off-the-shelf therapeutic option for infections with SARS-CoV-2 variants and perhaps future novel coronaviruses. A team led by researchers at the David Geffen School of Medicine at UCLA report that an amino acid called GABA, which is available over-the-counter in many countries, reduced disease severity, viral load in the lungs, and death rates in SARS-CoV-2-infected mice. This follows up on their previous finding that GABA consumption also protected mice from another lethal mouse coronavirus called MHV-1. In both cases, GABA treatment was effective when given just after infection or several days later near the peak of virus production. The protective effects of GABA against two different types of coronaviruses suggest that GABA may provide a generalizable therapy to help treat diseases induced by new SARS-CoV-2 variants and novel beta-coronaviruses.   “SARS-CoV-2 variants and novel coronaviruses will continue to arise, and they may not be efficiently controlled by available vaccines and antiviral medications. Furthermore, the generation of new vaccines is likely to be much slower than the spread of new variants,” said senior author Daniel L. Kaufman, a researcher and professor in Molecular and Medical Pharmacology at the David Geffen School of Medicine at UCLA. Accordingly, new therapeutic options are needed to limit the severity of these infections. Their previous studies showed that GABA administration protected mice from developing severe disease after infection with a mouse coronavirus called MHV-1. To more stringently test the potential of GABA as a therapy for COVID-19, they studied transgenic mice that when infected with SARS-CoV-2 develop severe pneumonia with a high mortality rate. “If our observations of the protective effects of GABA therapy in SARS-CoV-2-infected mice are confirmed in clinical trials, GABA could provide an off-the-shelf treatment to help ameliorate infections with SARS-CoV-2 variants. GABA is inexpensive and stable at room temperature, which could make it widely and easily accessible, and especially beneficial in developing countries.” The researchers said that GABA and GABA receptors are most often thought of as a major neurotransmitter system in the brain. Years ago, they, as well as other researchers, found that cells of the immune system also possessed GABA receptors and that the activation of these receptors inhibited the inflammatory actions of immune cells. Taking advantage of this property, the authors reported in a series of studies that GABA administration inhibited autoimmune diseases such as type 1 diabetes, multiple sclerosis, and rheumatoid arthritis in mouse models of these ailments. Other scientists who study gas anesthetics have found that lung epithelial cells also possess GABA receptors and that drugs that activate these receptors could limit lung injuries and inflammation in the lung. The dual actions of GABA in inflammatory immune cells and lung epithelial cells, along with its safety for clinical use, made GABA a theoretically appealing candidate for limiting the overreactive immune responses and lung damage due to coronavirus infection. 
Working with colleagues at the University of Southern California, the UCLA research team in this study administered GABA to the mice just after infection with SARS-CoV-2, or two days later when the virus levels are near their peak in the mouse lungs. While the vast majority of untreated mice did not survive this infection, those given GABA just after infection, or two days later, had less severe illness and a lower mortality rate over the course of the study. Treated mice also displayed reduced levels of virus in their lungs and changes in circulating immune signaling molecules, known as cytokines and chemokines, toward patterns that were associated with better outcomes in COVID-19 patients. Thus, GABA receptor activation had multiple beneficial effects in this mouse model that are also desirable for the treatment of COVID-19. The authors hope that their new findings will provide a springboard for testing the efficacy of GABA treatment in clinical trials with COVID-19 patients. Since GABA has an excellent safety record, is inexpensive and available worldwide, clinical trials of GABA treatment for COVID-19 can be initiated rapidly. The authors also suspect that the anti-inflammatory properties of GABA-receptor activating drugs may also be useful for limiting inflammation in the central nervous system that is associated with long-COVID. Indeed, this approach was very successful in their previous studies of therapeutics for multiple sclerosis in mice, a disease which is caused by an inflammatory autoimmune response in the brain. The authors speculate that such drugs may reduce both the deleterious effects of coronavirus infection in the periphery and limit inflammation in the central nervous system. Unfortunately, there has been no pharmaceutical interest in pursuing GABA therapy for COVID-19, presumably because it is not patentable and is widely available as a dietary supplement. The authors hope for federal funding to continue this line of study. The researchers emphasize that unless clinical trials are conducted and GABA is approved for treating COVID-19 by relevant governing bodies, it should not be consumed for the treatment of COVID-19 since it could pose health risks, such as dampening beneficial immune or physiological responses. Article: A GABA-receptor agonist reduces pneumonitis severity, viral load, and death rate in SARS-CoV-2-infected mice Front. Immunol. Sec. Viral Immunology. DOI: 10.3389/fimmu.2022.1007955 Additional authors include Jide Tian, Barbara Dillion, High Containment Program at UCLA, and Jill Henley and Lucio Comai, Keck School of Medicine at USC. Funding: This work was supported by a grant to DLK from the UCLA DGSOM-Broad Stem Cell Research Center, the Department of Molecular and Medical Pharmacology, and the Immunotherapeutics Research Fund. Work at USC was supported by a grant from the COVID-19 Keck Research Fund to LC. Conflicts of interest: DLK and JT are inventors of GABA-related patents. DLK serves on the Scientific Advisory Board of Diamyd Medical. BD, LC and JH have no financial conflicts of interest.
10.3389/fimmu.2022.1007955
2022
Frontiers in Immunology
A GABA-receptor agonist reduces pneumonitis severity, viral load, and death rate in SARS-CoV-2-infected mice
Gamma-aminobutyric acid (GABA) and GABA-receptors (GABA-Rs) form a major neurotransmitter system in the brain. GABA-Rs are also expressed by 1) cells of the innate and adaptive immune system and act to inhibit their inflammatory activities, and 2) lung epithelial cells and GABA-R agonists/potentiators have been observed to limit acute lung injuries. These biological properties suggest that GABA-R agonists may have potential for treating COVID-19. We previously reported that GABA-R agonist treatments protected mice from severe disease induced by infection with a lethal mouse coronavirus (MHV-1). Because MHV-1 targets different cellular receptors and is biologically distinct from SARS-CoV-2, we sought to test GABA therapy in K18-hACE2 mice which develop severe pneumonitis with high lethality following SARS-CoV-2 infection. We observed that GABA treatment initiated immediately after SARS-CoV-2 infection, or 2 days later near the peak of lung viral load, reduced pneumonitis severity and death rates in K18-hACE2 mice. GABA-treated mice had reduced lung viral loads and displayed shifts in their serum cytokine/chemokine levels that are associated with better outcomes in COVID-19 patients. Thus, GABA-R activation had multiple effects that are also desirable for the treatment of COVID-19. The protective effects of GABA against two very different beta coronaviruses (SARS-CoV-2 and MHV-1) suggest that it may provide a generalizable off-the-shelf therapy to help treat diseases induced by new SARS-CoV-2 variants and novel coronaviruses that evade immune responses and antiviral medications. GABA is inexpensive, safe for human use, and stable at room temperature, making it an attractive candidate for testing in clinical trials. We also discuss the potential of GABA-R agonists for limiting COVID-19-associated neuroinflammation.
514157
Rare congenital heart defect rescued by protease inhibition
Greenwood, SC (October 15, 2020) - A research team at the Greenwood Genetic Center (GGC) has successfully used small molecules to restore normal heart and valve development in an animal model for Mucolipidosis II (ML II), a rare genetic disorder. Progressive heart disease is commonly associated with ML II. The study is reported in this month's JCI Insight. The small molecules included the cathepsin K protease inhibitor odanacatib and an inhibitor of TGF-β growth factor signaling. Cathepsin proteases have been associated with later-onset heart disease including atherosclerosis, cardiac hypertrophy, and valvular stenosis, but their role in congenital heart defects has been unclear. The current study offers new insight into how mislocalizing proteases like cathepsin K alter embryonic heart development in a zebrafish model of ML II. "Mutations in GNPTAB, the gene responsible for ML II, alter the localization and increase the activity of cathepsin proteases. This disturbs growth factor signaling and disrupts heart and valve development in our GNPTAB-deficient zebrafish embryos," said Heather Flanagan-Steet, PhD, Director of the Hazel and Bill Allin Aquaculture Facility and Director of Functional Studies at GGC. "By inhibiting this process, normal cardiac development was restored. This finding highlights the potential of small molecules and validates the need for further studies into their efficacy." Flanagan-Steet noted that she hopes the current work with ML II zebrafish will provide the basis to move one step closer to a treatment.
10.1172/jci.insight.133019
2020
JCI Insight
Inappropriate cathepsin K secretion promotes its enzymatic activation driving heart and valve malformation
Although congenital heart defects (CHDs) represent the most common birth defect, a comprehensive understanding of disease etiology remains unknown. This is further complicated since CHDs can occur in isolation or as a feature of another disorder. Analyzing disorders with associated CHDs provides a powerful platform to identify primary pathogenic mechanisms driving disease. Aberrant localization and expression of cathepsin proteases can perpetuate later-stage heart diseases, but their contribution toward CHDs is unclear. To investigate the contribution of cathepsins during cardiovascular development and congenital disease, we analyzed the pathogenesis of cardiac defects in zebrafish models of the lysosomal storage disorder mucolipidosis II (MLII). MLII is caused by mutations in the GlcNAc-1-phosphotransferase enzyme (Gnptab) that disrupt carbohydrate-dependent sorting of lysosomal enzymes. Without Gnptab, lysosomal hydrolases, including cathepsin proteases, are inappropriately secreted. Analyses of heart development in gnptab-deficient zebrafish show cathepsin K secretion increases its activity, disrupts TGF-β-related signaling, and alters myocardial and valvular formation. Importantly, cathepsin K inhibition restored normal heart and valve development in MLII embryos. Collectively, these data identify mislocalized cathepsin K as an initiator of cardiac disease in this lysosomal disorder and establish cathepsin inhibition as a viable therapeutic strategy.
644655
Half of vision impairment in first world is preventable
Around half of vision impairment in Western Europe is preventable, according to a new study published in the British Journal of Ophthalmology. The study was carried out by the Vision Loss Expert Group, led by Professor Rupert Bourne of Anglia Ruskin University, and shows the prevalence and causes of vision loss in high-income countries worldwide as well as other European nations in 2015, based on a systematic review of medical literature over the previous 25 years. A comparison of countries in the study shows that, based on the available data, the UK has the fifth lowest prevalence of blindness in the over 50s out of the 50 countries surveyed, with 0.52% of men and women in that age group affected. Belgium had the lowest prevalence at 0.46%. However, in terms of the percentage of population with moderate to severe vision impairment (MSVI), the UK ranked in the bottom half of the table with 6.1%, a higher prevalence than non-EU countries such as Andorra, Serbia and Switzerland. Cataract was found to be the most common cause of blindness in Western Europe in 2015 (21.9%), followed by age-related macular degeneration (16.3%) and glaucoma (13.5%), but the main cause of MSVI was uncorrected refractive error, a condition that can be treated simply by wearing glasses. This condition made up 49.6% of all MSVI in Western Europe. Cataract was the next main cause in this region, with 15.5%, followed by age-related macular degeneration. The research also predicts that the surveyed countries' contribution to the world's vision-impaired population will lessen slightly by 2020, although the number of people in these nations with impaired sight will rise overall to 69 million due to a rising overall population. Professor Bourne, Professor of Ophthalmology at Anglia Ruskin University's Vision and Eye Research Unit, said: "Vision impairment is of great importance for quality of life and for the socioeconomics and public health of societies and countries. "Overcoming barriers to services which would address uncorrected refractive error could reduce the burden of vision impairment in high-income countries by around half. This is an important public health issue even in the wealthiest of countries and more research is required into better treatments, better implementation of the tools we already have, and ongoing surveillance of the problem. "This work has exposed gaps in the global data, given that many countries have not formally surveyed their populations for eye disease. That is the case for the UK and a more robust understanding of people's needs would help bring solutions." The work by the study team contributes to the wider Global Burden of Disease (GBD) Study, a comprehensive regional and global research program of disease burden that assesses mortality and disability from major diseases, injuries, and risk factors. ###
10.1136/bjophthalmol-2017-311258
2018
British Journal of Ophthalmology
Prevalence and causes of vision loss in high-income countries and in Eastern and Central Europe in 2015: magnitude, temporal trends and projections
Background Within a surveillance of the prevalence and causes of vision impairment in high-income regions and Central/Eastern Europe, we update figures through 2015 and forecast expected values in 2020. Methods Based on a systematic review of medical literature, prevalence of blindness, moderate and severe vision impairment (MSVI), mild vision impairment and presbyopia was estimated for 1990, 2010, 2015, and 2020. Results Age-standardised prevalence of blindness and MSVI for all ages decreased from 1990 to 2015 from 0.26% (0.10–0.46) to 0.15% (0.06–0.26) and from 1.74% (0.76–2.94) to 1.27% (0.55–2.17), respectively. In 2015, the number of individuals affected by blindness, MSVI and mild vision impairment ranged from 70 000, 630 000 and 610 000, respectively, in Australasia to 980 000, 7.46 million and 7.25 million, respectively, in North America and 1.16 million, 9.61 million and 9.47 million, respectively, in Western Europe. In 2015, cataract was the most common cause for blindness, followed by age-related macular degeneration (AMD), glaucoma, uncorrected refractive error, diabetic retinopathy and cornea-related disorders, with declining burden from cataract and AMD over time. Uncorrected refractive error was the leading cause of MSVI. Conclusions While continuing to advance control of cataract and AMD as the leading causes of blindness remains a high priority, overcoming barriers to uptake of refractive error services would address approximately half of the MSVI burden. New data on burden of presbyopia identify this entity as an important public health problem in this population. Additional research on better treatments, better implementation with existing tools and ongoing surveillance of the problem is needed.
562952
Neurons: 'String of lights' indicates excitation propagation
A type of novel molecular voltage sensor makes it possible to watch nerve cells at work. The principle of the method has been known for some time. However, researchers at the University of Bonn and the University of California in Los Angeles have now succeeded in significantly improving it. It allows the propagation of electrical signals in living nerve cells to be observed with high temporal and spatial resolution. This enables investigations into completely new questions that were previously closed to research. The study has now been published in the journal PNAS. When we smell a bottle of suntan lotion, electrical pulses are generated in the sensory cells of the nose. Via the olfactory bulb in the brain, they enter the primary olfactory cortex, which then distributes them to various brain centers. Memories such as summer vacations by the sea long ago are then conjured up in the hippocampus and other regions. In recent decades, brain researchers have gained an increasingly precise idea of how stimuli are processed in the brain and which path the electrical excitation takes in the process. However, in many aspects these insights are still very approximate. The method now presented by researchers at the University of Bonn and the University of California in Los Angeles may help solve this problem. Nerve cells transmit electrical signals to other nerve cells via biological "cables" known as axons. Each nerve cell is encased in a thin membrane that separates it from its environment. In the resting state, there are many positively charged ions on the outside of this membrane, significantly more than on the inside. There is therefore an electrical voltage between the inside and the outside. Neuroscientists also speak of a membrane potential. Light chain for nerve cells When a signal passes a certain point on the axon, this potential changes there for a short time. "And we can make this change visible," explains Prof. Dr. Istvan Mody of the Institute for Experimental Epileptology and Cognition Research (IEECR) at the University of Bonn Medical Center. To do this, the researchers drape a chain of lights around the nerve cells, so to speak. The special thing about it: Each lamp of this chain carries a voltage-dependent dimmer. This means that it gets darker when the membrane potential at the location of the lamp changes. This makes excitation propagation visible as a kind of "dark drop" running along the axon. The researchers use fluorescent proteins as a light chain. "We introduced the gene for this into the cells," Mody explains. The researchers also tagged the genetic makeup with a kind of shipping label. "This label ensures that the fluorescent dyes are transported to the outside of the membrane immediately after they are produced. A kind of anchor then ensures they stay put." The dimmer is not part of the nano lamp, but another molecule: a so-called "dark quencher". This is normally located on the inside of the membrane. However, due to the voltage change during signal forwarding, it changes to the outside. There it meets the fluorescent proteins and shields them. The nano lamp becomes darker as a result. As soon as the potential normalizes, the dark quencher moves back to the inside, and the luminosity increases again. "This method is not really anything new," Mody says. "However, we have fundamentally improved it in two respects." Until now, the fluorescent proteins were integrated directly into the membrane, which significantly disrupted the function of the neurons. 
The new nano lamps, in contrast, sit outside the membrane. They also do not fade as quickly, but retain their luminosity for 40 minutes, four times as long as conventional fluorescent dyes. Highly explosive dimmer The second change concerns the dark quencher: The compound normally used for this purpose is toxic and also highly combustible. It was even used as an explosive during the Second World War. "Our quencher, on the other hand, is completely harmless," Mody emphasizes. "It also reacts even faster and more sensitively to the smallest changes in potential. This allows our method to visualize up to 100 electrical pulses per second." The method permits the function of nerve cells to be observed without disturbing them. This makes it possible, for instance, to gain a more precise insight into the associated malfunctions in certain neuronal diseases. It is ultimately a promising new tool to better understand the workings of the brain.
10.1073/pnas.2020235118
2021
Proceedings of the National Academy of Sciences
A dark quencher genetically encodable voltage indicator (dqGEVI) exhibits high fidelity and speed
Significance Voltage sensing with genetically expressed optical probes is highly desirable for large-scale recordings of neuronal activity and detection of localized voltage signals in single neurons. Here we describe a method for a two-component (hybrid) genetically encodable fluorescent voltage sensing in neurons. The approach uses a glycosylphosphatidylinositol-tagged fluorescent protein (enhanced green fluorescent protein) that ensures the fluorescence to be specifically confined to the outside of the plasma membrane and D3, a voltage-dependent quencher. Previous hybrid genetically encoded voltage sensing approaches relied on a single quenching molecule, dipycrilamine (DPA), which is toxic, increases membrane capacitance, interferes with neurotransmitters, and is explosive. Our method uses a nontoxic and nonexplosive compound that performs better than DPA in all aspects of fluorescent voltage sensing.
974327
Producing ‘green’ energy — literally — from living plant ‘bio-solar cells’
Though plants can serve as a source of food, oxygen and décor, they’re not often considered to be a good source of electricity. But by collecting electrons naturally transported within plant cells, scientists can generate electricity as part of a “green,” biological solar cell. Now, researchers reporting in ACS Applied Materials & Interfaces have, for the first time, used a succulent plant to create a living “bio-solar cell” that runs on photosynthesis. In all living cells, from bacteria and fungi to plants and animals, electrons are shuttled around as part of natural, biochemical processes. But if electrodes are present, the cells can actually generate electricity that can be used externally. Previous researchers have created fuel cells in this way with bacteria, but the microbes had to be constantly fed. Instead, scientists, including Noam Adir’s team, have turned to photosynthesis to generate current. During this process, light drives a flow of electrons from water that ultimately results in the generation of oxygen and sugar. This means that living photosynthetic cells are constantly producing a flow of electrons that can be pulled away as a “photocurrent” and used to power an external circuit, just like a solar cell. Certain plants — like the succulents found in arid environments — have thick cuticles to keep water and nutrients within their leaves. Yaniv Shlosberg, Gadi Schuster and Adir wanted to test, for the first time, whether photosynthesis in succulents could create power for living solar cells using their internal water and nutrients as the electrolyte solution of an electrochemical cell. The researchers created a living solar cell using the succulent Corpuscularia lehmannii, also called the “ice plant.” They inserted an iron anode and platinum cathode into one of the plant’s leaves and found that its voltage was 0.28 V. When connected into a circuit, it produced up to 20 µA/cm2 of photocurrent density when exposed to light, and could continue producing current for over a day. Though these numbers are less than those of a traditional alkaline battery, they are representative of just a single leaf. Previous studies on similar organic devices suggest that connecting multiple leaves in series could increase the voltage. The team specifically designed the living solar cell so that protons within the internal leaf solution could be combined to form hydrogen gas at the cathode, and this hydrogen could be collected and used in other applications. The researchers say that their method could enable the development of future sustainable, multifunctional green energy technologies. The authors acknowledge funding from a “Nevet” grant from the Grand Technion Energy Program (GTEP) and a Technion VPR Berman Grant for Energy Research and support from the Technion’s Hydrogen Technologies Research Laboratory (HTRL). The American Chemical Society (ACS) is a nonprofit organization chartered by the U.S. Congress. ACS’ mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and all its people. The Society is a global leader in promoting excellence in science education and providing access to chemistry-related information and research through its multiple research solutions, peer-reviewed journals, scientific conferences, eBooks and weekly news periodical Chemical & Engineering News. ACS journals are among the most cited, most trusted and most read within the scientific literature; however, ACS itself does not conduct chemical research. 
As a leader in scientific information solutions, its CAS division partners with global innovators to accelerate breakthroughs by curating, connecting and analyzing the world’s scientific knowledge. ACS’ main offices are in Washington, D.C., and Columbus, Ohio. To automatically receive news releases from the American Chemical Society, contact [email protected].
10.1021/acsami.2c15123
2022
ACS Applied Materials & Interfaces
Self-Enclosed Bio-Photoelectrochemical Cell in Succulent Plants
Harvesting an electrical current from biological photosynthetic systems (live cells or isolated complexes) is typically achieved by immersion of the system into an electrolyte solution. In this study, we show that the aqueous solution found in the tissues of succulent plants can be used directly as a natural bio-photo electrochemical cell. Here, the thick water-preserving outer cuticle of the succulent Corpuscularia lehmannii serves as the electrochemical container, the inner water content as the electrolyte into which an iron anode and platinum cathode are introduced. We produce up to 20 μA/cm2 bias-free photocurrent. When 0.5 V bias is added to the iron anode, the current density increases ∼10-fold, and evolved hydrogen gas can be collected with a Faradaic efficiency of 2.1 and 3.5% in dark or light, respectively. The addition of the photosystem II inhibitor 3-(3,4-dichlorophenyl)-1,1-dimethylurea inhibits the photocurrent, indicating that water oxidation is the primary source of electrons in the light. Two-dimensional fluorescence measurements show that NADH and NADPH serve as the major mediating electron transfer molecules, functionally connecting photosynthesis to metal electrodes. This work presents a method to simultaneously absorb CO2 while producing an electrical current with minimal engineering requirements.
477513
Potential new therapy takes aim at a lethal esophageal cancer's glutamine addiction
Researchers at the Medical University of South Carolina (MUSC) have found a way to target drug-resistant esophageal cancer cells by exploiting the different energy needs of cancerous versus healthy cells. This breakthrough is now opening the doorway to new treatments for an otherwise lethal cancer. The findings of the National Institutes of Health (NIH)-funded study are reported in Nature Communications. Only about 20 percent of patients diagnosed with esophageal squamous cell carcinoma (ESCC) are still alive five years later, according to the American Cancer Society. Unfortunately, this disease is usually found at a late or advanced stage, meaning that, for many patients with ESCC, the cancer has already spread to other parts of their bodies. The severity of the disease is compounded by its high rate of recurrence. "[It's] an aggressive, lethal cancer," says Shuo Qie, M.D., Ph.D., a postdoctoral fellow at MUSC Hollings Cancer Center and first author on the article. "[S]urgery is the only and the best choice. But some patients, especially patients with metastasis, need chemotherapy or other additional treatments." For the study, Qie aimed to further characterize and ideally address the cancer-driving pathway previously discovered by J. Alan Diehl, Ph.D., his mentor and the senior author on the article. Diehl is the SmartState Endowed Chair in Lipidomics and Pathobiology and Associate Director of Basic Science at MUSC Hollings Cancer Center. This pathway, the Cyclin D1 axis, is an intersection at which several cancer-promoting changes occur. The protein Fbxo4, which usually prevents cancer by controlling cyclin D1 degradation, no longer exerts its protective effects. This allows cells to spiral out of control. Qie discovered that the axis activates a metabolic switch that causes ESCC cells to depend much more on glutamine than glucose. Healthy cells break down both glucose and glutamine for their energy needs, but ESCC cells are virtually addicted to glutamine. "The cancer cells have to have glutamine. You can bathe them in glucose and they're still going to die without glutamine," explains Diehl. These findings point to a vulnerability in these cancer cells and suggest a new therapeutic possibility--the use of glutaminase inhibitors. Glutaminase is an enzyme required for the cellular digestion of glutamine. Inhibiting it effectively blocks the cell's ability to process glutamine. The MUSC researchers tested the efficacy of a combination regimen that included a glutaminase inhibitor (Telaglenastat; Calithera, San Francisco, CA) and metformin in cancer cell lines and mice. They found that the combination regimen effectively treated tumors with the molecular signature that Diehl had previously described. Importantly, the treatment was effective even against tumors that had developed resistance to CDK4/6 inhibitors. Indeed, the resistant cancer cells were even more vulnerable to this treatment than non-resistant ones. "It's quite remarkable that the tumor cells that we have that are resistant to CDK4/6 inhibitors are actually five-, six-fold more sensitive to this combination therapy than they were before they developed resistance," says Diehl. The promising findings for this combination regimen in both cellular and animal models suggest that it could have therapeutic potential for patients diagnosed with this traditionally dangerous and difficult cancer. 
Having moved this treatment from concept to reality in the laboratory, Qie and Diehl hope to move forward with clinical trials for their combination treatment and are currently seeking funding to do so. The MUSC researchers' curiosity about a biological pathway has led to a potential new therapeutic approach for patients with ESCC. "You'll hear the term 'an Achilles heel,'" explains Diehl. "Can you find the Achilles heel that's in the cancer but not in the normal cell? And that's what Qie has done. Just from trying to understand the biology of the pathway, he and I have identified a unique therapeutic opportunity." ### The content of the article summarized by this release is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. About MUSC Founded in 1824 in Charleston, MUSC is the oldest medical school in the South, as well as the state's only integrated, academic health sciences center with a unique charge to serve the state through education, research and patient care. Each year, MUSC educates and trains more than 3,000 students and 700 residents in six colleges: Dental Medicine, Graduate Studies, Health Professions, Medicine, Nursing and Pharmacy. The state's leader in obtaining biomedical research funds, in fiscal year 2018, MUSC set a new high, bringing in more than $276.5 million. For information on academic programs, visit http://musc.edu. As the clinical health system of the Medical University of South Carolina, MUSC Health is dedicated to delivering the highest quality patient care available, while training generations of competent, compassionate health care providers to serve the people of South Carolina and beyond. Comprising some 1,600 beds, more than 100 outreach sites, the MUSC College of Medicine, the physicians' practice plan, and nearly 275 telehealth locations, MUSC Health owns and operates eight hospitals situated in Charleston, Chester, Florence, Lancaster and Marion counties. In 2018, for the fourth consecutive year, U.S. News & World Report named MUSC Health the number one hospital in South Carolina. To learn more about clinical patient services, visit http://muschealth.org. MUSC and its affiliates have collective annual budgets of $3 billion. The more than 17,000 MUSC team members include world-class faculty, physicians, specialty providers and scientists who deliver groundbreaking education, research, technology and patient care. About Hollings Cancer Center The Hollings Cancer Center at the Medical University of South Carolina is a National Cancer Institute-designated cancer center and the largest academic-based cancer research program in South Carolina. The cancer center comprises more than 100 faculty cancer scientists and 20 academic departments. It has an annual research funding portfolio of more than $40 million and a dedication to reducing the cancer burden in South Carolina. Hollings offers state-of-the-art diagnostic capabilities, therapies and surgical techniques within multidisciplinary clinics that include surgeons, medical oncologists, radiation therapists, radiologists, pathologists, psychologists and other specialists equipped for the full range of cancer care, including more than 200 clinical trials. For more information, visit http://www.hollingscancercenter.org
10.1038/s41467-019-09179-w
2019
Nature Communications
Targeting glutamine-addiction and overcoming CDK4/6 inhibitor resistance in human esophageal squamous cell carcinoma
Abstract The dysregulation of Fbxo4-cyclin D1 axis occurs at high frequency in esophageal squamous cell carcinoma (ESCC), where it promotes ESCC development and progression. However, defining a therapeutic vulnerability that results from this dysregulation has remained elusive. Here we demonstrate that Rb and mTORC1 contribute to Gln-addiction upon the dysregulation of the Fbxo4-cyclin D1 axis, which leads to the reprogramming of cellular metabolism. This reprogramming is characterized by reduced energy production and increased sensitivity of ESCC cells to combined treatment with CB-839 (glutaminase 1 inhibitor) plus metformin/phenformin. Of additional importance, this combined treatment has potent efficacy in ESCC cells with acquired resistance to CDK4/6 inhibitors in vitro and in xenograft tumors. Our findings reveal a molecular basis for cancer therapy through targeting glutaminolysis and mitochondrial respiration in ESCC with dysregulated Fbxo4-cyclin D1 axis as well as cancers resistant to CDK4/6 inhibitors.
578270
Low fitness may indicate poor arterial health in adolescents
A recent Finnish study conducted at the University of Jyväskylä showed that adolescents with better aerobic fitness have more compliant arteries than their less fit peers do. The study also suggests that a higher anaerobic threshold is linked to better arterial health. The results were published in the European Journal of Applied Physiology. Arterial stiffness is one of the first signs of cardiovascular disease, and adults with increased arterial stiffness are at higher risk of developing clinical cardiovascular disease. However, arterial stiffening may have its origin already in childhood and adolescence. "In our study we showed for the first time that the anaerobic threshold is also related to arterial stiffness," says Dr Eero Haapala, PhD, from the University of Jyväskylä. Anaerobic threshold describes the exercise intensity that can be sustained for long periods of time without excess accumulation of lactic acid. The study showed that adolescents with a higher anaerobic threshold also had lower arterial stiffness than other adolescents did. "The strength of determining anaerobic threshold is that it does not require maximal effort," Haapala explains. "The results of our study can be used to screen for increased arterial stiffness in adolescents who cannot perform maximal exercise tests." Fitness and arterial health can be improved The results showed that both peak oxygen uptake and anaerobic threshold were related to arterial stiffness in adolescents between the ages of 16 and 19 years. Genetics may explain part of the observed associations, but moderate and especially vigorous physical activity improve fitness and arterial health already in adolescence. "Because the development of cardiovascular disease is a long process, sufficiently intense physical activity starting in childhood may be the first line in prevention of early arterial aging." The study investigated the associations of directly measured peak oxygen uptake and anaerobic threshold with arterial stiffness among 55 Finnish adolescents between the ages of 16 and 19 years. Peak oxygen uptake and anaerobic threshold were assessed using a maximal exercise test on a cycle ergometer. Arterial stiffness was measured using pulse wave analysis based on non-invasive oscillometric tonometry. Various confounding factors, including body fat percentage and systolic blood pressure, were controlled for in the analyses.
10.1007/s00421-018-3963-3
2018
European Journal of Applied Physiology
Peak oxygen uptake, ventilatory threshold, and arterial stiffness in adolescents
To investigate the associations of peak oxygen uptake (V̇O2peak) and V̇O2 at ventilatory threshold (V̇O2 at VT) with arterial stiffness in adolescents. The participants were 55 adolescents (36 girls, 19 boys) aged 16-19 years. Aortic pulse wave velocity (PWVao) and augmentation index (AIx%) were measured by non-invasive oscillometric device from right brachial artery level. V̇O2peak was directly measured during a maximal ramp test on a cycle ergometer. V̇O2 at VT was determined using the ventilatory equivalents (V̇E/V̇O2 and V̇E/V̇CO2). V̇O2peak and V̇O2 at VT were normalised for body mass (BM) and lean mass (LM). Data were analysed using linear regression analyses and analysis of covariance adjusted for age and sex. V̇O2peak normalised for BM (β = -0.445, 95% CI -0.783 to -0.107) and V̇O2peak normalised for LM (β = -0.386, 95% CI -0.667 to -0.106) were inversely associated with PWVao. A higher V̇O2 at VT normalised for BM (β = -0.366, 95% CI -0.646 to -0.087) and LM (β = -0.321, 95% CI -0.578 to -0.064) was associated with lower PWVao. Adolescents in the lowest third of V̇O2peak by LM (6.6 vs. 6.1 m/s, Cohen's d = 0.33) and V̇O2 at VT by LM (6.6 vs. 6.0 m/s, Cohen's d = 0.33) had a higher PWVao than those in the highest third of V̇O2peak or V̇O2 at VT by LM. Higher V̇O2peak and V̇O2 at VT by BM and LM were related to lower arterial stiffness in adolescents. Normalising V̇O2peak and V̇O2 at VT for LM would provide the most appropriate measure of cardiorespiratory fitness in relation to arterial stiffness.
923928
A Highly-Accurate and Broadband Terahertz Counter Eyes "Beyond 5G / 6G"
[Abstract] The National Institute of Information and Communications Technology (NICT, President: TOKUDA Hideyuki, Ph.D.) has developed a broadband and high-precision terahertz (THz) frequency counter based on a semiconductor-superlattice harmonic mixer. It showed a measurement uncertainty of less than 1 × 10⁻¹⁶ from 0.12 THz to 2.8 THz. This compact and easy-to-handle THz counter operating at room temperature is well suited to various THz applications supported by the next-generation ICT infrastructure, “Beyond 5G / 6G”. This achievement was published in Metrologia as an open-access paper on July 19, 2021. [Achievements] A highly accurate THz frequency counter with a wide measurable band will become a key metrological instrument for allocating THz spectrum among a huge range of users on the next-generation information and communications infrastructure, or Beyond 5G / 6G. It is also indispensable to high-resolution spectroscopy of ultracold molecules. NICT developed the counter by generating a THz frequency comb from a semiconductor-superlattice harmonic mixer, which is simple, easy to use and operational at room temperature. Consequently, the counter is more compact and easier to handle than previously reported ones, which require an ultrashort pulse laser or a bulky cryogenic refrigerator. To evaluate the precision of the system over a four-octave range from 0.12 THz to 2.8 THz, NICT set up test benches designed carefully to push down the measurement limit. This revealed a measurement uncertainty of less than 1 × 10⁻¹⁶, which corresponds, for instance, to the capability to determine the frequency of a 1 THz signal with an accuracy of 100 μHz. We believe that the THz counter developed here has world-leading performance in terms of measurable range and precision. [Future Prospects] NICT endeavors to lead R&D on Beyond 5G / 6G as well as cutting-edge THz technology. A high-precision and broadband THz frequency counter with compactness and easy operability could become a vital metrological tool to accelerate the exploitation of the THz frequency domain. The THz counter developed here will be used for a projected extension of NICT’s calibration service, which currently supports only microwave atomic clocks. Using the counter, we will also pursue verification of fundamental physics by means of an ultra-accurate THz frequency standard based on ultracold molecules, or a THz molecular clock.
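As a back-of-the-envelope check of the accuracy figure quoted above (the arithmetic is implied by the release rather than spelled out in it), a fractional uncertainty of 1 × 10⁻¹⁶ applied to a 1 THz carrier corresponds to
Δf ≈ 1 × 10⁻¹⁶ × 1 THz = 1 × 10⁻¹⁶ × 10¹² Hz = 1 × 10⁻⁴ Hz = 100 μHz,
which reproduces the 100 μHz frequency-determination capability stated for a 1 THz signal.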
10.1088/1681-7575/ac0712
2021
Metrologia
Terahertz frequency counter based on a semiconductor-superlattice harmonic mixer with four-octave measurable bandwidth and 16-digit precision
Abstract We have developed a broadband and high-precision terahertz (THz) frequency counter based on a semiconductor-superlattice harmonic mixer (SLHM). Comparison of two THz frequencies determined using two independent counters and direct measurement of frequency-stabilized THz-quantum cascade lasers by a single counter showed a measurement uncertainty of less than 1 × 10⁻¹⁶ over a four-octave range from 120 GHz to 2.8 THz. Further extension of this measurable range was indicated by the research regarding the higher-harmonics generation of a local oscillator for the SLHM. This compact and easy-to-handle THz counter operating at room temperature is available for high-resolution spectroscopy of ultracold molecules proposed for detecting temporal changes in physics constants as well as many THz applications requiring a wide measurement range without a bulky cryogenic apparatus.
665216
Evidence-based patient-psychotherapist matching improves mental health care
In first-of-its kind research led by a University of Massachusetts Amherst psychotherapy researcher, mental health care patients matched with therapists who had a strong track record of treating the patients' primary concerns had better results than patients who were not so matched. In addition, this "match effect" was even more beneficial and pronounced for patients with more severe problems and for those who identified as racial or ethnic minorities. The findings are published in JAMA Psychiatry and the Journal of Consulting and Clinical Psychology. "One of the things we've been learning in our field is that who the therapist is matters," says lead author Michael Constantino, professor of clinical psychology and director of the Psychotherapy Research Lab, who seeks to understand the variability of outcomes among patients receiving mental health treatment. "We've become very interested in this so-called therapist effect. Earlier on, there was a heavier emphasis on what the treatment was as opposed to who was delivering it." Constantino and colleagues have discovered, for example, that psychotherapists possess relative strengths and weaknesses in treating different types of mental health problems. Such performance "report cards" hold promise, then, for personalizing treatment toward what therapists do well. The researchers conducted a randomized clinical trial involving 48 therapists and 218 outpatients at six community clinics in a health care system in Cleveland, Ohio. They used a matching system based on how well a therapist has historically treated patients with the same concerns. The matching relied on a multidimensional outcomes tool called the Treatment Outcome Package (TOP), which assesses 12 symptomatic or functional domains: depression, quality of life, mania, panic or somatic anxiety, psychosis, substance misuse, social conflict, sexual functioning, sleep, suicidality, violence and work functioning. The matched group was compared to a group of patients who were case-assigned as usual, such as by therapist availability or convenience of office location. "By collecting TOP data from enough patients treated by a given therapist, this outcomes tool can establish the domains in which that therapist is stably effective (historically, on average, their patients' symptoms reliably improved), neutral (historically, on average, their patients' symptoms neither reliably improved nor deteriorated), or ineffective (historically, on average, their patients' symptoms reliably deteriorated)," the paper states. To qualify for matching, the therapists had to have completed a minimum of 15 cases with patients who had completed the TOP before and after treatment. For the trial, neither the patients nor the therapists knew if they had been matched or were case-assigned as usual. "We think there would be an even stronger positive impact if the patients knew they were empirically well-matched versus assigned by chance," Constantino says. "Such knowledge might cultivate more positive expectations, which are generally associated with better therapy outcomes." Post-therapy reports by patients showed that those in the matched group experienced significantly greater reductions in general impairment compared with those who were randomly assigned a therapist. "We showed that with this matching system you can get a big bump in improvement rates," Constantino says. 
The finding that the improvement in the matched group was even greater among people who identified as racial or ethnic minorities may provide a way to address and improve mental health care access and quality in traditionally underserved populations, Constantino says. The JAMA Psychiatry paper concludes, "Notably, the good fit in this study came not from changing what the therapists did in their treatment, but rather who they treated. Capitalizing on whatever it is that a therapist historically does well when treating patients with certain mental health problems, the current data indicate that our match system can improve the effectiveness of that care, even with neither therapist nor patient being aware of their match status."
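For readers who want a concrete sense of what outcome-informed "report card" matching could look like in practice, here is a minimal, hypothetical sketch in Python. It is not the study's actual algorithm or the TOP software: the domain names, the ±0.5 "reliable change" cutoffs, and the scoring scheme are illustrative assumptions, and only the 15-case qualification minimum is taken from the release.

MIN_CASES = 15  # illustrative: require at least 15 recorded outcomes in a domain before rating it
                # (the release's qualification threshold was 15 completed pre/post TOP cases per therapist)

def domain_label(changes):
    # changes: list of pre-minus-post scores for one TOP domain (positive = improvement)
    if len(changes) < MIN_CASES:
        return "unrated"
    mean_change = sum(changes) / len(changes)
    if mean_change >= 0.5:      # illustrative "reliably effective" cutoff
        return "effective"
    if mean_change <= -0.5:     # illustrative "reliably ineffective" cutoff
        return "ineffective"
    return "neutral"

def match_patient(elevated_domains, therapist_histories):
    # elevated_domains: the patient's elevated TOP domains, e.g. {"depression", "sleep"}
    # therapist_histories: dict mapping therapist id -> {domain: [change, change, ...]}
    weights = {"effective": 1, "neutral": 0, "unrated": 0, "ineffective": -1}
    def score(history):
        return sum(weights[domain_label(history.get(d, []))] for d in elevated_domains)
    return max(therapist_histories, key=lambda tid: score(therapist_histories[tid]))

In this toy version the highest-scoring therapist is simply returned; caseload limits, ties, and the double-blind assignment used in the trial are deliberately ignored.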
10.1037/ccp0000644
2021
Journal of Consulting and Clinical Psychology
For whom does a match matter most? Patient-level moderators of evidence-based patient–therapist matching.
A double-blind, randomized controlled trial tested the effectiveness of a personalized Match System in which patients are assigned to therapists with a "track record" of effectively treating a given patient's primary concern(s) (e.g., anxiety). Matched patients demonstrated significantly better outcomes than those assigned through usual pragmatic means. The present study examined patient-level moderators of this match effect. We hypothesized that the match benefits would be especially pronounced for patients who presented with (a) greater overall problem severity, and (b) greater problem complexity (i.e., number of elevated problem domains). We also explored if patient racial/ethnic minority status moderated the condition effect.Patients were 218 adults randomized to the Match or as-usual assignment condition, and then treated naturalistically by 48 therapists. The primary outcome was the Treatment Outcome Package (TOP), a multidimensional assessment tool that also primed the Match algorithm (based on historical, therapist-level effectiveness data), and assessed trial patients' symptoms/functioning and demographic information at baseline. Moderator effects were tested as patient-level interactions in three-level hierarchical linear models.The beneficial match effect was significantly more pronounced for patients with higher initial severity (-0.03, 95% CI -0.05, -0.01) and problem complexity (-0.01, 95% CI -0.02, -0.004), yet the high correlation between severity and complexity called into question the uniqueness of the complexity moderator effect. Moreover, the match effect was more pronounced for racial/ethnic minority patients (i.e., nonwhite; -0.05, 95% CI -0.09, -0.01).Measurement-based matching is especially effective for patients with certain characteristics, which further informs mental health treatment personalization. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
781599
Stressed lemurs have worse chances of survival
High levels of cortisol in hair--a sign of long-term stress--are associated with reduced survival in wild grey mouse lemurs (Microcebus murinus), according to a study published in the open access journal BMC Ecology. Researchers at the German Primate Centre and Georg-August University Göttingen, Germany, found that grey mouse lemurs with high levels of the stress hormone cortisol in their fur were less likely to survive both long-term and over the reproductive season. Dr Josué Rakotoniaina, the corresponding author, said: "Despite the wide use of stress hormone levels as an index of health and condition, this study is among the first to correlate an index of chronic stress with survival in a wild population of lemurs. This was only possible by combining hair cortisol levels with several years of life history data that was gathered from a long-term monitoring project of mouse lemurs." Lemurs with low hair cortisol levels had on average a 13.9% higher chance of survival than those with high levels of hair cortisol. Lemurs with very good body condition--that is, optimal body mass and size--survived on average 13.7% better than lemurs with poor body condition, and females survived, on average, better than males. Variations in parasitism, such as the number of parasite infections, were not linked to survival. Dr Rakotoniaina added: "Our findings indicate that hair cortisol concentrations are a much better predictor of survival, and thus a better index of health, than other commonly used health indicators. Cortisol is taken up by hair as it grows so its concentration in a hair sample allows assessment of overall cortisol levels over time rather than--as single samples of blood, saliva or urine do--at one time point." To test their hypothesis that high hair cortisol concentration as a measure of long-term stress is related to individual survival, the researchers studied a population of grey mouse lemurs in Kirindy Forest, Madagascar from 2012 to 2014. They assessed the relationship between hair cortisol concentration and long-term survival in 171 lemurs, while the effect of body condition on long-term survival was assessed in a sub-sample of 149, and the link between all health indicators (hair cortisol level, body condition and parasitism) and survival during the mating season was assessed in a group of 48 lemurs. The researchers suggest that the benefits of having low stress levels may be even more pronounced prior to the mating season. Individuals that are more affected by challenging conditions may not be able to cope with the additional stress during the mating season, which is particularly challenging for male mouse lemurs. Although the exact mechanism by which cortisol is built into hair is not yet fully understood and the observational nature of the study does not allow conclusions about the causes of mortality, the findings suggest that hair cortisol concentration may be a valid indicator of health in wild lemur populations. Dr. Rakotoniaina said: "This important information could facilitate conservation decisions as it provides conservationists with an essential tool that could be used to detect issues emerging at the population level and ultimately predict wild populations' responses to environmental challenges." ### Media Contact Anne Korn Press Officer BioMed Central T: 44-0-20-3192-2744 E: [email protected] Notes to editors: 1. Images of the grey mouse lemurs can be found here: http://bit.ly/2w42V1X Please credit Anni Hämäläinen in any re-use. 2. 
Research article: Hair cortisol concentrations correlate negatively with survival in a wild primate population Rakotoniaina et al. BMC Ecology 2017 DOI: 10.1186/s12898-017-0140-1 For an embargoed copy of the research article please contact Anne Korn at BioMed Central. After the embargo lifts, the article will be available at the journal website: https://bmcecol.biomedcentral.com/articles/10.1186/s12898-017-0140-1 Please name the journal in any story you write. If you are writing for the web, please link to the article. All articles are available free of charge, according to BioMed Central's open access policy. 3. BMC Ecology is an open access, peer-reviewed journal that considers articles on environmental, behavioral and population ecology as well as biodiversity of plants, animals and microbes. 4. A pioneer of open access publishing, BMC has an evolving portfolio of high quality peer-reviewed journals including broad interest titles such as BMC Biology and BMC Medicine, specialist journals such as Malaria Journal and Microbiome, and the BMC series. At BMC, research is always in progress. We are committed to continual innovation to better support the needs of our communities, ensuring the integrity of the research we publish, and championing the benefits of open research. BMC is part of Springer Nature, giving us greater opportunities to help authors connect and advance discoveries across the world. 5. Springer Nature Data Support Services The Springer Nature Data Support Services help authors enhance their peer-reviewed publications and comply with funder and journal policies by preparing their data files for deposition in a repository and improving associated metadata. For the present study, the dataset and basic contextual metadata were uploaded by the author, and initial checks were undertaken by a Data Support Services Research Data Editor to make sure that the data were complete and uncorrupted. A Research Data Editor also rewrote the title of the dataset to support discoverability, and added a comprehensive description and research method to provide context for the data. To further enhance discovery and to allow granular search of the data, categories from the Australian Fields of Research classification system and relevant keywords were added to the metadata. The dataset's author list was cross-referenced with the associated manuscript to ensure accurate citation of the data, and a DOI was generated to provide a persistent link. The dataset was linked to the author's manuscript and the publication date was coordinated for release with the publication of its associated article in BMC Ecology. All checks and enhancements were approved by the author, and the dataset will be made available through the Springer Nature section of the figshare repository. 6. Dataset: Capture-mark-recapture data modelling survival rates of Microcebus murinus in relation to glucocorticoid level, parasite infection and body condition Rakotoniaina et al. 2017 DOI: 10.6084/m9.figshare.5259415. During the embargo period the dataset is available here: https://figshare.com/s/dd48825bb24115b17db6 After the embargo lifts, please reference the dataset using the following link:http://dx.doi.org/10.6084/m9.figshare.5259415
10.1186/s12898-017-0140-1
2017
BMC Ecology
Hair cortisol concentrations correlate negatively with survival in a wild primate population
Glucocorticoid hormones are known to play a key role in mediating a cascade of physiological responses to social and ecological stressors and can therefore influence animals' behaviour and ultimately fitness. Yet, how glucocorticoid levels are associated with reproductive success or survival in a natural setting has received little empirical attention so far. Here, we examined links between survival and levels of glucocorticoid in a small, short-lived primate, the grey mouse lemur (Microcebus murinus), using for the first time an indicator of long-term stress load (hair cortisol concentration). Using a capture-mark-recapture modelling approach, we assessed the effect of stress on survival in a broad context (semi-annual rates), but also under a specific period of high energetic demands during the reproductive season. We further assessed the power of other commonly used health indicators (body condition and parasitism) in predicting survival outcomes relative to the effect of long-term stress. We found that high levels of hair cortisol were associated with reduced survival probabilities both at the semi-annual scale and over the reproductive season. Additionally, very good body condition (measured as scaled mass index) was related to increased survival at the semi-annual scale, but not during the breeding season. In contrast, variation in parasitism failed to predict survival. Altogether, our results indicate that long-term increased glucocorticoid levels can be related to survival and hence population dynamics, and suggest differential strength of selection acting on glucocorticoids, body condition, and parasite infection.
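For a concrete sense of the kind of relationship described above, here is a minimal, purely illustrative sketch: it fits a logistic regression of semi-annual survival on simulated hair cortisol values. The sample size echoes the press release, but the cortisol scale and effect size are invented, and the study itself used capture-mark-recapture (Cormack-Jolly-Seber-type) models rather than this simple regression.

```python
# Illustrative only: a toy logistic regression relating hair cortisol to
# semi-annual survival. All numbers below are simulated, not from the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 171                                            # sample size echoing the long-term analysis
log_hcc = rng.normal(loc=2.0, scale=0.5, size=n)   # hypothetical log hair cortisol (pg/mg)

# Assume higher cortisol lowers the odds of surviving the half-year interval.
true_beta = -1.2                                   # assumed effect size, not from the paper
logit_p = 1.5 + true_beta * (log_hcc - log_hcc.mean())
survived = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(log_hcc)
fit = sm.Logit(survived, X).fit(disp=0)
print(fit.summary2())
print("odds ratio per unit log-cortisol:", np.exp(fit.params[1]))
```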
804711
Plant-based diets high in carbs improve type 1 diabetes, according to new case studies
10.35248/2155-6156.20.11.847
2020
Journal of Diabetes & Metabolism
Plant-Based Diets for Type 1 Diabetes
Type 1 diabetes is a chronic autoimmune disease characterized by hyperglycemia resulting from the destruction of insulin-producing pancreatic beta-cells. The increasing incidence (at a worldwide rate of 3-5% a year) suggests that in addition to the genetic component, the risk may be influenced by environmental factors, including diet. A plant-based diet has been shown to improve glycemic control in individuals with type 2 diabetes and to improve beta-cell function in overweight people but has not been thoroughly tested in type 1 diabetes due to its high carbohydrate content. We present two case studies of individuals with type 1 diabetes who adopted a plant-based diet and experienced a significant increase in insulin sensitivity, reductions in insulin dose, and improvements in cardiovascular risk factors.
489496
'Biggest loser' study reveals how dieting affects long-term metabolism
While it's known that metabolism slows when people diet, new research indicates that metabolism remains suppressed even when people regain much of the weight they lost while dieting. The findings come from a study of contestants in "The Biggest Loser" television series. Despite substantial weight regain in the 6 years following participation, resting metabolic rate remained at the same low level that was measured at the end of the weight loss competition. The average rate was approximately 500 calories per day lower than expected based on individuals' body composition and age. "Long-term weight loss requires vigilant combat against persistent metabolic adaptation that acts to proportionally counter ongoing efforts to reduce body weight," wrote the authors of the Obesity study.
10.1002/oby.21538
2016
Obesity
Persistent metabolic adaptation 6 years after “The Biggest Loser” competition
Objective: To measure long‐term changes in resting metabolic rate (RMR) and body composition in participants of “The Biggest Loser” competition. Methods: Body composition was measured by dual energy X‐ray absorptiometry, and RMR was determined by indirect calorimetry at baseline, at the end of the 30‐week competition, and 6 years later. Metabolic adaptation was defined as the residual RMR after adjusting for changes in body composition and age. Results: Of the 16 “Biggest Loser” competitors originally investigated, 14 participated in this follow‐up study. Weight loss at the end of the competition was (mean ± SD) 58.3 ± 24.9 kg (P < 0.0001), and RMR decreased by 610 ± 483 kcal/day (P = 0.0004). After 6 years, 41.0 ± 31.3 kg of the lost weight was regained (P = 0.0002), while RMR was 704 ± 427 kcal/day below baseline (P < 0.0001) and metabolic adaptation was −499 ± 207 kcal/day (P < 0.0001). Weight regain was not significantly correlated with metabolic adaptation at the competition's end (r = −0.1, P = 0.75), but those subjects maintaining greater weight loss at 6 years also experienced greater concurrent metabolic slowing (r = 0.59, P = 0.025). Conclusions: Metabolic adaptation persists over time and is likely a proportional, but incomplete, response to contemporaneous efforts to reduce body weight.
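The abstract's definition of metabolic adaptation (the residual RMR after adjusting for body composition and age) can be made concrete with a small sketch. Everything below is hypothetical: invented baseline measurements, an ordinary least-squares fit in place of the study's actual model, and a made-up follow-up measurement.

```python
# Minimal sketch of "metabolic adaptation as residual RMR": predict RMR from
# body composition and age, then take measured minus predicted RMR.
import numpy as np

# Hypothetical baseline data: fat-free mass (kg), fat mass (kg), age (yr), RMR (kcal/day)
ffm = np.array([90.0, 85.0, 100.0, 95.0, 80.0, 105.0])
fm  = np.array([70.0, 65.0,  90.0, 80.0, 60.0,  95.0])
age = np.array([35.0, 42.0,  29.0, 38.0, 45.0,  33.0])
rmr = np.array([2600., 2450., 2900., 2700., 2300., 2950.])

# Ordinary least squares: RMR ~ intercept + FFM + FM + age
X = np.column_stack([np.ones_like(ffm), ffm, fm, age])
coef, *_ = np.linalg.lstsq(X, rmr, rcond=None)

def predicted_rmr(ffm_kg, fm_kg, age_yr):
    return coef @ np.array([1.0, ffm_kg, fm_kg, age_yr])

# A follow-up measurement: metabolic adaptation is measured minus predicted RMR.
measured_followup = 1900.0                                  # kcal/day, hypothetical
expected = predicted_rmr(ffm_kg=75.0, fm_kg=45.0, age_yr=41.0)
print("metabolic adaptation (kcal/day):", measured_followup - expected)
```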
875016
MIPT biophysicists found a way to take a peek at how membrane receptors work
In a study published in Current Opinion in Structural Biology, MIPT biophysicists explained ways to visualize membrane receptors in their different states. Detailed information on the structure and dynamics of these proteins will enable developing effective and safe drugs to treat many sorts of conditions. Every second, living cells receive myriads of signals from their environment, which are usually transmitted through dedicated signaling molecules such as hormones. Most of these molecules are incapable of penetrating the cell membrane, so for the most part such signals are identified at the membrane. For that purpose, the cell membrane is equipped with cell-surface receptors that receive outside signals and "interpret" them into a language the cell can understand. Cell-surface receptors are vital for the proper functioning of the cell and the organism as a whole. Should the receptors stop working as intended, the communication between cells becomes disrupted, which leads to the organism developing a medical condition. GPCRs (G protein-coupled receptors) are a large family of membrane receptors that share a common structure: seven protein helices that cross the membrane and couple the receptor to a G protein situated on the inside of the cell. Interaction of the signal molecule with the receptor triggers a change in the 3D structure, or conformation, of the receptor, which activates the G protein. The activated G protein, in turn, triggers a signal cascade inside the cell, which results in a response to the signal. GPCR family membrane proteins have been linked to many neurodegenerative and cardiovascular diseases and certain types of cancer. GPCR proteins have also been shown to contribute to conditions such as obesity, diabetes, mental disorders, and others (fig. 2). As a result, GPCRs have become a popular drug target, with a large number of drugs currently on the market targeting this particular family of receptors. One of the modern approaches to drug development involves analyzing 3D structures of GPCR molecules. But membrane receptor analysis is a slow and extremely laborious process, and even when successful, it does not completely reveal the molecule's behavior inside the cell. "Currently, scientists have two options when it comes to studying proteins. They can either 'freeze' a protein and have its precise static snapshot, or study its dynamics at the cost of losing details. The former approach uses methods such as crystallography and cryogenic electron microscopy; the latter uses spectroscopic techniques," comments Anastasia Gusach, a research fellow at the MIPT Laboratory of Structural Biology of G-protein Coupled Receptors. The authors of the study demonstrated how combining the structural and the spectroscopic approaches yields "the best of both worlds": precise information on the functioning of GPCRs (fig. 3). For instance, the double electron-electron resonance (DEER) and Förster resonance energy transfer (FRET) techniques act as an "atomic ruler", enabling precise measurement of distances between individual atoms or groups of atoms within the protein. Nuclear magnetic resonance makes it possible to visualize the overall shape of the receptor molecule, while modified mass spectrometry methods (MRF-MS, HDX-MS) help trace how exposed individual groups of atoms in the protein are to the solvent, indicating which parts of the molecule face outwards.
"Studying the GPCR dynamics uses cutting-edge methods of experimental biophysical analysis such as nuclear magnetic resonance (NMR) spectroscopy, electron paramagnetic resonance (EPR) spectroscopy, and advanced fluorescence microscopy techniques including single-molecule microscopy," says Alexey Mishin, deputy head of the MIPT Laboratory for Structural Biology of G-protein Coupled Receptors. "Biophysicists that use different methods to study GPCRs have been widely organizing collaborations that already bore some fruitful results. We hope that this review will help scientists specializing in different methods to find some new common ground and work together to obtain a better understanding of receptors' functioning." adds Anastasia Gusach. The precise information on how the membrane receptors function and transfer between states will greatly expand the capabilities for structure-based drug design.### The study was carried out with support from Russian Foundation for Basic Research and the Russian Science Foundation.
10.1016/j.sbi.2020.03.004
2020
Current Opinion in Structural Biology
Beyond structure: emerging approaches to study GPCR dynamics
G protein-coupled receptors (GPCRs) constitute the largest superfamily of membrane proteins that are involved in regulation of sensory and physiological processes and implicated in many diseases. The last decade revolutionized the GPCR field by unraveling multiple high-resolution structures of many different receptors in complexes with various ligands and signaling partners. A complete understanding of the complex nature of GPCR function is, however, impossible to attain without combining static structural snapshots with information about GPCR dynamics obtained by complementary spectroscopic techniques. As illustrated in this review, structure and dynamics studies are now paving the way for understanding important questions of GPCR biology such as partial and biased agonism, allostery, oligomerization, and other fundamental aspects of GPCR signaling.
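The "atomic ruler" role of FRET mentioned in the press release rests on the standard Förster relation between transfer efficiency and donor-acceptor distance. A minimal sketch, using an arbitrary Förster radius rather than any value from the review:

```python
# Convert a measured FRET efficiency E into a donor-acceptor distance r via the
# Förster relation E = 1 / (1 + (r/R0)**6). R0 and E below are placeholders.
def fret_distance(efficiency: float, r0_nm: float) -> float:
    """Distance (nm) implied by a FRET efficiency for a pair with Förster radius r0_nm."""
    if not 0.0 < efficiency < 1.0:
        raise ValueError("efficiency must lie strictly between 0 and 1")
    return r0_nm * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)

print(fret_distance(efficiency=0.5, r0_nm=5.0))   # at E = 0.5 the distance equals R0
print(fret_distance(efficiency=0.8, r0_nm=5.0))   # higher efficiency -> shorter distance
```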
934580
Competing quantum interactions enable single molecules to stand up
Nanoscale machinery has many uses, including drug delivery, single-atom transistor technology, and memory storage. However, the machinery must be assembled at the nanoscale, which is a considerable challenge for researchers. For nanotechnology engineers, the ultimate goal is to be able to assemble functional machinery part-by-part at the nanoscale. In the macroscopic world, we can simply grab items to assemble them. It is no longer impossible to “grab” single molecules, but their quantum nature makes their response to manipulation unpredictable, limiting the ability to assemble molecules one by one. This prospect is now a step closer to reality thanks to an international effort led by Research Centre Jülich, a member of the Helmholtz Association in Germany, including researchers from the Department of Chemistry at the University of Warwick. In the paper ‘The stabilization potential of a standing molecule’, published on 10 November 2021 in the journal Science Advances, an international team of researchers reveal the generic stabilisation mechanism of a single standing molecule, which can be used in the rational design and construction of three-dimensional molecular devices at surfaces. The scanning probe microscope (SPM) has brought the vision of molecular-scale fabrication closer to reality because it offers the capability to rearrange atoms and molecules on surfaces, thereby allowing the creation of metastable structures that do not form spontaneously. Using SPM, Dr Christian Wagner and his team were able to interact with a single standing molecule, perylene-tetracarboxylic dianhydride (PTCDA), on a surface to study its thermal stability and the temperature at which the molecule ceases to be stable and drops back into its natural state, in which it adsorbs flat on the surface. This temperature stands at -259.15 °C, only 14 degrees above absolute zero. Quantum chemical calculations performed in collaboration with Dr Reinhard Maurer from the Department of Chemistry at the University of Warwick revealed that the subtle stability of the molecule stems from the competition of two strong counteracting quantum forces, namely the long-range attraction from the surface and the short-range restoring force arising from the anchor point between the molecule and the surface. Dr Reinhard Maurer, from the Department of Chemistry at the University of Warwick, comments: “The balance of interactions that keeps the molecule from falling over is very subtle and a true challenge for our quantum chemical simulation methods. In addition to teaching us about the fundamental mechanisms that stabilise such unusual nanostructures, the project also helped us to assess and improve the capabilities of our methods.” Dr Christian Wagner from the Peter Grünberg Institute for Quantum Nanoscience (PGI-3) at Research Centre Jülich comments: “To make technological use of the fascinating quantum properties of individual molecules, we need to find the right balance: They must be immobilized on a surface, but without fixing them too strongly, otherwise they would lose these properties. Standing molecules are ideal in that respect.
To measure how stable they actually are, we had to stand them up over and over again with a sharp metal needle and time how long they survived at different temperatures.” Now that the interactions that give rise to a stable standing molecule are known, future research can work towards designing better molecules and molecule-surface links to tune those quantum interactions. This can help to increase stability and raise the temperature at which molecules can be switched into standing arrays towards workable conditions, bringing the fabrication of molecular machinery at the nanoscale a step closer. You can also read Jülich's press release here: https://www.fz-juelich.de/SharedDocs/Pressemitteilungen/UK/EN/2021/notifications/2021-11-11-nanodomino.html
10.1126/sciadv.abj9751
2021
Science Advances
The stabilization potential of a standing molecule
The part-by-part assembly of functional nanoscale machinery is a central goal of nanotechnology. With the recent fabrication of an isolated standing molecule with a scanning probe microscope, the third dimension perpendicular to the surface will soon become accessible to molecule-based construction. Beyond the flatlands of the surface, a wealth of structures and functionalities is waiting for exploration, but issues of stability are becoming more critical. Here, we combine scanning probe experiments with ab initio potential energy calculations to investigate the thermal stability of a prototypical standing molecule. We reveal its generic stabilization mechanism, a fine balance between covalent and van der Waals interactions including the latter’s long-range screening by many-body effects, and find a remarkable agreement between measured and calculated stabilizing potentials. Beyond their relevance for the design and construction of three-dimensional molecular devices at surfaces, our results also indicate that standing molecules may serve as tunable mechanical gigahertz oscillators.
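The strong temperature dependence of the standing molecule's lifetime can be illustrated with a back-of-the-envelope Arrhenius-type estimate. The attempt frequency and barrier height below are round, assumed numbers chosen only for illustration; the actual stabilizing potential is what the paper measures and calculates.

```python
# Toy Arrhenius/Kramers-style estimate of how a standing molecule's mean
# "survival" time shrinks with temperature. NU0 and E_BARRIER are assumptions.
import numpy as np

K_B = 8.617e-5          # Boltzmann constant, eV/K
NU0 = 1.0e9             # assumed attempt frequency (~GHz, cf. the abstract's oscillator remark)
E_BARRIER = 0.02        # assumed barrier height in eV, purely illustrative

def mean_survival_time(temperature_k: float) -> float:
    """Mean escape time (s) assuming an Arrhenius escape rate NU0 * exp(-Eb / kB T)."""
    rate = NU0 * np.exp(-E_BARRIER / (K_B * temperature_k))
    return 1.0 / rate

for T in (5.0, 10.0, 14.0, 20.0):
    print(f"T = {T:5.1f} K  ->  mean survival time ~ {mean_survival_time(T):.3e} s")
```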
538972
Illinois researchers publish article describing Illinois RapidVent Emergency Ventilator
The design, testing, and validation of the Illinois RapidVent emergency ventilator has been published in the journal PLoS ONE. The article, "Emergency Ventilator for COVID-19," by University of Illinois Urbana-Champaign researchers, is the first of its kind to report such details about an emergency ventilator that was designed, prototyped, and tested at the start of the COVID-19 pandemic in 2020. "This article reports the development and testing of the RapidVent emergency ventilator," said William King, professor at The Grainger College of Engineering and Carle Illinois College of Medicine, and leader of the RapidVent project. "The research shows integration of different disciplines to develop a medical device, including science-based engineering, ultra-rapid design and manufacturing, functional testing, and animal testing." The Grainger College of Engineering, Carle Illinois College of Medicine, and industry partners launched the Illinois RapidVent project to address ventilator shortages in spring 2020. The goal was to design an emergency ventilator that could be rapidly and inexpensively produced. "Publication of this work helps to share our knowledge with the public and acknowledge the outstanding intellectual contributions of our team members," said King Li, Dean of the Carle Illinois College of Medicine. The RapidVent team consisted of more than 50 people who worked together to develop an emergency ventilator in less than three weeks. The Illinois RapidVent is easy to produce in part because it has few components and can be powered by pressurized air or oxygen. "The rapid development and successful delivery of the functional Illinois RapidVent is yet another amazing display of collaborative interdisciplinary research, and a testament of what can be achieved here at Illinois for our community, society, and global health," said Stephen Boppart, Executive Associate Dean and Chief Diversity Officer of Carle Illinois College of Medicine. "It's a primary example of how engineering- and technology-based medicine can support the land-grant mission of our university. The team members from Carle Illinois College of Medicine were proud to help catalyze this effort." "Emergency Ventilator for COVID-19" reports how the team developed their ventilator design based on the needs of physicians. The team then used digital design and advanced manufacturing for ultra-rapid development of prototypes, followed by detailed functional testing, durability testing, and animal testing. To support safe use of the device, the team also developed an electronic monitoring system and detailed training materials that can be used by physicians. The monitoring system became Illinois RapidAlarm, an open-source sensor and alarm system that makes emergency ventilators, such as the Illinois RapidVent, more useful during a crisis. "The RapidVent team built upon relationships forged through Carle Illinois College of Medicine. This focus on engineering and medicine is the exact vision we shared - one that would generate practical solutions to healthcare opportunities and challenges," said Dr. Mark Johnson, Carle Health physician and RapidVent team member. The University of Illinois made the RapidVent technology available through a free license in April 2020. The license includes access to device designs, training materials, and a testing report. Since then, more than 70 organizations have accessed the technology from 15 different countries.
There are multiple efforts underway to make use of RapidVent, including two companies that have announced their intention to commercialize the technology. ### The article is available online: https://doi.org/10.1371/journal.pone.0244963 More information about the project is available at https://rapidvent.grainger.illinois.edu The 55 co-authors on the article include: William P. King, Jennifer Amos, Magdi Azer, Daniel Baker, Rashid Bashir, Catherine Best, Eliot Bethke, Stephen A. Boppart, Elisabeth Bralts, Ryan M. Corey, Rachael Dietkus, Gary Durack, Stefan Elbel, Greg Elliott, Jake Fava, Nigel Goldenfeld, Molly H. Goldstein, Courtney Hayes, Nicole Herndon, Shandra Jamison, Blake Johnson, Harley Johnson, Mark Johnson, John Kolaczynski, Tonghun Lee, Sergei Maslov, Davis J. McGregor, Derek Milner, Ralf Moller, Jonathan Mosley, Andy Musser, Max Newberger, David Null, Lucas O'Bryan, Michael Oelze, Jerry O'Leary, Alex Pagano, Michael Philpott, Brian Pianfetti, Alex Pille , Luca Pizzuto, Brian Ricconi, Marcello Rubessa, Sam Rylowicz, Clifford Shipley, Andrew C. Singer, Brian Stewart, Rachel Switzky, Sameh Tawfick, Matthew Wheeler, Karen White, Evan M. Widloski, Eric Wood, Charles Wood, and Abigail R. Wooldridge
10.1371/journal.pone.0244963
2020
PLoS ONE
Emergency ventilator for COVID-19
The COVID-19 pandemic disrupted the world in 2020 by spreading at unprecedented rates and causing tens of thousands of fatalities within a few months. The number of deaths dramatically increased in regions where the number of patients in need of hospital care exceeded the availability of care. Many COVID-19 patients experience Acute Respiratory Distress Syndrome (ARDS), a condition that can be treated with mechanical ventilation. In response to the need for mechanical ventilators, we designed and tested an emergency ventilator (EV) that can control a patient's peak inspiratory pressure (PIP) and breathing rate, while keeping a positive end expiratory pressure (PEEP). This article describes the rapid design, prototyping, and testing of the EV. The development process was enabled by rapid design iterations using additive manufacturing (AM). In the initial design phase, iterations between design, AM, and testing enabled a working prototype within one week. The designs of the 16 different components of the ventilator were locked by additively manufacturing and testing a total of 283 parts having parametrically varied dimensions. In the second stage, AM was used to produce 75 functional prototypes to support engineering evaluation and animal testing. The devices were tested over more than two million cycles. We also developed an electronic monitoring system with an automatic alarm to provide for safe operation, along with training materials and user guides. The final designs are available online under a free license. The designs have been transferred to more than 70 organizations in 15 countries. This project demonstrates the potential for ultra-fast product design, engineering, and testing of medical devices needed for COVID-19 emergency response.
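As a rough illustration of the pressure-cycled behavior the abstract describes (cycling between PIP and PEEP at a set breathing rate), here is a toy waveform sketch. The pressures, rate, and I:E ratio are generic placeholder settings and say nothing about the RapidVent's actual waveform or clinical settings.

```python
# Idealized square pressure waveform alternating between PIP and PEEP.
import numpy as np

PIP_CMH2O = 25.0       # assumed peak inspiratory pressure
PEEP_CMH2O = 5.0       # assumed positive end-expiratory pressure
RATE_BPM = 20.0        # breaths per minute
IE_RATIO = 1 / 2       # inspiratory:expiratory time ratio (1:2)

period_s = 60.0 / RATE_BPM
t_insp = period_s * IE_RATIO / (1 + IE_RATIO)

def airway_pressure(t_s: float) -> float:
    """PIP during the inspiratory phase of each cycle, PEEP otherwise."""
    phase = t_s % period_s
    return PIP_CMH2O if phase < t_insp else PEEP_CMH2O

t = np.arange(0.0, 2 * period_s, 0.05)
print([airway_pressure(x) for x in t[:10]])  # first half-second of the trace
```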
905618
Game time and direction of travel are associated with college football team performance
DARIEN, IL - A study of NCAA Division I college football games found a significant association between the performance of away teams and both their direction of travel and the time of day when games were played. Results show that away teams playing in the afternoon allowed 5% more points and forced 13% more opponent turnovers than those playing in the evening. Teams traveling eastward to play on the road threw 39% more interceptions than those traveling in the same time zone. There was also a significant interaction between direction of travel and time of day for points allowed, and a marginal interaction for points scored. "Most notably, we found that teams playing in the afternoon actually allowed more points and forced more opponent turnovers than teams playing in the evening," said faculty advisor Sean Pradhan, who has a doctorate in sport management and is an assistant professor of sports management and business analytics in the School of Business Administration at Menlo College in Atherton, California. "Our results also showed that teams traveling eastward threw more interceptions than teams traveling in the same time zone." Pradhan and co-investigator Micah Kealaiki-Sales analyzed data from 1,909 NCAA Division I college football games played by 64 "Power Five" conference teams during the 2014 to 2019 regular seasons (i.e., the College Football Playoff Era). All games played at neutral sites were excluded. Data were collected from the publicly available sports database, Sports-Reference. The researchers controlled for both visiting and home team conference, day of game, and team rankings. According to Pradhan and Kealaiki-Sales, the findings offer novel evidence for the influence of circadian factors on the performance of collegiate athletes. "Much of the past research on travel and sports performance has focused on professional teams," noted Pradhan. "Our study extends the literature by providing an examination of NCAA Division I college football teams, where research on the effects of travel has been relatively limited and findings have been inconsistent. Given that the level of student-athlete support naturally varies across colleges, our results do highlight the impact of certain factors in travel for coaches, staff, and even student-athletes to consider." The research abstract was published recently in an online supplement of the journal Sleep and will be presented as a poster from June 9 - Nov. 30 during Virtual SLEEP 2021. SLEEP is the annual meeting of the Associated Professional Sleep Societies, a joint venture of the American Academy of Sleep Medicine and the Sleep Research Society.
10.1093/sleep/zsab072.286
2021
SLEEP
287 East? I Thought You Said Weast! The Influence of Travel on College Football Team Performance
Introduction: Previous research in professional basketball and baseball has shown that traveling up to three hours westward can hamper performance due to circadian disadvantages. However, findings in the context of collegiate football are conflicting, as some prior studies have reported negative effects on scoring during either eastward or westward travel. The current study extends the literature by investigating the impact of travel on both offensive and defensive team performance within National Collegiate Athletic Association (NCAA) Division I college football. Methods: Following the NCAA’s introduction of the College Football Playoff in 2014, data from 1,909 away games from 64 “Power Five” conference teams played during the 2014 to 2019 regular seasons were collected from the publicly available sports database, Sports-Reference. For the purposes of our analyses, we excluded all games played at neutral sites. We examined the effects of the direction of travel away from the college’s home city and time of game day on visiting team performance, specifically game outcomes, points scored, points allowed, completion percentages, penalties, fumbles, interceptions, and total turnovers forced and committed, controlling for both visiting and home team conference, day of game, and team rankings. Results: Teams playing in the afternoon allowed significantly more points (OR = 1.05, p < .001) and forced more opponent turnovers than those playing in the evening (OR = 1.14, p = .05). Teams traveling eastward threw significantly more interceptions than those traveling in the same time zone (OR = 1.48, p = .004). A significant interaction between direction of travel and time of day was detected for points allowed (χ2 = 12.30, p = .02), and a marginal interaction was present for points scored (χ2 = 8.42, p = .08). Several other marginal differences were also identified for points scored, interceptions, and team turnovers (OR > 1.03, p < .10). Conclusion: Findings from our study offer evidence for the influence of circadian factors on team points allowed, interceptions, and opponent turnovers forced. Specifically, travel in varying directions and the time of day when a game is played can impact team performance during away games within college football. Support (if any): None
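To see how a "5% more points allowed" effect corresponds to an exponentiated regression coefficient of about 1.05, the sketch below fits a Poisson GLM to simulated scores. It is illustrative only: the scores and kickoff times are generated data, and the authors' models also controlled for conference, day of game, and rankings.

```python
# Toy Poisson GLM: exp(coefficient) for an afternoon-kickoff indicator recovers
# the simulated 5% increase in points allowed. Not a reconstruction of the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_games = 1909
afternoon = rng.integers(0, 2, size=n_games)             # 1 = afternoon kickoff, 0 = evening
base_rate = 27.0                                          # assumed mean points allowed in evening games
points_allowed = rng.poisson(base_rate * np.exp(np.log(1.05) * afternoon))

X = sm.add_constant(afternoon.astype(float))
fit = sm.GLM(points_allowed, X, family=sm.families.Poisson()).fit()
print("exp(coef) for afternoon vs. evening:", np.exp(fit.params[1]))  # ~1.05 by construction
```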
916542
Study: Meditation and ballet associated with wisdom
Wisdom is often linked with age, but not all elders are wise. So, what makes a person wise? A new study, "The Relationship between Mental and Somatic Practices and Wisdom," published Feb. 18, 2016, in PLOS ONE, confirms the age-old conception that meditation is associated with wisdom. Surprisingly, it also concludes that somatic (physical) practices such as classical ballet might lead to increased wisdom. "As far as I know this is the first study to be published that looks at the relationship between meditation or ballet and increased wisdom," said Monika Ardelt, associate professor of sociology at the University of Florida. Ardelt is a leading wisdom researcher who was not involved in the project. "That meditation is associated with wisdom is good to confirm, but the finding that the practice of ballet is associated with increased wisdom is fascinating. I'm not going to rush out and sign up for ballet, but I think this study will lead to more research on this question." The researchers included ballet in the study, "not expecting to find that it was associated with wisdom, but rather for comparison purposes," said Patrick B. Williams, lead author and a postdoctoral researcher in the University of Chicago's Department of Psychology. Williams is a member of a research project on somatic wisdom headed by principal investigators Berthold Hoeckner, associate professor of music; and Howard Nusbaum, professor of psychology. "The link between ballet and wisdom is mysterious to us and something that we're already investigating further," Williams said. This includes ongoing studies with adult practitioners of ballet, as well as among novices training at Chicago's Joffrey Ballet. Williams wants to track novices and seasoned practitioners of both meditation and ballet for months and years to see whether the association holds up over time. The published research was groundbreaking because science has overlooked somatic practices as a possible path to wisdom, Williams said. Unstudied topic "No studies have examined whether physical practices are linked to the cultivation of personal wisdom, nor have they theorized that this association might exist," the study stated. Understanding the kinds of experiences that are related to increases in wisdom is fundamental in two aspects of the UChicago research, Nusbaum said. "As we learn more about the kinds of experiences that are related to wisdom, we can gain insight into ways of studying the mechanisms that mediate wisdom. This also lets us shift from thinking about wisdom as something like a talent to thinking about it as something more like a skill," he said. "And if we think about wisdom as a skill, it is something we can always get better at, if we know how to practice." The researchers administered a self-reported survey to 298 participants using Survey Monkey, a popular Internet-based tool that is being used increasingly in scientific research. The survey asked about experience (both in number of years and hours of practice) as a teacher or student of four activities: meditation, the Alexander Technique (a method for improving posture, balance, coordination, and movement), the Feldenkrais Method (a form of somatic education that seeks to improve movement and physical function, reduce pain, and increase self-awareness), and classical ballet. It also included psychological questionnaires that asked about characteristics thought to be components of wisdom, such as empathy and anxiety. 
The results showed that those who practice meditation--vipassana (29 percent), mindfulness (23 percent), Buddhist (14 percent), and other types--had more wisdom, on average, than those in the three other groups. More importantly, it established for the first time that the link between meditation and wisdom might be attributable to a lower level of anxiety. "We are the first to show an association between wisdom, on the one hand, and mental and somatic practice, on the other," Williams said. "We're also the first to suggest that meditation's ability to reduce everyday anxiety might partially explain this relationship." Participants who practiced ballet had the lowest levels of wisdom. Nevertheless, the more they practiced ballet, the higher they scored on measures of psychological traits that are associated with wisdom. Causal relationship? Williams said it's important to note that the research was not looking for and did not establish a causal relationship between wisdom and any of the four practices. But the results suggest that further study could identify such a causal relationship. "We hope our exploratory research will encourage others to replicate our results and look for other experiences that are linked with wisdom, as well as the factors that might explain such links," Williams said. "Although wisdom, as an intellectual pursuit, is one of the oldest subjects studied by human-kind, it is one of the youngest, as a scientific pursuit," he added. Ardelt thinks this study will generate a lot of interest with the public and in the growing field of the study of wisdom, especially due to the current interest in meditation. "These findings indicate that meditation might have more benefits than as a stress-reduction or pain-reduction technique," she said. If mental and somatic practices can lead to more wisdom, "their applications should be explored across settings such as in the classroom or workplace with the goal of creating not only wiser people but also a wiser society," the researchers concluded. ###
10.1371/journal.pone.0149369
2016
PLoS ONE
The Relationship between Mental and Somatic Practices and Wisdom
In this study we sought to explore how experience with specific mental and somatic practices is associated with wisdom, using self-report measures of experience and wisdom. We administered standard surveys to measure wisdom and experience among four groups of practitioners of mental and somatic practices, namely, meditators, practitioners of the Alexander Technique, practitioners of the Feldenkrais Method, and classical ballet dancers. We additionally administered surveys of trait anxiety and empathy to all participants to explore possible mediating relationships of experience and wisdom by characteristics thought to be components of wisdom. Wisdom was higher on average among meditation practitioners, and lowest among ballet dancers, and this difference held when controlling for differences in age between practices, supporting the view that meditation is linked to wisdom and that ballet is not. However, we found that increased experience with meditation and ballet were both positively associated with wisdom, and that lowered trait anxiety mediated this positive association among meditation practitioners, and, non-significantly, among ballet dancers. These results suggest that not all practices that are purported to affect mental processing are related to wisdom to the same degree and different kinds of experience appear to relate to wisdom in different ways, suggesting different mechanisms that might underlie the development of wisdom with experience.
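The mediation claim in the abstract (that lower trait anxiety carries part of the link between practice experience and wisdom) follows the familiar indirect-effect logic. Below is a minimal, simulated sketch of that decomposition, not a reanalysis of the study's survey data; the variable scales and effect sizes are invented.

```python
# Baron-Kenny-style mediation sketch: indirect effect = a * b, on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 298
experience = rng.gamma(shape=2.0, scale=3.0, size=n)          # years of practice (hypothetical)
anxiety = 50 - 1.0 * experience + rng.normal(0, 5, n)         # path a: more practice, less anxiety
wisdom = 3.0 + 0.02 * experience - 0.03 * anxiety + rng.normal(0, 0.3, n)

a = sm.OLS(anxiety, sm.add_constant(experience)).fit().params[1]
full = sm.OLS(wisdom, sm.add_constant(np.column_stack([experience, anxiety]))).fit()
c_prime, b = full.params[1], full.params[2]

print("indirect (mediated) effect a*b:", a * b)
print("direct effect c':", c_prime)
```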
884183
Ladies, this is why fertility declines with age
Montreal, April 3, 2017 - Researchers at the University of Montreal Hospital Research Center (CRCHUM) have discovered a possible new explanation for female infertility. Thanks to cutting-edge microscopy techniques, they observed for the first time a specific defect in the eggs of older mice. This defect may also be found in the eggs of older women. The choreography of cell division goes awry, and causes errors in the sharing of chromosomes. These unprecedented observations are being published today in Current Biology. "We found that the microtubules that orchestrate chromosome segregation during cell division behave abnormally in older eggs. Instead of assembling a spindle in a controlled symmetrical fashion, the microtubules go in all directions. The altered movement of the microtubules apparently contributes to errors in chromosome segregation, and so represents a new explanation for age-related infertility," stated CRCHUM researcher and Université de Montréal professor Greg FitzHarris. Women -- and other female mammals -- are born with a fixed number of eggs, which remain dormant in the ovaries until the release of a single egg per menstrual cycle. But for women, fertility declines significantly at around the age of 35. "One of the main causes of female infertility is a defect in the eggs that causes them to have an abnormal number of chromosomes. These so-called aneuploid eggs become increasingly prevalent as a woman ages. This is a key reason that older women have trouble getting pregnant and having full-term pregnancies. It is also known that these defective eggs increase the risk of miscarriage and can cause Down's syndrome in full-term babies" explained FitzHarris. Scientists previously believed that eggs are more likely to be aneuploid with age because the "glue" that keeps the chromosomes together works poorly in older eggs. This is known as the "cohesion-loss" hypothesis. "Our work doesn't contradict that idea, but shows the existence of another problem: defects in the microtubules, which cause defective spindles and in doing so seem to contribute to a specific type of chromosome segregation error" asserted Professor FitzHarris. Microtubules are tiny cylindrical structures that organize themselves to form a spindle. This complex biological machine gathers the chromosomes together and sorts them at the time of cell division, then sends them to the opposite poles of the daughter cells in a process called chromosome segregation. "In mice, approximately 50% of the eggs of older females have a spindle with chaotic microtubule dynamics" declared FitzHarris. The researchers conducted a series of micromanipulations on the eggs of mice between the ages of 6 and 12 weeks (young) and 60-week-old mice (old). "We swapped the nuclei of the young eggs with those of the old eggs and we observed problems in the old eggs containing a young nucleus," explained Shoma Nakagawa, a postdoctoral research fellow at the CRCHUM and at the Université de Montréal. "This shows that maternal age influences the alignment of microtubules independently of the age of the chromosomes contained in the nuclei of each egg." Greg FitzHarris's team notes that spindle defects are also a problem in humans. In short, the cellular machinery works less efficiently in aged eggs, but this is not caused by the age of the chromosomes. This discovery may one day lead to new fertility treatments to help women become pregnant and carry a pregnancy to term. 
"We are currently exploring possible treatments for eggs that might one day make it possible to reverse this problem and rejuvenate the eggs," explained FitzHarris. Many more years of research will be needed before getting to this point. But understanding the precisely orchestrated choreography that unfolds within each egg during cell division will eventually allow us to correct the errors, to ensure the production of healthy eggs that can be fertilized. ### About this study The article Intrinsically defective microtubule dynamics contribute to age-related chromosome segregation errors in mouse oocyte meiosis-I was published on April 3, 2017 in Current Biology. This research initiative was funded by the Canadian Institutes of Health Research (MOP142334), the J.-Louis Lévesque Foundation and the Canada Foundation for Innovation (FCI32711). Greg FitzHarris is a researcher at the CRCHUM and a professor in the Department of Obstetrics and Gynecology at the Université de Montréal. Shoma Nakagawa is a postdoctoral research fellow at the CRCHUM and at the Université de Montréal. DOI: 10.1016/j.cub.2017.02.025. After publication, the article will be available at: http://www.cell.com/current-biology/fulltext/S0960-9822(17)30162-8 B-roll footage is available upon request. Source: University of Montreal Hospital Research Center (CRCHUM) Information: Isabelle Girard Information advisor CRCHUM Phone: +1 514 890-8000, extension 12725 | @CRCHUM [email protected]
10.1016/j.cub.2017.02.025
2017
Current Biology
Intrinsically Defective Microtubule Dynamics Contribute to Age-Related Chromosome Segregation Errors in Mouse Oocyte Meiosis-I
Chromosome segregation errors in mammalian oocytes compromise development and are particularly prevalent in older females, but the aging-related cellular changes that promote segregation errors remain unclear [1, 2]. Aging causes a loss of meiotic chromosome cohesion, which can explain premature disjunction of sister chromatids [3-7], but why intact sister pairs should missegregate in meiosis-I (termed non-disjunction) remains unknown. Here, we show that oocytes from naturally aged mice exhibit substantially altered spindle microtubule dynamics, resulting in transiently multipolar spindles that predispose the oocytes to kinetochore-microtubule attachment defects and missegregation of intact sister chromatid pairs. Using classical micromanipulation approaches, including reciprocally transferring nuclei between young and aged oocytes, we show that altered microtubule dynamics are not attributable to age-related chromatin changes. We therefore report that altered microtubule dynamics is a novel primary lesion contributing to age-related oocyte segregation errors. We propose that, whereas cohesion loss can explain premature sister separation, classical non-disjunction is instead explained by altered microtubule dynamics, leading to aberrant spindle assembly.
836128
Global biodiversity crisis is a large-scale reorganization, with greatest loss in tropical oceans
Local biodiversity of species - the scale on which humans feel contributions from biodiversity - is being rapidly reorganized, according to a new global analysis of biodiversity data from more than 200 studies, together representing all major biomes. The findings are important as historically, "it has been surprisingly difficult and controversial to find signals of ... global trends in biodiversity in the context of local ecosystems," write Brita Eriksson and Helmut Hillebrand in a related Perspective. The report also shows that changes to biodiversity are greatest and most variable in the oceans - particularly in tropical marine biomes - which are hotspots of species richness loss. These results may help inform conservation prioritization. While there is little doubt that the impacts of climate change and other human activities are causing unprecedented alterations to biodiversity worldwide, recognizing the global trends of decline in the context of local ecosystems has been challenging. Global biodiversity projections are often at odds with the highly variable trends observed at local levels, which suggests a possibility that current biodiversity change has geographic foundations. To explore the geography of biodiversity change, Shane Blowes and colleagues mapped trends in the richness and composition of biodiversity across marine, terrestrial and freshwater realms worldwide using the BioTIME database, the largest collection of local biodiversity time series data to date. In their analysis, Blowes et al. did not identify an overall trend of global species loss but instead showed that the composition of local species assemblages is rapidly being reorganized on a global scale. This restructuring, too, can have severe consequences on ecosystem functioning. The findings suggest that our understanding of biodiversity loss - as well as our efforts to stem the tides of change - needs to be conditional on context and location, the authors say. "Blowes et al. thus highlight that the global biodiversity crisis, at least for now, isn't primarily about decline, but about large-scale reorganization," write Eriksson and Hillebrand.
10.1126/science.aaw1620
2019
Science
The geography of biodiversity change in marine and terrestrial assemblages
Spatial structure of species change: Biodiversity is undergoing rapid change driven by climate change and other human influences. Blowes et al. analyze the global patterns in temporal change in biodiversity using a large quantity of time-series data from different regions (see the Perspective by Eriksson and Hillebrand). Their findings reveal clear spatial patterns in richness and composition change, where marine taxa exhibit the highest rates of change. The marine tropics, in particular, emerge as hotspots of species richness losses. Given that human activities are affecting biodiversity in magnitudes and directions that differ across the planet, these findings will provide a much needed biogeographic understanding of biodiversity change that can help inform conservation prioritization. Science, this issue p. 339; see also p. 308
972162
Mom’s dietary fat rewires male and female brains differently
More than half of all women in the United States are overweight or obese when they become pregnant. While being or becoming overweight during pregnancy can have potential health risks for moms, there are also hints that it may tip the scales for their kids to develop psychiatric disorders like autism or depression, which often affect one gender more than the other. What hasn’t been understood, however, is how the accumulation of fat tissue in mom might signal through the placenta in a sex-specific way and rearrange the developing offspring’s brain. To fill this gap, Duke postdoctoral researcher Alexis Ceasrine, Ph.D., and her team in the lab of Duke psychology & neuroscience professor Staci Bilbo, Ph.D., studied pregnant mice on a high-fat diet. In findings appearing November 28 in the journal Nature Metabolism, they found that mom’s high-fat diet triggers immune cells in the developing brains of male but not female mouse pups to overconsume the mood-influencing brain chemical serotonin, leading to depressed-like behavior. The researchers said a similar thing may be happening in humans, too. People with mood disorders like depression often lose interest in pleasurable activities. For mice, one innately pleasurable activity is drinking sugar water. Since mice preferentially sip sugar water over plain tap when given the choice, Ceasrine measured their drink preference as an estimate for depression. Males, but not females, born to moms on a high-fat diet lacked a preference for simple syrup over tap water. This rodent version of depression suggested to Ceasrine that mom’s nutrition while pregnant must have changed their male offspring’s brain during development. One immediate suspect was serotonin. Often called the “happy” chemical, serotonin is a molecular brain messenger that’s typically reduced in people with depression. Ceasrine and her team found that depressed-like male mice from high-fat diet moms had less serotonin in their brain both in the womb and as adults, suggesting these early impacts have lifelong consequences. Supplementing mom’s high-fat rodent chow with tryptophan, the chemical precursor to serotonin, restored males’ preference for sugar water and brain serotonin levels. Still, it was unclear how fat accumulation in mom lowered serotonin in their offspring. To get at this, the team investigated the resident immune cells of the brain: microglia. Microglia are the understudied Swiss Army knives of the brain. Their jobs include serving as a security monitor for pathogens as well as a hearse to haul away dead nerve cells. Microglia also have ample space and appetites to consume healthy brain cells whole. To see if microglia were overindulging in serotonin, Ceasrine analyzed the contents of their cellular “stomach”, the phagosome, with 3D imaging, and found that males born to moms on high-fat diets had microglia packed with more serotonin than those born to moms on a typical diet. This indicated that elevated fat accumulation during pregnancy somehow signals through the male but not female placenta to microglia and instructs them to overeat serotonin cells. How fat can signal through the placental barrier remained a mystery, though. One thought was that bacteria were to blame. “There's a lot of evidence that when you eat a high fat diet, you actually end up with endotoxemia,” Ceasrine said.
“It basically means that you have an increase in circulating bacteria in your blood, or endotoxins, which are just parts of bacteria.” To test if endotoxins could be the critical messenger from mom to enwombed males, the team measured their presence and found that, indeed, high-fat diets during pregnancy beefed up endotoxin levels in the placenta and their offspring’s developing brain. Ceasrine said this may explain how fat accumulation triggers an immune response from microglia by increasing the presence of bacteria, resulting in overconsumed brain cells in male mice. To see whether this may be true of humans as well, Ceasrine teamed up with Susan Murphy, Ph.D., a Duke School of Medicine associate professor in obstetrics and gynecology, who provided placental and fetal brain tissue from a previous study. Just as the researchers observed in mice, they found that the more fat measured in human placental tissue, the less serotonin was detected in the brains of males but not females. Bilbo and Ceasrine are now starting to work out how and why female offspring are impacted differently when mom amasses high levels of fat during pregnancy. Fat doesn’t lead to depression in female mice, but it does make them less social, perhaps due to an overconsumption of the pro-social hormone oxytocin, instead of serotonin. For now, this research highlights that not all placentas are created equally. This work may one day help guide clinicians and parents in better understanding and possible treatment or prevention of the origins of some mood disorders by considering early environmental factors, like fat accumulation during gestation. So, why would the placenta treat male and female fetuses differently? Ceasrine was initially stumped when a student asked a similar question after a talk she gave to Bilbo’s class. Bilbo laughed and reiterated the question. But now they think they have it figured out.  “I was hugely pregnant at the time, and I was like, ‘Oh, wait. Pregnancy!’” Ceasrine recalled. “Men never have to carry a fetus, so they never have to worry about the kind of immune response of self versus non-self that you have to do when you're a woman and you carry a baby.” Support for the research came from the US National Institutes of Health (F32HD104430, R01ES025549), the Robert and Donna Landreth Family Foundation, and the Charles Lafitte Foundation. CITATION: “Maternal Diet Disrupts the Placenta-Brain Axis in a Sex-Specific Manner,” Alexis M. Ceasrine, Benjamin A. Devlin, Jessica L. Bolton, Lauren A. Green, Young Chan Jo, Carolyn Huynh, Bailey Patrick, Kamryn Washington, Cristina L. Sanchez, Faith Joo, A. Brayan Campos-Salazar, Elana R. Lockshin, Cynthia Kuhn, Susan K. Murphy, Leigh Ann Simmons, Staci D. Bilbo. Nature Metabolism, Nov. 28, 2022. DOI: 10.1038/s42255-022-00693-8
10.1038/s42255-022-00693-8
2022
Nature Metabolism
Maternal diet disrupts the placenta–brain axis in a sex-specific manner
High maternal weight is associated with a number of detrimental outcomes in offspring, including increased susceptibility to neurological disorders such as anxiety, depression, and communicative disorders (e.g. autism spectrum disorders) [1-8]. Despite widespread acknowledgement of sex-biases in the prevalence, incidence, and age of onset of these disorders, few studies have investigated potential sex-biased mechanisms underlying disorder susceptibility. Here, we use a mouse model to demonstrate how maternal high-fat diet, one contributor to overweight, causes endotoxin accumulation in fetal tissue, and subsequent perinatal inflammation influences sex-specific behavioral outcomes in offspring. In male high-fat diet offspring, increased macrophage toll like receptor 4 signaling results in excess phagocytosis of serotonin neurons in the developing dorsal raphe nucleus, decreasing serotonin bioavailability in the fetal and adult brain. Bulk sequencing from a large cohort of matched first trimester human fetal brain, placenta, and maternal decidua samples reveals sex-specific transcriptome-wide changes in placenta and brain tissue in response to maternal triglyceride accumulation (a proxy for dietary fat content). Further, we find that fetal brain serotonin levels decrease as maternal dietary fat intake increases in males only. These findings uncover a microglia-dependent mechanism through which maternal diet may impact offspring susceptibility for neuropsychiatric disorder development in a sex-specific manner.
582202
First identification of brain's preparation for action
Neuroscientists at Bangor University (Wales, UK) and University College London (UCL) have, for the first time, identified the processes which occur in our brains milliseconds before we undertake a series of movements, crucial for speech, handwriting, sports or playing a musical instrument. They have done so by measuring tiny magnetic fields outside the participants' head and identifying unique patterns making up each sequence before it is executed. They identified differences between neural patterns which lead to a more skilled as opposed to a more error-prone execution. The research, funded by the Wellcome Trust, is published in Neuron on 7 February 2019 (doi: 10.1016/j.neuron.2019.01.018). Following further research, this new information could lead to the development of interventions which would assist with rehabilitation post-stroke or improve life for people living with stutter, dyspraxia or other similar conditions. Lead author Dr Katja Kornysheva, of Bangor University's School of Psychology, explained the significance of the findings: "Using a non-invasive technique which measures ongoing brain activity on a millisecond-by-millisecond basis, we were able to track brain activity as participants prepared and then moved their fingers from memory. This revealed that the brain prepares for complex actions in the milliseconds beforehand by 'stacking' the actions to be taken in the correct order." "Reviewing the brain impulses, we could see that when participants were producing the sequences correctly and accurately, with no errors, each activity was spaced and ordered in advance of being executed. However, when mistakes occurred, the 'queuing' of actions was visibly less well-defined, with the actions less separate and distinct. It appears that the more closely bunched and less-defined the 'queuing' was, the more participants committed errors in sequence production and timing." Co-researcher and author Prof Neil Burgess, of UCL's Institute of Cognitive Neuroscience, commented: "To our surprise we also found that this preparatory pattern primarily reflects a template for position (1st, 2nd, 3rd and so on) which can be reused across sequences - like cabinet drawers into which one can put different objects. This is a way for the brain to be efficient and flexible, by providing a blueprint for new sequences and staying organized." "Unfortunately, disorders of sequence control and fluency can severely disrupt everyday life - with no treatment being predictively effective. Eventually, this research could lead to the development of 'as it happens' decoders which would provide instant feedback to the user. This could help to 'train' their brains to create the correct preparation state and overcome difficulties such as a stutter or dyspraxia. We hope that the new understanding could also lead to the development of improved brain-computer interfaces, used by people who are 'locked-in' or have whole body paralysis." ###
10.1016/j.neuron.2019.01.018
2019
Neuron
Neural Competitive Queuing of Ordinal Structure Underlies Skilled Sequential Action
Fluent retrieval and execution of movement sequences is essential for daily activities, but the neural mechanisms underlying sequence planning remain elusive. Here participants learned finger press sequences with different orders and timings and reproduced them in a magneto-encephalography (MEG) scanner. We classified the MEG patterns for each press in the sequence and examined pattern dynamics during preparation and production. Our results demonstrate the "competitive queuing" (CQ) of upcoming action representations, extending previous computational and non-human primate recording studies to non-invasive measures in humans. In addition, we show that CQ reflects an ordinal template that generalizes across specific motor actions at each position. Finally, we demonstrate that CQ predicts participants' production accuracy and originates from parahippocampal and cerebellar sources. These results suggest that the brain learns and controls multiple sequences by flexibly combining representations of specific actions and interval timing with high-level, parallel representations of sequence position.
480823
Controlled dynamics of colloidal rods
Colloidal particles have become increasingly important for research as vehicles of biochemical agents. In future, it will be possible to study their behaviour much more efficiently than before by placing them on a magnetised chip. A research team from the University of Bayreuth reports on these new findings in the journal Nature Communications. The scientists have discovered that colloidal rods can be moved on a chip quickly, precisely, and in different directions, almost like chess pieces. A pre-programmed magnetic field even enables these controlled movements to occur simultaneously. For the recently published study, the research team, led by Prof. Dr. Thomas Fischer, Professor of Experimental Physics at the University of Bayreuth, worked closely with partners at the University of Poznań and the University of Kassel. To begin with, individual spherical colloidal particles constituted the building blocks for rods of different lengths. These particles were assembled in such a way as to allow the rods to move in different directions on a magnetised chip like upright chess figures - as if by magic, but in fact determined by the characteristics of the magnetic field. In a further step, the scientists succeeded in eliciting individual movements in various directions simultaneously. The critical factor here was the "programming" of the magnetic field with the aid of a mathematical code, which, in encoded form, outlines all the movements to be performed by the figures. When these movements are carried out simultaneously, they take up to one tenth of the time needed if they are carried out one after the other like the moves on a chessboard. "The simultaneity of differently directed movements makes research into colloidal particles and their dynamics much more efficient," says Adrian Ernst, doctoral student in the Bayreuth research team and co-author of the publication. "Miniaturised laboratories on small chips measuring just a few centimetres in size are being used more and more in basic physics research to gain insights into the properties and dynamics of materials. Our new research results reinforce this trend. Because colloidal particles are in many cases very well suited as vehicles for active substances, our research results could be of particular benefit to biomedicine and biotechnology," says Mahla Mirzaee-Kakhki, first author and Bayreuth doctoral student.
10.1038/s41467-020-18467-9
2020
Nature Communications
Simultaneous polydirectional transport of colloidal bipeds
Detailed control over the motion of colloidal particles is relevant in many applications in colloidal science such as lab-on-a-chip devices. Here, we use an external magnetic field to assemble paramagnetic colloidal spheres into colloidal rods of several lengths. The rods reside above a square magnetic pattern and are transported via modulation of the direction of the external magnetic field. The rods behave like bipeds walking above the pattern. Depending on their length, the bipeds perform topologically distinct classes of protected walks above the pattern. We demonstrate that it is possible to design parallel polydirectional modulation loops of the external field that command up to six classes of bipeds to walk on distinct predesigned paths. We use such parallel polydirectional loops to induce the collision of reactant bipeds, their polymerization addition reaction to larger bipeds, the separation of product bipeds from the educts, the sorting of different product bipeds, and also the parallel writing of a word consisting of several different letters.
634450
Neanderthals used resin 'glue' to craft their stone tools
10.1371/journal.pone.0213473
2019
PLoS ONE
Hafting of Middle Paleolithic tools in Latium (central Italy): New data from Fossellone and Sant’Agostino caves
Hafting of stone tools was an important advance in the technology of the Paleolithic. Evidence of hafting in the Middle Paleolithic is growing and is not limited to points hafted on spears for thrusting or throwing. This article describes the identification of adhesive used for hafting on a variety of stone tools from two Middle Paleolithic caves in Latium, Fossellone Cave and Sant'Agostino Cave. Analysis of the organic residue by gas chromatography/mass spectrometry shows that a conifer resin adhesive was used, in one case mixed with beeswax. Contrary to previous suggestions that the small Middle Paleolithic tools of Latium could be used by hand and that hafting was not needed since it did not improve their functionality, our evidence shows that hafting was used by Neandertals in central Italy. Ethnographic evidence indicates that resin, which dries when exposed to air, is generally warmed by exposure to a small fire thus softened to be molded and pushed in position in the haft. The use of resin at both sites suggests regular fire use, as confirmed by moderate frequencies of burnt lithics in both assemblages. Lithic analysis shows that hafting was applied to a variety of artifacts, irrespective of type, size and technology. Prior to our study evidence of hafting in the Middle Paleolithic of Italy was limited to one case only.
645332
Ultrasound selectively damages cancer cells when tuned to correct frequencies
Doctors have used focused ultrasound to destroy tumors without invasive surgery for some time. However, the therapeutic ultrasound used in clinics today indiscriminately damages cancer and healthy cells alike. Most forms of ultrasound-based therapy either use high-intensity beams to heat and destroy cells or rely on special contrast agents that are injected prior to ultrasound, which can shatter nearby cells. Heat can harm healthy cells as well as cancer cells, and contrast agents only work for a minority of tumors. Researchers at the California Institute of Technology and City of Hope Beckman Research Institute have developed a low-intensity ultrasound approach that exploits the unique physical and structural properties of tumor cells to target them and provide a more selective, safer option. By scaling down the intensity and carefully tuning the frequency to match the target cells, the group was able to break apart several types of cancer cells without harming healthy blood cells. Their findings, reported in Applied Physics Letters, from AIP Publishing, are a new step in the emerging field called oncotripsy, the singling out and killing of cancer cells based on their physical properties. "This project shows that ultrasound can be used to target cancer cells based on their mechanical properties," said David Mittelstein, lead author on the paper. "This is an exciting proof of concept for a new kind of cancer therapy that doesn't require the cancer to have unique molecular markers or to be located separately from healthy cells to be targeted." A solid mechanics lab at Caltech first developed the theory of oncotripsy, based on the idea that cells are vulnerable to ultrasound at specific frequencies--like how a trained singer can shatter a wine glass by singing a specific note. The Caltech team found that, at certain frequencies, low-intensity ultrasound caused the cellular skeleton of cancer cells to break down, while nearby healthy cells were unscathed. "Just by tuning the frequency of stimulation, we saw a dramatic difference in how cancer and healthy cells responded," Mittelstein said. "There are many questions left to investigate about the precise mechanism, but our findings are very encouraging." The researchers hope their work will inspire others to explore oncotripsy as a treatment that could one day be used alongside chemotherapy, immunotherapy, radiation and surgery. They plan to gain a better understanding of what specifically occurs in a cell impacted by this form of ultrasound.
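For intuition only, the frequency selectivity described above behaves like a driven, damped oscillator, which responds strongly only near its natural frequency. The sketch below is not the study's cell model; the natural frequency, damping ratio and drive frequencies are invented placeholders.

```python
# Toy resonance illustration -- not the oncotripsy model from the paper.
# A driven, damped harmonic oscillator responds strongly only near its
# natural frequency, the intuition behind frequency-tuned ultrasound.
import numpy as np

def relative_amplitude(f_drive, f_natural, damping_ratio):
    """Steady-state amplitude of a driven, damped oscillator (arbitrary units)."""
    r = f_drive / f_natural
    return 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * damping_ratio * r) ** 2)

f_cell = 0.5e6  # hypothetical "natural" frequency of a cell structure, in Hz
for f in (0.2e6, 0.5e6, 1.0e6):  # hypothetical drive frequencies, in Hz
    print(f"{f / 1e6:.1f} MHz drive -> relative response {relative_amplitude(f, f_cell, 0.05):.1f}")
```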
10.1063/1.5100292
2019
Applied Physics Letters
Narrow band photoacoustic Lamb wave generation for nondestructive testing using candle soot nanoparticle patches
The generation of ultrasonic surface waves with a photoacoustic-laser-source has become useful for the noncontact nondestructive testing and evaluation (NDT&E) of materials and structures. In this work, a hybrid ultrasound based NDT&E method is proposed based on the photoacoustic-laser-source as a noncontact Lamb wave generator by incorporating a line-arrayed patterned candle soot nanoparticle-polydimethylsiloxane (CSNPs-PDMS) patch as the signal amplifier and with a narrow bandwidth. The CSNP-PDMS composite has been investigated as the functional patch for its laser energy absorption efficiency, fast thermal diffusion, and large thermoelastic expansion capabilities. The signal amplitude (in mW) from the CSNP-PDMS patch exhibits 2.3 times higher amplitude than the no patch condition and a narrower bandwidth than other conditions. Furthermore, improvement in the sensitivity is also achieved through the selection of the aluminum nitride sensing system. The overall combination of the Lamb wave generation and receiver-sensing system in this study is found to be very promising for a broad range of noncontact NDT&E applications.
932724
Veterinary science: Almost a quarter of a million unowned cats estimated in UK urban areas
The number of unowned cats in urban areas of the UK is estimated to be 247,429, according to a modelling study published in Scientific Reports. The authors suggest that urban areas with higher human density and deprivation may have more unowned cats (feral, lost or abandoned cats). Drs. Jenni McDonald and Elizabeth Skillings modelled data from 3,101 surveys of residents in five urban towns and cities in England (Beeston, Bradford, Bulwell, Dunstable and Houghton Regis, and Everton) between 2016 and 2018. The authors analysed these findings alongside 877 separate resident reports and 601 expert reports. The two significant factors that predicted unowned cats in the model were socioeconomic deprivation (which predicted 31% of the variation in unowned cat abundance) and human population density (which predicted 7% of the variation in unowned cat abundance). The authors scaled up their model to estimate the densities of unowned cats in England and across the UK using data on human population density and deprivation. The findings suggest that there are on average 9.3 unowned cats per km2 in the UK, but the number varies between 1.9 and 57 unowned cats per km2 depending on the location. The authors suggest that in areas with greater human population densities there may be more owned pet cats, which can produce accidental litters, be abandoned or stray from home. Unowned cats in areas with greater human population density may also be supported through access to nutrition sources such as human food waste, according to the authors. The authors speculate that in high deprivation areas, barriers to the timely neutering of pet cats, which prevents them from breeding, may be related to higher densities of unowned cats. The authors caution that their model is based on data estimates and many factors may influence populations in each area, but suggest that the model provides an insight into the densities of unowned cats in the UK and may help guide interventions to manage these populations.
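As a rough illustration of the scaling-up step described above, the sketch below applies a toy linear predictor for unowned-cat density to a handful of areas and sums the result. It is not the authors' Integrated Abundance Model; every coefficient, deprivation score, human density and area value is invented.

```python
# Illustrative only -- not the Integrated Abundance Model from the study.
# A toy linear predictor for unowned cats per km^2 is applied to a few
# hypothetical areas and scaled up to a total count.

def predicted_density(deprivation_index, people_per_km2,
                      intercept=2.0, b_deprivation=0.8, b_people=0.001):
    """Toy prediction of unowned cats per km^2 (all coefficients made up)."""
    return intercept + b_deprivation * deprivation_index + b_people * people_per_km2

# Hypothetical areas: (deprivation index, humans per km^2, area in km^2)
areas = [(3.1, 4200, 12.5), (1.2, 900, 30.0), (4.5, 6100, 8.2)]

total_cats = sum(predicted_density(dep, pop) * area for dep, pop, area in areas)
mean_density = total_cats / sum(area for _, _, area in areas)
print(f"estimated unowned cats across these areas: {total_cats:.0f}")
print(f"mean density: {mean_density:.1f} cats per km^2")
```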
10.1038/s41598-021-99298-6
2021
Scientific Reports
Human influences shape the first spatially explicit national estimate of urban unowned cat abundance
Globally, unowned cats are a common element of urban landscapes, and the focus of diverse fields of study due to welfare, conservation and public health concerns. However, their abundance and distribution are poorly understood at large spatial scales. Here, we use an Integrated Abundance Model to counter biases that are inherent in public records of unowned cat sightings to assess important drivers of their abundance from 162 sites across five urban towns and cities in England. We demonstrate that deprivation indices and human population densities contribute to the number of unowned cats. We provide the first spatially explicit estimates of expected distributions and abundance of unowned cats across a national scale and estimate the total UK urban unowned cat population to be 247,429 (95% credible interval: 157,153 to 365,793). Our results provide a new baseline and approach for studies on unowned cats and links to the importance of human-mediated effects.
977568
Migraine: how to diagnose, manage and prevent
Migraine is a major cause of disability, affecting about 12% of people. A 2-part series published in CMAJ (Canadian Medical Association Journal) on diagnosing and managing the condition with both acute and preventive therapy provides guidance for clinicians: https://www.cmaj.ca/lookup/doi/10.1503/cmaj.211969. "The goal of treatment of migraine attacks is to provide rapid relief from pain and other migraine-related symptoms, to restore patient function and to prevent recurrence," writes Dr. Tommy Chan, Department of Clinical Neurological Sciences, Western University, London, Ontario, with coauthors. "A stratified approach to treatment that empowers patients to choose from different options, depending on attack symptoms and severity, and encourages them to combine medications from different classes (e.g., nonsteroidal anti-inflammatory drugs and triptans) for severe or prolonged attacks, is preferred." Part 2 of the review, which will be published February 6, focuses on preventive treatment to reduce the frequency and severity of migraine attacks. Canadian Medical Association
10.1503/cmaj.211969
2023
Canadian Medical Association Journal
Diagnosis and acute management of migraine
See related review article at www.cmaj.ca/lookup/doi/10.1503/cmaj.221607 (to be published February 6, 2023) and a first-person account of the difficulty of finding migraine treatment at www.cmaj.ca/lookup/doi/10.1503/cmaj.221813. KEY POINTS: Migraine affects about 12% of adults, with a
777653
Use of HINTS exam in emergency department is of limited value
DES PLAINES, IL - The diagnostic value of the Head-Impulse, Nystagmus, Test of Skew (HINTS) exam in the emergency department setting is limited. This is the result of a study titled Diagnostic Accuracy of the HINTS Exam in an Emergency Department: A Retrospective Chart Review, which will be published in the April issue of the Academic Emergency Medicine (AEM) journal, a peer-reviewed journal of the Society for Academic Emergency Medicine (SAEM). The lead author of the study is Cait Dmitriew, PhD, from the department of undergraduate medicine at the Northern Ontario School of Medicine, Sudbury, Ontario, Canada. The HINTS exam is a series of bedside ocular motor tests designed to distinguish between central and peripheral causes of dizziness in patients with continuous dizziness, nystagmus, and gait unsteadiness. The study found that use of the HINTS exam was high, but that it was frequently applied to patients who did not meet the criteria to receive it. Most often this was because patients lacked documentation of nystagmus or described their symptoms as intermittent. In addition, many patients received both HINTS and Dix-Hallpike exams, which are intended for use in mutually exclusive patient populations. In no case was dizziness due to a central cause identified using the HINTS exam. The results suggest that the test is of limited utility as currently used by emergency department physicians and that further training in how to identify appropriate candidates and interpret the results of the ocular motor exam may improve its diagnostic accuracy. The authors advise that additional training of emergency physicians may be required to improve test sensitivity and specificity.
10.1111/acem.14171
2020
Academic Emergency Medicine
Diagnostic Accuracy of the HINTS Exam in an Emergency Department: A Retrospective Chart Review
Introduction: The HINTS exam is a series of bedside ocular motor tests designed to distinguish between central and peripheral causes of dizziness in patients with continuous dizziness, nystagmus, and gait unsteadiness. Previous studies, where the HINTS exam was performed by trained specialists, have shown excellent diagnostic accuracy. Our objective was to assess the diagnostic accuracy of the HINTS exam as performed by emergency physicians on patients presenting to the emergency department (ED) with a primary complaint of vertigo or dizziness. Methods: A retrospective cohort study was performed using data from patients who presented to a tertiary care ED between September 2014 and March 2018 with a primary complaint of vertigo or dizziness. Patient characteristics of those who received the HINTS exam were assessed along with sensitivity and specificity of the test to rule out a central cause of stroke. Results: A total of 2,309 patients met criteria for inclusion in the study. Physician uptake of the HINTS exam was high, with 450 (19.5%) dizzy patients receiving all or part of the HINTS. A large majority of patients (96.9%) did not meet criteria for receiving the test as described in validation studies; most often this was because patients lacked documentation of nystagmus or described their symptoms as intermittent. In addition, many patients received both HINTS and Dix-Hallpike exams, which are intended for use in mutually exclusive patient populations. In no case was dizziness due to a central cause identified using the HINTS exam. Conclusions: Our results suggest that despite widespread use of the HINTS exam in our ED, its diagnostic value in that setting was limited. The test was frequently used in patients who did not meet criteria to receive the HINTS exam (i.e., continuous vertigo, nystagmus, and unsteady gait). Additional training of emergency physicians may be required to improve test sensitivity and specificity.
589550
Penn study: Today's most successful fish weren't always evolutionary standouts
The vast majority of living fish species--approximately 96 percent--are known as teleosts, a group of ray-finned fish that emerged 260 million years ago. Evolutionary biologists and paleontologists since Darwin have offered hypotheses to explain why teleosts seem to have "out-evolved" other groups. The closely related holosteans, for example, once dominated the oceans but are now considered "living fossils," representing just eight species in forms that resemble those of the past. But this view of the teleost success story may be based on the false premise that teleosts dominate today because they have always been more evolutionarily innovative than other groups. A new analysis of more than a thousand fossil fishes from nearly 500 species led by the University of Pennsylvania's John Clarke revealed that the teleosts' success story is not as straightforward as once believed. Examining the first 160 million years of teleost and holostean evolution, from the Permian to the early Cretaceous periods, the scientists show that holosteans were as evolutionarily innovative as teleosts, and perhaps even more so. "A lot of these so-called living fossils that appear to be kind of boring today actually have a pretty rich history," said Clarke. "If we were to go back in time to the Triassic and you had to place a bet on which group was going to do better going forward, you would have definitely chosen the holosteans. It just didn't work out that way." Clarke collaborated with Graeme T. Lloyd of Macquarie University and Matt Friedman of the University of Oxford on the work, which appears in Proceedings of the National Academy of Sciences. It's easy to see why scientists have long presumed teleosts exceptional. They represent 29,000 diverse species worldwide, roughly half of modern vertebrates. In contrast, the eight living species that comprise holosteans share a resemblance, and all dwell in the freshwaters of eastern North America. Numerous ideas have been put forward to explain teleost success, including the flexible structure of their jaws, a diversity of reproductive strategies and the symmetry of their tail fins. With the emergence of molecular and genetic techniques to probe evolution, researchers have also attributed teleost success to a genome duplication event in the evolutionary past that left the fish with twice the number of chromosomes and thus more raw material with which to acquire beneficial mutations and to evolve. Yet Clarke and colleagues wanted to back up a bit, questioning the very assumption that teleosts had always been more evolutionarily innovative and successful. "There were times in the past when holosteans were top dog," Clarke said. "There are lots of holostean fossils, and they were quite diverse, not only in numbers but in the wide variety of sizes and shapes they possessed." It was known from the fossil record that holosteans appeared to be more dominant in the Triassic Period on into the Early and Middle Jurassic. In the Late Jurassic, however, teleosts began to take over. The researchers decided, therefore, to focus on the earlier period of fish evolution, starting in the Permian, which just preceded the Triassic period, and following it through 160 million years into the Early Cretaceous, which followed the Jurassic. To do so, they relied on a dataset that included the size and shape of hundreds of fossils Clarke had compiled during visits to 15 museums as part of his Ph.D. research.
They also constructed "supertrees," to summarize the relationships of nearly all known extinct species of holosteans and teleosts from the Triassic, Jurassic and Early Cretaceous. These large trees were built from more than 100 smaller trees already available in the paleontology literature, from studies that examined the morphological traits of fishes to work out their evolutionary tree. While other researchers have examined patterns of diversity in fish fossils, no one had ever applied a quantitative framework to determine whether holostean or teleost fishes possessed higher rates, or greater innovation, in shape and size. The Penn-led scientists were able to use the supertrees to evaluate first the rate of size evolution in teleosts versus holosteans and then to compare the degree of shape innovation in the two groups. In their various analyses of the specimens, Clarke and colleagues found no support for the expectation that teleosts would change their body sizes and shapes faster, or be better able to "invent" new sizes and shapes compared with holosteans. On the contrary, using timescales from molecular studies that suggested holosteans and teleosts evolved much earlier in Earth's history than when their first fossils appear, holosteans seemed to come out on top, appearing more innovative at evolving new sizes and faster at evolving between different shapes. "There is no compelling evidence on any timescale that teleosts were the best at evolving different body sizes and shapes," said Clarke. "And in fact, if anything, there is some evidence hinting that maybe holosteans were more innovative when it came to evolving different body sizes and faster at changing shape." The researchers also used the dataset to investigate whether genome duplication correlated with an increase in evolution rate and innovation. They found no consistent link with size evolution but did see indications that shape evolution was elevated in the more geologically recent teleosts with duplicate genomes relative to more ancient groups of teleosts. However, this occurred because those more ancient teleosts were particularly slow at evolving shapes since they compare equally poorly with holosteans, rather than signifying any exceptional evolution in those teleosts with duplicate genomes. On this basis, the authors deem the role of genome duplication on size and shape evolution to be "ambiguous," suggesting that, in agreement with recent studies of diversification in living teleosts, genome duplication is not the magic bullet that explains the diversity of all teleosts. Clarke would like to continue delving into the history of neopterygian fishes, particularly those living fossils that are often neglected in favor of researching the more dynamic and diverse living teleosts. "Many biologists have focused upon trying to explain why some groups are so incredibly successful," he said. "But there hasn't been a lot of focus on the other end of the spectrum: how do you get living fossils, these species-poor, long-lived groups that stick around doing the same thing for millions of years." ### The study was supported by a Palaeontological Association Whittington Award, a Natural Environmental Research Council Cohort grant, an Australian Research Council grant, the Philip Leverhulme Prize and the John Fell Fund.
10.1073/pnas.1607237113
2016
Proceedings of the National Academy of Sciences
Little evidence for enhanced phenotypic evolution in early teleosts relative to their living fossil sister group
Since Darwin, biologists have been struck by the extraordinary diversity of teleost fishes, particularly in contrast to their closest "living fossil" holostean relatives. Hypothesized drivers of teleost success include innovations in jaw mechanics, reproductive biology and, particularly at present, genomic architecture, yet all scenarios presuppose enhanced phenotypic diversification in teleosts. We test this key assumption by quantifying evolutionary rate and capacity for innovation in size and shape for the first 160 million y (Permian-Early Cretaceous) of evolution in neopterygian fishes (the more extensive clade containing teleosts and holosteans). We find that early teleosts do not show enhanced phenotypic evolution relative to holosteans. Instead, holostean rates and innovation often match or can even exceed those of stem-, crown-, and total-group teleosts, belying the living fossil reputation of their extant representatives. In addition, we find some evidence for heterogeneity within the teleost lineage. Although stem teleosts excel at discovering new body shapes, early crown-group taxa commonly display higher rates of shape evolution. However, the latter reflects low rates of shape evolution in stem teleosts relative to all other neopterygian taxa, rather than an exceptional feature of early crown teleosts. These results complement those emerging from studies of both extant teleosts as a whole and their sublineages, which generally fail to detect an association between genome duplication and significant shifts in rates of lineage diversification.
960576
Feeling the pressure
Ikoma, Japan – Scientists from Nara Institute of Science and Technology (NAIST) have used elastic shell theory to describe how the stiffness of plant cell walls depends on their elasticity and internal turgor pressure. By utilizing atomic force microscopy (AFM) combined with finite element computer simulations, they were able to show that cell stiffness is very sensitive to internal turgor pressure. Many people will have fond memories from their school days looking at onion peels under a microscope. While the individual cells might have seemed then like simple rectangles, the stability of plant cells reflects complex combinations of forces. In addition to the cell membrane, which is similar to that of animal cells, plant cells also have a rigid cell wall that provides structural integrity. Turgor, meaning the normal rigidity of cells due to the pressure from their contents, is also a critical factor in maintaining balance with the environment. Too little pressure can cause the cell to shrink. Cells can regulate their turgor pressure through osmotic flows that tend to balance the salt concentrations between the interior and the outside of the wall. However, the resulting mechanical properties of plant cells remain nebulous. For example, using AFM alone to determine the stiffness from cell wall deformation makes it difficult to separate the contributions from the tension of the cell wall, cell geometry and turgor pressure. Now, a team of researchers led by NAIST has used finite element method (FEM) simulations to verify a new formula based on elastic shell theory. This allowed them to interpret the apparent stiffness observed using AFM. The team studied onion epidermal cells, which are a model system for understanding the physical properties of plant cells. “Looking at the force versus indentation data suggested that the standard equations were not sufficient for interpreting the apparent stiffness of plant cells,” senior author Yoichiroh Hosokawa says. Based on the FEM simulations, the elastic shell theory equation was shown to be better at describing the AFM response of the onion cells, compared with the conventional model used for objects without internal turgor pressure. Moreover, their findings suggest that tension caused by turgor pressure regulates cell stiffness, which can be modified by slight changes in pressure, on the order of 0.1 megapascals. “Our theoretical analysis paves the way for a more complete understanding of the forces inherent in a plant cell,” Hosokawa says. The work helps generalize our understanding of stiffness for living systems. This knowledge can be applied to help ensure that plants maintain their structure even under stressful situations, such as during periods of water deprivation.
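For orientation only, the contrast the team draws between the standard contact model and a pressurized-shell picture can be written with generic scaling relations; these are textbook thin-shell scalings, not the specific formula derived in the paper. Here E is the wall's Young's modulus, t the wall thickness, R the cell's local radius of curvature, R_tip the AFM tip radius, ν Poisson's ratio, p the turgor pressure, δ the indentation depth, E* the reduced contact modulus, and c an order-one constant:

\[
F_{\mathrm{Hertz}} \;\propto\; E^{*}\,\sqrt{R_{\mathrm{tip}}}\;\delta^{3/2}
\qquad\text{versus}\qquad
k_{\mathrm{shell}} \;\sim\; \frac{E\,t^{2}}{R\,\sqrt{1-\nu^{2}}} \;+\; c\,p\,R .
\]

The Hertz expression assumes contact with a pressure-free solid half-space, whereas the shell stiffness adds, to the wall-elasticity term, a tension term proportional to p and R; that roughly linear pressure dependence is why an adjustment of turgor on the order of 0.1 MPa can visibly change the apparent stiffness.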
10.1038/s41598-022-16880-2
2022
Scientific Reports
Elastic shell theory for plant cell wall stiffness reveals contributions of cell wall elasticity and turgor pressure in AFM measurement
The stiffness of a plant cell in response to an applied force is determined not only by the elasticity of the cell wall but also by turgor pressure and cell geometry, which affect the tension of the cell wall. Although stiffness has been investigated using atomic force microscopy (AFM) and Young’s modulus of the cell wall has occasionally been estimated using the contact-stress theory (Hertz theory), the existence of tension has made the study of stiffness more complex. Elastic shell theory has been proposed as an alternative method; however, the estimation of elasticity remains ambiguous. Here, we used finite element method simulations to verify the formula of the elastic shell theory for onion (Allium cepa) cells. We applied the formula and simulations to successfully quantify the turgor pressure and elasticity of a cell in the plane direction using the cell curvature and apparent stiffness measured by AFM. We conclude that tension resulting from turgor pressure regulates cell stiffness, which can be modified by a slight adjustment of turgor pressure in the order of 0.1 MPa. This theoretical analysis reveals a path for understanding forces inherent in plant cells.
928048
Substantial health decline among older people supports need for early intervention
Up to three quarters of older individuals in Latin America, India and China experienced significant decline in physical, cognitive, or psychological health over a three- to five-year period, according to a study published September 14th in PLOS Medicine by Martin Prince and A. Matthew Prina of King’s College London, and colleagues. As noted by the authors, the findings support the World Health Organization’s strategy to promote healthy aging by targeting a broad group of individuals who show signs of early decline and are therefore at increased risk of adverse outcomes. The World Health Organization has proposed a program called Integrated Care for Older People (ICOPE) based on the principle that simple interventions could improve the health and functioning of older individuals, aiming to prevent or delay dependence on others. ICOPE recommendations cover mobility loss, malnutrition, visual impairment, hearing loss, cognitive impairment, depressive symptoms, urinary incontinence, the risk of falls, and support for caregivers. ICOPE targets low- and middle-income countries, but it has not been clear how many older people in these settings are significantly affected by early age-related decline, or whether these changes predict who will further deteriorate, leading to dependence and earlier death. To address these questions, Prince and his colleagues first completed baseline community surveys of 17,031 people aged 65 years and over living in 12 urban and rural sites in six Latin American countries, India and China between 2003 and 2007. Three to five years later, between 2007 and 2010, the researchers then conducted follow-up interviews and identified who among 15,901 participants had died, and who among 12,939 participants had become dependent on care. Between two thirds and three quarters of older people experienced significant decline in at least one of the areas targeted by the ICOPE program, with a high number of cases of dementia, stroke, and depression. The oldest participants were more likely to experience significant decline and to have multiple problems. Nearly three quarters of those showing significant decline at baseline had not yet become physically frail or dependent, but they were 1.7 to 1.9 times more likely than those without any significant decline to become dependent over the follow-up period, and 1.3 to 1.4 times more likely to have died. Taken together, the findings suggest that implementing comprehensive assessments, care planning and community interventions for up to three quarters of the older population could be a major challenge for poorly resourced health systems in low- and middle-income countries. According to the authors, effective implementation will require political will, prioritization, investment, and health system strengthening and restructuring.
10.1371/journal.pmed.1003097
2021
PLoS Medicine
Intrinsic capacity and its associations with incident dependence and mortality in 10/66 Dementia Research Group studies in Latin America, India, and China: A population-based cohort study
Background: The World Health Organization (WHO) has reframed health and healthcare for older people around achieving the goal of healthy ageing. The recent WHO Integrated Care for Older People (ICOPE) guidelines focus on maintaining intrinsic capacity, i.e., addressing declines in neuromusculoskeletal, vitality, sensory, cognitive, psychological, and continence domains, aiming to prevent or delay the onset of dependence. The target group with 1 or more declines in intrinsic capacity (DICs) is broad, and implementation may be challenging in less-resourced settings. We aimed to inform planning by assessing intrinsic capacity prevalence, by characterising the target group, and by validating the general approach—testing hypotheses that DIC was consistently associated with higher risks of incident dependence and death. Methods and findings: We conducted population-based cohort studies (baseline, 2003–2007) in urban sites in Cuba, Dominican Republic, Puerto Rico, and Venezuela, and rural and urban sites in Peru, Mexico, India, and China. Door-knocking identified eligible participants, aged 65 years and over and normally resident in each geographically defined catchment area. Sociodemographic, behaviour and lifestyle, health, and healthcare utilisation and cost questionnaires, and physical assessments were administered to all participants, with incident dependence and mortality ascertained 3 to 5 years later (2008–2010). In 12 sites in 8 countries, 17,031 participants were surveyed at baseline. Overall mean age was 74.2 years, range of means by site 71.3–76.3 years; 62.4% were female, range 53.4%–67.3%. At baseline, only 30% retained full capacity across all domains. The proportion retaining capacity fell sharply with increasing age, and declines affecting multiple domains were more common. Poverty, morbidity (particularly dementia, depression, and stroke), and disability were concentrated among those with DIC, although only 10% were frail, and a further 9% had needs for care. Hypertension and lifestyle risk factors for chronic disease, and healthcare utilisation and costs, were more evenly distributed in the population. In total, 15,901 participants were included in the mortality cohort (2,602 deaths/53,911 person-years of follow-up), and 12,939 participants in the dependence cohort (1,896 incident cases/38,320 person-years). One or more DICs strongly and independently predicted incident dependence (pooled adjusted subhazard ratio 1.91, 95% CI 1.69–2.17) and death (pooled adjusted hazard ratio 1.66, 95% CI 1.49–1.85). Relative risks were higher for those who were frail, but were also substantially elevated for the much larger sub-groups yet to become frail. Mortality was mainly concentrated in the frail and dependent sub-groups. The main limitations were potential for DIC exposure misclassification and attrition bias. Conclusions: In this study we observed a high prevalence of DICs, particularly in older age groups. Those affected had substantially increased risks of dependence and death. Most needs for care arose in those with DIC yet to become frail. Our findings provide some support for the strategy of optimising intrinsic capacity in pursuit of healthy ageing. Implementation at scale requires community-based screening and assessment, and a stepped-care intervention approach, with redefined roles for community healthcare workers and efforts to engage, train, and support them in these tasks.
ICOPE might be usefully integrated into community programmes for detecting and case managing chronic diseases including hypertension and diabetes.
949872
Researchers identified a new tsRNA in blood to improve liver cancer diagnosis
Hepatocellular carcinoma (HCC) is the most common primary liver cancer. It is one of the most common and deadly cancers worldwide, especially in East Asia. Patients with advanced HCC face a significantly low 5-year survival rate as well as a poor prognosis. Early diagnosis is important for effective HCC therapies. Researchers from Nanjing University have now shown that a tsRNA named tRF-Gln-TTG-006 in liver cancer patients’ serum may become a promising blood biomarker to detect liver cancer, even at an early stage. They also find that this tsRNA may have a biological function during HCC progression. The results have been published in Frontiers of Medicine. tRNAs are known for transferring amino acids and thus play a vital role in protein synthesis. The newly identified tRF-Gln-TTG-006 is a fragment of its parent tRNA. Unlike their "parents", tsRNAs have been found to act as promising blood biomarkers and regulators of disease progression in many cancer types. The serum tsRNA signature of HCC had not yet been elucidated. The current study helps fill this gap; the discovery of HCC-related tsRNAs would be valuable for facilitating HCC detection, especially at an early stage. To elucidate the tsRNA signature of HCC serum, the researchers adapted high-throughput sequencing specialized for tsRNAs, which bear multiple modifications. Sequencing uncovered hundreds of new tsRNAs, shedding light on a distinctive HCC serum tsRNA profile. The study used a two-stage validation strategy to screen candidates and finally verified this unique tsRNA, which can separate early-stage HCC patients from healthy people. When compared with the commonly used biomarker α-fetoprotein (AFP), tRF-Gln-TTG-006 shows significantly superior diagnostic accuracy for patients with early-stage HCC. A total of 177 HCC patients were included in the study. The study also shows that tRF-Gln-TTG-006 may originate from tumor cells and affect tumor cell growth, thus taking part in HCC progression. Yanbo Wang says: "Based on our research, tsRNA is a promising biomarker of early HCC diagnosis, and our study can provide more information on the relationship between tsRNAs and the development of liver cancer."
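For readers less familiar with the reported metrics, the sketch below shows how the sensitivity and specificity of a serum marker such as tRF-Gln-TTG-006 are computed from validation-cohort counts. The counts are placeholders chosen only to roughly reproduce the reported percentages; they are not taken from the paper.

```python
# Illustrative only: computing sensitivity and specificity from a confusion
# matrix, the two metrics reported for the tRF-Gln-TTG-006 serum marker.
# The counts below are placeholders, not the study's data.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=123, fn=30, tn=123, fp=32)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```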
10.1007/s11684-022-0920-7
2022
Frontiers of Medicine
Serum mitochondrial tsRNA serves as a novel biomarker for hepatocarcinoma diagnosis
Hepatocellular carcinoma (HCC), which makes up the majority of liver cancer, is induced by the infection of hepatitis B/C virus. Biomarkers are needed to facilitate the early detection of HCC, which is often diagnosed too late for effective therapy. The tRNA-derived small RNAs (tsRNAs) play vital roles in tumorigenesis and are stable in circulation. However, the diagnostic values and biological functions of circulating tsRNAs, especially for HCC, are still unknown. In this study, we first utilized RNA sequencing followed by quantitative reverse-transcription PCR to analyze tsRNA signatures in HCC serum. We identified tRF-Gln-TTG-006, which was remarkably upregulated in HCC serum (training cohort: 24 HCC patients vs. 24 healthy controls). In the validation stage, we found that tRF-Gln-TTG-006 signature could distinguish HCC cases from healthy subjects with high sensitivity (80.4%) and specificity (79.4%) even in the early stage (Stage I: sensitivity, 79.0%; specificity, 74.8%; 155 healthy controls vs. 153 HCC patients from two cohorts). Moreover, in vitro studies indicated that circulating tRF-Gln-TTG-006 was released from tumor cells, and its biological function was predicted by bioinformatics assay and validated by colony formation and apoptosis assays. In summary, our study demonstrated that serum tsRNA signature may serve as a novel biomarker of HCC.
860893
Evaluating tissue response to biomaterials with a new bone-implant interaction model
10.1089/ten.tec.2016.0250
2016
Tissue Engineering Part C Methods
A Bone–Implant Interaction Mouse Model for Evaluating Molecular Mechanism of Biomaterials/Bone Interaction
The development of an optimal animal model that could provide fast assessments of the interaction between bone and orthopedic implants is essential for both preclinical and theoretical researches in the design of novel biomaterials. Compared with other animal models, mice have superiority in accessing the well-developed transgenic modification techniques (e.g., cell tracing, knockoff, knockin, and so on), which serve as powerful tools in studying molecular mechanisms. In this study, we introduced the establishment of a mouse model, which was specifically tailored for the assessment of bone–implant interaction in a load-bearing bone marrow microenvironment and could potentially allow the molecular mechanism study of biomaterials by using transgenic technologies. The detailed microsurgery procedures for developing a bone defect (Φ = 0.8 mm) at the metaphysis region of the mouse femur were recorded. According to our results, the osteoconductive and osseointegrative properties of a well-studied 45S5 bioactive glass were confirmed by utilizing our mouse model, verifying the reliability of this model. The feasibility and reliability of the present model were further checked by using other materials as objects of study. Furthermore, our results indicated that this animal model provided a more homogeneous tissue–implant interacting surface than the rat at the early stage of implantation and this is quite meaningful for conducting quantitative analysis. The availability of transgenic techniques to mechanism study of biomaterials was further testified by establishing our model on Nestin-GFP transgenic mice. Intriguingly, the distribution of Nestin+ cells was demonstrated to be recruited to the surface of 45S5 glass as early as 3 days postsurgery, indicating that Nestin+ lineage stem cells may participate in the subsequent regeneration process. In summary, the bone–implant interaction mouse model could serve as a potential candidate to evaluate the early stage tissue response near the implant surface in a bone marrow microenvironment, and it also shows great potential in making transgenic animal resource applicable to biomaterial studies, so that the design of novel biomaterials could be better guided.
484590
Captive-bred juvenile salmon unlikely to become migratory when released into streams
Researchers at the Kobe University Graduate School of Science have revealed that when captive-bred juvenile red-spotted masu salmon are released into natural streams, very few individuals become migrants. Red-spotted masu salmon were an important fish species for the fishing industry in the rivers of western Japan; however, in recent years their numbers have been declining rapidly. The results of this research offer important suggestions for stocking practices and the management of river environments. The research group consisted of graduate school students TANAKA Tatsuya and UEDA Rui and Associate Professor SATO Takuya. The results were published in Biology Letters on January 13, 2021. Main Points Red-spotted masu salmon (Oncorhynchus masou ishikawae) that exceed the threshold body size by their first fall reach the smoltification (*1) stage of the salmon life cycle, and thus become migratory. When captive-bred individuals are raised in an environment similar to that of a hatchery, they usually grow beyond the threshold size in their first fall, undergoing smoltification. However, according to these research results, very few of the captive-bred fish released into natural streams during their first early summer exceed the threshold size by fall, with hardly any individuals becoming smolts. This research revealed that juvenile captive-bred salmon released into the wild are highly unlikely to become migratory individuals. Research Background Preserving variation within a species is vital for many reasons, including for the species' long-term existence and for the sustainability of resources for humans. Migratory behavior is one example of variation in the life cycle of a particular species which is important for its continuation. For example, within most species of salmonid fish, there are two phenotypes: migratory and non-migratory (resident). Migratory fish travel from the rivers to the ocean and then return to the rivers to spawn, whereas non-migratory individuals live in rivers for their entire lives. However, in recent years, various factors such as reduced connectivity between rivers and oceans have caused a sharp decline in the number of migratory individuals. Consequently, large numbers of salmonid fish that were bred in captivity are released into rivers across the globe with the aim of replenishing and conserving fishery resources. It is known that these stocking practices can contribute towards an increase in migratory individuals if the released captive-bred fish have already reached the preparatory stage for migration to the sea (smolt). However, in some stocking practices, juvenile fish that have yet to undergo smoltification are released. It is not known what percentage of these juveniles become migratory in natural rivers. Research Aims and Hypothesis In Oncorhynchus masou ishikawae, a salmonid species native to Japan, both migratory and resident individuals are found. However, the distribution of these salmon populations has been declining sharply nationwide in recent years, due in part to rivers being cut off from the ocean by dams and other artificial barriers. As part of efforts to restore the numbers of migratory individuals in the wild, red-spotted masu salmon are raised in hatcheries and those with a high probability of becoming migratory are released into rivers. Red-spotted masu salmon that exceed the threshold size upon their first fall undergo smoltification and then become migratory individuals.
On the other hand, those that do not grow large enough become residents and spend their entire lives in river waters. It has been reported that captive-bred fish experience delayed growth when they are released into rivers, due to factors such as being unable to obtain sufficient food. Based on this information, the researchers predicted that, even in the case of captive-bred individuals that were highly likely to reach smoltification, juvenile fish released into natural rivers prior to smoltification would experience growth delays rendering them unable to exceed the threshold size required. Research Methodology and Findings In order to investigate this hypothesis, red-spotted masu salmon from two different hatcheries were released into the natural streams in the upper regions of the Arida River in Wakayama Prefecture, Japan (in 10 sections across 7 streams). The individuals had a high probability of reaching the smolt stage and were released in early summer prior to smoltification. In fall, the researchers investigated the size of the released fish and the percentage that became smolts. In an additional experiment, fish from the same hatcheries were raised in outdoor tanks (in which they could access a similar availability of food resources to the hatchery) and the researchers investigated the percentage of smolts and the threshold size required for smoltification in these groups. The results of this stocking experiment revealed that, out of 320 fish recaptured from natural streams, only one individual (0.3%) reached the smolt stage (Figure 2). In contrast, the numbers that achieved smoltification were much higher among the groups that were raised in outdoor tanks, with 64% of females and 17% of males from K-hatchery, and 75% of females and 33% of males from T-hatchery reaching the smolt stage (Figure 2). The threshold sizes for smolt individuals in the outdoor mesocosm groups were also investigated; these were found to be 124 mm for females and 162 mm for males from K-hatchery, and 108 mm for females and 119 mm for males from T-hatchery, respectively (Figure 3a, b). On the other hand, of the juveniles that were released into natural streams, the combined total of females that exceeded the threshold size from both hatcheries was only 8 out of 304 individuals recaptured from the same section (Figure 3c, d). These research results strongly indicate that the vast majority of released salmonids experience reduced growth in natural rivers, meaning that they are unable to exceed the threshold size necessary to become smolts. Further Developments This study has shown that if captive-bred salmonid fish are released into natural rivers prior to smoltification they are highly unlikely to become migratory, even if their phenotypes are expressed in a hatchery. The results thus indicate that releasing large numbers of juvenile individuals is unlikely to contribute substantially to the replenishment and conservation of migratory salmon stocks. However, if these fish are raised in a hatchery environment that is similar to a natural river, there is a possibility that fish could be produced that can grow well even after being released. In addition, it may be possible to increase the number of migratory fish by protecting and restoring streams and their surrounding forest environments, which could improve the growth of fish in the wild.
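A minimal sketch (not the authors' analysis code) of the comparison implied above: recaptured fish are checked against the sex- and hatchery-specific threshold sizes found for the outdoor-tank groups. The threshold values come from the press release; the example fish are invented.

```python
# Illustrative sketch only -- not the study's analysis code.
# Classify recaptured fish against the sex- and hatchery-specific threshold
# body sizes reported for the outdoor-tank (mesocosm) groups.

THRESHOLDS_MM = {
    ("K", "female"): 124, ("K", "male"): 162,
    ("T", "female"): 108, ("T", "male"): 119,
}

def exceeds_threshold(hatchery, sex, length_mm):
    """True if a fish is above the threshold size associated with smoltification."""
    return length_mm >= THRESHOLDS_MM[(hatchery, sex)]

# Hypothetical recaptures: (hatchery, sex, fork length in mm)
recaptures = [("K", "female", 101), ("T", "male", 95), ("T", "female", 112)]

n_over = sum(exceeds_threshold(h, s, l) for h, s, l in recaptures)
print(f"{n_over} of {len(recaptures)} recaptured fish exceeded the threshold size")
```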
Currently, the research group is conducting field experiments in order to improve the growth rate of captive-bred red-spotted masu salmon in rivers and to illuminate the environment required for individuals to become migratory. At the same time, they are also planning to investigate methods of reclaiming river environments that can support the growth of migratory phenotypes, without relying on stocking practices. ### Glossary 1. Smoltification: In salmonid species, individual fish that have made the decision to migrate to the ocean (or a lake) undergo a transformation called smoltification. During this stage, their shape, color and physiology change. They become longer and thinner, the scales on their sides turn silver and the tips of their tail and dorsal fins change to black. Acknowledgements Parts of this research were funded by the following organizations: The Asahi Glass Foundation for Environmental Field Research, and the Patagonia Environmental Program. Journal Information Title: "Captive-bred populations of a partially migratory salmonid fish are unlikely to maintain migratory polymorphism in natural habitats" DOI:10.1098/rsbl.2020.0324 Authors: Tatsuya Tanaka, Rui Ueda and Takuya Sato Journal: Biology Letters
10.1098/rsbl.2020.0324
2021
Biology Letters
Captive-bred populations of a partially migratory salmonid fish are unlikely to maintain migratory polymorphism in natural habitats
Variation in life history is fundamental to the long-term persistence of populations and species. Partial migration, in which both migratory and resident individuals are maintained in a population, is commonly found across animal taxa. However, human-induced habitat fragmentation continues to cause a rapid decline in the migratory phenotype in many natural populations. Using field and hatchery experiments, we demonstrated that despite both migrants and residents being maintained in captive environments, few individuals of the red-spotted masu salmon, Oncorhynchus masou ishikawae , became migrants in natural streams when released prior to the migration decision. Released fish rarely reached the threshold body size necessary to become migrants in natural streams, presumably owing to lower growth rates in natural than in captive environments. The decision to migrate is often considered a threshold trait in salmonids and other animal taxa. Our findings highlight the need for management programmes that acknowledge the effects of the environment on the determination of the migratory phenotypes of partially migratory species when releasing captive-bred individuals prior to their migratory decisions.
648388
How do trees go to sleep?
Most living organisms adapt their behavior to the rhythm of day and night. Plants are no exception: flowers open in the morning, some tree leaves close during the night. Researchers have been studying the day and night cycle in plants for a long time: Linnaeus observed that flowers in a dark cellar continued to open and close, and Darwin recorded the overnight movement of plant leaves and stalks and called it "sleep". But even to this day, such studies have only been done with small plants grown in pots, and nobody knew whether trees sleep as well. Now, a team of researchers from Austria, Finland and Hungary measured the sleep movement of fully grown trees using a time series of laser scanning point clouds consisting of millions of points each. Trees droop their branches at night "Our results show that the whole tree droops during night which can be seen as position change in leaves and branches", says Eetu Puttonen (Finnish Geospatial Research Institute), "The changes are not too large, only up to 10 cm for trees with a height of about 5 meters, but they were systematic and well within the accuracy of our instruments." To rule out effects of weather and location, the experiment was done twice with two different trees. The first tree was surveyed in Finland and the other in Austria. Both tests were done close to solar equinox, under calm conditions with no wind or condensation. The leaves and branches were shown to droop gradually, with the lowest position reached a couple of hours before sunrise. In the morning, the trees returned to their original position within a few hours. It is not yet clear whether they were "woken up" by the sun or by their own internal rhythm. "On molecular level, the scientific field of chronobiology is well developed, and especially the genetic background of the daily periodicity of plants has been studied extensively", explains András Zlinszky (Centre for Ecological Research, Hungarian Academy of Sciences). "Plant movement is always closely connected with the water balance of individual cells, which is affected by the availability of light through photosynthesis. But changes in the shape of the plant are difficult to document even for small herbs as classical photography uses visible light that interferes with the sleep movement." With a laser scanner, plant disturbance is minimal. The scanners use infrared light, which is reflected by the leaves. Individual points on a plant are only illuminated for fractions of a second. With this laser scanning technique, a full-sized tree can be automatically mapped within minutes with sub-centimeter resolution. "We believe that laser scanning point clouds will allow us to develop a deeper understanding of plant sleep patterns and to extend our measurement scope from individual plants to larger areas, like orchards or forest plots," says Norbert Pfeifer (TU Wien). "The next step will be collecting tree point clouds repeatedly and comparing the results to water use measurements during day and night", says Eetu Puttonen. "This will give us a better understanding of the trees' daily water use and their influence on the local or regional climate." ### This study was published in an open access article in the journal Frontiers in Plant Science: Puttonen, E., Briese, C., Mandlburger, G., Wieser, M., Pfennigbauer, M., Zlinszky, A., Pfeifer N. (2016). "Quantification of Overnight Movement of Birch (Betula pendula) Branches and Foliage with Short Interval Terrestrial Laser Scanning". Frontiers in Plant Science, 7:222.
doi: 10.3389/fpls.2016.00222 Further information: Norbert Pfeifer, Department für Geodäsie und Geoinformation, TU Wien, Austria, T: 43-1-58801-12219, [email protected]; Eetu Puttonen, Finnish Geospatial Research Institute (FGI), National Land Survey of Finland, Finland, [email protected]; András Zlinszky, Centre for Ecological Research, Hungarian Academy of Sciences, [email protected]
10.3389/fpls.2016.00222
2016
Frontiers in Plant Science
Quantification of Overnight Movement of Birch (Betula pendula) Branches and Foliage with Short Interval Terrestrial Laser Scanning
The goal of the study was to determine circadian movements of silver birch (Betula pendula) branches and foliage detected with terrestrial laser scanning (TLS). The study consisted of two geographically separate experiments conducted in Finland and in Austria. Both experiments were carried out at the same time of the year and under similar outdoor conditions. Experiments consisted of 14 (Finland) and 77 (Austria) individual laser scans taken between sunset and sunrise. The resulting point clouds were used in creating a time series of branch movements. In the Finnish data, the vertical movement of the whole tree crown was monitored due to low volumetric point density. In the Austrian data, movements of manually selected representative points on branches were monitored. The movements were monitored from dusk until morning hours in order to avoid daytime wind effects. The results indicated that height deciles of the Finnish birch crown had vertical movements between -10.0 and 5.0 cm compared to the situation at sunset. In the Austrian data, the maximum detected representative point movement was 10.0 cm. The temporal development of the movements followed a highly similar pattern in both experiments, with the maximum movements occurring about an hour and a half before (Austria) or around (Finland) sunrise. The results demonstrate the potential of terrestrial laser scanning measurements in support of chronobiology.
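As a rough illustration of the decile-based movement metric described above, the following sketch (not the authors' processing chain) assumes two already co-registered, crown-cropped scans held as N×3 NumPy arrays; the synthetic data and the uniform 5 cm droop are purely illustrative.

# Sketch: compare crown height deciles between two terrestrial laser scans.
# Assumes both scans are co-registered in the same coordinate frame and
# cropped to a single tree crown; arrays are N x 3 (x, y, z in metres).
import numpy as np

def crown_decile_heights(points: np.ndarray) -> np.ndarray:
    """Return the 10th, 20th, ..., 100th percentile of point heights (z)."""
    z = points[:, 2]
    return np.percentile(z, np.arange(10, 101, 10))

def decile_movement(scan_at_sunset: np.ndarray, scan_later: np.ndarray) -> np.ndarray:
    """Vertical change (in metres) of each height decile relative to sunset."""
    return crown_decile_heights(scan_later) - crown_decile_heights(scan_at_sunset)

# Synthetic example: a 5 m crown that has drooped a few centimetres overnight.
rng = np.random.default_rng(0)
sunset = rng.uniform([0, 0, 1.0], [2, 2, 5.0], size=(100_000, 3))
night = sunset - np.array([0, 0, 0.05])            # uniform 5 cm droop
print(np.round(decile_movement(sunset, night), 3))  # about -0.05 for every decile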
926769
‘Stop prescribing hydroxychloroquine for COVID-19’
In 2021, in the United States alone, there have been more than 560,000 prescriptions of hydroxychloroquine for the prevention, post-exposure and treatment of COVID-19. Since the onset of the pandemic in February 2020, the U.S. has been its epicenter and remains the world leader in cases and deaths. Last year, the 890,000 prescriptions for hydroxychloroquine were nine-fold greater than in previous years, leading to major shortages for the approved indications of autoimmune disease, predominantly in younger women. In a commentary published in The American Journal of Medicine, researchers from Florida Atlantic University’s Schmidt College of Medicine and collaborators review the recent major randomized, double-blind placebo-controlled trials and present an updated meta-analysis of hydroxychloroquine in post-exposure prophylaxis as well as in hospitalized patients. Last year, these same researchers issued a plea for a moratorium on prescription of hydroxychloroquine in prevention or treatment pending the outcome of ongoing randomized trials. “The updated randomized evidence provides even stronger support for the halt on prescribing hydroxychloroquine in the prevention or treatment of COVID-19,” said Charles H. Hennekens, M.D., Dr.PH, senior author, the first Sir Richard Doll professor and senior academic advisor in FAU’s Schmidt College of Medicine. The authors say that in addition to a lack of significant benefit, the new randomized evidence shows some suggestion of harm. They explain that the prior reassuring safety profile of hydroxychloroquine is applicable to patients with lupus and rheumatoid arthritis, both of which are of greater prevalence in younger and middle-aged women, whose risks of fatal heart outcomes due to hydroxychloroquine are reassuringly very low. In contrast, the risks of hydroxychloroquine for patients with COVID-19 are significantly higher because fatal cardiovascular complications due to these drugs are so much higher in older patients and those with existing heart disease or its risk factors, groups in which men predominate. “Premature and avoidable deaths will continue to occur if people take hydroxychloroquine and avoid the public health strategies of proven benefit, which include vaccinations and masking,” added Hennekens. Co-authors are Manas Rane, M.D., a preventive cardiology fellow at the Harvard Medical School and Boston VA System and a former FAU internal medicine resident; Joshua J. Solano, M.D., an assistant professor of emergency medicine; Scott M. Alter, M.D., M.B.A., an associate professor of emergency medicine; and Richard D. Shih, M.D., a professor of emergency medicine; all within the Schmidt College of Medicine; Dennis G. Maki, M.D., Ovid O. Meyer Professor of Medicine, and David L. DeMets, Ph.D., Max Halperin Professor of Biostatistics, emeritus, and former founding chair of the Department of Biostatistics and Medical Informatics, both with the University of Wisconsin School of Medicine and Public Health; Heather Johnson, M.D., preventive cardiologist at Lynn Women’s Health and Wellness Institute, Boca Raton Regional Hospital/Baptist Health South Florida and an adjunct professor at the University of Wisconsin School of Medicine and Public Health; and Shiv Krishnaswamy, a fourth-year medical student, FAU Schmidt College of Medicine. Hennekens and Maki have been collaborators since 1969, when they served as lieutenant commanders in the U.S. Public Health Service as epidemic intelligence service officers with the U.S. 
Centers for Disease Control and Prevention. Hennekens, Maki and Johnson also collaborated on a recently published commentary emphasizing the already alarming racial inequalities in mortality from COVID-19, which are only likely to increase further until the vaccines are distributed equitably. - FAU - About the Charles E. Schmidt College of Medicine: FAU’s Charles E. Schmidt College of Medicine is one of approximately 157 accredited medical schools in the U.S. The college was launched in 2010, when the Florida Board of Governors made a landmark decision authorizing FAU to award the M.D. degree. After receiving approval from the Florida legislature and the governor, it became the 134th allopathic medical school in North America. With more than 70 full and part-time faculty and more than 1,300 affiliate faculty, the college matriculates 64 medical students each year and has been nationally recognized for its innovative curriculum. To further FAU’s commitment to increase much needed medical residency positions in Palm Beach County and to ensure that the region will continue to have an adequate and well-trained physician workforce, the FAU Charles E. Schmidt College of Medicine Consortium for Graduate Medical Education (GME) was formed in fall 2011 with five leading hospitals in Palm Beach County. The Consortium currently has five Accreditation Council for Graduate Medical Education (ACGME) accredited residencies including internal medicine, surgery, emergency medicine, psychiatry, and neurology. About Florida Atlantic University: Florida Atlantic University, established in 1961, officially opened its doors in 1964 as the fifth public university in Florida. Today, the University serves more than 30,000 undergraduate and graduate students across six campuses located along the southeast Florida coast. In recent years, the University has doubled its research expenditures and outpaced its peers in student achievement rates. Through the coexistence of access and excellence, FAU embodies an innovative model where traditional achievement gaps vanish. FAU is designated a Hispanic-serving institution, ranked as a top public university by U.S. News & World Report and a High Research Activity institution by the Carnegie Foundation for the Advancement of Teaching. For more information, visit www.fau.edu.
10.1016/j.amjmed.2021.07.035
2021
The American Journal of Medicine
Updates on Hydroxychloroquine in Prevention and Treatment of COVID-19
In the prevention and treatment of Coronavirus disease 2019 (COVID-19) in the United States, 74% trust their health care providers.[1] In 2021 there have been more than 560,000 prescriptions[2] of hydroxychloroquine for the prevention, post-exposure prophylaxis, and treatment of COVID-19. Last year, the >890,000 prescriptions were ninefold greater than in previous years, leading to major shortages for the approved indications of autoimmune diseases.[3] Biological mechanisms support inhibition of the virus that causes COVID-19.[4] Some case series lacking comparison groups, claims databases, and observational studies,[3] all of which have confounding by indication, reported possible benefits.[5,6] Randomized trials published in high-quality peer-reviewed journals, which provide the most reliable evidence to detect the most plausible small to moderate effects, have shown disappointing results.[3-6] When the totality of evidence is incomplete, it is appropriate for health care providers to remain uncertain.[5]
Nonetheless, regulatory authorities are sometimes compelled to act on incomplete evidence. On March 28, 2020, the US Food and Drug Administration issued an Emergency Use Authorization for hydroxychloroquine in COVID-19. By April 24, 2020, the Food and Drug Administration issued a Drug Safety Communication warning about potentially fatal prolongations of the QTc interval detectable on 12-lead electrocardiograms and risks of other serious cardiac arrhythmias.[3] In this Commentary we review the recent major randomized, double-blind, placebo-controlled trials of hydroxychloroquine in post-exposure prophylaxis and hospitalized patients, addressing the primary endpoint of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections, as well as their meta-analyses. We thus provide updated perspectives on benefits and risks. One randomized, double-blind, placebo-controlled trial included 821 post-exposure prophylaxis subjects, of whom 107 developed COVID-19 over 14 days. The 49 of 414 (11.8%) assigned hydroxychloroquine and 58 of 407 (14.3%) given placebo resulted in a nonsignificant relative risk (RR) of 0.83 (P = .35). Overall, 140 of 349 (40.1%) assigned hydroxychloroquine reported a side effect by day 5, as compared with 50 of 352 (16.8%) assigned placebo, a highly significant increase (P < .001). Nausea, loose stools, and abdominal discomfort were the most common, and there were no serious intervention-related adverse effects.[7] In another study, among 2314 healthy contacts of 672 COVID-19 index cases, 1116 were randomized to hydroxychloroquine and 1198 to usual care. COVID-19 occurred among 5.7% assigned to hydroxychloroquine and 6.2% to usual care, yielding a nonsignificant RR of 0.89 (95% confidence interval [CI], 0.54-1.46). Adverse events were significantly higher with hydroxychloroquine (51.6%) compared with usual care (5.9%), but there were no reported cardiac arrhythmias.[8] In the most recently published trial, 671 households were randomly assigned: 337 (407 participants) to hydroxychloroquine and 334 (422 participants) to the control group. By day 14, there were 53 events in the hydroxychloroquine group and 45 with usual care, yielding a nonsignificant RR = 1.10 (95% CI, 0.73-1.66; P > .20). The frequency of participants experiencing adverse events was significantly higher in the hydroxychloroquine group than the control group (66 [16.2%] vs 46 [10.9%]; P = .026).[9] One trial was terminated early by the external, independent Data Monitoring Committee due to lack of efficacy and futility.
Death within 28 days occurred in 421 patients (27%) in the hydroxychloroquine group and in 790 (25%) in the usual-care group, yielding a nonsignificant RR = 1.09 (95% CI, 0.97-1.23; P = .15). Patients assigned hydroxychloroquine were significantly less likely to be discharged from the hospital alive within 28 days than those in usual care (59.6% vs 62.9%; RR = 0.90; 95% CI, 0.83-0.98). Among the patients not dependent on mechanical ventilation at baseline, those in the hydroxychloroquine group had a significantly higher frequency of invasive mechanical ventilation or death (30.7% vs 26.9%; RR = 1.14; 95% CI, 1.03-1.27). There were no significant differences in new major cardiac arrhythmias.[10] At 405 hospitals in 30 countries, of 11,330 patients, 2750 were assigned to remdesivir, 954 to hydroxychloroquine, 1411 to lopinavir (without interferon), 2063 to interferon (including 651 to interferon plus lopinavir), and 4088 to no trial drug. Adherence was 94%-96% midway through treatment, with 2%-6% crossover. Of 1253 deaths reported, 301 were among those assigned to remdesivir and 303 among its control, yielding a nonsignificant RR = 0.95 (95% CI, 0.81-1.11; P = .50). Further, there were 104 deaths among those given hydroxychloroquine, and 84 among its control, yielding a nonsignificant RR = 1.19 (95% CI, 0.89-1.59; P = .23). There were 148 deaths in patients assigned lopinavir and 146 among its control, yielding a nonsignificant RR = 1.00 (95% CI, 0.79-1.25; P = .9). Finally, there were 243 deaths among patients assigned interferon and 216 receiving its control, yielding a nonsignificant RR = 1.16 (95% CI, 0.96-1.39; P = .11). No drug definitely reduced mortality, overall or in any subgroup, or reduced initiation of ventilation or hospitalization duration.[11] The quality and usefulness of any meta-analysis depends on the quality and comparability of data from the component trials. Combined trials should have reasonably high adherence and follow-up rates, and use comparable drugs, doses, and outcomes. The characteristics of participants and the magnitude of effects should be qualitatively similar. Such meta-analyses can be hypothesis testing if each component trial was designed a priori to test the same issue. In other circumstances, such as smaller or heterogeneous trials, meta-analyses are hypothesis generating. Meta-analyses of observational studies are useful only to formulate, but not test, hypotheses. They reduce the role of chance but always introduce bias as well as uncontrolled and uncontrollable confounding because the individual studies are not randomized.[12] Our meta-analysis of hydroxychloroquine in post-exposure prophylaxis indicates a nonsignificant RR = 0.90 (95% CI, 0.69-1.17). Thus, there is a statistically nonsignificant estimated 10% reduction in SARS-CoV-2 infection, but with sufficient precision to rule out a reduction as large as 20%.
Our meta-analysis of hydroxychloroquine in hospitalized patients with COVID-19 yields a nonsignificant RR = 1.10 (95% CI, 0.99-1.23). In hospitalized patients, there is an approximate statistically nonsignificant estimated 10% increase in mortality, but with sufficient precision to rule out a reduction as small as 1%. Further, these data suggest equality, but the point estimate is in the direction of small harm on mortality. Previously, we recommended a moratorium to health care providers concerning prescriptions of hydroxychloroquine.[3,13] Since that time, no significant benefits have been found in the recent randomized evidence for post-exposure prophylaxis and among hospitalized patients. Regarding risk, hydroxychloroquine derived a reassuring safety profile from decades of prescriptions for autoimmune diseases of greater prevalence in younger and middle-aged women, whose risks of fatal outcomes due to QTc prolongations are very low. In contrast, the risks associated with COVID-19 are much higher because mortality rates for COVID-19 and the side effects of hydroxychloroquine are both highest in older patients and those with comorbidities, both of whom are predominantly men. The current totality of evidence more strongly supports our previous recommendations concerning the lack of efficacy and possible harm of hydroxychloroquine in the treatment and prevention of COVID-19.
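For readers who want to see the arithmetic behind a pooled relative risk, the sketch below performs a standard fixed-effect (inverse-variance) meta-analysis on the post-exposure prophylaxis counts quoted above. The commentary does not state its exact inputs or pooling method, and the second trial's counts are back-calculated from the quoted percentages, so this is illustrative only and need not reproduce the reported RR of 0.90 (0.69-1.17).

# Sketch: fixed-effect (inverse-variance) meta-analysis of relative risks.
# Event counts are taken or approximated from the trials summarised above.
import math

# (events_treated, n_treated, events_control, n_control)
trials = [
    (49, 414, 58, 407),      # post-exposure prophylaxis trial
    (64, 1116, 74, 1198),    # cluster-randomised trial (approx. from 5.7% vs 6.2%)
    (53, 407, 45, 422),      # household trial
]

weights, log_rrs = [], []
for a, n1, c, n2 in trials:
    log_rr = math.log((a / n1) / (c / n2))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2   # variance of the log relative risk
    log_rrs.append(log_rr)
    weights.append(1 / var)

pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se):.2f}-{math.exp(pooled + 1.96 * se):.2f})")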
984188
Study finds “important shortcomings” in official cancer drug information
Important information about cancer drug benefits, and related uncertainties, is frequently omitted from official prescription drug information sources for clinicians and patients in Europe, finds an analysis published by The BMJ today. Despite the commitment of medicines regulators to shared decision making and person centred care, the researchers say better information on the benefits and potential harms of medicines is needed to help inform treatment decisions, especially for patients with time limiting conditions such as advanced cancer. To receive and participate in medical care, patients need high quality information about treatments, tests, and services, including information about the benefits of and risks from prescription drugs. Previous studies have looked at how information on drug risks and adverse effects is communicated to patients, but research on communication of drug benefits is limited. To address this, researchers set out to assess the extent to which information about cancer drug benefits, and related uncertainties, is communicated to patients and doctors in regulated prescription drug information sources in Europe. They reviewed official written and electronic information for clinicians (through a summary of product characteristics), patients (information leaflets) and the public (public summaries) for 29 new cancer drugs approved by the European Medicines Agency (EMA) during 2017-2019. They then compared the information on drug benefits reported in these sources with the information available in regulatory assessment documents (known as European public assessment reports or EPARs), which contain everything required for drug approval. They found that both patient and public facing information sources were often lacking in relevance. For example, information on drug benefits was not reported in any patient leaflets, while other – potentially less relevant information for patients (e.g. how a drug works in the body) – was consistently included. They also found instances where the reporting of a study design and study findings was inconsistent with the information reported in EPARs and potentially misleading. Important gaps and uncertainties in the evidence base were also rarely reported, particularly those that might be relevant and useful for patients, such as whether a drug extended survival or improved quality of life. Finally, scientific concerns about the reliability of evidence on drug benefits, which were raised by European regulatory assessors for almost all drugs in the study sample, were rarely communicated to clinicians, patients, or the public. The researchers acknowledge that their review may not have captured all information about each trial or drug benefits and uncertainties that might be relevant and useful for patients. What’s more, they included only new cancer drugs and it’s not clear whether these findings extend to other disease areas. Nevertheless, this was a comprehensive review of documents which they say “identified important shortcomings in the communication of information on drug benefits and related uncertainties in regulated sources.” The findings “highlight the need to improve the communication of the benefits and related uncertainties of anticancer drugs in regulated information sources in Europe to support evidence informed decision making by patients and their clinicians,” they conclude. 
The takeaway message from this study is that information about drugs is rarely communicated well—and particularly not communicated well to patients, say BMJ editors in a linked editorial. It also raises questions about whether this knowledge gap is interfering with shared decision making and whether new ways to present information such as visual representation of data on benefits and harms - used for covid-19 vaccines - could be applied to other types of medicines. “The trust between patients and healthcare providers remains pivotal in ensuring that patients are fully informed about benefits and harms of drugs,” they write. “But regulatory agencies should pay closer attention to important gaps in information for patients, and further research should aim to determine more precisely where these gaps occur and to work with patients to fill them.” [Ends] All authors have completed the ICMJE uniform disclosure form at https://www.icmje.org/coi_disclosure.pdf and declare: This study was partly funded by Health Action International and the EU Commission’s Consumers, Health, Agriculture and Food Executive Agency (CHAFEA). CD reports membership of Health Action International (HAI), serving as HAI’s representative on the European Medicines Agency (EMA) Patient and Consumer Working Party (PCWP) and receiving reimbursement from EMA for attendance at PCWP meetings; AKW reports grants from the American Cancer Society outside the submitted work, and payment or honorariums from the University of Hong Kong; BM reports membership of HAI, L’Association Mieux Prescrire, and the Scientific and Education Committee of the Therapeutics Initiative (University of British Columbia), and reports serving as an expert witness for Health Canada during a legal case involving marketing of an unapproved drug product in Canada; JL reports grants from the non-profit organisation HAI to undertake data collection and analysis for the submitted work; HN reports grants from the Health Foundation, the National Institute for Health and Care Research, and UK Research and Innovation outside the submitted work, and consulting fees from the World Health Organization and Pharmaceutical Group of the European Union; HN also reports being an adviser to the Analysis section of The BMJ; the other authors declare no support from any organization for the submitted work; no financial relationships with any organisation that might have an interest in the submitted work in the previous three years; and no other relationships or activities that could appear to have influenced the submitted work.
10.1136/bmj-2022-073711
2023
BMJ
Communication of anticancer drug benefits and related uncertainties to patients and clinicians: document analysis of regulated information on prescription drugs in Europe
Abstract Objective To evaluate the frequency with which relevant and accurate information about the benefits and related uncertainties of anticancer drugs are communicated to patients and clinicians in regulated information sources in Europe. Design Document content analysis. Setting European Medicines Agency. Participants Anticancer drugs granted a first marketing authorisation by the European Medicines Agency, 2017-19. Main outcome measures Whether written information on a product addressed patients’ commonly asked questions about: who and what the drug is used for; how the drug was studied; types of drug benefit expected; and the extent of weak, uncertain, or missing evidence for drug benefits. Information on drug benefits in written sources for clinicians (summaries of product characteristics), patients (patient information leaflets), and the public (public summaries) was compared with information reported in regulatory assessment documents (European public assessment reports). Results 29 anticancer drugs that received a first marketing authorisation for 32 separate cancer indications in 2017-19 were included. General information about the drug (including information on approved indications and how the drug works) was frequently reported across regulated information sources aimed at both clinicians and patients. Nearly all summaries of product characteristics communicated full information to clinicians about the number and design of the main studies, the control arm (if any), study sample size, and primary measures of drug benefit. None of the patient information leaflets communicated information to patients about how drugs were studied. 31 (97%) summaries of product characteristics and 25 (78%) public summaries contained information about drug benefits that was accurate and consistent with information in regulatory assessment documents. The presence or absence of evidence that a drug extended survival was reported in 23 (72%) summaries of product characteristics and four (13%) public summaries. None of the patient information leaflets communicated information about the drug benefits that patients might expect based on study findings. Scientific concerns about the reliability of evidence on drug benefits, which were raised by European regulatory assessors for almost all drugs in the study sample, were rarely communicated to clinicians, patients, or the public. Conclusions The findings of this study highlight the need to improve the communication of the benefits and related uncertainties of anticancer drugs in regulated information sources in Europe to support evidence informed decision making by patients and their clinicians.
901432
New study highlights sociodemographic disparities in oral cancer screening rates
Oral cancer accounts for 2 percent of reported malignancies and 1.2 percent of cancer-related deaths in the United States. Oral cancer screening (OCS), recommended by the American Dental Association since 2010, can help to diagnose the cancer early, and this can significantly improve survival rates. If caught early, the five-year survival rate of oral cancer is 82.8 percent, but once the cancer metastasizes, that rate drops to 28 percent. Researchers at Brigham and Women's Hospital led a study to examine OCS rates among those who had been to the dentist within two years, looking at whether sociodemographic factors such as income or race predicted differences in these rates. The team found that a significantly higher proportion of minority and low-income individuals reported that they had not received an OCS exam despite a recent dental visit. The results of this study are published in The American Journal of Preventive Medicine. "We wanted to look specifically at the population that has access to dental care and report having access to a dentist," said first author Avni Gupta, a research scientist at the Brigham's Center for Surgery and Public Health. "Our results indicate that the selection of patients for screening isn't based on the high-risk profile for oral cancer, but on sociodemographic characteristics. This is not appropriate. All patients should be receiving oral cancer screenings, but providers aren't screening these groups, and this may be why they are presenting with more advanced cancer." The study looked at civilian, non-institutionalized individuals, aged 30 and over, who had visited a dentist in the last two years. In three data cycles, from 2011 to 2016, patients self-reported whether they had ever had an intra-oral or extra-oral exam, noninvasive procedures that can easily and safely be performed in an out-patient setting by a qualified health professional. The intra-oral exam, in which a health professional pulls on the tongue and feels around the mouth to detect any premalignant lesions, was the primary outcome of the study. The extra-oral exam, in which the health professional feels the patient's neck, was a secondary outcome. The patient survey described the intra-oral exam and extra-oral exam in detail so that a patient could easily identify the two types of screenings. The team found that only 37.6 percent of people who had seen a dental professional reported receiving an intra-oral cancer screening exam while only 31.3 percent reported receiving an extra-oral exam. Adjusting for different risk factors, the team found that OCS rates were much lower among racial minorities, lower-income groups, and those who were uninsured or publicly insured. These disparities were independent of the two major risk factors for oral cancer, smoking and alcohol consumption. The largest limitation of the study was the use of self-reported data that is subject to recall bias. However, this would only impact the study if these biases had greater influence on some sociodemographic groups than others. Gupta said that while previous studies have indicated disparities in access to dental care, she hopes that this study shows that providing access is not enough to get rid of the gaps in OCS rates between different sociodemographic groups. "Just providing access is not enough - it matters what type of care patients are able to access," Gupta said. "We talk a lot about disparities in medical care, but the quality of dental care services is important, too. 
We need to better understand the barriers that dental care providers face in order to ensure that patients get the same level and quality of care regardless of sociodemographic factors." ### Other authors of this paper include Stephen Sonis, Ravindra Uppaluri, Regan W. Bergmark and Alessandro Villa. No conflicts of interest were reported by the authors of this paper. No financial disclosures were reported by the authors of this paper. Paper cited: Gupta A et al. "Disparities in Oral Cancer Screening Among Dental Professionals: NHANES 2011-2016" The American Journal of Preventive Medicine DOI: 10.1016/j.amepre.2019.04.026
10.1016/j.amepre.2019.04.026
2019
American Journal of Preventive Medicine
Disparities in Oral Cancer Screening Among Dental Professionals: NHANES 2011–2016
As early detection of oral cancers is associated with better survival, oral cancer screening should be included in dental visits for adults. This study examines the rate and predictors of oral cancer screening exams among U.S. adults with a recent dental visit.Individuals aged ≥30 years who received a dental visit in the last 2 years, in the 2011-2016 National Health and Nutrition Examination Survey were analyzed in December 2018. Weighted multivariable logistic regression models examined the likelihood of intraoral and extraoral oral cancer screening exams, adjusting for age, sex, race/ethnicity, education, marital status, poverty income ratio, health insurance, tobacco smoking, and alcohol consumption. Subgroup analyses were conducted among races/ethnicities, smokers, and alcohol consumers. Statistical significance was set at p<0.01.A total of 37.6% and 31.3% reported receiving an intraoral and extraoral oral cancer screening exam, respectively. Minority racial/ethnic groups versus white, non-Hispanics, less-educated versus more-educated, uninsured and Medicaid-insured versus privately insured, and low-income versus high-income participants were less likely to have received intraoral or extraoral oral cancer screening exams. There was no difference in the likelihood of being screened based on smoking status. Alcohol consumers were more likely to be screened. Among subgroups, less-educated and low-income individuals were less likely to be screened.A significantly higher proportion of minority race/ethnicity and low SES individuals report not receiving an oral cancer screening exam, despite a recent dental visit. This selective screening by dental professionals is incompliant with guidelines and concerning because these groups are more likely to present with an advanced stage of oral cancer at diagnosis. An understanding of the reasons for discriminatory oral cancer screening practices could help develop effective interventions.
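The modelling step described in the abstract (weighted multivariable logistic regression) can be sketched as follows. This is not the authors' code: the variable names and synthetic data are hypothetical, and a faithful NHANES analysis would additionally account for strata and primary sampling units rather than sampling weights alone.

# Sketch: weighted logistic regression for receipt of an intra-oral screening exam.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(30, 80, n),
    "low_income": rng.integers(0, 2, n),    # stand-in for poverty income ratio
    "uninsured": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
    "weight": rng.uniform(0.5, 3.0, n),     # stand-in for NHANES sampling weights
})
# Synthetic outcome with lower screening odds for low-income and uninsured people.
logit = -0.3 - 0.8 * df["low_income"] - 0.6 * df["uninsured"] + 0.01 * (df["age"] - 50)
df["screened"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["age", "low_income", "uninsured", "smoker"]].astype(float))
fit = sm.GLM(df["screened"], X, family=sm.families.Binomial(),
             freq_weights=df["weight"]).fit()   # weights only; design ignored here
print(np.exp(fit.params))    # adjusted odds ratios for receiving a screening exam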
857893
DNA repeats -- the genome's dark matter
Expansions of DNA repeats are very hard to analyze. A method developed by researchers at the Max Planck Institute for Molecular Genetics in Berlin allows for a detailed look at these previously inaccessible regions of the genome. It combines nanopore sequencing, stem cell, and CRISPR-Cas technologies. The method could improve the diagnosis of various congenital diseases and cancers in the future. Large parts of the genome consist of monotonous regions where short sections of the genome repeat hundreds or thousands of times. But expansions of these "DNA repeats" in the wrong places can have dramatic consequences, like in patients with Fragile X syndrome, one of the most commonly identifiable hereditary causes of cognitive disability in humans. However, these repetitive regions are still regarded as an unknown territory that cannot be examined appropriately, even with modern methods. A research team led by Franz-Josef Müller at the Max Planck Institute for Molecular Genetics in Berlin and the University Hospital of Schleswig-Holstein in Kiel recently shed light on this inaccessible region of the genome. Müller's team was the first to successfully determine the length of genomic tandem repeats in patient-derived stem cell cultures. The researchers additionally obtained data on the epigenetic state of the repeats by scanning individual DNA molecules. The method, which is based on nanopore sequencing and CRISPR-Cas technologies, opens the door for research into repetitive genomic regions, and the rapid and accurate diagnosis of a range of diseases. A gene defect on the X chromosome In Fragile X syndrome, a repeat sequence has expanded in a gene called FMR1 on the X chromosome. "The cell recognizes the repetitive region and switches it off by attaching methyl groups to the DNA," says Müller. These small chemical changes have an epigenetic effect because they leave the underlying genetic information intact. "Unfortunately, the epigenetic marks spread over to the entire gene, which is then completely shut down," explains Müller. The gene is known to be essential for normal brain development. He states: "Without the FMR1 gene, we see severe delays in development leading to varying degrees of intellectual disability or autism." Female individuals are, in most cases, less affected by the disease, since the repeat region is usually located on only one of the two X chromosomes. Since the unchanged second copy of the gene is not epigenetically altered, it is able to compensate for the genetic defect. In contrast, males have only one X chromosome and one copy of the affected gene and display the full range of clinical symptoms. The syndrome is one of about 30 diseases that are caused by expanding short tandem repeats. First precise mapping of short tandem repeats In this study, Müller and his team investigated the genome of stem cells that were derived from patient tissue. They were able to determine the length of the repeat regions and their epigenetic signature, a feat that had not been possible with conventional sequencing methods. The researchers also discovered that the length of the repetitive region could vary to a large degree, even among the cells of a single patient. The researchers also tested their process with cells derived from patients that contained an expanded repeat in one of the two copies of the C9orf72 gene. This mutation leads to one of the most common monogenic causes of frontotemporal dementia and amyotrophic lateral sclerosis. 
"We were the first to map the entire epigenetics of extended and unchanged repeat regions in a single experiment," says Müller. Furthermore, the region of interest on the DNA molecule remained physically wholly unaltered. "We developed a unique method for the analysis of single molecules and for the darkest regions of our genome - that's what makes this so exciting for me." Tiny pores scan single molecules "Conventional methods are limited when it comes to highly repetitive DNA sequences. Not to mention the inability to simultaneously detect the epigenetic properties of repeats," says Björn Brändl, one of the first authors of the publication. That's why the scientists used Nanopore sequencing technology, which is capable of analyzing these regions. The DNA is fragmented, and each strand is threaded through one of a hundred tiny holes ("nanopores") on a silicon chip. At the same time, electrically charged particles flow through the pores and generate a current. When a DNA molecule moves through one of these pores, the current varies depending on the chemical properties of the DNA. These fluctuations of the electrical signal are enough for the computer to reconstruct the genetic sequence and the epigenetic chemical labels. This process takes place at each pore and, thus, each strand of DNA. Genome editing tools and bioinformatics illuminate "dark matter" Conventional sequencing methods analyze the entire genome of a patient. Now, the scientists designed a process to look at specific regions selectively. Brändl used the CRISPR-Cas system to cut DNA segments from the genome that contained the repeat region. These segments went through a few intermediate processing steps and were then funneled into the pores on the sequencing chip. "If we had not pre-sorted the molecules in this way, their signal would have been drowned in the noise of the rest of the genome," says bioinformatician Pay Giesselmann. He had to develop an algorithm specifically for the interpretation of the electrical signals generated by the repeats: "Most algorithms fail because they do not expect the regular patterns of repetitive sequences." While Giesselmann's program "STRique" does not determine the genetic sequence itself, it counts the number of sequence repetitions with high precision. The program is freely available on the internet. Numerous potential applications in research and the clinic "With the CRISPR-Cas system and our algorithms, we can scrutinize any section of the genome - especially those regions that are particularly difficult to examine using conventional methods," says Müller, who is heading the project. "We created the tools that enable every researcher to explore the dark matter of the genome," says Müller. He sees great potential for basic research. "There is evidence that the repeats grow during the development of the nervous system, and we would like to take a closer look at this." The physician also envisions numerous applications in clinical diagnostics. After all, repetitive regions are involved in the development of cancer, and the new method is relatively inexpensive and fast. Müller is determined to take the procedure to the next level: "We are very close to clinical application." ### Original publication Pay Giesselmann, Björn Brändl, Etienne Raimondeau, Rebecca Bowen, Christian Rohrandt, Rashmi Tandon, Helene Kretzmer, Günter Assum, Christina Galonska, Reiner Siebert, Ole Ammerpohl, Andrew Heron, Susanne A. Schneider, Julia Ladewig, Philipp Koch, Bernhard M. Schuldt, James E. 
Graham, Alexander Meissner, Franz-Josef Müller Analysis of short tandem repeat expansions and their methylation state with nanopore sequencing. Nature Biotechnology (2019)
10.1038/s41587-019-0293-x
2019
Nature Biotechnology
Analysis of short tandem repeat expansions and their methylation state with nanopore sequencing
Expansions of short tandem repeats are genetic variants that have been implicated in neuropsychiatric and other disorders but their assessment remains challenging with current molecular methods. Here, we developed a Cas12a-based enrichment strategy for nanopore sequencing that, combined with a new algorithm for raw signal analysis, enables us to efficiently target, sequence and precisely quantify repeat numbers as well as their DNA methylation status. Taking advantage of these single molecule nanopore signals provides therefore unprecedented opportunities to study pathological repeat expansions.
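To make the idea of per-read repeat counting concrete, here is a deliberately simplified, sequence-level analogue. STRique itself counts repeats from the raw nanopore current signal without base calling; the toy function below merely counts the longest run of a repeat unit (for example CGG, as in FMR1) in an already base-called read, and the example reads are made up.

# Sketch: toy repeat counter on base-called reads (not the raw-signal method).
import re

def longest_repeat_run(read: str, unit: str = "CGG") -> int:
    """Length (in repeat units) of the longest uninterrupted run of `unit`."""
    runs = re.findall(f"(?:{unit})+", read.upper())
    return max((len(r) // len(unit) for r in runs), default=0)

reads = [
    "TTAGC" + "CGG" * 35 + "ACGT",    # unexpanded allele
    "TTAGC" + "CGG" * 250 + "ACGT",   # expanded allele in the same sample
]
print([longest_repeat_run(r) for r in reads])   # per-read repeat counts: [35, 250]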
947729
Folding design leads to heart sensor with smaller profile
As advances in wearable devices push the amount of information they can provide consumers, sensors increasingly have to conform to the contours of the body. One approach applies the principles of kirigami to give sensors the added flexibility. Researchers want to leverage the centuries-old art of cutting paper into designs to develop a sensor sheet that can stretch and breathe with the skin while collecting electrocardiographic data. In Applied Physics Reviews, by AIP Publishing, researchers in Japan describe a sensor that uses cuts in a film made of polyethylene terephthalate (PET), printed with silver electrodes, to fit on a person's chest and monitor his or her heart. "In terms of wearability, by applying kirigami structure in a PET film, due to PET deformation and bending, the film can be stretchable, so that the film can follow skin and body movement like a bandage," said author Kuniharu Takei, from Osaka Prefecture University. "In addition, since kirigami structure has physical holes in a PET film, skin can be easily breathed through the holes." Unlike the related origami, which involves strictly paper folding, the art of kirigami extends its methods to paper cutting as well. Such a technique allows relatively stiff materials, like PET, to adapt to their surfaces. As companies push for less noticeable wearables, attention has turned to optimizing how electrical signals from the heart are picked out of background noise. Devices like the group's that ensure a snugger fit are an attractive solution. The team found the optimal size of the sensor is roughly 200 square millimeters with a distance of 1.5 centimeters between electrodes. At that size, they were able to detect enough signal from the heart to be used in a smartphone app. "The major challenge was how to realize the kirigami structure without using a precise alignment process between the silver electrodes and kirigami cutting," Takei said. Their device with the sensor could accurately and reliably relay heart data across multiple people doing many types of everyday movements, such as walking or working while seated in a chair. The group next aims to integrate more sensors to measure multiple types of data from the surface of the skin to help with early diagnosis of disease, including future medical trials. "We understand that the new mechanism or new material developments makes better impact to the field," Takei said. "However, without improving the stability, it cannot be used for the practical applications, even if the sensor performance is excellent."
10.1063/5.0082863
2022
Applied Physics Reviews
Wireless, minimized, stretchable, and breathable electrocardiogram sensor system
Home-use, wearable healthcare devices may enable patients to collect various types of medical data during daily activities. Electrocardiographic data are vitally important. To be practical, monitoring devices must be wearable, comfortable, and stable, even during exercise. This study develops a breathable, stretchable sensor sheet by employing a kirigami structure, and we examine the size dependence of electrocardiographic sensors. Because the kirigami film has many holes, sweat readily passes through the sensor from the skin to the environment. For comfort, in addition to breathability, electrocardiographic sensor size is minimized. The limitation of the size is studied in relation to the signal-to-noise ratio of electrocardiographic signals, even under exercise. We found that the optimal size of the sensor is ∼200 mm2 and the distance between electrodes is 1.5 cm. Finally, long-term wireless electrocardiographic monitoring is demonstrated using data transmission to a smart phone app during different activities.
923475
‘Double decoration’ enhances industrial catalyst
Adding lead and calcium to an industrial catalyst dramatically improves its ability to support propylene production at very high temperatures, making it stable and active for a month. Hokkaido University scientists have designed a catalyst for propylene production that is highly stable even at 600°C. They reported their design concept and findings in the journal Angewandte Chemie International Edition. Propylene is a highly desired raw material and building block for a large variety of products, including in textiles, plastics and electronics. Originally, it was produced as a byproduct of breaking down saturated hydrocarbons in a process called steam cracking. However, this process no longer provides the quantities needed by industry. More recently, the industry has been making propylene from shale gas. Shale gas contains a large amount of methane, and smaller amounts of ethane and propane. Propylene can be produced from propane by removing two hydrogen atoms from it through a process called propane dehydrogenation. This process requires very high temperatures, around 600°C. Platinum is widely used as a catalyst in propane dehydrogenation, as it is very good at breaking hydrogen atoms away from carbon. But it is rapidly deactivated by side reactions that occur at high temperatures. Associate Professor Shinya Furukawa led a team of scientists at Hokkaido University’s Institute for Catalysis to improve currently available platinum catalysts. Specifically, they worked with a platinum catalyst that is alloyed with gallium, one of several inactive metals that can help reduce the unwanted side reactions that deactivate the catalyst at high temperatures by separating the platinum atoms from each other. However, gallium’s separation of platinum atoms is not complete. Furukawa and his colleagues added lead atoms to platinum-gallium nanoparticles placed on a silicon oxide base. The lead atoms attached to the surface of the nanoparticles wherever three platinum atoms occurred together. This blocks the side reactions that occur at the sites of the aggregated platinum atoms, leaving single atoms to do the dehydrogenation work. The team further improved the catalyst by depositing calcium ions on its silicon oxide base. The calcium ions donate electrons to the platinum-gallium nanoparticles, improving their stability. “Our ‘doubly decorated’ platinum-gallium catalyst had a significantly superior stability, of one month at 600°C, compared to other reported propane dehydrogenation catalysts which are deactivated within several days,” says Furukawa. The researchers tested additives and bases other than calcium ions and silicon oxide respectively, but none had the superior catalytic ability and stability of the doubly decorated platinum gallium catalyst. “Our catalyst design concept paves the way for enhancing the catalytic performance of intermetallics in saturated hydrocarbon dehydrogenation,” says Furukawa.
10.1002/anie.202107210
2021
Angewandte Chemie International Edition
Doubly Decorated Platinum–Gallium Intermetallics as Stable Catalysts for Propane Dehydrogenation
Abstract Propane dehydrogenation (PDH) is a promising chemical process that can satisfy the increasing global demand for propylene. However, the Pt-based catalysts that have been reported thus far are typically deactivated at ≥600 °C by side reactions and coke formation. Thus, such catalysts possess an insufficient life. Herein, we report a novel catalyst design concept, namely, the double decoration of PtGa intermetallics by Pb and Ca, which synergize the geometric and electronic promotion effects on the catalyst stability, respectively. Pb is deposited on the three-fold Pt3 sites of the PtGa nanoparticles to block them, whereas Ca, which affords an electron-enriched single-atom-like Pt1 site, is placed around the nanoparticles. Thus, PtGa−Ca−Pb/SiO2 exhibits an outstandingly high catalytic stability, even at 600 °C (kd = 0.00033 h−1, τ = 3067 h), and almost no deactivation of the catalyst was observed for up to 1 month for the first time.
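For orientation, the reported deactivation constant and lifetime are consistent with the first-order deactivation convention commonly used in the propane dehydrogenation literature; the paper's exact definition is not reproduced here, so treat the following as an assumption rather than the authors' formula:

k_\mathrm{d} = \frac{\ln\!\left[(1 - X_\mathrm{end})/X_\mathrm{end}\right] - \ln\!\left[(1 - X_\mathrm{start})/X_\mathrm{start}\right]}{t}, \qquad \tau = \frac{1}{k_\mathrm{d}}

With k_d = 0.00033 h^{-1}, this gives \tau = 1/k_d \approx 3.0 \times 10^{3} h, roughly four months, which is in line with the quoted τ = 3067 h and with the month-long stability test at 600 °C.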
608721
Detailed insight into stressed cells
The team led by biochemist Dr. Christian Münch, who heads an Emmy Noether Group, employs a simple but extremely effective trick: when measuring all proteins in the mass spectrometer, a booster channel is added to specifically enhance the signal of newly synthesised proteins to enable their measurement. Thus, acute changes in protein synthesis can now be tracked by state-of-the-art quantitative mass spectrometry. The idea emerged because the team wanted to understand how specific stress signals influence protein synthesis. "Since the amount of newly produced proteins within a brief time interval is rather small, the challenge was to record minute changes of very small percentages for each individual protein," comments group leader Münch. The newly developed analysis method now provides his team with detailed insight into the molecular events that ensure survival of stressed cells. The cellular response to stress plays an important role in the pathogenesis of many human diseases, including cancer and neurodegenerative disorders. An understanding of the underlying molecular processes opens the door for the development of new therapeutic strategies. "The method we developed enables highly precise time-resolved measurements. We can now analyse acute cellular stress responses, i.e., those taking place within minutes. In addition, our method requires little material and is extremely cost-efficient," Münch explains. "This helps us to quantify thousands of proteins simultaneously in defined time spans after a specific stress treatment." Due to the small amount of material required, measurements can also be carried out in patient tissue samples, facilitating collaborations with clinicians. At a conference on Proteostasis (EMBO) in Portugal, PhD student Kevin Klann was recently awarded a FEBS Journal poster prize for his presentation of the first data produced using the new method. The young molecular biologist demonstrated for the first time that two of the most important cellular signaling pathways, which are triggered by completely different stress stimuli, ultimately result in the same effects on protein synthesis. This discovery is a breakthrough in the field. The project is funded by the European Research Council (ERC) as part of Starting Grant "MitoUPR", which was awarded to Münch for studying quality control mechanisms for mitochondrial proteins. In addition, Christian Münch has received funding within the German Research Foundation's (DFG, Deutsche Forschungsgemeinschaft) Emmy Noether Programme and is a member of the Johanna Quandt Young Academy at Goethe. Since December 2016, he has built up a group on "Protein Quality Control" at the Institute for Biochemistry II at Goethe University's Medical Faculty, following his stay in one of the leading proteomic laboratories at Harvard University.
10.1016/j.molcel.2019.11.010
2019
Molecular Cell
Functional Translatome Proteomics Reveal Converging and Dose-Dependent Regulation by mTORC1 and eIF2α
Regulation of translation is essential during stress. However, the precise sets of proteins regulated by the key translational stress responses-the integrated stress response (ISR) and mTORC1-remain elusive. We developed multiplexed enhanced protein dynamics (mePROD) proteomics, adding signal amplification to dynamic-SILAC and multiplexing, to enable measuring acute changes in protein synthesis. Treating cells with ISR/mTORC1-modulating stressors, we showed extensive translatome modulation with ∼20% of proteins synthesized at highly reduced rates. Comparing translation-deficient sub-proteomes revealed an extensive overlap demonstrating that target specificity is achieved on protein level and not by pathway activation. Titrating cap-dependent translation inhibition confirmed that synthesis of individual proteins is controlled by intrinsic properties responding to global translation attenuation. This study reports a highly sensitive method to measure relative translation at the nascent chain level and provides insight into how the ISR and mTORC1, two key cellular pathways, regulate the translatome to guide cellular survival upon stress.
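As a toy illustration of the downstream quantification logic only (not the published mePROD pipeline, whose booster-channel and total-signal normalisations are omitted here), relative synthesis per protein can be expressed as the ratio of heavy (nascent-chain) reporter intensities between a stress channel and a control channel; the table below is invented.

# Sketch: relative translation per protein from nascent-chain reporter intensities.
# Column names and values are hypothetical; real pipelines include booster-based
# normalisation and many more channels and proteins.
import pandas as pd

df = pd.DataFrame(
    {"protein": ["P1", "P2", "P3"],
     "heavy_control": [1200.0, 800.0, 400.0],
     "heavy_stress": [300.0, 790.0, 390.0]},
).set_index("protein")

rel_translation = df["heavy_stress"] / df["heavy_control"]
print(rel_translation.round(2))    # values below 1 indicate reduced synthesis upon stress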
944247
Number of covid-19 infections missed by lateral flow tests “substantial enough to be of clinical importance,” warn experts
The proportion of people with current covid-19 infection missed by the Innova lateral flow test (LFT) is substantial enough to be of clinical importance, particularly when testing people without symptoms, warn experts in The BMJ today. An analysis by Professor Jonathan Deeks and colleagues predicts that Innova would miss 20% of viral culture positive cases attending an NHS Test-and-Trace centre, 29% without symptoms attending mass testing, and 81% attending university screen testing without symptoms - many more than predicted by mathematical models on which policy decisions are based. The authors acknowledge that LFTs are an important tool in controlling the covid-19 pandemic, but say claims that LFTs can identify “the vast majority who are infectious” have been overstated, with risk of false reassurance to those seeking to rule out infection. Lateral flow tests (LFTs) for SARS-CoV-2 (the virus responsible for covid-19) have been recommended for widespread use, largely based on predictions made by mathematical models. While empirical data show LFTs give a positive result when virus is present on a swab in high quantities, and therefore can detect people who are likely to be infectious, the proportion missed who are infectious has not been evaluated. To address this evidence gap, Deeks and colleagues drew on empirical data from several sources to predict the proportion of Innova LFTs that produce negative results in those with a high risk of SARS-CoV-2 infectiousness. They then compared these with predictions made by influential mathematical models. Their focus was to identify the joint probability that people are likely to be infectious (in that they have a viral culture positive result or are a secondary case) and that they test negative on Innova. Their results are based on testing in three settings: symptomatic testing at an NHS Test-and-Trace centre, mass testing in Liverpool in residents without symptoms, and in students at the University of Birmingham. The analysis predicted that of those with a viral culture positive result, Innova would miss 20% attending an NHS Test-and-Trace centre, 29% without symptoms attending municipal mass testing, and 81% attending university screen testing without symptoms, along with 38%, 47%, and 90% of sources of secondary cases. In comparison, two mathematical models underestimated the numbers of missed infectious individuals (8%, 10%, and 32% in the three settings for one model, whereas the assumptions from the second model made it impossible to miss an infectious individual). The authors stress that evaluating the accuracy of a test for current infection or infectiousness is challenging owing to the lack of a reference standard, and say there is the potential for error in their estimates. “The findings in this analysis therefore must be taken as illustrative and not exact,” they say. However, they point out that these data “are currently the best available and clearly show that missing people with current infection or who are infectious is possible in all settings.” “Allowing for the uncertainties in the results from our analyses, the proportion of people with current infection missed by the Innova LFT is likely to be of public health importance, particularly in settings with greater proportions of infectious people with lower viral loads, where the tests are often being applied,” they write.  
They argue that key models have failed to appropriately use empirical evidence to inform assumptions of test accuracy and chances of infectiousness, resulting in unrealistic overestimates of test performance, and say that until new-generation LFTs that meet the regulatory performance requirements are available, negative LFT results cannot be relied on to exclude current infection. “Policy makers need to ensure that the public are aware of the risk of being infectious despite testing negative, and that tests are not used in situations where the consequences of false negative results are considerable,” they conclude. When rapid antigen tests were introduced, we were promised they would “identify those who are likely to spread the disease, and when used systematically in mass testing could reduce transmissions by 90%.” Yet despite the UK spending more than £7bn on lateral flow devices since mid-2020, the lack of hard evidence on this promised impact is striking, argue public health experts in a linked opinion article. They point out that observational studies attempting to assess the impact on transmission of testing asymptomatic non-contacts have struggled to show an effect, and none seem to have examined the costs of the programmes. Meanwhile, the World Health Organization cautions against mass asymptomatic testing because of high costs, lack of evidence on the impact, and the risk of diverting resources from more important activities. “Surely it is time to start afresh,” they say. “Publication of this new paper should prompt the Medicines and Healthcare Products Regulatory Agency to reassess its authorisations of rapid antigen tests in asymptomatic people. The public deserves to have better evaluations, ensuring good test performance in real life settings, and a policy that specifies effective and efficient test use for carefully targeted purposes,” they conclude.
10.1136/bmj-2021-066871
2022
BMJ
SARS-CoV-2 antigen lateral flow tests for detecting infectious people: linked data analysis
Objectives: To investigate the proportion of lateral flow tests (LFTs) that produce negative results in those with a high risk of infectiousness from SARS-CoV-2, to investigate the impact of the stage and severity of disease, and to compare predictions made by influential mathematical models with findings of empirical studies. Design: Linked data analysis combining empirical evidence of the accuracy of the Innova LFT, the probability of positive viral culture or transmission to secondary cases, and the distribution of viral loads of SARS-CoV-2 in individuals in different settings. Setting: Testing of individuals with symptoms attending NHS Test-and-Trace centres across the UK, residents without symptoms attending municipal mass testing centres in Liverpool, and students without symptoms screened at the University of Birmingham. Participants: Evidence for the sensitivity of the Innova LFT, based on 70 individuals with SARS-CoV-2 and LFT results. Infectiousness was based on viral culture rates on 246 samples (176 people with SARS-CoV-2) and secondary cases among 2 474 066 contacts; distributions of cycle threshold (Ct) values from 231 497 index individuals attending NHS Test-and-Trace centres; 70 people with SARS-CoV-2 detected in Liverpool and 62 people with SARS-CoV-2 in Birmingham (54 imputed). Main outcome measures: The predicted proportions who were missed by LFT and viral culture positive and missed by LFT and sources of secondary cases, in each of the three settings. Predictions were compared with those made by mathematical models. Results: The analysis predicted that of those with a viral culture positive result, Innova would miss 20% attending an NHS Test-and-Trace centre, 29% without symptoms attending municipal mass testing, and 81% attending university screen testing without symptoms, along with 38%, 47%, and 90% of sources of secondary cases. In comparison, two mathematical models underestimated the numbers of missed infectious individuals (8%, 10%, and 32% in the three settings for one model, whereas the assumptions from the second model made it impossible to miss an infectious individual). Owing to the paucity of usable data, the inputs to the analyses are from limited sources. Conclusions: The proportion of infectious people with SARS-CoV-2 missed by LFTs is substantial enough to be of clinical importance. The proportion missed varied between settings because of different viral load distributions and is likely to be highest in those without symptoms. Key models have substantially overestimated the sensitivity of LFTs compared with empirical data. An urgent need exists for additional robust well designed and reported empirical studies from intended use settings to inform evidence based policy.
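The linked analysis described above boils down to a weighted average: the chance that an infectious person is missed equals the test's false-negative rate in each viral-load band, weighted by how common that band is among infectious people in a given setting. The short Python sketch below illustrates that calculation; the Ct bands, sensitivities, and distributions are invented placeholders, not the study's data.

# Minimal sketch of a "linked" calculation in the spirit of the analysis above:
# combine (1) an assumed LFT sensitivity that falls as viral load falls (higher Ct
# means less virus) with (2) a setting-specific distribution of Ct values among
# people judged infectious. All numbers are illustrative placeholders.

sensitivity_by_ct_band = {          # assumed probability the LFT is positive, by Ct band
    "Ct<20": 0.95,
    "Ct20-25": 0.85,
    "Ct25-30": 0.55,
    "Ct>30": 0.20,
}

ct_distribution = {                 # hypothetical share of infectious people in each band
    "symptomatic_test_centre": {"Ct<20": 0.40, "Ct20-25": 0.30, "Ct25-30": 0.20, "Ct>30": 0.10},
    "asymptomatic_screening":  {"Ct<20": 0.10, "Ct20-25": 0.20, "Ct25-30": 0.35, "Ct>30": 0.35},
}

def predicted_missed_fraction(setting: str) -> float:
    """P(infectious person tests negative) = sum over bands of P(band) * (1 - sensitivity)."""
    dist = ct_distribution[setting]
    return sum(p * (1.0 - sensitivity_by_ct_band[band]) for band, p in dist.items())

if __name__ == "__main__":
    for setting in ct_distribution:
        print(f"{setting}: predicted missed fraction = {predicted_missed_fraction(setting):.0%}")

With these made-up inputs the asymptomatic setting misses roughly twice as many infectious people as the symptomatic one, which is the qualitative pattern the study reports: the same test performs worse wherever low viral loads are more common.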
838691
Evidence of the interconnectedness of global climate
To see how deeply interconnected the planet truly is, look no further than the massive ice sheets in the Northern Hemisphere and at the South Pole. Thousands of kilometers apart, they are hardly next-door neighbors, but according to new research from a team of international scientists -- led by alumna Natalya Gomez Ph.D.'14, and including Harvard professor Jerry X. Mitrovica -- what happens in one region has a surprisingly direct and outsized effect on the other, in terms of ice expanding or melting. The analysis, published in Nature, shows for the first time that changes in the Antarctic ice sheet were caused by the melting of ice sheets in the Northern Hemisphere. The influence was driven by sea-level changes caused by the melting ice in the north during the past 40,000 years. Understanding how this works can help climate scientists grasp future changes and instability as global warming increases the melting of major ice sheets and ice caps, researchers said. The study models how this seesaw effect works. The researchers found that when ice in the Northern Hemisphere stayed frozen during the last peak of the Ice Age, about 20,000 to 26,000 years ago, it led to reduced sea levels in Antarctica and a growth of the ice sheet there. When the climate warmed after that peak, the ice sheets in the north started melting, causing sea levels in the Southern Hemisphere to rise. This rising ocean triggered the ice in Antarctica to retreat rapidly, over thousands of years, to about the size it is today. The question of what caused the Antarctic ice sheet to melt so rapidly during this warming period has been a long-standing enigma. "That's the really exciting part of this," said Mitrovica, the Frank Baird Jr. Professor of Science in the Department of Earth and Planetary Sciences. "What was driving these dramatic events in which the Antarctic released huge amounts of ice mass? This research shows that the events weren't ultimately driven by anything local. They were driven by sea level rising locally but in response to the melting of ice sheets very far away. The study establishes an underappreciated connection between the stability of the Antarctic ice sheet and significant periods of melting in the Northern Hemisphere." The retreat was consistent with the pattern of sea level change predicted by Gomez, now an assistant professor of earth and planetary sciences at McGill University, and colleagues in earlier work on the Antarctic continent. The next step is expanding the study to see where else ice retreat in one location drives retreat in another. That can provide insight into ice sheet stability at other times in history, and perhaps the future. "Looking to the past can really help us to understand how ice sheets and sea levels work," Gomez said. "It gives us a better appreciation of how the whole Earth system works." Along with Gomez and Mitrovica, the team of scientists on the project included researchers from Oregon State University and the University of Bonn in Germany. They combined ice-sheet and sea-level modeling with sediment core samples from the ocean bottom near Antarctica to verify their findings. The rocks they focused on, called ice-rafted debris, were once embedded inside the Antarctic ice sheet. Fallen icebergs carried them into the Southern Ocean. In their analysis, the researchers tried to determine when and where the debris was released from the ice sheet. They also looked at markers of past shorelines to see how the ice sheet's edge has retreated.
Gomez has been researching ice sheets since she was a GSAS student in the Mitrovica Group. She led a study in 2010 that showed that gravitational effects of ice sheets are so strong that when ice sheets melt, the expected sea level rise from all that meltwater entering the oceans would be counterbalanced in nearby areas. Gomez showed that if all of the ice in the West Antarctic ice sheet melted, it could actually lower sea level near the ice by as much as 300 feet, but that sea level would rise significantly more than expected in the Northern Hemisphere. This paper furthered that study by asking how melting ice sheets in one part of the climate system affected another. In this case, the researchers looked at the ice sheets in the Northern Hemisphere that once covered North America and Northern Europe. By putting together modeling data on sea-level rise and ice-sheet melting with the debris left over from icebergs that broke off Antarctica during the Ice Age, the researchers simulated how sea-level and ice dynamics changed in both hemispheres over the past 40,000 years. The researchers were able to explain several periods of instability during the past 20,000 years when the Antarctic ice sheet went through phases of rapid melting known as "meltwater pulses." In fact, according to their model, if not for these periods of rapid retreat, the Antarctic ice sheet, which covers almost 14 million square kilometers and weighs about 26 million gigatons, would be even more of a behemoth than it is now. With the geological records, which were collected primarily by Michael Webster from the University of Bonn, the researchers confirmed the timeline predicted by their model and saw that this sea-level change in Antarctica and the mass shedding corresponded with episodes of melting of ice sheets in the Northern Hemisphere. The data caught Gomez by surprise. More than anything, though, it deepened her curiosity about these frozen systems. "These ice sheets are really dynamic, exciting, and intriguing parts of the Earth's climate system. It's staggering to think of ice that is several kilometers thick, that covers an entire continent, and that is evolving on all of these different timescales with global consequences," Gomez said. "It's just motivation for trying to better understand these really massive systems that are so far away from us."
10.1038/s41586-020-2916-2
2020
Nature
Antarctic ice dynamics amplified by Northern Hemisphere sea-level forcing
A longstanding hypothesis for near-synchronous evolution of global ice sheets over ice-age cycles invokes an interhemispheric sea-level forcing whereby sea-level rise due to ice loss in the Northern Hemisphere in response to insolation and greenhouse gas forcing causes grounding-line retreat of marine-based sectors of the Antarctic Ice Sheet (AIS). Recent studies have shown that the AIS experienced substantial millennial-scale variability during and after the last deglaciation, with several times of recorded increased iceberg flux and grounding line retreat coinciding, within uncertainty, with well documented global sea-level rise events, providing further evidence of this sea-level forcing. However, the sea level changes associated with ice sheet mass loss are strongly nonuniform due to gravitational, deformational and Earth rotational effects, suggesting that the response of the AIS to Northern Hemisphere sea-level forcing is more complicated than previously modelled. We adopt an ice-sheet model coupled to a global sea-level model to show that a large or rapid Northern Hemisphere sea-level forcing enhances grounding-line advance and associated mass gain of the AIS during glaciation, and grounding-line retreat and AIS mass loss during deglaciation. Relative to models without these interactions, including the Northern Hemisphere sea-level forcing leads to a larger AIS volume during the Last Glacial Maximum (about 26,000 to 20,000 years ago), subsequent earlier grounding-line retreat and millennial-scale AIS variability throughout the last deglaciation. These findings are consistent with geologic reconstructions of the extent of the AIS during the Last Glacial Maximum and subsequent ice-sheet retreat, and with relative sea-level change in Antarctica.
525738
13.4% of studies in top nutrition journals in 2018 had food industry ties
A new analysis of studies published by top nutrition journals in 2018 shows that 13.4 percent disclosed involvement from the food industry, and studies with industry involvement were more likely to report results favorable to industry interests. Gary Sacks of Deakin University in Melbourne, Australia, and colleagues present these findings in the open-access journal PLOS ONE on December 16. Food companies might choose to become involved in nutrition research to help generate new knowledge. For instance, they might provide funding for academic research or assign employees to research teams. However, growing evidence suggests that food industry involvement could potentially bias nutrition research towards food industry interests, perhaps at the expense of public health. To better understand the extent and potential impact of food industry involvement in research, Sacks and colleagues assessed all peer-reviewed papers published in 2018 in the top 10 most-cited academic journals related to nutrition and diet. They evaluated which papers had food industry ties, such as funding from food companies or authors affiliated with food companies, and noted whether results supported industry interests. The analysis found that 13.4 percent of all analyzed articles reported involvement from the food industry (196/1,461), with some journals having a greater proportion of involvement than others. Compared to a random sample of studies without food industry involvement (n = 196), studies with industry involvement were over five times more likely to report results that favored food industry interests; 55.6 percent compared to 9.7 percent. These findings add to mounting evidence that industry involvement could bias research agendas or findings towards industry interests, while potentially neglecting topics that are more important to public health. The authors of this study suggest several mechanisms that could be explored to prevent the food industry from compromising the integrity of nutrition research. The authors add: "This study found that the food industry is commonly involved in published research from leading nutrition journals. Where the food industry is involved, research findings are nearly six times more likely to be favourable to their interests than when there is no food industry involvement."
10.1371/journal.pone.0243144
2020
PLoS ONE
The characteristics and extent of food industry involvement in peer-reviewed research articles from 10 leading nutrition-related journals in 2018
Introduction: There is emerging evidence that food industry involvement in nutrition research may bias research findings and/or research agendas. However, the extent of food industry involvement in nutrition research has not been systematically explored. This study aimed to identify the extent of food industry involvement in peer-reviewed articles from a sample of leading nutrition-related journals, and to examine the extent to which findings from research involving the food industry support industry interests. Methods: All original research articles published in 2018 in the top 10 most-cited nutrition- and dietetics-related journals were analysed. We evaluated the proportion of articles that disclosed involvement from the food industry, including through author affiliations, funding sources, declarations of interest or other acknowledgments. Principal research findings from articles with food industry involvement, and a random sample of articles without food industry involvement, were categorised according to the extent to which they supported relevant food industry interests. Results: 196/1,461 (13.4%) articles reported food industry involvement. The extent of food industry involvement varied by journal, with The Journal of Nutrition (28.3%) having the highest and Paediatric Obesity (3.8%) having the lowest proportion of industry involvement. Processed food manufacturers were involved in the most articles (77/196, 39.3%). Of articles with food industry involvement, 55.6% reported findings favourable to relevant food industry interests, compared to 9.7% of articles without food industry involvement. Conclusion: Food industry involvement in peer-reviewed research in leading nutrition-related journals is commonplace. In line with previous literature, this study has shown that a greater proportion of peer-reviewed studies involving the food industry have results that favour relevant food industry interests than peer-reviewed studies without food industry involvement. Given the potential competing interests of the food industry, it is important to explore mechanisms that can safeguard the integrity and public relevance of nutrition research.
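The headline figures above follow from simple proportions. The short sketch below reproduces that arithmetic in Python: the counts are taken from the numbers reported in the abstract, and the "times more likely" value is just the unadjusted ratio of the two favourable-result proportions, not a modelled estimate.

# Reproduce the headline arithmetic reported above. Counts come from the abstract;
# the "times more likely" figure is the simple ratio of proportions (risk ratio),
# without any statistical adjustment.

articles_total = 1461
articles_with_industry = 196

share_with_industry = articles_with_industry / articles_total
print(f"Share with food industry involvement: {share_with_industry:.1%}")      # ~13.4%

favourable_with_industry = 0.556      # 55.6% of industry-involved articles
favourable_without_industry = 0.097   # 9.7% of the comparison sample

risk_ratio = favourable_with_industry / favourable_without_industry
print(f"Relative likelihood of industry-favourable findings: {risk_ratio:.1f}x")  # ~5.7x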
924355
How landscapes of fear affect the songbirds in our backyards
AMHERST, Mass. – A team of researchers headquartered at the University of Massachusetts Amherst has recently discovered that fear plays an important, unrecognized role in the underdevelopment, and increased vulnerability, of backyard songbirds. Scientists have long known that urban songbirds face a host of increased challenges, from habitat loss to altered food sources and a larger population of predators, such as skunks, rats, squirrels and, especially, house cats, compared to their rural cousins. In particular, urban nestlings weigh significantly less than those born in the country and, as a result, have a decreased chance of surviving to adulthood. New research, published in the journal Ecosphere, helps to tease out exactly why. Part of the difficulty in figuring out why urban nestlings struggle is due to what biologists call the “predation paradox”: though there are increased numbers of predators in urban areas, there is actually a lower per-capita rate of predation. “The key,” says Aaron Grade, the paper’s lead author who completed this research as a graduate student in UMass Amherst’s program in organismic and evolutionary biology, “has been hiding in plain sight. We haven’t been paying enough attention to fear itself.” To arrive at this conclusion, Grade, along with his co-authors Susannah B. Lerman, research ecologist at the USDA Forest Service Northern Research Station, and Paige S. Warren, professor in the department of environmental conservation at UMass Amherst, built 38 nest boxes for house wrens and placed them in participants’ backyards. The participants lived in a variety of landscapes, all in the Connecticut River Valley of Massachusetts, from the urban (Springfield, with a population density of 4,775 people per square mile), to low-density suburban (Amherst, 1,445 people per square mile), to the rural (Whately, 72 people per square mile). Grade and his colleagues then played the cries of screech owls and Cooper’s hawks, both of which feed upon house wrens in Massachusetts, from speakers installed in each participant’s yard. “The participants were wonderful,” says Grade. “They put up with this noise in their backyards and were very invested in the experiment.” The nestlings in each box were then weighed every three days until they left the nest. The authors discovered that, due to a variety of ‘urban effects,’ including availability of food, habitat loss and predation, urban nestlings all weighed about 10% less than the rural nestlings—an expected finding that is consistent with previous studies showing the effects of urban development on wildlife. But the authors also discovered that all the nestlings, both rural and urban, subjected to the owl and hawk cries saw a 10% decrease in weight as well. “This is a largely unexplored component of human/wildlife interaction,” says Grade. “Birds are very in-tune with what’s going on, and if they see, or in this case hear, a predator, they’ll change their behavior.” For instance, the parent birds might spend less time finding food for their nestlings to avoid predation. “These landscapes of fear,” says Grade, “can have a greater effect on behavior and survival than the actual predator itself.” In general, hobbyist birders should avoid using recordings of predators because they can cause unintended responses and undue stress in birds, as Grade’s research shows.
These experiments were carried out with approval by the UMass Institutional Animal Care and Use Committee and followed best practices for playback experiments to reduce any potential harm. This research was funded by the University of Massachusetts Amherst Graduate School; the Manomet Center for Conservation Sciences; the Blodget Fund for Ornithological Studies; the Animal Behavior Society; the American Ornithological Society; and the National Science Foundation.
10.1002/ecs2.3665
2021
Ecosphere
Perilous choices: landscapes of fear for adult birds reduces nestling condition across an urban gradient
Predator fear effects influence reproductive outcomes in many species. In non-urban systems, passerines often respond to predator cues by reducing parental investment, resulting in smaller and lighter nestlings. Since trophic interactions in urban areas are highly altered, it is unclear how passerines respond to fear effects in human-altered landscapes. Nestlings of passerines in urban areas also tend to be smaller and lighter than their rural counterparts and are often exposed to high densities of potential predators yet experience lower per capita predation—the predation paradox. We suggest fear effects in urban habitats could be a significant mechanism influencing nestling condition in birds, despite lowered predation rates. We manipulated exposure of nesting birds to adult-consuming predator risk in residential yards across a gradient of urbanization to determine the relative influence of urbanization and fear on nestling condition. We found nestlings had reduced mass in nests exposed to predator playbacks as well as in more urban areas. Despite lower per capita predation rates in urban areas, fear effects from increased predator densities may influence passerine fitness through reduced nestling condition. As urban development expands, biodiversity conservation hinges on a deeper mechanistic understanding of how urbanization affects reproductive outcomes.
785301
Familial hypercholesterolemia in children and adolescents: diagnosis and treatment
Familial hypercholesterolemia is a hereditary genetic disorder predisposing to premature atherosclerosis and cardiovascular complications. Early diagnosis as well as effective treatment strategies in affected children are challenges among experts. Universal screening and cascade screening among families with familial hypercholesterolemia are being controversially discussed. Treatment approaches for familial hypercholesterolemia in the pediatric population are multidisciplinary and aim to reduce total cardiovascular risk. Diagnosis of familial hypercholesterolemia in children and adolescents is usually based on clinical phenotype, namely LDL-C levels and family history of premature cardiovascular disease and/or elevated LDL-C. The most widely recommended and effective pharmacotherapy in the pediatric age group is currently statins. Ezetimibe and bile acid sequestrants are usually used as second-line agents. Further evidence is expected in the near future from cohort and registry studies. New therapeutic approaches, such as mipomersen and PCSK9 inhibitors, seem promising. The main gap of evidence remains the lack of longitudinal follow-up studies investigating cardiovascular outcomes, side effects, and effectiveness of treatment starting from childhood.
10.2174/1381612824666181010145807
2018
Current Pharmaceutical Design
Familial Hypercholesterolemia in Children and Adolescents: Diagnosis and Treatment
Familial hypercholesterolemia is a hereditary genetic disorder predisposing to premature atherosclerosis and cardiovascular complications. Early diagnosis as well as effective treatment strategies in affected children are challenges among experts. Universal screening and cascade screening among families with familial hypercholesterolemia are being controversially discussed. Diagnosis of familial hypercholesterolemia in children and adolescents is usually based on clinical phenotype, namely LDL-C levels and family history of premature cardiovascular disease and/or elevated LDL-C. Treatment approaches for familial hypercholesterolemia in the pediatric population are multidisciplinary and aim to reduce total cardiovascular risk. The most widely recommended and effective pharmacotherapy in the pediatric age group is currently statins. Ezetimibe and bile acid sequestrants are usually used as second-line agents. New therapeutic approaches, such as mipomersen and PCSK9 inhibitors, seem promising. The main gap of evidence remains the lack of longitudinal follow-up studies investigating cardiovascular outcomes, side effects, and effectiveness of treatment starting from childhood. Further evidence is expected in the near future from cohort and registry studies.
873297
Ben-Gurion U. scientists invent an artificial nose for continuous bacterial monitoring
BEER-SHEVA, Israel, June 21, 2021 - A team of scientists at Ben-Gurion University of the Negev (BGU) has invented an artificial nose that is capable of continuous bacterial monitoring, which has never been previously achieved and could be useful in multiple medical, environmental and food applications. The study was published in Nano-Micro Letters. "We invented an artificial nose based on unique carbon nanoparticles ("carbon dots") capable of sensing gas molecules and detecting bacteria through the volatile metabolites they emit into the air," says lead researcher Prof. Raz Jelinek, BGU vice president for Research & Development, member of the BGU Department of Chemistry and the Ilse Katz Institute for Nanoscale Science and Technology, and the incumbent of the Carole and Barry Kaye Chair in Applied Science. The patent-pending technology has many applications including identifying bacteria in healthcare facilities and buildings; speeding lab testing and breath-based diagnostic testing; identifying "good" vs. pathogenic bacteria in the microbiome; detecting food spoilage and identifying poisonous gases. "BGU has a remarkable track record of sensor development, which has infinite possibilities for real-life application," says Americans for Ben-Gurion University (A4BGU) Chief Executive Officer Doug Seserman. "Our renowned multi-disciplinary research efforts continue to ignite innovation, addressing some of the world's most pressing issues." The artificial nose uses chemical reactions and electrodes to sense and distinguish vapor molecules, recording the changes in capacitance on interdigitated electrodes (IDEs) coated with carbon dots (C-dots). The resulting C-dot-IDE platform constitutes a versatile and powerful vehicle for gas sensing in general, and bacterial monitoring in particular. Machine learning can be applied to train the sensor to identify different gas molecules, individually or in mixtures, with high accuracy. About the BGU research team: other BGU researchers on the team included Nitzan Shauloff, Dr. Ahiud Morag, Dr. Seema Singh, and Ravit Malishev of the BGU Department of Chemistry and Prof. Lior Rokach, chair of the BGU Department of Software and Information Systems Engineering.
10.1007/s40820-021-00610-w
2021
Nano-Micro Letters
Sniffing Bacteria with a Carbon-Dot Artificial Nose
Novel artificial nose based upon electrode-deposited carbon dots (C-dots). Significant selectivity and sensitivity determined by "polarity matching" between the C-dots and gas molecules. The C-dot artificial nose facilitates, for the first time, real-time, continuous monitoring of bacterial proliferation and discrimination among bacterial species, both between Gram-positive and Gram-negative bacteria and between specific strains. Machine learning algorithm furnishes excellent predictability both in the case of individual gases and for complex gas mixtures. Continuous, real-time monitoring and identification of bacteria through detection of microbially emitted volatile molecules are highly sought albeit elusive goals. We introduce an artificial nose for sensing and distinguishing vapor molecules, based upon recording the capacitance of interdigitated electrodes (IDEs) coated with carbon dots (C-dots) exhibiting different polarities. Exposure of the C-dot-IDEs to volatile molecules induced rapid capacitance changes that were intimately dependent upon the polarities of both gas molecules and the electrode-deposited C-dots. We deciphered the mechanism of capacitance transformations, specifically substitution of electrode-adsorbed water by gas molecules, with concomitant changes in capacitance related to both the polarity and dielectric constants of the vapor molecules tested. The C-dot-IDE gas sensor exhibited excellent selectivity, aided by application of machine learning algorithms. The capacitive C-dot-IDE sensor was employed to continuously monitor microbial proliferation, discriminating among bacteria through detection of distinctive "volatile compound fingerprint" for each bacterial species. The C-dot-IDE platform is robust, reusable, readily assembled from inexpensive building blocks and constitutes a versatile and powerful vehicle for gas sensing in general, bacterial monitoring in particular.
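The final step described above, training the sensor array to recognize specific volatiles or bacterial species from its capacitance responses, is a standard supervised-classification problem. Below is a minimal, generic sketch of such a pipeline in Python with scikit-learn; the sensor channels, class labels, and synthetic data are placeholders for illustration and do not reproduce the study's algorithm or measurements.

# Minimal sketch of the kind of machine-learning step described above: train a
# classifier to recognise volatile "fingerprints" from a small array of capacitive
# sensors. The data are synthetic stand-ins; the real study used measured capacitance
# responses from electrodes coated with C-dots of different polarity.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
classes = ["E_coli", "B_subtilis", "S_aureus"]   # hypothetical bacterial species labels

# Simulate 60 samples per class: 4 sensor channels, each class with its own mean response.
X, y = [], []
for i, label in enumerate(classes):
    mean = np.array([1.0, 0.5, -0.3, 0.2]) * (i + 1)     # arbitrary per-class response pattern
    X.append(mean + 0.2 * rng.standard_normal((60, 4)))   # add measurement noise
    y += [label] * 60
X = np.vstack(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

In practice the feature vector would be built from the measured capacitance changes of several differently functionalized C-dot electrodes, and the classifier choice is secondary to having channels whose responses differ between target volatiles.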
969240
River longer than the Thames beneath Antarctic ice sheet could affect ice loss
An unexpected river under the Antarctic ice sheet affects the flow and melting of ice, potentially accelerating ice loss as the climate warms. The 460km-long river is revealed in a new study, which details how it collects water at the base of the Antarctic ice sheet from an area the size of Germany and France combined. Its discovery shows the base of the ice sheet has more active water flow than previously thought, which could make it more susceptible to changes in climate. The discovery was made by researchers at Imperial College London, the University of Waterloo, Canada, Universiti Malaysia Terengganu, and Newcastle University, with the details published today in Nature Geoscience. Co-author Professor Martin Siegert, from the Grantham Institute at Imperial College London, said: “When we first discovered lakes beneath the Antarctic ice a couple of decades ago, we thought they were isolated from each other. Now we are starting to understand there are whole systems down there, interconnected by vast river networks, just as they might be if there weren’t thousands of metres of ice on top of them. “The region where this study is based holds enough ice to raise the sea level globally by 4.3m. How much of this ice melts, and how quickly, is linked to how slippery the base of the ice is. The newly discovered river system could strongly influence this process.” Water can appear beneath ice sheets in two main ways: from surface meltwater running down through deep crevasses, or by melting at the base, caused by the natural heat of the Earth and friction as the ice moves over land. However, the ice sheets around the north and south poles have different characteristics. In Greenland, the surface experiences strong melting over the summer months, where immense amounts of water channel down through deep crevasses called moulins. In Antarctica, however, the surface doesn’t melt in sufficient quantities to create moulins, as the summers are still too cold. It was thought this meant that there was relatively little water at the base of the Antarctic ice sheets. The new discovery turns this idea on its head, showing there is sufficient water from basal melt alone to create huge river systems under kilometres-thick ice. The discovery was made through a combination of airborne radar surveys that allow researchers to look beneath the ice and modelling of the ice sheet hydrology. The team focussed on a largely inaccessible and understudied area that includes ice from both the East and West Antarctic Ice Sheets and reaches the Weddell Sea. That such a large system could be undiscovered until now is testament to how much we still need to learn about the continent, says lead researcher Dr Christine Dow from the University of Waterloo. She said: “From satellite measurements we know which regions of Antarctica are losing ice, and how much, but we don’t necessarily know why. This discovery could be a missing link in our models. We could be hugely underestimating how quickly the system will melt by not accounting for the influence of these river systems. “Only by knowing why ice is being lost can we make models and predictions of how the ice will react in the future under further global heating, and how much this could raise global sea levels.” For example, the newly discovered river emerges into the sea beneath a floating ice shelf – where a glacier extending out from the land is buoyant enough to begin floating on the ocean water. 
The freshwater from the river, however, churns up warmer water towards the bottom of the ice shelf, melting it from below. Co-author Dr Neil Ross, from the University of Newcastle, said: “Previous studies have looked at the interaction between the edges of ice sheets and ocean water to determine what melting looks like. However, the discovery of a river that reaches hundreds of kilometres inland driving some of these processes shows that we cannot understand the ice melt fully without considering the whole system: ice sheet, ocean, and freshwater.” The existence of large under-ice rivers also needs to be taken into account when predicting the possible consequences of climate change in the region. For example, if summers warm enough to cause enough surface melt that the water reaches the base of the ice sheet, it could have large effects on the river systems, potentially tipping Antarctica to a Greenland-like state, where ice loss is much faster. There are also potential feedback loops that would accelerate ice loss. For example, if the ice starts to flow faster as water accumulates at the base, then this will increase friction where the ice runs over dry land, which could increase the amount of basal melting and water produced. The team are now looking to gather more data about all these mechanisms through further surveys, to apply their models to other regions, and to provide a better understanding of how a changing Antarctica could change the planet.
10.1038/s41561-022-01059-1
2022
Nature Geoscience
Antarctic basal environment shaped by high-pressure flow through a subglacial river system
The stability of ice sheets and their contributions to sea level are modulated by high-pressure water that lubricates the base of the ice, facilitating rapid flow into the ocean. In Antarctica, subglacial processes are poorly characterized, limiting understanding of ice-sheet flow and its sensitivity to climate forcing. Here, using numerical modelling and geophysical data, we provide evidence of extensive, up to 460 km long, dendritically organized subglacial hydrological systems that stretch from the ice-sheet interior to the grounded margin. We show that these channels transport large fluxes (~24 m³ s⁻¹) of freshwater at high pressure, potentially facilitating enhanced ice flow above. The water exits the ice sheet at specific locations, appearing to drive ice-shelf melting in these areas critical for ice-sheet stability. Changes in subglacial channel size can affect the water depth and pressure of the surrounding drainage system up to 100 km either side of the primary channel. Our results demonstrate the importance of incorporating catchment-scale basal hydrology in calculations of ice-sheet flow and in assessments of ice-shelf melt at grounding zones. Thus, understanding how marginal regions of Antarctica operate, and may change in the future, requires knowledge of processes acting within, and initiating from, the ice-sheet interior.
767807
Gender norms affect attitudes towards gay men and lesbian women globally
Washington, DC - Gay men and lesbian women have often been the targets of prejudice and even violence in society. To better understand what shapes these attitudes and prejudices, Maria Laura Bettinsoli, Alexandra Suppes, and Jamie Napier (all New York University - Abu Dhabi) tested how beliefs about gender norms (expectations of society for how men and women act and look) and people's attitudes towards gay men and women relate across the globe. They found that globally, gay men are disliked more than lesbian women across 23 countries. Their results also suggest negative attitudes are guided by the perception that gays and lesbians violate traditional gender norms. But in three countries, China, India, and South Korea, the correlation between beliefs in gender norms and attitudes towards gays and lesbians was absent or even reversed. The research appears online before print in Social Psychological and Personality Science. The team assessed attitudes towards gay men and lesbian women separately, noting that most research focuses on homosexuality as a broad category and doesn't separate attitudes by gender. Bettinsoli and colleagues were surprised at how consistently gay men were rated more negatively than lesbian women in a vast majority of their samples. They were also surprised "at the consistency of the relationship between gender norm endorsement and sexual prejudice," says Bettinsoli. "Even though there were some non-Western countries that did not conform to the pattern, the majority of countries did." These findings were true for western countries including Argentina, Australia, Belgium, Brazil, Canada, France, Germany, Great Britain, Hungary, Italy, Mexico, Peru, Poland, Spain, Sweden, and the USA. The same was true for Russia, South Africa, and Turkey too. "We also found that, in line with previous research, the endorsement of gender norms was associated with anti-gay attitudes--toward both gay men and lesbian women--in every Western country in our sample," says Bettinsoli. In South Korea, the researchers saw that endorsement of gender norms was unrelated to attitudes toward gays and lesbians, and in Japan, there was a small association between gender norm endorsement and attitudes toward gay men, but not towards lesbian women. "In China and India, the reverse pattern emerged. Those who were highest on endorsement of traditional gender roles were the most positive toward gay men and lesbian women," says Bettinsoli. While some of the countries show friendlier attitudes towards gays and lesbians, Bettinsoli notes that even in the more tolerant places discriminatory attitudes still exist. The study is one of several appearing in a future special issue of Social Psychological and Personality Science focused on underrepresented populations.
10.1177/1948550619887785
2019
Social Psychological and Personality Science
Predictors of Attitudes Toward Gay Men and Lesbian Women in 23 Countries
Dominant accounts of sexual prejudice posit that negative attitudes toward nonheterosexual individuals are stronger for male (vs. female) targets, higher among men (vs. women), and driven, in part, by the perception that gay men and lesbian women violate traditional gender norms. We test these predictions in 23 countries, representing both Western and non-Western societies. Results show that (1) gay men are disliked more than lesbian women across all countries; (2) after adjusting for endorsement of traditional gender norms, the relationship between participant gender and sexual prejudice is inconsistent across Western countries, but men (vs. women) in non-Western countries consistently report more negative attitudes toward gay men; and (3) gender norm endorsement is significantly associated with sexual prejudice across countries, but this association was absent or reversed in China, India, and South Korea. Taken together, this work suggests that gender and sexuality may be more loosely associated in some non-Western contexts.
640959
Brain waves could help predict how we respond to general anesthetics
The complex pattern of 'chatter' between different areas of an individual's brain while they are awake could help doctors better track and even predict their response to general anaesthesia - and better identify the amount of anaesthetic necessary - according to new research from the University of Cambridge. Currently, patients due to undergo surgery are given a dose of anaesthetic based on the so-called 'Marsh model', which uses factors such as an individual's body weight to predict the amount of drug needed. As patients 'go under', their levels of awareness are monitored in a relatively crude way. If they are still deemed awake, they are simply given more anaesthetic. However, general anaesthetics can carry risks, particularly if an individual has an underlying health condition such as a heart disorder. As areas of the brain communicate with each other, they give off tell-tale signals that can give an indication of how conscious an individual is. These 'networks' of brain activity can be measured using an EEG (electroencephalogram), which measures electric signals as brain cells talk to each other. Cambridge researchers have previously shown that these network signatures can even be seen in some people in a vegetative state and may help doctors identify patients who are aware despite being unable to communicate. These findings build upon advances in the science of networks to tackle the challenge of understanding and measuring human consciousness. In a study published today in the open access journal PLOS Computational Biology, funded by the Wellcome Trust, the researchers studied how these signals changed in healthy volunteers as they received an infusion of propofol, a commonly used anaesthetic. Twenty individuals (9 male, 11 female) received a steadily increasing dose of propofol - all up to the same limit - while undergoing a task that involved pressing one button if they heard a 'ping' and a different button if they heard a 'pong'. At the same time, the researchers tracked their brain network activity using an EEG. By the time the subjects had reached the maximum dose, some individuals were still awake and able to carry out the task, while others were unconscious. As the researchers analysed the EEG readings, they found clear differences between those who were responding to the anaesthetic and those who remained able to carry on with the task. This 'brain signature' was evident in the network of communications between brain areas carried by alpha waves (brain cell oscillations in the frequency range of 7.5-12.5 Hz), the normal range of electrical activity of the brain when conscious and relaxed. In fact, when the researchers looked at the baseline EEG readings before any drug was given, they already saw differences between those who would later succumb to the drug and those who were less responsive to its effects. Dividing the subjects into two groups based on their EEG readings - those with lots of brain network activity at baseline and those with less - the researchers were able to predict who would be more responsive to the drug and who would be less. The researchers also measured levels of propofol in the blood to see if this could be used as a measure of how conscious an individual was. Although they found little correlation with the alpha wave readings in general, they did find a correlation with a specific form of brain network activity known as delta-alpha coupling. This may be able to provide a useful, non-invasive measure of the level of drug in the blood. 
"A very good way of predicting how an individual responds to our anaesthetic was the state of their brain network activity at the start of the procedure," says Dr Srivas Chennu from the Department of Clinical Neurosciences, University of Cambridge. "The greater the network activity at the start, the more anaesthetic they are likely to need to put them under." Dr Tristan Bekinschtein, senior author from the Department of Psychology, adds: "EEG machines are commonplace in hospitals and relatively inexpensive. With some engineering and further testing, we expect they could be adapted to help doctors optimise the amount of drug an individual needs to receive to become unconscious without increasing their risk of complications." Srivas Chennu will be speaking at the Cambridge Science Festival on Wednesday 16 March. During the event, 'Brain, body and mind: new directions in the neuroscience and philosophy of consciousness', he will be examining what it means to be conscious. ### Reference Chennu, S et al. Brain connectivity dissociates responsiveness from drug exposure during propofol induced transitions of consciousness. PLOS Computational Biology; 14 Jan 2016
10.1371/journal.pcbi.1004669
2016
PLoS Computational Biology
Brain Connectivity Dissociates Responsiveness from Drug Exposure during Propofol-Induced Transitions of Consciousness
Accurately measuring the neural correlates of consciousness is a grand challenge for neuroscience. Despite theoretical advances, developing reliable brain measures to track the loss of reportable consciousness during sedation is hampered by significant individual variability in susceptibility to anaesthetics. We addressed this challenge using high-density electroencephalography to characterise changes in brain networks during propofol sedation. Assessments of spectral connectivity networks before, during and after sedation were combined with measurements of behavioural responsiveness and drug concentrations in blood. Strikingly, we found that participants who had weaker alpha band networks at baseline were more likely to become unresponsive during sedation, despite registering similar levels of drug in blood. In contrast, phase-amplitude coupling between slow and alpha oscillations correlated with drug concentrations in blood. Our findings highlight novel markers that prognosticate individual differences in susceptibility to propofol and track drug exposure. These advances could inform accurate drug titration and brain state monitoring during anaesthesia.
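One of the markers highlighted above, phase-amplitude coupling between slow (delta) and alpha oscillations, can be estimated from a single EEG channel with standard signal-processing steps: band-pass filtering, Hilbert transforms, and a mean-vector-length statistic. The Python sketch below illustrates that generic recipe on synthetic data; it is not the study's actual pipeline, and the frequency bands and parameters are common textbook choices rather than values taken from the paper.

# Generic sketch of delta-alpha phase-amplitude coupling on one synthetic EEG channel:
# band-pass the signal, take delta phase and alpha amplitude via the Hilbert transform,
# and summarise coupling as the length of the mean amplitude-weighted phase vector.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250.0                       # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)     # 60 s of data

# Synthetic signal: a 10 Hz alpha rhythm whose amplitude is modulated by 1 Hz delta phase.
delta = np.sin(2 * np.pi * 1.0 * t)
alpha = (1.0 + 0.6 * delta) * np.sin(2 * np.pi * 10.0 * t)
eeg = delta + alpha + 0.5 * np.random.default_rng(0).standard_normal(t.size)

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

delta_phase = np.angle(hilbert(bandpass(eeg, 0.5, 2.0, fs)))
alpha_amp = np.abs(hilbert(bandpass(eeg, 8.0, 12.0, fs)))

# Mean vector length of amplitude-weighted phases (0 = no coupling, 1 = perfect coupling).
coupling = np.abs(np.mean(alpha_amp * np.exp(1j * delta_phase))) / np.mean(alpha_amp)
print(f"Delta-alpha coupling index: {coupling:.3f}")

Tracking such an index as the propofol infusion rises is one simple way to relate an EEG-derived quantity to drug exposure, which is the relationship the study reports for delta-alpha coupling.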
742817
Global sentiments towards COVID-19 shift from fear to anger
The fear that people developed at the start of the COVID-19 outbreak has given way to anger over the course of the pandemic, a study of global sentiments led by Nanyang Technological University, Singapore (NTU Singapore) has found. In an analysis of over 20 million tweets in English related to the coronavirus, an international team of communication researchers observed that tweets reflecting fear, while dominant at the start of the outbreak due to the uncertainty surrounding the coronavirus, have tapered off over the course of the pandemic. Xenophobia was a common theme among anger-related tweets, which progressively increased, peaking on 12 March - a day after the World Health Organisation declared the COVID-19 outbreak a pandemic. The anger then evolved to reflect feelings arising from isolation and social seclusion. Accompanying this later shift is the emergence of tweets that show joy, which the researchers say suggested a sense of pride, gratitude, hope, and happiness. Tweets that reflected sadness doubled, although they remain proportionally lower than the other emotions. The rapid evolution of global COVID-19 sentiments within a short period of time points to a need to address increasingly volatile emotions through strategic communication by government and health authorities, as well as responsible behaviour by netizens, before they give rise to "unintended outcomes", said Professor May O. Lwin of NTU's Wee Kim Wee School of Communication and Information. Prof Lwin, who led the team representing four countries, said: "Worldwide, strong negative sentiments of fear were detected in the early phases of the pandemic but by early April, these emotions have gradually been replaced by anger. Our findings suggest that collective issues driven by emotions, such as shared experiences of distress of the COVID-19 pandemic including large-scale social isolation and the loss of human lives, are developing. "If such overbearing public emotions are not addressed through clear and decisive communication by authorities, citizen groups and social media stakeholders, there is potential for the emergence of issues such as breeding mistrust in the handling of the disease, and a belief in online falsehoods that could hinder the ongoing control of the disease." The study was published in the scientific journal JMIR Public Health & Surveillance in May. To identify trends in the expression of the four basic emotions - fear, anger, sadness, and joy - and examine the narratives underlying those emotions, Prof Lwin and her team first collected 20,325,929 tweets in English containing the keywords 'Wuhan', 'corona', 'nCov', and 'covid'. The tweets, collected from late January to early April at the Institute of High Performance Computing at the Agency for Science, Technology and Research (A*STAR) using Twitter's standard search application programming interface (API), came from over 7 million unique users in more than 170 countries. "Although the data looks at only public tweets surrounding the four selected keywords, the results are sufficient to start a conversation about possible issues arising from the pandemic at present," said Prof Lwin, whose collaborators also include Tianjin University, University of Lugano, and University of Melbourne. The underlying emotions of tweets were then analysed using an algorithm developed by A*STAR, whose accuracy has been demonstrated in previous studies.
Word clouds based on the top single words and two-word phrases were generated for each of the four emotions. Upon analysing the results, the team found that words such as 'first case' and 'outbreak' were among the most-used words in tweets from late January, indicating fear that was possibly related to the emerging coronavirus and the unknown nature of it, causing uncertainty about containment and spread. Xenophobia was also reflected at the start of the pandemic, when the disease was predominantly contained in China and Asia, as indicated by words such as 'racist' and 'Chinese people'. As the pandemic escalated, fears around shortages of COVID-19 diagnostic tests and medical supplies emerged, as suggested by words such as 'test shortages' and 'uncounted'. Anger then shifted to discourses around the isolation fatigue that can occur from social seclusion, indicated by words such as "stay home" and several swear words. Signs of sadness surrounding the topics of losing friends and family members also started to surface, with words relating to 'loved one' and 'passed away' highlighting potential social concerns arising from personal traumatic experiences of the pandemic. But accompanying these negative emotions were parallel escalating sentiments of joy relating to national pride, gratitude, and community spirit, the NTU-led team found, with words such as 'thank', 'good news' and 'feel good'. Tweets that were collected and analysed from early April to mid-June as an extension of the JMIR study also showed that these positive sentiments exceeded fear postings on social media. Upcoming follow-up studies led by Prof Lwin will dive into country-specific trends in public emotions. Preliminary findings show that in Singapore, there is a moderate balance of positive sentiments relating to resilience, civic pride, and celebration of heroic acts and acts of kindness. This is in contrast to other countries where strong negative emotions overwhelmingly feature in the social media posts. This work is funded by A*STAR and the National Research Foundation Singapore under the COVID-19 Research Fund, administered by the Singapore Ministry of Health's National Medical Research Council.
10.2196/19447
2020
JMIR Public Health and Surveillance
Global Sentiments Surrounding the COVID-19 Pandemic on Twitter: Analysis of Twitter Trends
With the World Health Organization's pandemic declaration and government-initiated actions against coronavirus disease (COVID-19), sentiments surrounding COVID-19 have evolved rapidly. This study aimed to examine worldwide trends of four emotions (fear, anger, sadness, and joy) and the narratives underlying those emotions during the COVID-19 pandemic. Over 20 million social media posts on Twitter made during the early phases of the COVID-19 outbreak from January 28 to April 9, 2020, were collected using "wuhan," "corona," "nCov," and "covid" as search keywords. Public emotions shifted strongly from fear to anger over the course of the pandemic, while sadness and joy also surfaced. Findings from word clouds suggest that fears around shortages of COVID-19 tests and medical supplies became increasingly widespread discussion points. Anger shifted from xenophobia at the beginning of the pandemic to discourse around the stay-at-home notices. Sadness was highlighted by the topics of losing friends and family members, while topics related to joy included words of gratitude and good health. Overall, global COVID-19 sentiments have shown rapid evolutions within just the span of a few weeks. Findings suggest that emotion-driven collective issues around shared public distress experiences of the COVID-19 pandemic are developing and include large-scale social isolation and the loss of human lives. The steady rise of societal concerns indicated by negative emotions needs to be monitored and controlled by complementing regular crisis communication with strategic public health communication that aims to balance public psychological wellbeing.
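At its core, the aggregation described above is a filter-classify-tally loop: keep only posts containing the study's keywords, assign each one of the four emotions, and track daily proportions. The Python sketch below illustrates that shape of pipeline; the classify_emotion() function is a placeholder keyword stub standing in for the trained algorithm used in the study, and the sample posts are invented.

# Minimal sketch of the aggregation described above: keep only posts containing the
# study's keywords, label each with one of four emotions, and tally daily proportions.
# classify_emotion() is a placeholder stub; the study used a purpose-built algorithm.
from collections import Counter, defaultdict
from datetime import date

KEYWORDS = ("wuhan", "corona", "ncov", "covid")
EMOTIONS = ("fear", "anger", "sadness", "joy")

def matches_keywords(text: str) -> bool:
    text = text.lower()
    return any(k in text for k in KEYWORDS)

def classify_emotion(text: str) -> str:
    """Placeholder emotion classifier (keyword lookup) standing in for the trained model."""
    text = text.lower()
    if any(w in text for w in ("scared", "afraid", "worried")):
        return "fear"
    if any(w in text for w in ("angry", "racist", "stay home")):
        return "anger"
    if any(w in text for w in ("passed away", "loved one")):
        return "sadness"
    return "joy"

def daily_emotion_shares(posts):
    """posts: iterable of (date, text). Returns {date: {emotion: share}}."""
    counts = defaultdict(Counter)
    for day, text in posts:
        if matches_keywords(text):
            counts[day][classify_emotion(text)] += 1
    shares = {}
    for day, c in counts.items():
        total = sum(c.values())
        shares[day] = {e: c[e] / total for e in EMOTIONS}
    return shares

sample = [
    (date(2020, 2, 1), "So worried about the corona outbreak"),
    (date(2020, 2, 1), "Thank you to everyone helping with covid, good news today"),
    (date(2020, 4, 1), "Angry about another week of covid stay home orders"),
]
print(daily_emotion_shares(sample))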
681567
Paleontology: New Australian pterosaur may have survived the longest
The discovery of a previously unknown species of pterosaur, which may have persisted as late as the Turonian period (90-93 million years ago), is reported in Scientific Reports this week. The fossil, which includes parts of the skull and five vertebrae, is the most complete pterosaur specimen ever found in Australia. The findings suggest it may be a late-surviving member of the anhanguerian group of pterosaurs, which were believed to have gone extinct at the end of the Cenomanian period (100-94 million years ago). Pterosaurs are known from fossils discovered on every continent, but their remains are often incomplete and fragmentary because their bones are thin and hollow. The fossil record for pterosaurs in Australia is particularly sparse, with only 20 known fragmentary specimens. Adele Pentland and colleagues discovered the new pterosaur, which they have named Ferrodraco lentoni (from the Latin ferrum (iron), in reference to the ironstone preservation of the specimen, and the Latin draco (dragon)), in the Winton Formation of Queensland. Based on the shape and characteristics of its jaws, including crests on the upper and lower jaws and spike-shaped teeth, the authors identified the specimen as belonging to the anhanguerians, which are known from the Early Cretaceous Romualdo Formation of Brazil. Comparison with other anhanguerian pterosaurs suggests that Ferrodraco's wingspan measured approximately four metres. The authors also report a number of unique dental characteristics, including small front teeth, which distinguish Ferrodraco from other anhanguerians and identify it as a new species. The fossil was discovered in 2017 in a part of the Winton Formation that may have formed as late as the early Turonian, which suggests that the anhanguerians may have survived later in Australia than elsewhere. ### Article and author details Ferrodraco lentoni gen. et sp. nov., a new ornithocheirid pterosaur from the Winton Formation (Cenomanian–lower Turonian) of Queensland, Australia Corresponding author: Adele Pentland, Swinburne University of Technology, Hawthorn, Australia, [email protected] DOI 10.1038/s41598-019-49789-4 Online paper https://www.nature.com/articles/s41598-019-49789-4 CONTACT Adele Pentland (Swinburne University of Technology, Hawthorn, Australia) Tel: +61 0747 417326; E-mail: [email protected]
10.1038/s41598-019-49789-4
2019
Scientific Reports
Ferrodraco lentoni gen. et sp. nov., a new ornithocheirid pterosaur from the Winton Formation (Cenomanian–lower Turonian) of Queensland, Australia
The Australian pterosaur record is poor by world standards, comprising fewer than 20 fragmentary specimens. Herein, we describe the new genus and species Ferrodraco lentoni gen. et sp. nov., based on the most complete pterosaur specimen ever found in Australia, and the first reported from the Winton Formation (Cenomanian-lower Turonian). The presence of premaxillary and mandibular crests, and spike-shaped teeth with subcircular bases, enable Ferrodraco to be referred to Anhangueria. Ferrodraco can be distinguished from all other anhanguerian pterosaurs based on two dental characters: the first premaxillary and mandibular tooth pairs are small; and the fourth-seventh tooth pairs are smaller than the third and eighth ones. Ferrodraco was included in a phylogenetic analysis of Pterosauria and resolved as the sister taxon to Mythunga camara (upper Albian Toolebuc Formation, Australia), with that clade occupying the most derived position within Ornithocheiridae. Ornithocheirus simus (Albian Cambridge Greensand, England), Coloborhynchus clavirostris (Valanginian Hastings Sands, England), and Tropeognathus mesembrinus (upper Aptian-lower Albian Romualdo Formation, Brazil) were resolved as successive sister taxa, which suggests that ornithocheirids were cosmopolitan during the Albian-Cenomanian. Furthermore, the stratigraphic age of Ferrodraco lentoni (Cenomanian-lower Turonian) implies that anhanguerians might have survived later in Australia than elsewhere.
570165
Potential target for treating many cancers found within GLI1 gene
Scientists from the Stanley Manne Children's Research Institute at Ann & Robert H. Lurie Children's Hospital of Chicago found that a region within the DNA of the cancer-promoting GLI1 gene is directly responsible for regulating this gene's expression. These findings, published in the journal Stem Cells, imply that this region within GLI1 could potentially be targeted as cancer treatment, since turning off GLI1 would interrupt excessive cell division characteristic of cancer. "From previous research, we know that GLI1 drives the unrelenting cell proliferation that is responsible for many cancers, and that this gene also stimulates its own expression," says co-senior author Philip Iannaccone, MD, PhD, Professor Emeritus at the Manne Research Institute at Lurie Children's and Northwestern University Feinberg School of Medicine. "We established in living human embryonic stem cells that removing the GLI1 regulatory region eliminated GLI1 expression and halted its activity. These findings are promising and could point to a therapeutic target for cancer." Dr. Iannaccone and colleagues used CRISPR gene editing technology to delete the binding region of the GLI1 DNA in human embryonic stem cells. They found that without this region, GLI1 remained turned off, which interfered with the gene's normal activity of driving embryonic development of blood, bone, and nerve cells. "A surprising aspect of this work was that turning GLI1 off affected stem cell differentiation to all three embryonic lineages," says first author Yekaterina Galat, BS, Research Associate at the Manne Research Institute at Lurie Children's. "The developmental function of GLI1 ends after birth, so if we manage to stop its expression in the context of cancer, it should not have negative consequences to normal biology," explains Dr. Iannaccone. GLI1 expression is associated with about a third of all human cancers. In addition to promoting cell proliferation, GLI1 expression increases tumor cell migration and is associated with resistance to chemotherapy drugs. "Our team plans to study GLI1 associated proteins that assist in regulation of GLI1 expression through its binding region," says Dr. Iannaccone. "Targeting these proteins as a means to stop GLI1 activity could prove to be a fruitful treatment strategy for cancer."
10.1002/stem.3341
2021
Stem Cells
CRISPR editing of the GLI1 first intron abrogates GLI1 expression and differentially alters lineage commitment
Abstract GLI1 is one of three GLI family transcription factors that mediate Sonic Hedgehog signaling, which plays a role in development and cell differentiation. GLI1 forms a positive feedback loop with GLI2 and likely with itself. To determine the impact of GLI1 and its intronic regulatory locus on this transcriptional loop and human stem cell differentiation, we deleted the region containing six GLI binding sites in the human GLI1 intron using CRISPR/Cas9 editing to produce H1 human embryonic stem cell (hESC) GLI1-edited clones. Editing out this intronic region, without removing the entire GLI1 gene, allowed us to study the effects of this highly complex region, which binds transcription factors in a variety of cells. The roles of GLI1 in human ESC differentiation were investigated by comparing RNA sequencing, quantitative-real time PCR (q-rtPCR), and functional assays. Editing this region resulted in GLI1 transcriptional knockdown, delayed neural commitment, and inhibition of endodermal and mesodermal differentiation during spontaneous and directed differentiation experiments. We found a delay in the onset of early osteogenic markers, a reduction in the hematopoietic potential to form granulocyte units, and a decrease in cancer-related gene expression. Furthermore, inhibition of GLI1 via antagonist GANT-61 had similar in vitro effects. These results indicate that the GLI1 intronic region is critical for the feedback loop and that GLI1 has lineage-specific effects on hESC differentiation. Our work is the first study to document the extent of GLI1 abrogation on early stages of human development and to show that GLI1 transcription can be altered in a therapeutically useful way.
671593
Increases in certain algae could impact carbon cycle
Two new studies report dramatic changes in phytoplankton abundance and nature, changes that have important implications for storing excess carbon. Collectively, these studies suggest that certain types of carbon-intensive algae are flourishing and will play increasingly prominent roles as carbon pumps, removing carbon dioxide from the atmosphere. Using the isotopic signature of phytoplankton amino acids embedded in skeletons of deep water soft corals, Kelton McMahon and colleagues determined how plankton dominance changed in the North Pacific over the past millennium. Their analysis reveals that there was a transition from dominance by non-nitrogen-fixing cyanobacteria to that by eukaryotic microalgae. What's more, around the beginning of the industrial age, another transition occurred to a stronger nitrogen-fixing cyanobacterial community. The two transition periods were found to be markedly different; whereas the first transition took more than 600 years, the second, more recent transition occurred over less than 200 years. Since some bacteria of the more recent transition act as very efficient carbon pumps, the authors suggest that the ongoing trend might lead to a more efficient carbon pump system in the world's oceans. A second study by Sara Rivero-Calle et al. found a dramatic increase in calcium carbonate-coated (coccolithophore) algae in the North Atlantic, from 2% to more than 20%, between 1965 and 2010. Their analysis suggests this increase was largely driven by carbon dioxide levels and the Atlantic Multidecadal Oscillation. The researchers used survey data of plankton combined with a model that considered more than 20 biological and physical factors. They found a clear link between coccolithophore algae and carbon dioxide increases, and a secondary link between AMO patterns and the increase in algae. Similar to the study by McMahon et al., this dramatic increase was found in a phytoplankton group that is important for carbon cycling, since it incorporates carbon in its scaly exterior.
10.1126/science.aaa9942
2015
Science
Millennial-scale plankton regime shifts in the subtropical North Pacific Ocean
Climate change is predicted to alter marine phytoplankton communities and affect productivity, biogeochemistry, and the efficacy of the biological pump. We reconstructed high-resolution records of changing plankton community composition in the North Pacific Ocean over the past millennium. Amino acid-specific δ(13)C records preserved in long-lived deep-sea corals revealed three major plankton regimes corresponding to Northern Hemisphere climate periods. Non-dinitrogen-fixing cyanobacteria dominated during the Medieval Climate Anomaly (950-1250 Common Era) before giving way to a new regime in which eukaryotic microalgae contributed nearly half of all export production during the Little Ice Age (~1400-1850 Common Era). The third regime, unprecedented in the past millennium, began in the industrial era and is characterized by increasing production by dinitrogen-fixing cyanobacteria. This picoplankton community shift may provide a negative feedback to rising atmospheric carbon dioxide concentrations.
929175
WHO recommends antibody treatment for covid patients at high risk of hospital admission
A treatment combining two antibodies (casirivimab and imdevimab) is recommended for two specific groups of patients with covid-19 by a WHO Guideline Development Group (GDG) panel of international experts and patients in The BMJ today. The first are patients with non-severe covid-19 who are at highest risk of hospitalisation, and the second are those with severe or critical covid-19 who are seronegative, meaning they have not mounted their own antibody response to covid-19. The first recommendation is based on new evidence from three trials that have not yet been peer reviewed, but show that casirivimab and imdevimab probably reduce the risk of hospitalisation and duration of symptoms in those at highest risk of severe disease, such as unvaccinated, older, or immunosuppressed patients. The second recommendation is based on data from the RECOVERY trial showing that casirivimab and imdevimab probably reduce deaths (ranging from 49 fewer per 1,000 in the severely ill to 87 fewer in the critically ill) and the need for mechanical ventilation in seronegative patients. For all other covid-19 patients, any benefits of this antibody treatment are unlikely to be meaningful. Casirivimab and imdevimab are monoclonal antibodies that, when used together, bind to the SARS-CoV-2 spike protein, neutralising the virus's ability to infect cells. The recommendations are part of a living guideline, developed by the World Health Organization with the methodological support of MAGIC Evidence Ecosystem Foundation, to provide up-to-date, trustworthy guidance on the management of covid-19 and help doctors make better decisions with their patients. Living guidelines are useful in fast-moving research areas like covid-19 because they allow researchers to update previously vetted and peer-reviewed evidence summaries as new information becomes available. The panel acknowledged several cost and resource implications associated with this treatment, which may make access challenging in low- and middle-income countries. For example, rapid serological tests will be needed to identify eligible patients who are severely ill, treatment must be given intravenously using specialist equipment, and patients should be monitored for allergic reactions. They also recognise the possibility that new variants may emerge in which casirivimab and imdevimab antibodies may have reduced effect. However, they say given the demonstrated benefits for patients, "the recommendations should provide a stimulus to engage all possible mechanisms to improve global access to the intervention and associated testing." Today's guidance adds to previous recommendations for the use of interleukin-6 receptor blockers and systemic corticosteroids for patients with severe or critical covid-19; and against the use of ivermectin and hydroxychloroquine in patients with covid-19 regardless of disease severity.
10.1136/bmj.m3379
2020
BMJ
A living WHO guideline on drugs for covid-19
Abstract Updates This is the fourteenth version (thirteenth update) of the living guideline, replacing earlier versions (available as data supplements). New recommendations will be published as updates to this guideline. Clinical question What is the role of drugs in the treatment of patients with covid-19? Context The evidence base for therapeutics for covid-19 is evolving with numerous randomised controlled trials (RCTs) recently completed and underway. Emerging SARS-CoV-2 variants and subvariants are changing the role of therapeutics. What is new? The guideline development group (GDG) defined 1.5% as a new threshold for an important reduction in risk of hospitalisation in patients with non-severe covid-19. Combined with updated baseline risk estimates, this resulted in stratification into patients at low, moderate, and high risk for hospitalisation. New recommendations were added for moderate risk of hospitalisation for nirmatrelvir/ritonavir, and for moderate and low risk of hospitalisation for molnupiravir and remdesivir. New pharmacokinetic evidence was included for nirmatrelvir/ritonavir and molnupiravir, supporting existing recommendations for patients at high risk of hospitalisation. The recommendation for ivermectin in patients with non-severe illness was updated in light of additional trial evidence which reduced the high degree of uncertainty informing previous guidance. A new recommendation was made against the antiviral agent VV116 for patients with non-severe and with severe or critical illness outside of randomised clinical trials based on one RCT comparing the drug with nirmatrelvir/ritonavir. The structure of the guideline publication has also been changed; recommendations are now ordered by severity of covid-19. About this guideline This living guideline from the World Health Organization (WHO) incorporates new evidence to dynamically update recommendations for covid-19 therapeutics. The GDG typically evaluates a therapy when the WHO judges sufficient evidence is available to make a recommendation. While the GDG takes an individual patient perspective in making recommendations, it also considers resource implications, acceptability, feasibility, equity, and human rights. This guideline was developed according to standards and methods for trustworthy guidelines, making use of an innovative process to achieve efficiency in dynamic updating of recommendations. The methods are aligned with the WHO Handbook for Guideline Development and according to a pre-approved protocol (planning proposal) by the Guideline Review Committee (GRC). A box at the end of the article outlines key methodological aspects of the guideline process. MAGIC Evidence Ecosystem Foundation provides methodological support, including the coordination of living systematic reviews with network meta-analyses to inform the recommendations. The full version of the guideline is available online in MAGICapp and in PDF on the WHO website, with a summary version here in The BMJ . These formats should facilitate adaptation, which is strongly encouraged by WHO to contextualise recommendations in a healthcare system to maximise impact. Future recommendations Recommendations on anticoagulation are planned for the next update to this guideline. Updated data regarding systemic corticosteroids, azithromycin, favipiravir and umefenovir for non-severe illness, and convalescent plasma and statin therapy for severe or critical illness, are planned for review in upcoming guideline iterations.
923435
Oncotarget: Replication-stress sensitivity in breast cancer cells
Oncotarget published "Frame-shift mediated reduction of gain-of-function p53 R273H and deletion of the R273H C-terminus in breast cancer cells result in replication-stress sensitivity," which reported that the authors recently documented that gain-of-function mutant p53 R273H in triple negative breast cancer cells interacts with replicating DNA and PARP1. The missense R273H GOF mtp53 has a mutated central DNA binding domain that renders it unable to bind specifically to DNA, but maintains the capacity to interact tightly with chromatin. Both the C-terminal domain and oligomerization domain of GOF mtp53 proteins are intact, and it is unclear whether these regions of mtp53 are responsible for chromatin-based DNA replication activities. Using CRISPR-Cas9 editing, the authors generated MDA-MB-468 breast cancer cells carrying altered versions of these regions. These included a frame-shift mtp53 R273Hfs387, which depleted mtp53 protein expression; mtp53 R273HΔ381-388, which had a small deletion within the CTD; and mtp53 R273HΔ347-393, which had both the OD and CTD regions truncated. The mtp53 R273HΔ347-393 existed exclusively as monomers and disrupted the chromatin interaction of mtp53 R273H. Taken together, these Oncotarget findings show that the CTD and OD domains of mtp53 R273H play critical roles in mutant p53 GOF that pertain to processes associated with DNA replication. Dr. Jill Bargonetti said, "The p53 tumor suppressor protein is well known as a transcription factor but p53 also has transcription independent functions." While tumor-derived missense mtp53 proteins have altered functions, they contain the two N-terminal transactivation domains, followed by a proline-rich domain, an altered central DNA binding domain, the oligomerization domain and the C-terminal regulatory domain. Herein they further examine the ability of mtp53 R273H, and its OD and CTD regions, to influence cell proliferation, DNA replication, and cell cycle progression of breast cancer cells. The choice to investigate a potential role for the OD and CTD domains within the context of the mtp53 R273H allele was two-fold: the authors delineated the above GOF pathway in this background and, in parallel with the studies reported, worked to generate more tools to elucidate the role of each domain in mtp53 GOF activity; and their pursuit of a genetic approach using CRISPR-Cas9 technology to create specific alterations within each domain necessitated that they focus first on one mtp53 R273H-expressing cell line. They saw that a frameshift mutation in the C-terminal end of mtp53 reduced stable mtp53 R273H protein levels compared to the parental MDA-MB-468 cells, reduced cell proliferation, and reduced the chromatin association of replication proteins, mirroring the cells' slow progression through S-phase. The CRISPR-Cas9 targeting also produced cell clones with C-terminally truncated mtp53 R273H proteins; such cells with truncated mtp53 R273H showed decreased proliferation as compared to the parental cells but progressed through S phase in a similar manner. The Bargonetti research team concluded in their Oncotarget paper that their current studies do not point to a specific function executed by the OD and CTD domains in response to thymidine; however, they can show that their loss does not impact replisome assembly at the onset of S-phase as measured by PCNA chromatin loading, and they will address this finding in the future.
Thus, OD and CTD domain function(s) correlate with events post S-phase entry, in contrast with that function conferred by other p53 domain(s) deficient in the mtp53fs387 cell line, whose loss impedes S-phase entry. Although currently the authors are unable to articulate the precise roles of these distinct regions of p53 in response to thymidine, their studies suggest that they may function at temporally distinct stages of S-phase. ### DOI - https://doi.org/10.18632/oncotarget.27975 Full text - https://www.oncotarget.com/article/27975/text/ Correspondence to - Jill Bargonetti - [email protected] Keywords - mutant p53, gain-of-function, oligomerization, DNA replication, frame-shift
10.18632/oncotarget.27975
2021
Oncotarget
Frame-shift mediated reduction of gain-of-function p53 R273H and deletion of the R273H C-terminus in breast cancer cells result in replication-stress sensitivity
We recently documented that gain-of-function (GOF) mutant p53 (mtp53) R273H in triple negative breast cancer (TNBC) cells interacts with replicating DNA and PARP1. The missense R273H GOF mtp53 has a mutated central DNA binding domain that renders it unable to bind specifically to DNA, but maintains the capacity to interact tightly with chromatin. Both the C-terminal domain (CTD) and oligomerization domain (OD) of GOF mtp53 proteins are intact and it is unclear whether these regions of mtp53 are responsible for chromatin-based DNA replication activities. We generated MDA-MB-468 cells with CRISPR-Cas9 edited versions of the CTD and OD regions of mtp53 R273H. These included a frame-shift mtp53 R273H
949921
null
10.1038/s41587-021-01146-5
2022
Nature Biotechnology
Learning protein fitness models from evolutionary and assay-labeled data
Machine learning-based models of protein fitness typically learn from either unlabeled, evolutionarily related sequences or variant sequences with experimentally measured labels. For regimes where only limited experimental data are available, recent work has suggested methods for combining both sources of information. Toward that goal, we propose a simple combination approach that is competitive with, and on average outperforms more sophisticated methods. Our approach uses ridge regression on site-specific amino acid features combined with one probability density feature from modeling the evolutionary data. Within this approach, we find that a variational autoencoder-based probability density model showed the best overall performance, although any evolutionary density model can be used. Moreover, our analysis highlights the importance of systematic evaluations and sufficient baselines.
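As a rough illustration of the combination approach described above, the sketch below fits a ridge regression on one-hot, site-specific amino acid features augmented with a single evolutionary density feature. The sequences, density scores and fitness labels are random placeholders (in the paper the density feature would come from a variational autoencoder or another evolutionary model), so this is a schematic of the idea under stated assumptions, not a reproduction of the authors' method.

import numpy as np
from sklearn.linear_model import Ridge

AAS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AAS)}

def one_hot(seq):
    # site-specific one-hot encoding of an amino acid sequence
    x = np.zeros(len(seq) * len(AAS))
    for pos, aa in enumerate(seq):
        x[pos * len(AAS) + AA_INDEX[aa]] = 1.0
    return x

rng = np.random.default_rng(0)
L, n = 10, 200
seqs = ["".join(rng.choice(list(AAS), size=L)) for _ in range(n)]
density_score = rng.normal(size=n)   # stand-in for an evolutionary density model's output
fitness = rng.normal(size=n)         # stand-in for assay-measured labels

X = np.column_stack([np.stack([one_hot(s) for s in seqs]), density_score])
model = Ridge(alpha=1.0).fit(X, fitness)
print(model.predict(X[:3]))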
888066
Protonation induced high-Tc phases in iron-based superconductors
Electric-field-controlled modulation of the physical properties of materials via ionic evolution is a fascinating and fast-developing research frontier, due to its novel phase modulation and potential applications in batteries, smart glass, fuel cells and other technologies. In a work recently published as the cover article of Science Bulletin, researchers reported their efforts to implant protons into iron-based superconductors, using ionic liquids as the electrical medium. They have succeeded in implanting protons into the 11 and the 122 structural compounds. Protonation induces bulk superconductivity with Tc at 20 K in undoped BaFe2As2, and two high-Tc phases with Tc at 42.5 K and 20 K in FeSe0.93S0.07, and enhances Tc from 4 K to 18 K in FeS. As a significant example, proton NMR measurements in FeS are enabled, with evidence of unconventional superconductivity, overcoming the difficulty of lacking sensitive NMR isotopes. Therefore, protons serve a double role, acting both as a dopant for carrier doping and as a sensitive NMR isotope in bulk materials. This protonation methodology is easy to implement, could be immediately applicable to a wide range of materials for exploring insulating, metallic, or superconducting phases, and allows for rich bulk spectroscopic studies in the emergent phases.
10.1016/j.scib.2017.12.009
2017
Science Bulletin
Protonation induced high-Tc phases in iron-based superconductors evidenced by NMR and magnetization measurements
Chemical substitution during growth is a well-established method to manipulate electronic states of quantum materials, and leads to rich spectra of phase diagrams in cuprate and iron-based superconductors. Here we report a novel and generic strategy to achieve nonvolatile electron doping in series of (i.e. 11 and 122 structures) Fe-based superconductors by ionic liquid gating induced protonation at room temperature. Accumulation of protons in bulk compounds induces superconductivity in the parent compounds, and enhances the Tc largely in some superconducting ones. Furthermore, the existence of proton in the lattice enables the first proton nuclear magnetic resonance (NMR) study to probe directly superconductivity. Using FeS as a model system, our NMR study reveals an emergent high-Tc phase with no coherence peak which is hard to measure by NMR with other isotopes. This novel electric-field-induced proton evolution opens up an avenue for manipulation of competing electronic states (e.g. Mott insulators), and may provide an innovative way for a broad perspective of NMR measurements with greatly enhanced detecting resolution.
819207
New research sheds light on neuronal communication
Neurons communicate with each other through specialized structures called synapses. Information is transmitted through the release of synaptic vesicles that contain specific chemical messengers called neurotransmitters. The amount and coordinated release of neurotransmitters regulates synaptic strength, which is critical for maintaining proper communication between neurons. To better understand and address a number of neurological disorders, researchers need a clearer picture of the molecular mechanisms that regulate neuronal communication. A new study has revealed an important function of a class of presynaptic proteins previously implicated in neurological disorders in the regulation of synaptic strength. Synaptic proteins and neuronal transmission A synapse consists of a presynaptic terminal of one neuron and a postsynaptic terminal of another. The presynaptic terminal stores vesicles containing neurotransmitters, while the postsynaptic terminal contains neurotransmitter receptors. A dense collection of proteins is present in these terminals; however, the functional role of many of these proteins remains unknown. In particular, the G-protein-coupled receptor kinase-interacting proteins (GITs) exert critical control over synaptic transmission, since deletions of these proteins are lethal or cause sensory deficits and cognitive impairments in mice. Moreover, GIT proteins and the pathways they regulate have been implicated in neurological disorders such as Attention Deficit Hyperactivity Disorder (ADHD) and Huntington's disease. Several studies have demonstrated the role of GITs in the postsynaptic terminal, but very little is known about their role in the presynaptic terminal. Researchers in Samuel Young Jr.'s research team at the Max Planck Florida Institute for Neuroscience set out to investigate the role of GITs in the giant synapse, the calyx of Held, of the auditory system - the optimal model to study the presynaptic terminal independently from the postsynaptic terminal. New findings In their December publication in Neuron, Drs. Samuel Young Jr. and Mónica S. Montesinos and collaborators report for the first time that GIT proteins are critical presynaptic regulators of synaptic strength. This study uncovers previously unknown distinct roles for GIT1 and GIT2 in regulating neurotransmitter release strength, with GIT1 as a specific regulator of presynaptic release probability. This regulation is likely to contribute to the disruptions in neural circuit functions leading to sensory disorders, memory and learning impairment and other neurological disorders. Future Directions Future studies of Dr. Samuel Young Jr.'s lab will resolve the mechanisms by which GITs regulate synaptic strength and their roles in the early stages of auditory processing and neurological diseases. "Our work brings significant insight into the understanding of how neuronal communication is regulated, which is essential to understand the cellular and molecular mechanisms of information processing by neuronal circuits and the role of these proteins in the development of neurological diseases," explained Dr. Young. ### About Max Planck Florida Institute for Neuroscience The Max Planck Florida Institute for Neuroscience (Jupiter, Florida, USA) specializes in the development and application of novel technologies for probing the structure, function and development of neural circuits. It is the first research institute of the Max Planck Society in the United States.
10.1016/j.neuron.2015.10.042
2015
Neuron
Presynaptic Deletion of GIT Proteins Results in Increased Synaptic Strength at a Mammalian Central Synapse
A cytomatrix of proteins at the presynaptic active zone (CAZ) controls the strength and speed of neurotransmitter release at synapses in response to action potentials. However, the functional role of many CAZ proteins and their respective isoforms remains unresolved. Here, we demonstrate that presynaptic deletion of the two G protein-coupled receptor kinase-interacting proteins (GITs), GIT1 and GIT2, at the mouse calyx of Held leads to a large increase in AP-evoked release with no change in the readily releasable pool size. Selective presynaptic GIT1 ablation identified a GIT1-specific role in regulating release probability that was largely responsible for increased synaptic strength. Increased synaptic strength was not due to changes in voltage-gated calcium channel currents or activation kinetics. Quantitative electron microscopy revealed unaltered ultrastructural parameters. Thus, our data uncover distinct roles for GIT1 and GIT2 in regulating neurotransmitter release strength, with GIT1 as a specific regulator of presynaptic release probability.
830589
Giant animals lived in Amazonian mega-wetland
A land of giants. This is the best definition for Lake Pebas, a mega-wetland that existed in western Amazonia during the Miocene Epoch, which lasted from 23 million to 5.3 million years ago. The Pebas Formation was the home of the largest caiman and gavialoid crocodilian ever identified, both of which were over ten meters in length, the largest turtle, whose carapace had a diameter of 3.5 meters, and rodents that were as large as present-day buffaloes. Remains of the ancient biome are scattered over an area of more than 1 million square kilometers in what is now Bolivia, Acre State and western Amazonas State in Brazil, Peru, Colombia and Venezuela. The oldest datings in this biome are for fossils found in Venezuela and show that Lake Pebas existed 18 million years ago. Until recently, scientists believed that the mega-swamp dried up more than 10 million years ago, before the Amazon River reversed course. During most of the Miocene, this river flowed from east to west, opposite to its present direction. The giant animals disappeared when the waters of Pebas receded. While investigating sediments associated with vertebrate fossils from two paleontological sites on the Acre and Purus Rivers, Marcos César Bissaro Júnior, a biologist affiliated with the University of São Paulo's Ribeirão Preto School of Philosophy, Science and Letters (FFCLRP-USP) in Brazil, obtained datings of 8.5 million years with a margin of error of plus or minus 500,000 years. There is evidence that the Amazon was already running in its present direction 8.5 million years ago, draining from the Peruvian Andes into the Atlantic Ocean. By then, the Pebas system must have no longer resembled the magnificent wetlands of old. Rather, the system resembled a floodplain similar to the present-day Brazilian Pantanal. This is the view of Annie Schmaltz Hsiou, a professor in the Biology Department at FFCLRP-USP and supervisor of Bissaro Júnior's research, which is described in a recently published article in the journal Palaeogeography, Palaeoclimatology, Palaeoecology. The study was supported by the São Paulo Research Foundation (FAPESP) and Brazil's National Council for Scientific and Technological Development (CNPq). The participants also included researchers from the Federal University of Santa Maria (UFSM), the Zoobotanic Foundation's Natural Science Museum in Rio Grande do Sul, São Paulo State University (UNESP), the Federal University of Acre, and Boise State University in Idaho (USA). The Pebas system encompasses several geological formations in western Amazonia: the Pebas and Fitzcarrald Formations in Peru and Brazil, the Solimões Formation in Brazil, the Urumaco and Socorro Formations in Venezuela, the La Venta Formation in Colombia, and the Quebrada Honda Formation in Bolivia. "While the Solimões Formation is one of the best-sampled Neogene fossil-bearing stratigraphic units of northern South America, assumptions regarding deposition age in Brazil have been based largely on indirect methods," Bissaro Júnior said. "The absence of absolute ages hampers more refined interpretations on the paleoenvironments and paleoecology of the faunistic associations found there and does not allow us to answer some key questions, such as whether these beds were deposited after, during or before the formation of the proto-Amazon River."
To answer these and other questions, Bissaro Júnior's study presents the first geochronology of the Solimões Formation, based on mineral zircon specimens collected at two of the region's best-sampled paleontological sites: Niterói on the Acre River in the municipality of Senador Guiomar and Talismã on the Purus River in the municipality of Manuel Urbano. Since the 1980s, many Miocene fossils have been found at the Niterói site, including crocodilians, fishes, rodents, turtles, birds, and xenarthran mammals (extinct terrestrial sloths). Miocene fossils of crocodilians, snakes, rodents, primates, sloths, and extinct South American ungulates (litopterns) have been found in the same period at the Talismã site. As a result of the datings, Bissaro Júnior discovered that the rocks at the Niterói and Talismã sites are approximately 8.5 million and 10.9 million years old (maximum depositional age), respectively. "Based on both faunal dissimilarities and maximum depositional age differences between the two localities, we suggest that Talismã is older than Niterói. However, we stress the need for further zircon dating to test this hypothesis, as well as datings for other localities in the Solimões Formation," he said. Drying up of Pebas Lake Pebas was formed when the land rose in the proto-Amazon basin as a result of the Andean uplift, which began accelerating 20 million years ago. At that time, western Amazonia was bathed by the Amazon (which then flowed toward the Caribbean) and the Magdalena in Colombia. The Andes uplift that occurred in what is now Peru and Colombia eventually interrupted the flow of water toward the Pacific, causing water to pool in western Amazonia and giving rise to the mega-wetland. However, the Andes continued to rise. The continuous uplifting of land in Amazonia had two effects. The proto-Amazon, previously pent up in Lake Pebas, reversed course and became the majestic river we now know. During this process, water gradually drained out of the Pebas mega-swamp. The swamp became a floodplain full of huge animals, which still existed 8.5 million years ago, according to new datings by Bissaro Júnior. Unstoppable geological forces eventually drained the remains of the temporary lagoons and lakes in western Amazonia. This was the end of Pebas and its fauna. "The problem with dating Pebas has always been associating datings directly with the vertebrate fauna. There are countless datings of rocks in which invertebrate fossils have been found, but dating rocks with vertebrates in Brazil was one of our goals," Schmaltz Hsiou said. The new datings, she added, suggest that the Pebas system - i.e., the vast wetland - existed between 23 million and 10 million years ago. The Pebas system gave way to the Acre system, an immense floodplain that existed between 10 million and 7 million years ago, where reptiles such as Purussaurus and Mourasuchus still lived. "The Acre system must have been a similar biome to what was then Venezuela, consisting of lagoons surrounding the delta of a great river, the proto-Orinoco," she said. Giant rodents Rodents are a highly diversified group of mammals that inhabit all continents except for Antarctica. Amazonia is home to a large number of rodent species. 
"In particular, a rodent group known scientifically as Caviomorpha came to our continent about 41 million years ago from Africa," said Leonardo Kerber, a researcher at UFSM's Quarta Colônia Paleontological Research Support Center (CAPPA) and a coauthor of the article published in Palaeogeography, Palaeoclimatology, Palaeoecology. "In this period, known as the Eocene Epoch, Africa and South America were already totally separated, with at least 1,000 kilometers between the closest points of the two continents, so there couldn't have been any biogeographical connections enabling terrestrial vertebrates to migrate between the two land masses," Kerber said. "However, the ocean currents drove dispersal by means of natural rafts of tree trunks and branches blown into rivers by storms and swept out to sea. Some of these rafts would have borne away small vertebrates. An event of this kind may have enabled small mammals such as Platyrrhini monkeys, as well as small rodents, to cross the ocean, giving rise to one of the most emblematic groups of South American mammals, the caviomorph rodents." According to Kerber, the continent's caviomorph rodents have undergone a long period of evolution since their arrival, becoming highly diversified as a result. In Brazil, the group is currently represented by the paca, agouti, guinea pig, porcupine and bristly mouse, as well as by the capybara, the world's largest rodent. "In Amazonia, above all, we now find a great diversity of bristly mice, porcupines, agoutis and pacas. In the Miocene, however, the Amazonian fauna was very different from what we can observe now," Kerber said. "In recent years, in addition to reporting the presence of many fossils of species already known to science, some of which had previously been recorded in the Solimões Formation and others that were known from other parts of South America but recorded in Solimões for the first time, we've described three new medium-sized rodent species (Potamarchus adamiae, Pseudopotamarchus villanuevai and Ferigolomys pacarana - Dinomyidae) that are related to the pacarana (Dinomys branickii)." Kerber said an article to be published shortly in the Journal of Vertebrate Paleontology will recognize Neoepiblema acreensis, an endemic Brazilian Miocene neoepiblemid rodent that weighed some 120 kg as a valid species. "The species was described in 1990 but was considered invalid at the end of the decade. These fossil records of both known and new species help us understand how life evolved in the region and how its biodiversity developed and experienced extinctions over millions of years in the past," Kerber said.
10.1016/j.palaeo.2018.11.032
2018
Palaeogeography Palaeoclimatology Palaeoecology
Detrital zircon U–Pb geochronology constrains the age of Brazilian Neogene deposits from Western Amazonia
Abstract The fossiliferous beds of the Solimões Formation, western Brazilian Amazon, have yielded several vertebrate fossils that are key to understand the evolution of Neotropical biotas. Although this sedimentary unit has been studied for more than two centuries, no absolute dates are available so far, preventing more precise bio/chronostratigraphic interpretations and broader comprehension of the biotic/geological events that affected this northern portion of South America during the Neogene. Here, we present the first Neogene radioisotopic dates for the Brazilian Amazon, via U-Pb dating of detrital zircon from two classical sites. Both samples have small proportions of relatively young zircon grains, the dates from which are the maximum ages of deposition. LA-ICPMS analysis of two grains from the Niterói locality yielded a weighted mean of 8.5 ± 0.5 Ma and CA-TIMS analysis of two grains from the Talismã locality yielded a weighted mean of 10.89 ± 0.13 Ma. These maximum deposition ages are in the Tortonian stage, late Miocene, confirming the age previously inferred based on biochronological data.
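For readers unfamiliar with how a 'weighted mean' age such as 10.89 ± 0.13 Ma is obtained from individual grain analyses, the short sketch below computes an inverse-variance weighted mean and its uncertainty. The grain dates and errors are invented for illustration; they are not the measurements reported in the paper.

def weighted_mean(ages, errors):
    # inverse-variance weighting: w_i = 1 / sigma_i^2
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * a for w, a in zip(weights, ages)) / sum(weights)
    err = (1.0 / sum(weights)) ** 0.5
    return mean, err

ages_ma = [10.95, 10.84]    # hypothetical grain dates (Ma)
errors_ma = [0.20, 0.18]    # hypothetical 1-sigma uncertainties (Ma)
print(weighted_mean(ages_ma, errors_ma))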
953606
Genetic roots of 3 mitochondrial diseases ID’d via new approach
When something goes wrong in mitochondria, the tiny organelles that power cells, it can cause a bewildering variety of symptoms such as poor growth, fatigue and weakness, seizures, developmental and cognitive disabilities, and vision problems. The culprit could be a defect in any of the 1,300 or so proteins that make up mitochondria, but scientists have very little idea what many of those proteins do, making it difficult to identify the faulty protein and treat the condition. Researchers at Washington University School of Medicine in St. Louis and the University of Wisconsin–Madison systematically analyzed dozens of mitochondrial proteins of unknown function and suggested functions for many of them. Using these data as a starting point, they identified the genetic causes of three mitochondrial diseases and proposed another 20 possibilities for further investigation. The findings, published May 25 in Nature, indicate that understanding how mitochondria’s hundreds of proteins work together to generate power and perform the organelles’ other functions could be a promising path to finding better ways to diagnose and treat such conditions. “We have a parts list for mitochondria, but we don’t know what many of the parts do,” said co-senior author David J. Pagliarini, PhD, the Hugo F. and Ina C. Urbauer Professor and a BJC Investigator at Washington University. “It’s similar to if you had a problem with your car, and you brought it to a mechanic, and upon opening the hood they said, ‘We’ve never seen half of these parts before.’ They wouldn’t know how to fix it. This study is an attempt to define the functions of as many of those mitochondrial parts as we can so we have a better understanding of what happens when they don’t work and, ultimately, a better chance at devising therapeutics to rectify those problems.” Mitochondrial diseases are a group of rare genetic conditions that collectively affect one in every 4,300 people. Since mitochondria provide energy for almost all cells, people with defects in their mitochondria can have symptoms in any part of the body, although the symptoms tend to be most pronounced in the tissues that require the most energy, such as the heart, brain and muscles. To better understand how mitochondria work, Pagliarini teamed up with colleagues, including co-senior author Joshua J. Coon, PhD, a UW-Madison professor of biomolecular chemistry & chemistry and an investigator with the Morgridge Institute for Research; and co-first authors Jarred W. Rensvold, PhD, a former staff scientist in Pagliarini’s lab, and Evgenia Shishkova, PhD, a staff scientist in Coon’s lab, to identify the functions of as many mitochondrial proteins as possible. The researchers used CRISPR-Cas9 technology to remove individual genes from a human cell line. The procedure created a set of related cell lines, each derived from the same original cell line but with a single gene deleted. The missing genes coded for 50 mitochondrial proteins of unknown function and 66 mitochondrial proteins with known functions. Then, they examined each cell line for clues to the role each missing gene normally plays in keeping the mitochondria running properly. The researchers monitored the cells’ growth rates and quantified the levels of 8,433 proteins, 3,563 lipids and 218 metabolites for each cell line. 
They used the data to build the MITOMICS (mitochondrial orphan protein multi-omics CRISPR screen) app, equipping it with tools to analyze and identify the biological processes that faltered when a specific protein went missing. After validating the approach with mitochondrial proteins of known function, the researchers proposed possible biological roles for many mitochondrial proteins of unknown function. With further investigation, they were able to tie three proteins to three separate mitochondrial conditions. "It is very exciting to see how our mass spectrometry technology platform can generate data on this scale but more importantly, data that can directly help us to understand human disease," Coon said. One condition is a multisystemic disorder caused by defects in the main energy-producing pathway. Co-author Robert Taylor, PhD, DSc, a professor of mitochondrial pathology at Newcastle University in Newcastle-upon-Tyne, U.K., identified a patient with clear signs of the disorder but no mutations in the usual suspect genes. The researchers identified a new gene in the pathway and showed that the patient carried a mutation in it. Separately, Pagliarini and colleagues noticed that disrupting one gene, RAB5IF, eliminated a protein encoded by a different gene, TMCO1, that has been linked to cerebrofaciothoracic dysplasia. The condition is characterized by distinctive facial features and severe intellectual disability. In collaboration with co-author Nurten Akarsu, PhD, a professor of human genetics at Hacettepe University in Ankara, Turkey, the researchers showed that a mutation in RAB5IF was responsible for one case of cerebrofaciothoracic dysplasia and two cases of cleft lip in one Turkish family. A third gene, when disrupted, led to problems with sugar storage, contributing to a fatal autoinflammatory syndrome. Data regarding that syndrome were published last year in a paper led by Bruno Reversade, PhD, of A*STAR, Singapore's Agency for Science, Technology and Research. "We focused primarily on the three conditions, but we found data connecting about 20 other proteins to biological pathways or processes," said Pagliarini, a professor of cell biology & physiology, of biochemistry & molecular biophysics and of genetics. "We can't chase down 20 stories in one paper, but we made hypotheses and put them out there for us and others to test." To aid scientific discovery, Pagliarini, Coon and colleagues have made the MITOMICS app available to the public. They built in several user-friendly analysis tools so anyone can look for patterns and create plots just by clicking around. All of the data can be downloaded for more advanced analysis. "The hope is that this large dataset becomes one of a number in the field that collectively help us to devise better biomarkers and diagnostics for mitochondrial diseases," Pagliarini said. "Every time we discover a function of a new protein, it gives us a new opportunity to target a pathway therapeutically. Our long-term goal is to understand mitochondria at sufficient depth to be able to intervene therapeutically, which we can't do yet." J.J.C. is a consultant for Thermo Fisher Scientific.
10.1038/s41586-022-04765-3
2022
Nature
Defining mitochondrial protein functions through deep multiomic profiling
Mitochondria are epicentres of eukaryotic metabolism and bioenergetics. Pioneering efforts in recent decades have established the core protein componentry of these organelles1 and have linked their dysfunction to more than 150 distinct disorders2,3. Still, hundreds of mitochondrial proteins lack clear functions4, and the underlying genetic basis for approximately 40% of mitochondrial disorders remains unresolved5. Here, to establish a more complete functional compendium of human mitochondrial proteins, we profiled more than 200 CRISPR-mediated HAP1 cell knockout lines using mass spectrometry-based multiomics analyses. This effort generated approximately 8.3 million distinct biomolecule measurements, providing a deep survey of the cellular responses to mitochondrial perturbations and laying a foundation for mechanistic investigations into protein function. Guided by these data, we discovered that PIGY upstream open reading frame (PYURF) is an S-adenosylmethionine-dependent methyltransferase chaperone that supports both complex I assembly and coenzyme Q biosynthesis and is disrupted in a previously unresolved multisystemic mitochondrial disorder. We further linked the putative zinc transporter SLC30A9 to mitochondrial ribosomes and OxPhos integrity and established RAB5IF as the second gene harbouring pathogenic variants that cause cerebrofaciothoracic dysplasia. Our data, which can be explored through the interactive online MITOMICS.app resource, suggest biological roles for many other orphan mitochondrial proteins that still lack robust functional characterization and define a rich cell signature of mitochondrial dysfunction that can support the genetic diagnosis of mitochondrial diseases.
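At its core, the multi-omics comparison described above asks which biomolecules change in abundance when a given gene is knocked out. The sketch below is a minimal, simulated illustration of that idea using a log2 fold change and a two-sample t-test; it is not the MITOMICS pipeline, and the feature names, replicate counts and significance thresholds are assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
features = [f"protein_{i}" for i in range(100)]   # stands in for proteins, lipids and metabolites

control = rng.normal(10, 1, size=(3, 100))        # 3 control replicates (log2-scale intensities, assumed)
knockout = rng.normal(10, 1, size=(3, 100))       # 3 knockout replicates
knockout[:, 7] -= 4                               # pretend one protein collapses in the knockout line

log2fc = knockout.mean(axis=0) - control.mean(axis=0)
pvals = stats.ttest_ind(knockout, control, axis=0).pvalue

hits = [(features[i], round(log2fc[i], 2), round(pvals[i], 4))
        for i in range(len(features)) if abs(log2fc[i]) > 1 and pvals[i] < 0.05]
print(hits)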
708667
Perovskites -- materials of the future in optical communication
Researchers at the universities in Linköping and Shenzhen have shown how an inorganic perovskite can be made into a cheap and efficient photodetector that transfers both text and music. "It's a promising material for future rapid optical communication", says Feng Gao, researcher at Linköping University. "Perovskites of inorganic materials have a huge potential to influence the development of optical communication. These materials have rapid response times, are simple to manufacture, and are extremely stable." So says Feng Gao, senior lecturer at LiU who, together with colleagues who include Chunxiong Bao, postdoc at LiU, and scientists at Shenzhen University, has published the results in the prestigious journal Advanced Materials. All optical communication requires rapid and reliable photodetectors - materials that capture a light signal and convert it into an electrical signal. Current optical communication systems use photodetectors made from materials such as silicon and indium gallium arsenide. But these are expensive, partly because they are complicated to manufacture. Moreover, these materials cannot be used in some new devices, such as mechanically flexible, lightweight or large-area devices. Researchers have been seeking cheap replacement, or at least supplementary, materials for many years, and have looked at, for example, organic semiconductors. However, the charge transport of these has proved to be too slow. A photodetector must be rapid. The new perovskite materials have been extremely interesting in research since 2009, but the focus has been on their use in solar cells and efficient light-emitting diodes. Feng Gao, researcher in Biomolecular and Organic Electronics at LiU, was awarded a Starting Grant of EUR 1.5 million from the European Research Council (ERC) in the autumn of 2016, intended for research into using perovskites in light-emitting diodes. Perovskites form a completely new family of semiconducting materials that are defined by their crystal structures. They can consist of both organic and inorganic substances. They have good light-emitting properties and are easy to manufacture. For applications such as light-emitting diodes and efficient solar cells, most interest has been placed on perovskites that consist of an organic substance (containing carbon and hydrogen), metal, and halogen (fluorine, chlorine, bromine or iodine) ions. However, when this composition was used in photodetectors, it proved to be too unstable. The results changed, however, when Chunxiong Bao used the right materials, and managed to optimise the manufacturing process and the structure of the film. The film in the new perovskite, which contains only inorganic elements (caesium, lead, iodine and bromine), has been tested in a system for optical communication, which confirmed its ability to transfer both text and images, rapidly and reliably. The quality didn't deteriorate, even after 2,000 hours at room temperature. "It's very gratifying that we have already achieved results that are very close to application," says Feng Gao, who leads the research, together with Professor Wenjing Zhang at Shenzhen University. ### The article: High Performance and Stable All-Inorganic Metal Halide Perovskite-Based Photodetectors for Optical Communication Applications. Chunxiong Bao, Jie Yang, Sai Bai, Weidong Xu, Zhibo Yan, Qingyu Xu, Junming Liu, Wenjing Zhang and Feng Gao. Advanced Materials 2018, DOI 10.1002/adma.201803422
10.1002/adma.201803422
2018
Advanced Materials
High Performance and Stable All‐Inorganic Metal Halide Perovskite‐Based Photodetectors for Optical Communication Applications
Abstract Photodetectors are critical parts of an optical communication system for achieving efficient photoelectronic conversion of signals, and the response speed directly determines the bandwidth of the whole system. Metal halide perovskites, an emerging class of low‐cost solution‐processed semiconductors, exhibiting strong optical absorption, low trap states, and high carrier mobility, are widely investigated in photodetection applications. Herein, through optimizing the device engineering and film quality, high‐performance photodetectors based on all‐inorganic cesium lead halide perovskite (CsPbIxBr3–x), which simultaneously possess high sensitivity and fast response, are demonstrated. The optimized devices processed from CsPbIBr2 perovskite show a practically measured detectable limit of about 21.5 pW cm−2 and a fast response time of 20 ns, which are both among the highest reported device performance of perovskite‐based photodetectors. Moreover, the photodetectors exhibit outstanding long‐term environmental stability, with negligible degradation of the photoresponse property after 2000 h under ambient conditions. In addition, the resulting perovskite photodetector is successfully integrated into an optical communication system and its application as an optical signal receiver for transmitting text and audio signals is demonstrated. The results suggest that all‐inorganic metal halide perovskite‐based photodetectors have great application potential for optical communication.
574268
Cannabinoids associated with negative respiratory health effects in older adults with COPD
Cannabinoids, a class of prescription pills that contain synthetically-made chemicals found in marijuana, are associated with a 64 per cent increase in death among older adults with chronic obstructive pulmonary disease (COPD), according to the first published data on the impact of cannabinoids on the respiratory health of individuals with the lung disease. The findings, published Wednesday in Thorax, have significant clinical implications as more physicians prescribe cannabinoids to patients with COPD to treat chronic muscle pain, difficulty sleeping and breathlessness. The study, led by St. Michael's Hospital of Unity Health Toronto, found that cannabinoids can contribute to negative respiratory health events in people with COPD, including hospitalization and death. COPD is a progressive lung disease that causes difficulty breathing and chronic productive coughing, and can be associated with a variety of non-respiratory issues, like chronic muscle pain and insomnia. "Cannabinoid drugs are being increasingly used by older adults with COPD, so it is important for patients and physicians to have a clear understanding of the side-effect profile of these drugs," says Dr. Nicholas Vozoris, lead author, a respirologist at St. Michael's and an associate scientist at the hospital's Li Ka Shing Knowledge Institute. "Our study results do not mean that cannabinoid drugs should never be used among older adults with COPD. Rather, our findings should be incorporated by patients and physicians into prescribing decision-making. Our results also highlight the importance of favouring lower over higher cannabinoid doses, when these drugs actually do need to be used." The study analyzed the health data of over 4,000 individuals in Ontario ages 66 years and older with COPD from 2006 to 2016. The data was equally split into two groups: older adults with COPD who were new cannabinoid users and older adults with COPD not using cannabinoids. Older adults in Ontario with COPD who were new cannabinoid users represented 1.1 per cent of the data, which was made available by ICES. Researchers observed particularly worse health outcomes among patients with COPD who were using higher doses of cannabinoids. Compared to non-users, new higher-dose cannabinoid users had a 178 per cent relative increase in hospitalizations for COPD or pneumonia, and a 231 per cent relative increase in all-cause death. "Older adults with COPD represent a group that would likely be more susceptible to cannabinoid-related respiratory side-effects, since older adults less efficiently break down drugs and hence, drug effects can linger in the body for longer - and since individuals with COPD have pre-existing respiratory troubles and respiratory compromise," says Dr. Vozoris, who is also a scientist at ICES. Researchers conducted a sub-analysis to explore what impact cannabinoid drugs versus opioid drugs had on respiratory outcomes among older adults with COPD, since cannabinoid drugs are often prescribed as an alternative to opioids to treat chronic pain. The research team did not find evidence that cannabinoids were a safer choice than opioids for older adults with COPD with respect to respiratory health outcomes.
10.1136/thoraxjnl-2020-215346
2020
Thorax
Morbidity and mortality associated with prescription cannabinoid drug use in COPD
Introduction Respiratory-related morbidity and mortality were evaluated in relation to incident prescription oral synthetic cannabinoid (nabilone, dronabinol) use among older adults with chronic obstructive pulmonary disease (COPD). Methods This was a retrospective, population-based, data-linkage cohort study, analysing health administrative data from Ontario, Canada, from 2006 to 2016. We identified individuals aged 66 years and older with COPD, using a highly specific, validated algorithm, excluding individuals with malignancy and those receiving palliative care (n=185 876 after exclusions). An equivalent number (2106 in each group) of new cannabinoid users (defined as individuals dispensed either nabilone or dronabinol, with no dispensing for either drug in the year previous) and controls (defined as new users of a non-cannabinoid drug) were matched on 36 relevant covariates, using propensity scoring methods. Cox proportional hazard regression was used. Results Rate of hospitalisation for COPD or pneumonia was not significantly different between new cannabinoid users and controls (HR 0.87; 95% CI 0.61–1.24). However, significantly higher rates of all-cause mortality occurred among new cannabinoid users compared with controls (HR 1.64; 95% CI 1.14–2.39). Individuals receiving higher-dose cannabinoids relative to controls were observed to experience both increased rates of hospitalisation for COPD and pneumonia (HR 2.78; 95% CI 1.17–7.09) and all-cause mortality (HR 3.31; 95% CI 1.30–9.51). Conclusions New cannabinoid use was associated with elevated rates of adverse outcomes among older adults with COPD. Although further research is needed to confirm these observations, our findings should be considered in decisions to use cannabinoids among older adults with COPD.
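The percentages quoted in the press release correspond to the hazard ratios in this abstract via relative increase (%) = (HR - 1) * 100. A small illustrative Python check:

# How the press-release percentages map onto the hazard ratios reported above:
# relative increase (%) = (HR - 1) * 100. Illustrative arithmetic only.
hazard_ratios = {
    "all-cause mortality, any new cannabinoid use": 1.64,   # -> 64%
    "COPD/pneumonia hospitalisation, higher dose": 2.78,    # -> 178%
    "all-cause mortality, higher dose": 3.31,               # -> 231%
}
for outcome, hr in hazard_ratios.items():
    print(f"{outcome}: {(hr - 1) * 100:.0f}% relative increase")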
674622
Study shows link between precipitation, climate zone and invasive cancer rates in the US
10.1089/ees.2019.0241
2019
Environmental Engineering Science
Precipitation and Climate Zone Explains the Geographical Disparity in the Invasive Cancer Incidence Rates in the United States
Environmental factors such as ultraviolet radiation and pollution have been known to influence the incidence rate of cancer. However, these factors do not explain the variation in incidence rates across the United States. In this study, the hypothesis that precipitation and climate zone play a role in determining the incidence rate of invasive cancer in the United States is tested. The hypothesis was tested using the county-level cancer incidence rate data obtained from the Center for Disease Control's National Program of Cancer Registries Cancer Surveillance System. Individual generalized linear models were developed for each of the five separate cancer incidence rates, as well as total invasive cancer. Precipitation and climate zone were included in each model along with demographic variables such as annual income, population by gender, race, and age—important control variables. Results indicate that in the United States, counties with high precipitation and cold climate have statistically significantly higher rates of invasive cancer incidence rates (p < 0.05). This is the first study reporting precipitation and climate as natural environmental factors responsible for the geographical disparity in invasive cancer incidence rates within the United States.
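The abstract describes fitting county-level generalized linear models with precipitation, climate zone, and demographic controls. A hypothetical Python sketch of that kind of model (the file and column names are placeholders, not the authors' data):

# Hypothetical county-level GLM in the spirit of the analysis described above.
# The file and column names are placeholders, not the authors' data.
import pandas as pd
import statsmodels.formula.api as smf

counties = pd.read_csv("county_cancer_rates.csv")  # hypothetical file
model = smf.glm(
    "incidence_rate ~ precipitation + C(climate_zone) + income + pct_male + "
    "C(race_group) + median_age",
    data=counties,
).fit()
print(model.summary())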
612892
Fruitful discoveries: The power to purify water is in your produce
10.1021/acs.jchemed.8b00240
2018
Journal of Chemical Education
Fruit and Vegetable Peels as Efficient Renewable Adsorbents for Removal of Pollutants from Water: A Research Experience for General Chemistry Students
Sustainability is emerging as a prominent curricular initiative at the undergraduate level, and as a result, involving students in real-world problems in the classroom and laboratory is an important goal. The specific problem of a dwindling supply of clean and safe drinking water is also of utmost importance and relevance. This general chemistry laboratory curriculum provides first-year students with an opportunity to design and implement their own experiments that employ fruit and vegetable peels as adsorbents to remove pollutants from water. The project is nine laboratory periods long, with the first 2 weeks devoted to providing students with the necessary tools to perform original research. In the third week, students visit the Dickinson College farm and brainstorm possible hypotheses. Working in pairs, students perform original research in the fourth through sixth weeks and investigate adsorption capacity and percent removal. In the final 3 weeks, students perform calculations and engage in peer review of their posters, which are presented at an all-college public poster session. This project introduces students to UV–vis and AA spectroscopy, making standard solutions and employing Beer's Law, as well as literature searching and experiment design. If time allows, FTIR spectroscopy may be employed to examine the chemical makeup of the peels. This curriculum can be used in subsets with additional guidance in a standard two-semester introductory course sequence.
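The two quantities the students compute, percent removal and adsorption capacity, follow directly from the initial and equilibrium concentrations obtained from a Beer's Law calibration. A short illustrative Python sketch (the concentrations are made up):

# Percent removal and adsorption capacity q (mg of pollutant per g of peel),
# computed from initial and equilibrium concentrations. The numbers are made up
# for illustration; real values would come from a Beer's Law calibration curve.
def percent_removal(c0, ce):
    return (c0 - ce) / c0 * 100.0

def adsorption_capacity(c0, ce, volume_l, mass_g):
    return (c0 - ce) * volume_l / mass_g  # (mg/L * L) / g = mg/g

c0, ce = 10.0, 3.5  # dye concentration before and after adsorption, mg/L
print(percent_removal(c0, ce))                 # 65.0 %
print(adsorption_capacity(c0, ce, 0.05, 0.5))  # 0.65 mg/g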
698989
Antibiotic resistance rises in 'lonely' mutating microbes
A major study led by The University of Manchester has discovered that so-called 'lonely' microbes, those living at low population densities, are more likely to mutate, causing higher rates of antibiotic resistance. After analysing 70 years of data and nearly 500 different measurements of mutations, the study shows individual microbes - such as bacteria - found in denser microbial populations mutate much less than microbes in sparser groups. Mutations in bacteria can result in a range of outcomes, including becoming antibiotic resistant. Therefore, this research could pave the way to a better understanding of antibiotic resistance, contributing to more effective ways of combating the rise of antibiotic resistant 'superbugs'. The study, in collaboration with the Universities of Keele and Middlesex, follows the team's previous research that also looked at the relationship between mutation rate and population density of microbes, but only in one specific bacterium, E. coli. That study found that 'lonely' bacteria were nearly ten times as likely to mutate to resist antibiotics as those living in dense populations. This research, funded by the Biotechnology and Biological Sciences Research Council (BBSRC), expands massively on those initial findings by analysing mutation rates from all branches of life, analysing 68 independent studies of 26 species of microbes, even including viruses. This was followed by hundreds more experiments, using nearly 2 trillion microbial cells which, though tiny, if laid end to end, would stretch from Manchester to Newfoundland in Canada (nearly 4000km). The findings show that the initial discovery is repeated for multiple antibiotics that target bacteria and even yeast. Dr Chris Knight, from the University's Faculty of Science and Engineering and senior author of the study, said: 'Spontaneous mutations fuel evolution, but when that mutation leads to something more serious, such as resistance to antibiotics, it becomes an issue. According to the World Health Organisation (WHO), if resistance continues to rise, by 2050 it would lead to 10 million people dying every year. 'That is why the particular mutations we looked at in the lab are those related to antibiotic resistance'. The WHO says the issue threatens the effective prevention and treatment of an ever-increasing range of infections caused by bacteria, parasites, viruses and fungi and is an increasingly serious threat to global public health. The authors call their finding 'density-associated mutation-rate plasticity', or DAMP. DAMP is a way these mutation rates vary with the microbe's environment. The researchers found that DAMP could play a key role in reducing the mutations that cause antibiotic resistance. Dr Krašovec, from the Faculty of Biology, Medicine and Health and lead author of the study, explains: 'In our analyses DAMP gives bacteria a lower chance of becoming antibiotic resistant at higher population densities. We anticipate that DAMP affects the course of evolution more generally and understanding its causes and effects will help understand and control evolution. 'What's exciting about DAMP is that it requires protein molecules that do the same thing in very different microbes, meaning that we can start to understand why mutation rates vary like this. This means that our results could be the first step towards manipulating microbial DAMP clinically as a way to slow the evolution of antibiotic resistance.'
### About The University of Manchester The University of Manchester, a member of the prestigious Russell Group, is the UK's largest single-site university with 38,600 students and is consistently ranked among the world's elite for graduate employability. The University is also one of the country's major research institutions, rated fifth in the UK in terms of 'research power' (REF 2014). World class research is carried out across a diverse range of fields including cancer, advanced materials, addressing global inequalities, energy and industrial biotechnology. No fewer than 25 Nobel laureates have either worked or studied here. It is the only UK university to have social responsibility among its core strategic objectives, with staff and students alike dedicated to making a positive difference in communities around the world. Manchester is ranked 35th in the world in the Academic Ranking of World Universities 2016 and 5th in the UK. The University had an annual income of almost £1 billion in 2015/16. Visit http://www.manchester.ac.uk for further information. Facts and figures: http://www.manchester.ac.uk/discover/facts-figures/ Research Beacons: http://www.manchester.ac.uk/research/beacons/ News and media contacts: http://www.manchester.ac.uk/discover/news/
10.1371/journal.pbio.2002731
2017
PLoS Biology
Spontaneous mutation rate is a plastic trait associated with population density across domains of life
Rates of random, spontaneous mutation can vary plastically, dependent upon the environment. Such plasticity affects evolutionary trajectories and may be adaptive. We recently identified an inverse plastic association between mutation rate and population density at 1 locus in 1 species of bacterium. It is unknown how widespread this association is, whether it varies among organisms, and what molecular mechanisms of mutagenesis or repair are required for this mutation-rate plasticity. Here, we address all 3 questions. We identify a strong negative association between mutation rate and population density across 70 years of published literature, comprising hundreds of mutation rates estimated using phenotypic markers of mutation (fluctuation tests) from all domains of life and viruses. We test this relationship experimentally, determining that there is indeed density-associated mutation-rate plasticity (DAMP) at multiple loci in both eukaryotes and bacteria, with up to 23-fold lower mutation rates at higher population densities. We find that the degree of plasticity varies, even among closely related organisms. Nonetheless, in each domain tested, DAMP requires proteins scavenging the mutagenic oxidised nucleotide 8-oxo-dGTP. This implies that phenotypic markers give a more precise view of mutation rate than previously believed: having accounted for other known factors affecting mutation rate, controlling for population density can reduce variation in mutation-rate estimates by 93%. Widespread DAMP, which we manipulate genetically in disparate organisms, also provides a novel trait to use in the fight against the evolution of antimicrobial resistance. Such a prevalent environmental association and conserved mechanism suggest that mutation has varied plastically with population density since the early origins of life.
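The core analysis described here is an association between mutation rate and population density across studies. A minimal, hypothetical Python sketch of that kind of test (placeholder numbers, not the authors' data):

# Minimal sketch of testing for density-associated mutation-rate plasticity
# (DAMP): regress log mutation rate against log population density and look
# for a negative slope. The values below are invented placeholders.
import numpy as np

density = np.array([1e6, 1e7, 1e8, 1e9])             # cells per ml (hypothetical)
mutation_rate = np.array([2e-8, 9e-9, 3e-9, 1e-9])   # per genome per generation (hypothetical)

slope, intercept = np.polyfit(np.log10(density), np.log10(mutation_rate), 1)
print(f"slope = {slope:.2f}")  # a negative slope means lower mutation rates at higher densities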
962706
Reconstructing alternative paths to complex multicellularity in Animals and Fungi from today's genetic diversity
An international team of researchers with a central contribution from researchers at the Dept. of Biological Physics at Eötvös Loránd University (ELTE) has unravelled the evolutionary origins of animals and fungi. The findings, published in the journal Nature, demonstrate how genomic data and powerful computational methods allow scientists to answer fundamental questions in evolutionary biology that were previously unapproachable. Scientists have always been curious about the evolutionary history of animals and fungi: These two groups of complex multicellular organisms are at first sight entirely dissimilar, but in fact, they are cousins on the Tree of Life. Animals and fungi are members of the same extended family, called a eukaryotic supergroup, and are much more closely related to each other than either is to plants. Understanding how such complex yet contrasting groups evolved within the same eukaryotic supergroup has been challenging due to the lack of a detailed fossil record from when the two groups diverged. “In order to solve this evolutionary enigma, we first had to produce genomic data from the unicellular groups that branch between animals and fungi in the tree of life” — said Iñaki Ruiz-Trillo, Principal Investigator and Professor of Evolutionary Biology at the Institute of Evolutionary Biology in Barcelona and last author of the article. Instead of relying on fossils, the authors reconstructed the evolution of the two groups from the genetic information found in the genomes of fungi and animals living today. By combining the genomic data produced for these unicellular groups together with genomic data from multiple species of animals and fungi, the researchers reconstructed the trajectory of genetic changes that led to the origin of these two eukaryotic groups using sophisticated computational models of genetic change. “On a methodological level, there are two factors that are having a huge impact in the field of evolutionary biology. One is that currently, it is much easier to produce genomic data for any organism. The second is that nowadays our computers can run much more complex evolutionary models to analyze this data” — commented Gergely J Szöllősi, Principal Investigator at the ERC GENECLOCKS research group and Assistant Professor at the Department of Biological Physics at ELTE and co-author of the article. The global picture that emerged from the analyses is that the genomic differences we see today between modern animals and fungi result from gradual changes that began early in evolution. The authors’ results indicate that this process started immediately after the divergence of the ancestors of the two groups over a billion years ago. “This surprised us, because we expected most changes to have occurred specifically in concomitance with the origin of animals and fungi. What we saw instead is the opposite, most changes in gene content occurred before the origin of the two groups” said Eduard Ocaña-Pallarès, a postdoctoral researcher at ELTE University and first author. According to the researchers, the line of descent leading to animals began to accumulate genes that would later become essential for animal multicellularity. In contrast, the lineage leading to modern fungi experienced more genetic losses and shifted its genetic content towards metabolic functions. This shift allowed the fungi to adapt to and survive in a bewildering variety of environments.
“Moving from Barcelona to Hungary and joining the ERC GENECLOCKS research group at ELTE was the best decision I could have ever taken from a professional perspective. During my PhD in Barcelona, we generated plenty of genomic data, but all this data is meaningless unless you analyse it with the proper methods. I decided to continue this research in the group of Gergely since I was aware that they were developing cutting-edge software for ancestral gene content reconstruction. This decision was crucial for the success of the project” — concluded Eduard Ocaña-Pallarès, postdoctoral researcher at the Department of Biological Physics at ELTE. “This work is a great example of how collaboration around the globe can boost science and lead to research excellence,” adds Gergely J Szöllősi.
10.1038/s41586-022-05110-4
2022
Nature
Divergent genomic trajectories predate the origin of animals and fungi
Animals and fungi have radically distinct morphologies, yet both evolved within the same eukaryotic supergroup: Opisthokonta [1,2]. Here we reconstructed the trajectory of genetic changes that accompanied the origin of Metazoa and Fungi since the divergence of Opisthokonta with a dataset that includes four novel genomes from crucial positions in the Opisthokonta phylogeny. We show that animals arose only after the accumulation of genes functionally important for their multicellularity, a tendency that began in the pre-metazoan ancestors and later accelerated in the metazoan root. By contrast, the pre-fungal ancestors experienced net losses of most functional categories, including those gained in the path to Metazoa. On a broad-scale functional level, fungal genomes contain a higher proportion of metabolic genes and diverged less from the last common ancestor of Opisthokonta than did the gene repertoires of Metazoa. Metazoa and Fungi also show differences regarding gene gain mechanisms. Gene fusions are more prevalent in Metazoa, whereas a larger fraction of gene gains were detected as horizontal gene transfers in Fungi and protists, in agreement with the long-standing idea that transfers would be less relevant in Metazoa due to germline isolation [3-5]. Together, our results indicate that animals and fungi evolved under two contrasting trajectories of genetic change that predated the origin of both groups. The gradual establishment of two clearly differentiated genomic contexts thus set the stage for the emergence of Metazoa and Fungi.
644268
Factors that predict obesity by adolescence revealed
Three simple factors that predict whether a healthy weight child will be overweight or obese by adolescence have been revealed in a new study led by the Murdoch Children's Research Institute (MCRI). The research shows three factors - a child's and mother's Body Mass Index (BMI) and the mother's education level - predict the onset or resolution of weight problems by adolescence, especially from age 6-7 years onwards. Each one-unit higher BMI when the child is aged 6-7 years increased the odds at 14-15 years of developing weight problems by three-fold and halved the odds of resolution. Similarly, every one-unit increase in the mother's BMI when the child is aged 6-7 years increased the odds at 14-15 years of developing weight problems by 5 per cent and decreased the odds of resolution by about 10 per cent. Mothers having a university degree was associated with lower odds of a child being overweight and obese at ages 2-5 years and higher odds of resolving obesity issues by adolescence. Study author MCRI's Dr Kate Lycett said the prevalence of being overweight/obese at the age of 14-15 years was 13 per cent among children with none of these three risk factors at age 6-7 years, compared with 71 per cent among those with all risk factors. Dr Lycett said identifying these three factors may help clinicians predict which children will develop and resolve excess weight with about 70 per cent accuracy. "In the case of BMI, it is an objective measure that is easily measured and reflects diet and exercise choices, but is free from the challenges of assessing physical activity and diet in a standard clinical appointment such as recall bias," she said. The findings, published in the latest edition of the International Journal of Obesity, also show that children who are overweight or obese at 2-5 years have a low chance of resolving their weight problems by adolescence when these three risk factors are present. Data was sourced from 3469 participants at birth and 3276 participants at kinder from the Longitudinal Study of Australian Children. The child's height and weight were measured every two years. Dr Lycett said until now most studies have overlooked the important questions around which children are likely to become overweight/obese and how it can be resolved. "Because clinicians haven't been able to tell which children will grow up to become teens with excess weight, it's been hard to target interventions for those most at risk," she said. "The consequences of this are dire, with childhood obesity predicting premature death and being implicated in cardiovascular disease, diabetes and cancer." The study examined how combinations of 25 potential short clinical markers such as time spent breastfeeding and amount of outdoor activity at various ages predict weight issues, as well as resolution, by ages 10-11 and 14-15 years. Intriguingly, short questions about poor diet, low physical activity and other common lifestyle factors were not predictive of weight outcomes. Lead author Professor Markus Juonala, from the University of Turku in Finland, said a simple risk score, which would be easily available to child health clinicians, could help target treatment or prevention. "Combining data on these three easily obtainable risk factors may help clinicians make appropriate decisions targeting care to those most at risk of adolescent obesity," he said.
"The benefits of removing a focus on those unlikely to need clinical interventions for obesity has largely been ignored, despite an increasing policy emphasis on avoiding wasteful or unnecessary health care."
10.1038/s41366-019-0457-2
2019
International Journal of Obesity
Early clinical markers of overweight/obesity onset and resolution by adolescence
We examined how combinations of clinical indicators at various ages predict overweight/obesity development, as well as resolution, by 10-11 and 14-15 years of age. Data were derived from Birth (N = 3469) and Kinder (N = 3276) cohorts of the Longitudinal Study of Australian Children, followed from ages 2-3 and 4-5 years, respectively. Every two years, 25 potential obesity-relevant clinical indicators were quantified. Overweight/obesity was defined using International Obesity Taskforce cutpoints at 10-11 years and 14-15 years. In both cohorts, three factors predicted both development and resolution of overweight/obesity in multivariable models. Among normal weight children, increased odds of developing overweight/obesity were associated with higher child (odds ratio (OR) 1.67-3.35 across different study waves) and maternal (OR 1.05-1.09) BMI, and inversely with higher maternal education (OR 0.60-0.62, when assessed at age 2-7 years). Lower odds of resolving existing overweight/obesity were associated with higher child (OR 0.51-0.79) and maternal (OR 0.89-0.95) BMI, and inversely with higher maternal education (OR 1.62-1.92, when assessed at age 2-5 years). The prevalence of overweight/obesity at the age of 14-15 years was 13% among children with none of these risk factors at age 6-7 years, compared with 71% among those with all 3 risk factors (P < 0.001). From early childhood onwards, child and maternal BMI and maternal education predict overweight/obesity onset and resolution by adolescence. A simple risk score, easily available to child health clinicians, could help target treatment or prevention.
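The three predictors above lend themselves to a simple clinical risk score. A hypothetical Python sketch (the cut-points and scoring are placeholders, not the published model; the 13% and 71% figures are the observed prevalences quoted in the abstract):

# Hypothetical three-factor risk score in the spirit of the one the authors
# propose; the cut-points below are invented for illustration, not the
# published model.
def risk_factor_count(child_bmi, maternal_bmi, maternal_university_degree):
    score = 0
    score += child_bmi >= 18.0        # hypothetical child BMI cut-point at age 6-7 years
    score += maternal_bmi >= 28.0     # hypothetical maternal BMI cut-point
    score += not maternal_university_degree
    return score

# Observed prevalence of overweight/obesity at 14-15 years in the study:
# 13% with none of the three risk factors vs 71% with all three.
print(risk_factor_count(19.2, 30.5, False))  # -> 3, the highest-risk group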
470363
Fecal transplants show promise as treatment for non-alcoholic fatty liver disease
LONDON, ON - A new study from Lawson Health Research Institute and Western University suggests that fecal transplants could be used as a treatment for non-alcoholic fatty liver disease (NAFLD). The randomized controlled trial published in the American Journal of Gastroenterology found that fecal transplants in patients with NAFLD result in a reduction in how easily pathogens and other unwanted molecules pass through the human gut and into circulation, known as intestinal permeability. The results could have implications for the treatment of numerous conditions including metabolic syndrome and autoimmune diseases. "Intestinal permeability plays a role in the development of metabolic syndrome which is a major cause of coronary and cerebrovascular disease. It has also been associated with autoimmune diseases like multiple sclerosis (MS), rheumatoid arthritis, systemic lupus and type 1 diabetes," explains Dr. Michael Silverman, Associate Scientist at Lawson and Professor at Western's Schulich School of Medicine & Dentistry. Many NAFLD patients have increased intestinal permeability which triggers inflammation, increased fat in the liver, insulin resistance and elevated levels of triglycerides in the blood. The human microbiome - the diverse collection of microbes in our body - is thought to play a role. Previous studies have shown differences between the gut microbiome of NAFLD patients compared to healthy individuals. "Our team wondered whether we could change the gut microbiome of NAFLD patients to reduce intestinal permeability," says Dr. Jeremy Burton, Lawson Scientist and Associate Professor at Schulich Medicine & Dentistry. The trial included 21 NAFLD patients from London Health Sciences Centre (LHSC) and St. Joseph's Health Care London. Patients were randomized to receive a fecal transplant using stool from a healthy donor or a placebo (the patient's own stool). Fecal material was delivered to the small intestine using endoscopy. Patients were followed for six months to assess changes to their gut microbiome, intestinal permeability, percentage of liver fat and insulin resistance. While the researchers found no changes in percentage of liver fat or insulin resistance, they observed significant reduction in intestinal permeability in those patients who had elevated intestinal permeability at the study's start (seven patients in total). They also observed changes to the gut microbiome in all patients who received a fecal transplant from a healthy donor. "Our study demonstrates that intestinal permeability can be improved through fecal transplant from a healthy donor," says Dr. Laura Craven, a recent PhD graduate from Schulich Medicine & Dentistry and first author on the published study. "This suggests that fecal transplant could be used as an early intervention in the treatment of NAFLD to reduce intestinal permeability and prevent inflammation" "Our findings have implications for other conditions too," adds Dr. Silverman, who is also Chair/Chief of Infectious Diseases at Western, LHSC and St. Joseph's. "Changing the gut microbiome could hold promise in preventing and treating metabolic syndrome and autoimmune diseases associated with increased gut permeability." The team hopes to next conduct a large multi-centre trial to further investigate FMT as an intervention for NAFLD and as a therapy to reduce intestinal permeability. NAFLD is an obesity-related disorder and is the second-leading cause of liver transplant in North America. 
While reversible if treated early, its progression can lead to liver failure or cancer. Current therapies are not overly effective and the prevalence of NAFLD is increasing. Dr. Silverman is a pioneer in the field of fecal transplants, including their use as a treatment for Clostridioides difficile (C. diff). He is involved in multiple studies examining the potential of fecal transplants as treatments or supportive therapies for numerous conditions including multiple sclerosis (MS) and different types of cancer. "In order to conduct this research, we need stool donors," notes Dr. Silverman. "By donating your poop, you can help us assess the value of fecal transplants to treat a variety of diseases." The team is in need of young, healthy stool donors for fecal transplants. All donors are required to go through a screening process. Those interested in becoming a stool donor can contact Dr. Seema Nair Parvathy, Research Coordinator, Fecal Transplant Program, at 519-646-6100 ext. 61726.
10.14309/ajg.0000000000000661
2020
The American Journal of Gastroenterology
Allogenic Fecal Microbiota Transplantation in Patients With Nonalcoholic Fatty Liver Disease Improves Abnormal Small Intestinal Permeability: A Randomized Control Trial
INTRODUCTION: Nonalcoholic fatty liver disease (NAFLD) is an obesity-related disorder that is rapidly increasing in incidence and is considered the hepatic manifestation of the metabolic syndrome. The gut microbiome plays a role in metabolism and maintaining gut barrier integrity. Studies have found differences in the microbiota between NAFLD and healthy patients and increased intestinal permeability in patients with NAFLD. Fecal microbiota transplantation (FMT) can be used to alter the gut microbiome. It was hypothesized that an FMT from a thin and healthy donor given to patients with NAFLD would improve insulin resistance (IR), hepatic proton density fat fraction (PDFF), and intestinal permeability. METHODS: Twenty-one patients with NAFLD were recruited and randomized in a ratio of 3:1 to either an allogenic (n = 15) or an autologous (n = 6) FMT delivered by using an endoscope to the distal duodenum. IR was calculated by HOMA-IR, hepatic PDFF was measured by MRI, and intestinal permeability was tested using the lactulose:mannitol urine test. Additional markers of metabolic syndrome and the gut microbiota were examined. Patient visits occurred at baseline, 2 weeks, 6 weeks, and 6 months post-FMT. RESULTS: There were no significant changes in HOMA-IR or hepatic PDFF in patients who received the allogenic or autologous FMT. Allogenic FMT patients with elevated small intestinal permeability (>0.025 lactulose:mannitol, n = 7) at baseline had a significant reduction 6 weeks after allogenic FMT. DISCUSSION: FMT did not improve IR as measured by HOMA-IR or hepatic PDFF but did have the potential to reduce small intestinal permeability in patients with NAFLD.
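The permeability criterion used above is a urinary lactulose:mannitol ratio greater than 0.025. A small illustrative Python helper (the input values are invented):

# Permeability criterion used in the trial: lactulose:mannitol ratio > 0.025
# counts as elevated small intestinal permeability. Example values are invented.
def elevated_permeability(lactulose, mannitol, threshold=0.025):
    return (lactulose / mannitol) > threshold

print(elevated_permeability(0.9, 30.0))  # ratio 0.030 -> True (elevated)
print(elevated_permeability(0.6, 30.0))  # ratio 0.020 -> False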
940896
Most “pathogenic” genetic variants have a low risk of causing disease
Imagine getting a positive result on a genetic test. The doctor tells you that you have a “pathogenic genetic variant,” or a DNA sequence that is known to raise the chances for getting a disease like breast cancer or diabetes. But what exactly are those chances - 10 percent? Fifty percent? One hundred? Currently, that is not an easy question to answer. To address this need, researchers at the Icahn School of Medicine at Mount Sinai analyzed the DNA sequences and electronic health record data of thousands of individuals stored in two massive biobanks. Overall, they discovered that the chance a pathogenic genetic variant may actually cause a disease is relatively low - about 7 percent. Nonetheless, they also found that some variants, such as those associated with breast cancer, are linked to a wide range of risks for disease. The results, published in JAMA, could alter the way the risks associated with these variants are reported, and one day, help guide the way physicians interpret genetic testing results. “A major goal of this study was to produce helpful, advanced statistics which quantitatively assess the impact that known disease-causing genetic variants may have on an individual’s risk to disease,” said Ron Do, PhD, Associate Professor of Genetics and Genomic Sciences and a member of The Charles Bronfman Institute for Personalized Medicine at Icahn Mount Sinai. Over the past 20 years scientists have discovered hundreds of thousands of variants that could cause a variety of diseases. However, due to the nature of these discoveries, it has been difficult to estimate - or provide statistics on - the true risk of this happening for each gene variant. So far, most estimates have been based on studies involving a small number of subjects, who were either part of a family that had a history of having a disease or were recruited at disease-specific clinics. But studies like these that do not use randomly chosen large populations may produce overestimates of the risk posed by variants. In this study, the researchers tackled the issue by searching large-scale DNA sequencing data of 72,434 individuals for 37,780 known variants and then scanning each individual’s health records for a corresponding disease diagnosis. The extensive search involved 29,039 participants in Mount Sinai’s BioMe® Biobank program and 43,395 participants who were part of the UK Biobank. The study was led by Iain S. Forrest, an MD-PhD candidate in Dr. Do’s lab who found inspiration from prior clinical experience he had as part of a postbaccalaureate fellowship at the National Institutes of Health (NIH). “The idea for the study came out of a brainstorming session,” said Mr. Forrest. “Dr. Do and I discussed the need to have a better system for classifying disease risk. Currently, variants are categorized by broad labels such as ‘pathogenic’ or ‘benign.’ As I learned in the clinic, there’s a lot of grey area with these labels. That’s when we realized that the biobanks which link DNA sequence data to electronic health records are an unparalleled opportunity to address this need.” Initial results showed that 157 diseases in their data set could be linked to 5,360 variants that were defined as either “pathogenic” by ClinVar, a widely referenced, NIH-supported public library, or “loss-of-function” as predicted by bioinformatic algorithms. On average, the “penetrance,” or chance that a variant was linked to a disease diagnosis, was low, specifically 6.9 percent. 
Likewise, the average risk difference, which describes the increase in disease risk for an individual who has the variant over an individual who does not have it, was also low. “At first I was quite surprised by the results. The risks we discovered were lower than I expected,” said Dr. Do. “These results raise questions about how we should be classifying the risks of these variants.” Despite these results, the risks associated with some genetic variants remained high. For instance, pathogenic variants of the breast cancer genes BRCA1 and BRCA2 both averaged 38 percent penetrance, with individual variants falling between zero and 100 percent. Further results demonstrated other advantages of using biobank data. In one example, the researchers were able to calculate the risks of individual variants that are associated with age-related disorders, such as some forms of type 2 diabetes and breast and prostate cancers. On average, the penetrance of these variants was about 10 percent for individuals over 70 years of age whereas it was about 8 percent for those who were older than 20. The team also found that the presence of some variants could depend on an individual’s ethnicity and identified more than 100 variants that are specifically found in individuals of non-European descent. Finally, the authors listed several potential ways the study itself could have under- or overestimated the risks reported. “While more research is needed to be done, we feel that this study is a good first step towards eventually providing doctors and patients with the accurate and nuanced information they need to make more precise diagnoses,” said Dr. Do. This work was supported by the National Institutes of Health (GM124836, GM007280, HL139865, and HL155915). Article Forrest, I.S., et al; Population-based penetrance of deleterious clinical variants, JAMA, January 25, 2022, DOI: 10.1001/jama.2021.23686. About the Mount Sinai Health System The Mount Sinai Health System is New York City's largest academic medical system, encompassing eight hospitals, a leading medical school, and a vast network of ambulatory practices throughout the greater New York region. Mount Sinai advances medicine and health through unrivaled education and translational research and discovery to deliver care that is the safest, highest-quality, most accessible and equitable, and the best value of any health system in the nation. The Health System includes approximately 7,300 primary and specialty care physicians; 13 joint-venture ambulatory surgery centers; more than 415 ambulatory practices throughout the five boroughs of New York City, Westchester, Long Island, and Florida; and more than 30 affiliated community health centers. The Mount Sinai Hospital is ranked on U.S. News & World Report's "Honor Roll" of the top 20 U.S. hospitals and is top in the nation by specialty: No. 1 in Geriatrics and top 20 in Cardiology/Heart Surgery, Diabetes/Endocrinology, Gastroenterology/GI Surgery, Neurology/Neurosurgery, Orthopedics, Pulmonology/Lung Surgery, Rehabilitation, and Urology. New York Eye and Ear Infirmary of Mount Sinai is ranked No. 12 in Ophthalmology. Mount Sinai Kravis Children's Hospital is ranked in U.S. News & World Report’s “Best Children’s Hospitals” among the country’s best in four out of 10 pediatric specialties. The Icahn School of Medicine is one of three medical schools that have earned distinction by multiple indicators: ranked in the top 20 by U.S. News & World Report's "Best Medical Schools," aligned with a U.S. 
News & World Report "Honor Roll" Hospital, and No. 14 in the nation for National Institutes of Health funding. Newsweek’s “The World’s Best Smart Hospitals” ranks The Mount Sinai Hospital as No. 1 in New York and in the top five globally, and Mount Sinai Morningside in the top 20 globally. For more information, visit https://www.mountsinai.org or find Mount Sinai on Facebook, Twitter and YouTube.
10.1001/jama.2021.23686
2022
JAMA
Population-Based Penetrance of Deleterious Clinical Variants
Population-based assessment of disease risk associated with gene variants informs clinical decisions and risk stratification approaches. To evaluate the population-based disease risk of clinical variants in known disease predisposition genes. This cohort study included 72 434 individuals with 37 780 clinical variants who were enrolled in the BioMe Biobank from 2007 onwards with follow-up until December 2020 and the UK Biobank from 2006 to 2010 with follow-up until June 2020. Participants had linked exome and electronic health record data, were older than 20 years, and were of diverse ancestral backgrounds. Variants previously reported as pathogenic or predicted to cause a loss of protein function by bioinformatic algorithms (pathogenic/loss-of-function variants). The primary outcome was the disease risk associated with clinical variants. The risk difference (RD) between the prevalence of disease in individuals with a variant allele (penetrance) vs in individuals with a normal allele was measured. Among 72 434 study participants, 43 395 were from the UK Biobank (mean [SD] age, 57 [8.0] years; 24 065 [55%] women; 2948 [7%] non-European) and 29 039 were from the BioMe Biobank (mean [SD] age, 56 [16] years; 17 355 [60%] women; 19 663 [68%] non-European). Of 5360 pathogenic/loss-of-function variants, 4795 (89%) were associated with an RD less than or equal to 0.05. Mean penetrance was 6.9% (95% CI, 6.0%-7.8%) for pathogenic variants and 0.85% (95% CI, 0.76%-0.95%) for benign variants reported in ClinVar (difference, 6.0 [95% CI, 5.6-6.4] percentage points), with a median of 0% for both groups due to large numbers of nonpenetrant variants. Penetrance of pathogenic/loss-of-function variants for late-onset diseases was modified by age: mean penetrance was 10.3% (95% CI, 9.0%-11.6%) in individuals 70 years or older and 8.5% (95% CI, 7.9%-9.1%) in individuals 20 years or older (difference, 1.8 [95% CI, 0.40-3.3] percentage points). Penetrance of pathogenic/loss-of-function variants was heterogeneous even in known disease predisposition genes, including BRCA1 (mean [range], 38% [0%-100%]), BRCA2 (mean [range], 38% [0%-100%]), and PALB2 (mean [range], 26% [0%-100%]). In 2 large biobank cohorts, the estimated penetrance of pathogenic/loss-of-function variants was variable but generally low. Further research of population-based penetrance is needed to refine variant interpretation and clinical evaluation of individuals with these variant alleles.
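Penetrance and risk difference as defined above reduce to simple proportions. An illustrative Python sketch with invented counts:

# Penetrance and risk difference (RD) as defined in the study, computed from
# simple carrier / non-carrier counts. The counts below are invented for
# illustration, not study data.
def penetrance(cases_with_variant, carriers):
    return cases_with_variant / carriers

def risk_difference(cases_with_variant, carriers, cases_without_variant, non_carriers):
    return penetrance(cases_with_variant, carriers) - cases_without_variant / non_carriers

# e.g. 7 affected among 100 carriers vs 200 affected among 20,000 non-carriers
print(penetrance(7, 100))                    # 0.07 -> 7% penetrance
print(risk_difference(7, 100, 200, 20_000))  # 0.06 -> RD of 6 percentage points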
836652
A more sustainable way to refine metals
A team of chemists in Canada has developed a way to process metals without using toxic solvents and reagents. The system, which also consumes far less energy than conventional techniques, could greatly shrink the environmental impact of producing metals from raw materials or from post-consumer electronics. "At a time when natural deposits of metals are on the decline, there is a great deal of interest in improving the efficiency of metal refinement and recycling, but few disruptive technologies are being put forth," says Jean-Philip Lumb, an associate professor in McGill University's Department of Chemistry. "That's what makes our advance so important." The discovery stems from a collaboration between Lumb and Tomislav Friscic at McGill in Montreal, and Kim Baines of Western University in London, Ont. In an article published recently in Science Advances, the researchers outline an approach that uses organic molecules, instead of chlorine and hydrochloric acid, to help purify germanium, a metal used widely in electronic devices. Laboratory experiments by the researchers have shown that the same technique can be used with other metals, including zinc, copper, manganese and cobalt. The research could mark an important milestone for the "green chemistry" movement, which seeks to replace toxic reagents used in conventional industrial manufacturing with more environmentally friendly alternatives. Most advances in this area have involved organic chemistry - the synthesis of carbon-based compounds used in pharmaceuticals and plastics, for example. "Applications of green chemistry lag far behind in the area of metals," Lumb says. "Yet metals are just as important for sustainability as any organic compound. For example, electronic devices require numerous metals to function." Taking a page from biology There is no single ore rich in germanium, so it is generally obtained from mining operations as a minor component in a mixture with many other materials. Through a series of processes, that blend of matter can be reduced to germanium and zinc. "Currently, in order to isolate germanium from zinc, it's a pretty nasty process," Baines explains. The new approach developed by the McGill and Western chemists "enables you to get germanium from zinc, without those nasty processes." To accomplish this, the researchers took a page from biology. Lumb's lab for years has conducted research into the chemistry of melanin, the molecule in human tissue that gives skin and hair their color. Melanin also has the ability to bind to metals. "We asked the question: 'Here's this biomaterial with exquisite function, would it be possible to use it as a blueprint for new, more efficient technologies?'" The scientists teamed up to synthesize a molecule that mimics some of the qualities of melanin. In particular, this "organic co-factor" acts as a mediator that helps to extract germanium at room temperature, without using solvents. Next step: industrial scale The system also taps into Friscic's expertise in mechanochemistry, an emerging branch of chemistry that relies on mechanical force - rather than solvents and heat - to promote chemical reactions. Milling jars containing stainless-steel balls are shaken at high speeds to help purify the metal. "This shows how collaborations naturally can lead to sustainability-oriented innovation," Friscic says. 
"Combining elegant new chemistry with solvent-free mechanochemical techniques led us to a process that is cleaner by virtue of circumventing chlorine-based processing, but also eliminates the generation of toxic solvent waste" The next step in developing the technology will be to show that it can be deployed economically on industrial scales, for a range of metals. "There's a tremendous amount of work that needs to be done to get from where we are now to where we need to go," Lumb says. "But the platform works on many different kinds of metals and metal oxides, and we think that it could become a technology adopted by industry. We are looking for stakeholders with whom we can partner to move this technology forward." ### Funding for the research was provided by the Natural Sciences and Engineering Research Council of Canada, the National Natural Science Foundation of China, the Soochow University-Western University Center for Synchrotron Radiation Research, and the Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University. "A chlorine-free protocol for processing germanium," Martin Glavinovic et al., Science Advances, 5 May 2017. DOI: 10.1126/sciadv.1700149 http://advances.sciencemag.org/content/3/5/e1700149
10.1126/sciadv.1700149
2017
Science Advances
A chlorine-free protocol for processing germanium
A quinone/catechol redox platform replaces Cl2 or HCl for processing germanium metal or germanium dioxide to germanes.
476047
How do corals make the most of their symbiotic algae?
10.1038/s41467-019-13963-z
2020
Nature Communications
Symbiont population control by host-symbiont metabolic interaction in Symbiodiniaceae-cnidarian associations
In cnidarian-Symbiodiniaceae symbioses, algal endosymbiont population control within the host is needed to sustain a symbiotic relationship. However, the molecular mechanisms that underlie such population control are unclear. Here we show that a cnidarian host uses nitrogen limitation as a primary mechanism to control endosymbiont populations. Nitrogen acquisition and assimilation transcripts become elevated in symbiotic Breviolum minutum algae as they reach high densities within the sea anemone host Exaiptasia pallida. These same transcripts increase in free-living algae deprived of nitrogen. Symbiotic algae also have an elevated carbon-to-nitrogen ratio and shift metabolism towards scavenging nitrogen from purines relative to free-living algae. Exaiptasia glutamine synthetase and glutamate synthase transcripts concomitantly increase with the algal endosymbiont population, suggesting an increased ability of the host to assimilate ammonium. These results suggest algal growth and replication in hospite is controlled by access to nitrogen, which becomes limiting for the algae as their population within the host increases.
490196
Touchscreens may boost motor skills in toddlers
Does your toddler use a touchscreen tablet? A recent study published in Frontiers in Psychology has shown that early touchscreen use, and in particular actively scrolling the screen, correlates with increased fine motor control in toddlers. Smartphones and tablets are now commonplace at work and in the home. If you are reading this on your morning commute on public transport, it is likely to be on a touchscreen device, while surrounded by people who are completely absorbed by their own touchscreens. There has been a dramatic increase in the ownership and use of tablets and smartphones in recent years. In the UK, family ownership of touchscreen devices increased from 7% in 2011 to 71% in 2014. It is therefore not surprising that children are using touchscreens from a very early age, but is this a good thing or not? The effects of using touchscreens on young children are a concern for some parents and policymakers. Popular opinion holds that using touchscreens at an early age is likely to delay the cognitive development of children. The American Academy of Pediatrics advises that children should not be exposed to any screens, including touchscreens, before the age of two, and similar agencies in other countries have adopted these guidelines. However, we don't yet know if these fears are justified, as it turns out that when it comes to touchscreens, they aren't backed by hard data. The current guidelines are arguably more of a knee-jerk reaction to a new technology than an informed health strategy. Scientists have not yet extensively studied the relationship between childhood development and using touchscreens, because the technology is still so new and the children that have used it from early childhood are still very young. Despite the guidelines, in reality many toddlers use touchscreens from a very early age. Dr. Tim J. Smith of Birkbeck, University of London, realized that there is a need for more solid data and, with the help of his collaborators at King's College, set up an online survey for UK parents to answer questions about their children's touchscreen use. This included questions about whether the toddlers used touchscreens, when they first used one, and how often and how long they use them. The survey also included specific questions to assess the development of the children, such as the age that they first stacked blocks, which indicates fine motor skills, or the age they first used two-word sentences, which indicates language development. In total, 715 families responded and the study confirmed that using touchscreens is extremely common in UK toddlers. "The study showed that the majority of toddlers have daily exposure to touchscreen devices, increasing from 51.22% at 6-11 months to 92.05% at 19-36 months," explained Dr. Smith. They found no significant associations between using touchscreens and either walking or language development. However, "in toddlers aged 19-36 months, we found that the age that parents reported their child first actively scrolling a touchscreen was positively associated with the age that they were first able to stack blocks, a measure of fine motor control." It is not yet known if this correlation indicates that using touchscreens can enhance fine motor skills, or if children with fine motor skills are more likely to use touchscreens earlier, and so further work is required to determine the nature of this relationship more precisely.
However, it is clear that the current generation of toddlers is adapting rapidly to this new technology and these children look set to use these devices throughout their lives. ###
10.3389/fpsyg.2016.01108
2016
Frontiers in Psychology
Toddlers’ Fine Motor Milestone Achievement Is Associated with Early Touchscreen Scrolling
Touchscreen technologies provide an intuitive and attractive source of sensory/cognitive stimulation for young children. Despite fears that usage may have a negative impact on toddlers' cognitive development, empirical evidence is lacking. The current study presents results from the UK Toddler Attentional Behaviours and LEarning with Touchscreens (TABLET) project, examining the association between toddlers' touchscreen use and the attainment of developmental milestones. Data were gathered in an online survey of 715 parents of 6- to 36-month-olds to address two research questions: (1) How does touchscreen use change from 6 to 36 months? (2) In toddlers (19-36 months, i.e., above the median age, n = 366), how does retrospectively reported age of first touchscreen usage relate to gross motor (i.e., walking), fine motor (i.e., stacking blocks), and language (i.e., producing two-word utterances) milestones? In our sample, the proportion of children using touchscreens, as well as the average daily usage time, increased with age (youngest quartile, 6-11 months: 51.22% users, 8.53 min per day; oldest quartile, 26-36 months: 92.05% users, average use of 43.95 min per day). In toddlers, aged 19-36 months, age of first touchscreen use was significantly associated with fine motor (stacking blocks), p = 0.03, after controlling for covariates age, sex, mother's education (a proxy for socioeconomic status) as well as age of early fine motor milestone achievement (pincer grip). This effect was only present for active scrolling of the touchscreen p = 0.04, not for video watching. No significant relationships were found between touchscreen use and either gross motor or language milestones. Touchscreen use increases rapidly over the first 3 years of life. In the current study, we find no evidence to support a negative association between the age of first touchscreen usage and developmental milestones. Indeed, earlier touchscreen use, specifically scrolling of the screen, was associated with earlier fine motor achievement. Future longitudinal studies are required to elucidate the temporal order and mechanisms of this association, and to examine the impact of touchscreen use on other, more fine-grained, measures of behavioral, cognitive, and neural development.
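The adjusted association described above (age of first active scrolling vs age of stacking blocks, controlling for the named covariates) is a standard regression. A hypothetical Python sketch; the file and column names are placeholders, not the TABLET survey data:

# Hypothetical sketch of the adjusted association reported above. The file and
# column names are placeholders, not the TABLET survey data.
import pandas as pd
import statsmodels.formula.api as smf

tablet = pd.read_csv("tablet_survey.csv")  # hypothetical file
fit = smf.ols(
    "stack_blocks_age ~ first_scroll_age + child_age + C(sex) + "
    "C(mother_education) + pincer_grip_age",
    data=tablet,
).fit()
print(fit.params["first_scroll_age"], fit.pvalues["first_scroll_age"])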
493479
Extreme weather has greater impact on nature than expected -- researchers launch roadmap
An oystercatcher nest is washed away in a storm surge. Australian passerine birds die during a heatwave. A late frost in their breeding area kills off a group of American cliff swallows. Small tragedies that may seem unrelated, but point to the underlying long-term impact of extreme climatic events. In the special June issue of Philosophical Transactions of the Royal Society B researchers of the Netherlands Institute of Ecology (NIOO-KNAW) launch a new approach to these 'extreme' studies. Extremes, outliers, cataclysms. As a field of biological research it's still in its infancy, but interest in the impact of extreme weather and climate events on nature is growing rapidly. That's partly because it is now increasingly clear that the impact of extreme events on animal behaviour, ecology and evolution could well be greater than that of the 'normal' periods in between. And partly because the frequency of such events is likely to increase, due to climate change. Not 1 to 1 But how do we define extreme events in the first place? That's problematic, explain NIOO researchers Marcel Visser and Martijn van de Pol. "For climatologists, weather has to be warmer, colder or more extreme in another way than it is 95% of the time. But that doesn't necessarily make it extreme in terms of its impact on nature. There isn't a 1 to 1 correspondence." According to the researchers and a group of international colleagues, most of the evidence suggests that the impact varies depending on the species and the circumstances. "Obviously for a bird, the impact of a couple of extremely cold days in December wouldn't be the same as in April or May, when there are chicks in the nest." This makes it very difficult to predict the consequences of extremes. "We also don't know enough about the long-term consequences for nature of these crucially important extremes", say Van de Pol and Visser. "But that could be about to change." As guest editors of a themed issue of the world's oldest scientific journal, dedicated to extreme climatic events, they take stock of the available knowledge and the hiatuses that currently exist. They suggest a 'roadmap' for the further development of this new area of research, aimed at making it easier to compare and synthesize information across fields. An added complication is that storm surges, heatwaves of five days or longer and decades of drought tend to be quite rare. But when they do occur, the consequences are often catastrophic: a challenging combination for researchers. Van de Pol: "Take the Wadden Sea. At the end of the 12th century, there was a storm that utterly transformed the Wadden Sea. The ecological consequences of that storm have continued for decades, if not centuries." "Or take the dinosaurs", adds Visser. "They never recovered from the impact of a single meteorite in Mexico." Fatal for fairies? Less cataclysmic events, too, can have major consequences. Two examples from Phil. Trans. B are oystercatchers that build their nests close to the coast despite rising sea levels, and fairy-wrens - Australian passerine birds - that are increasingly exposed to heatwaves and high temperatures, with sometimes fatal consequences. Just imagine you're an oystercatcher: one moment you sit there peacefully incubating your eggs on the saltmarsh, and the next your nest is gone. Engulfed by the Wadden Sea during a storm surge. Time-lapse footage from researchers on the Wadden island of Schiermonnikoog clearly demonstrates the danger. Van de Pol. 
"We've studied these nests for two decades, and during that time the number of flooding events has more than doubled. Yet the oystercatchers don't take any action." The researchers were keen to find out if the birds would learn from experience and build their nests on higher ground - safer but further from their favourite sea food, "but they don't". This could result in natural selection based on nest elevation, with only breeders who build their nest on high ground likely to survive. But this could affect the future viability of the population. The other example looks at the impact on two species of passerine birds of a decrease in the number of cold spells and an increase in the number of heatwaves. The red-winged fairy-wren and the white-browed scrubwren both have their habitat in southwestern Australia and they are ecologically quite similar. So how do they respond over time? Do they change their body size to mediate the impact of the extreme temperatures? Van de Pol: "Data over nearly 40 years shows that the two species, although quite similar, respond in completely different ways". Rocket science? So could rare extreme events be more likely to determine the success or failure of populations than the much longer 'normal' periods in between? "Let's say you've studied a breeding population of migratory birds for 49 years", explains Marcel Visser, "and year after year, the birds that arrive early in spring have the most chicks. It's hard to understand why more birds don't arrive early. Then, in the 50th year, a night of extremely cold weather suddenly kills 80% of the early arrivals, while the latecomers escape from the massacre. This may explain why the late birds are so successful at passing on their traits." If that makes it sound as if it's really very hard to make predictions, Visser agrees. "It's not exactly rocket science", he says,"with its complex and elaborate calculations. In fact, it's much more difficult than that!" ### With more than 300 staff members and students, NIOO is one of the largest research institutes of the Royal Netherlands Academy of Arts and Sciences (KNAW). The institute specialises in water and land ecology. As of 2011, the institute is located in an innovative and sustainable research building in Wageningen, the Netherlands. NIOO has an impressive research history that stretches back 60 years and spans the entire country, and beyond. Themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events' van Philosophical Transactions of the Royal Society B (Biological Sciences), Martijn van de Pol, Stéphanie Jenouvrier & Marcel E. Visser. 19 June 2017. Available online now for subscribers: http://rstb.royalsocietypublishing.org/content/372/1723 Three of the fourteen articles in the themed issue were written by NIOO researchers: Article 1: Behavioural, ecological and evolutionary responses to extreme climatic events: challenges and directions, Martijn van de Pol, Stéphanie Jenouvrier, Johannes H. C. Cornelissen, Marcel E. Visser. http://rstb.royalsocietypublishing.org/content/372/1723/20160134 Article 2: No phenotypic plasticity in nest-site selection in response to extreme flooding events, Liam D. Bailey, Bruno J. Ens, Christiaan Both, Dik Heg, Kees Oosterbeek, Martijn van de Pol. http://rstb.royalsocietypublishing.org/content/372/1723/20160139 Article 3: Effects of extreme weather on two sympatric Australian passerine bird species, Janet L. Gardner, Eleanor Rowley, Perry de Rebeira, Alma de Rebeira, Lyanne Brouwer. 
http://rstb.royalsocietypublishing.org/content/372/1723/20160148
10.1098/rstb.2016.0134
2017
Philosophical Transactions of the Royal Society B Biological Sciences
Behavioural, ecological and evolutionary responses to extreme climatic events: challenges and directions
More extreme climatic events (ECEs) are among the most prominent consequences of climate change. Despite a long-standing recognition of the importance of ECEs by paleo-ecologists and macro-evolutionary biologists, ECEs have only recently received a strong interest in the wider ecological and evolutionary community. However, as with many rapidly expanding fields, it lacks structure and cohesiveness, which strongly limits scientific progress. Furthermore, due to the descriptive and anecdotal nature of many ECE studies it is still unclear what the most relevant questions and long-term consequences are of ECEs. To improve synthesis, we first discuss ways to define ECEs that facilitate comparison among studies. We then argue that biologists should adhere to more rigorous attribution and mechanistic methods to assess ECE impacts. Subsequently, we discuss conceptual and methodological links with climatology and disturbance-, tipping point- and paleo-ecology. These research fields have close linkages with ECE research, but differ in the identity and/or the relative severity of environmental factors. By summarizing the contributions to this theme issue we draw parallels between behavioural, ecological and evolutionary ECE studies, and suggest that an overarching challenge is that most empirical and theoretical evidence points towards responses being highly idiosyncratic, and thus predictability being low. Finally, we suggest a roadmap based on the proposition that an increased focus on the mechanisms behind the biological response function will be crucial for increased understanding and predictability of the impacts of ECE.This article is part of the themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events'.
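To make the climatological convention quoted in the press text concrete ("warmer, colder or more extreme in another way than it is 95% of the time"), here is a minimal Python sketch that flags extreme days in a temperature record using a percentile threshold. The data are synthetic and the two-sided 2.5th/97.5th percentile cut is one possible reading of that convention, not the procedure used in the themed issue.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a multi-decade daily temperature record (deg C);
# a real analysis would use observed station or reanalysis data.
daily_temp = rng.normal(loc=10.0, scale=6.0, size=40 * 365)

# One reading of the climatologists' convention: a day is "extreme" if it is
# warmer or colder than the weather 95% of the time, i.e. it falls outside
# the 2.5th-97.5th percentile range of the record.
lo, hi = np.percentile(daily_temp, [2.5, 97.5])
is_extreme = (daily_temp < lo) | (daily_temp > hi)

print(f"thresholds: {lo:.1f} C / {hi:.1f} C")
print(f"fraction of days flagged extreme: {is_extreme.mean():.3f}")  # ~0.05 by construction
```

As the press text stresses, a day flagged this way is extreme only in the climatological sense; whether it matters biologically depends on the species and the timing.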
954898
Novel immunotherapy mechanism suppresses breast cancer development
Journal of Experimental Medicine 10.1084/jem.20201963 Experimental study Animals CD4+ T helper 2 cells suppress breast cancer by inducing terminal differentiation 3-Jun-2022 Disclosures: M. Boieri is an employee of Zelluna Immunotherapy. S. Iyer is an employee of and has equity in Verve Therapeutics. M.N. Rivera reported grants from Advanced Cell Diagnostics and non-financial support from Merck/Serono outside the submitted work. No other disclosures were reported.
10.1084/jem.20201963
2022
The Journal of Experimental Medicine
CD4+ T helper 2 cells suppress breast cancer by inducing terminal differentiation
Cancer immunology research is largely focused on the role of cytotoxic immune responses against advanced cancers. Herein, we demonstrate that CD4+ T helper (Th2) cells directly block spontaneous breast carcinogenesis by inducing the terminal differentiation of the cancer cells. Th2 cell immunity, stimulated by thymic stromal lymphopoietin, caused the epigenetic reprogramming of the tumor cells, activating mammary gland differentiation and suppressing epithelial–mesenchymal transition. Th2 polarization was required for this tumor antigen–specific immunity, which persisted in the absence of CD8+ T and B cells. Th2 cells directly blocked breast carcinogenesis by secreting IL-3, IL-5, and GM-CSF, which signaled to their common receptor expressed on breast tumor cells. Importantly, Th2 cell immunity permanently reverted high-grade breast tumors into low-grade, fibrocystic-like structures. Our findings reveal a critical role for CD4+ Th2 cells in immunity against breast cancer, which is mediated by terminal differentiation as a distinct effector mechanism for cancer immunoprevention and therapy.
946161
Kwong Lab develops biosensors for quick assessment of cancer treatment
Immune checkpoint blockade (ICB) inhibitors have transformed the treatment of cancer and have become the frontline therapy for a broad range of malignancies. It’s because they work better than the previous standard of care. Still, less than 25% of patients benefit from these drugs, which are designed to block proteins that stop the immune system from attacking cancer cells. And in many cases, that benefit is temporary. Compounding all of that is the difficulty in telling, in a timely fashion, if the treatment is working at all. That kind of critical feedback can determine whether a patient should stay the course or move onto an alternative therapy. “We don’t have an effective way of providing that information early enough, and that’s a big problem,” noted Gabe Kwong, associate professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University. “Another problem is, even for patients that respond to the therapy, there will likely come a point when they develop a resistance and stop responding.” So Kwong and his team have developed a system of synthetic biosensors that will let a patient and doctor quickly learn if an ICB therapy is working through a non-invasive urinalysis. The research team shared their work in a study published March 3 in the journal Nature Biomedical Engineering. Typically, when physicians want to know if their patients are responding to cancer drugs, they have two basic options: They can perform a biopsy, but that is invasive, can be painful, and the results may take a few days.  Or they can take pictures — a CT scan, for example — and actually look at the tumor. But imaging can be deceiving when monitoring immunotherapies. For example, if the tumor appears to have increased in size, it might seem like the drug isn’t working for the patient. “But if you’re successful in activating the immune system, you’re going to get a flood of immune cells into the tumor, and it will look like the tumor has grown larger,” Kwong said. “In reality, the patient is responding to the therapy.” That’s called “pseudoprogression” of the disease. In blocking the activity of those unfriendly proteins, the ICB drug activates protective T cells, which attack the tumor en masse. The T cells kill it with a deadly secretion of proteases called granzymes, part of the same class of enzymes found in the stomach that are used to digest food. Potent stuff.  “We reasoned, if patients are responding to the drug, it means these T cells are making proteases, and if they’re not responding, these proteases are not present, so the T cells are not active,” said Kwong, whose collaborators included Coulter Associate Professor Peng Qiu and lead authors Quoc D. Mac and Anirudh Sivakumar, both grad students when the study was conducted. Kwong’s lab has been making and improving their synthetic biosensors for more than a decade. For this study, they developed sensors to detect both T cell and tumor proteases (tumors also secrete a type of protease) during ICB treatment. The sensors are attached to the ICB drug that makes its way toward the tumor environment after injection. When they reach their destination, the sensors are activated by proteases produced by both T cells and tumor cells, which triggers the release of signaling fluorescent reporters that are designed to concentrate into urine. “Basically, these signals would be diluted in blood and would be very hard to pick up, but everything from your blood gets filtered through the kidneys,” Kwong said. 
“So when we look at the urine, we get very concentrated signals, which increase or decrease, corresponding to whether the patients are responding or not.” A second way of reading the biosensor reporters involves artificial intelligence and machine learning techniques to identify signal patterns that discriminate between the different ways the drug can fail. The second part of the paper focuses mainly on this approach, teasing apart two different mechanisms of intrinsic resistance. “There are multiple versions of resistance,” Kwong said. “A patient can be intrinsically resistant to the therapy — that is, it would never work for them. And there are patients who have acquired resistance — the drug initially worked for them but over time it stops working.” Kwong’s biosensors can tell if the drug is working and can discriminate between two mechanisms of intrinsic resistance — both due to mutations in different protein coding genes. “Next we’d like to develop the same biosensor approach for patients that acquire resistance,” Kwong said. “We try to think of the patient journey in our work: the person who gets a bad diagnosis, starts a new treatment, responds to the drug, and then three months down the road they’re no longer responding. It’s a subtle difference, but a big problem.” CITATION: Quoc D. Mac, Anirudh Sivakumar, Hathaichanok Phuengkham, Congmin Xu, James R. Bowen, Fang-Yi Su, Samuel Z. Stentz, Hyoungjun Sim, Adrian M. Harris, Tonia T. Li, Peng Qiu, Gabriel A. Kwong. “Urinary detection of early responses to checkpoint blockade and of resistance to it via protease-cleaved antibody-conjugated sensors.” Nature Biomedical Engineering (March 3, 2022) https://doi.org/10.1038/s41551-022-00852-y FUNDING: This work was funded by the NIH Director’s New Innovator Award DP2HD091793 and National Cancer Institute R01 grant 5R01CA237210. DISCLOSURES: Gabe Kwong is co-founder of and serves as consultant to Glympse Bio and Satellite Bio. This study could affect his personal financial status. The terms of this arrangement have been reviewed and approved by Georgia Tech in accordance with its conflict-of-interest policies. Mac, Bowen, and Kwong are listed as inventors on a patent application pertaining to the results of the paper. The patent applicant is the Georgia Tech Research Corporation. The application number is PCT/US2019/050530. The patent is currently pending/published (publication number WO2020055952A1). The mass-barcoded antibody-sensor conjugates and related applications are covered in this patent. Nature Biomedical Engineering 10.1038/s41551-022-00852-y Experimental study Cells Urinary detection of early responses to checkpoint blockade and of resistance to it via protease-cleaved antibody-conjugated sensors 3-Mar-2022
10.1038/s41551-022-00852-y
2022
Nature Biomedical Engineering
Urinary detection of early responses to checkpoint blockade and of resistance to it via protease-cleaved antibody-conjugated sensors
Immune checkpoint blockade (ICB) therapy does not benefit the majority of treated patients, and those who respond to the therapy can become resistant to it. Here we report the design and performance of systemically administered protease activity sensors conjugated to anti-programmed cell death protein 1 (αPD1) antibodies for the monitoring of antitumour responses to ICB therapy. The sensors consist of a library of mass-barcoded protease substrates that, when cleaved by tumour proteases and immune proteases, are released into urine, where they can be detected by mass spectrometry. By using syngeneic mouse models of colorectal cancer, we show that random forest classifiers trained on mass spectrometry signatures from a library of αPD1-conjugated mass-barcoded activity sensors for differentially expressed tumour proteases and immune proteases can be used to detect early antitumour responses and discriminate resistance to ICB therapy driven by loss-of-function mutations in either the B2m or Jak1 genes. Biomarkers of protease activity may facilitate the assessment of early responses to ICB therapy and the classification of refractory tumours based on resistance mechanisms.
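The classification step described in this abstract, a random forest trained on mass-barcoded reporter signals, can be sketched in a few lines of Python with scikit-learn. The reporter data below are synthetic, and the group labels, number of reporters and hyperparameters are illustrative assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for urinary mass-barcode signals (one column per protease
# substrate reporter); real inputs would come from mass spectrometry readouts.
n_per_group, n_reporters = 20, 14
groups = {"responder": 1.5, "B2m_loss": -0.5, "Jak1_loss": 0.5}  # hypothetical mean shifts

X, y = [], []
for label, shift in groups.items():
    block = rng.normal(loc=0.0, scale=1.0, size=(n_per_group, n_reporters))
    block[:, :5] += shift  # pretend the first few reporters carry the discriminating signal
    X.append(block)
    y += [label] * n_per_group
X = np.vstack(X)

# Random forest classifier to separate response from the two intrinsic-resistance
# mechanisms (hyperparameters are assumed, not taken from the paper).
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```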
962511
Perseverance rover retrieves key rocky clues to Mars’ geologic and water history
In its first year exploring Jezero Crater on Mars, the Perseverance rover collected rock samples that scientists anticipate will provide a long-awaited timeline for the planet’s geologic and water history. They’ll just have to wait a decade to find out the answer, until the samples can be scooped up from the surface and returned to Earth for dating in 2033. The scientists are nevertheless enthused by what they’ve discovered so far about the samples. These discoveries are outlined in a paper that will appear Aug. 25 in the journal Science, with more detailed analyses in a second Science paper and two other papers published online simultaneously in Science Advances. Jezero Crater, just north of the Martian equator, was a target for NASA’s Mars 2020 Mission and its Perseverance rover because it contained what looked like a river delta that formed inside a lake bed and thus could potentially tell scientists about when water flowed on the planet’s surface. Rocks collected from the floor of the crater underlie the delta sediments, so their crystallization ages will provide an upper limit for the delta’s formation, according to geochemist David Shuster, professor of earth and planetary science at the University of California, Berkeley. “When that delta was deposited is one of the main objectives of our sample return program, because that will quantify when the lake was present and when the environmental conditions were present that could possibly have been amenable to life,” said Shuster, who is a member of NASA’s science team for sample collection, one of three main authors of the Science paper that summarizes the work and co-author of two of the three other papers. The two other lead authors of the summary Science paper are geochemist Kenneth Farley of Caltech, Perseverance’s project scientist, and Mars 2020 Deputy Project Scientist Katherine Stack Morgan of NASA’s Jet Propulsion Laboratory (JPL). The main surprise, Shuster said, is that the rocks collected from four sites on the floor of Jezero Crater are igneous cumulate rocks — that is, they were formed by the cooling of molten magma and are the best rocks for precise geochronology once the samples have returned to Earth. They also show evidence of having been altered by water. “From a sampling perspective, this is huge,” he said. “The fact that we have evidence of aqueous alteration of igneous rocks — those are the ingredients that people are very excited about, with regard to understanding environmental conditions that could potentially have supported life at some point after these rocks were formed.” “One great value of the igneous rocks we collected is that they will tell us about when the lake was present in Jezero. We know it was there more recently than the igneous crater floor rocks formed,” Farley said. “This will address some major questions: When was Mars’ climate conducive to lakes and rivers on the planet’s surface? And when did it change to the very cold and dry conditions we see today?” Before the mission, geologists expected that the floor of the crater was filled with either sediment or lava, which is molten rock that spilled onto the surface and cooled rapidly. But at two sites referred to as Séítah — the Navajo word for “amidst the sand” — the rocks appear to have formed underground and cooled slowly. Evidently, whatever was covering them has eroded away over the past 2.5 to 3.5 billion years. 
“We literally debated for the first nine months, as we were driving around on the crater floor, whether the rocks that we're looking at are sediments that were deposited into a lake, or igneous rocks,” he said. “In fact, they are igneous rocks. And the form of the igneous rocks that we found is quite surprising, because it doesn't look like a simple volcanic rock that flowed into the crater. Instead, it looks like something that formed at depth and cooled gradually in a largish magma chamber.” The crystal structure of the igneous rock — not unlike the granite of the Sierra Nevada, but with different composition and much more finely grained — showed millimeter-sized grains of olivine intergrown with pyroxene that could only have been formed by slow cooling. The coarse-grained olivine is similar to that seen in some meteorites that are thought to have originated on Mars and eventually crashed into Earth. The data supporting this came from multispectral images and X-ray fluorescence analysis by instruments aboard Perseverance and are detailed in a second Science paper by lead author Yang Liu, a planetary geologist at JPL.

Séítah and Máaz sites
According to Shuster, the data allow for a couple of scenarios that explain the igneous rocks on the crater floor. “Either the rock cooled underground and came up from below, somehow, or there was something like a magma lake that filled up the crater and cooled gradually,” he said. Samples from a second nearby site called Máaz — Mars in the Navajo language — are igneous also, but of a different composition. Because this layer overlies the layer of igneous rock exposed at Séítah, the Máaz rock could have been the upper layer of the magma lake. In magma lakes on Earth, the denser minerals settle downward as they crystallize, creating layers of different compositions. These types of igneous formations are called cumulate, which means they formed by the settling of iron- and magnesium-enriched olivine and the subsequent multi-stage cooling of a thick magma body. The Máaz igneous rocks could also be from a later volcanic eruption. In either case, the upper layer that has partly eroded away could have been on the order of hundreds of meters thick, Shuster said. Both the slow-cooled rocks at Séítah and the potentially more rapidly cooled rocks at Máaz showed alteration by water, though in different ways. The Máaz rocks contained pockets of minerals that may have condensed from salty brine, while the Séítah rocks had reacted with carbonated water, according to chemical analyses onboard the rover. The precise times when these various layers formed will be revealed only by lab analysis on Earth, since the geochemical analysis tools required for dating are too large to have been placed aboard Perseverance. “There are a variety of different geochemical observations that we can make in these rocks when we return them to Earth. That will give us all sorts of information about that igneous environment,” he said. “We can figure out when the rock crystallized, which is one of the things that I'm most excited about for providing a delta timing constraint. But it also gives us information about when igneous activity was occurring in the planet’s interior. 
Combined with satellite imagery, we can then relate that to some of the bigger-picture, more regional igneous activity.” Shuster noted that duplicate rock samples were taken at each of the four sites and that, within a year, will be cached along with other duplicate samples at a contingency site near the delta, to be used only if the primary samples onboard Perseverance become inaccessible because of mechanical failure. That future cache will also include recently collected samples of sediments from the delta itself — details of which are being prepared for a future scientific paper. Science 10.1126/science.abo2196 Experimental study Aqueously altered igneous rocks sampled on the floor of Jezero crater, Mars 25-Aug-2022
10.1126/science.abo2196
2022
Science
Aqueously altered igneous rocks sampled on the floor of Jezero crater, Mars
The Perseverance rover landed in Jezero crater, Mars, to investigate ancient lake and river deposits. We report observations of the crater floor, below the crater's sedimentary delta, finding that the floor consists of igneous rocks altered by water. The lowest exposed unit, informally named Séítah, is a coarsely crystalline olivine-rich rock, which accumulated at the base of a magma body. Magnesium-iron carbonates along grain boundaries indicate reactions with carbon dioxide-rich water under water-poor conditions. Overlying Séítah is a unit informally named Máaz, which we interpret as lava flows or the chemical complement to Séítah in a layered igneous body. Voids in these rocks contain sulfates and perchlorates, likely introduced by later near-surface brine evaporation. Core samples of these rocks have been stored aboard Perseverance for potential return to Earth.
748690
Early birds less prone to depression
Middle-to-older aged women who are naturally early to bed and early to rise are significantly less likely to develop depression, according to a new study by researchers at University of Colorado Boulder and the Channing Division of Network Medicine at Brigham and Women's Hospital in Boston. The study of more than 32,000 female nurses, published in the Journal of Psychiatric Research, is the largest and most detailed observational study yet to explore the link between chronotype, or sleep-wake preference, and mood disorders. It shows that even after accounting for environmental factors like light exposure and work schedules, chronotype - which is in part determined by genetics - appears to mildly influence depression risk. "Our results show a modest link between chronotype and depression risk. This could be related to the overlap in genetic pathways associated with chronotype and mood," said lead author Céline Vetter, director of the Circadian and Sleep Epidemiology Laboratory (CASEL) at CU Boulder. Previous studies have shown that night owls are as much as twice as likely to suffer from depression. But because those studies often used data at a single time-point and didn't account for many other factors that influence depression risk, it has been hard to determine whether depression leads people to stay up later or a late chronotype boosts risk of depression. To shed light on the question, researchers used data from 32,470 female participants, average age 55, in the Nurses' Health Study, which asks nurses to fill out health questionnaires biennially. In 2009, all the participants included in the study were free of depression. When asked about their sleep patterns, 37 percent described themselves as early types, 53 percent described themselves as intermediate types, and 10 percent described themselves as evening types. The women were followed for four years to see who developed depression. Depression risk factors like body weight, physical activity, chronic disease, sleep duration, or night shift work were also assessed. The researchers found that late chronotypes, or night owls, are less likely to be married, more likely to live alone and be smokers, and more likely to have erratic sleep patterns. After accounting for these factors, they found that early risers still had a 12 - 27 percent lower risk of being depressed than intermediate types. Late types had a 6 percent higher risk than intermediate types ( this modest increase was not statistically significant.) "This tells us that there might be an effect of chronotype on depression risk that is not driven by environmental and lifestyle factors," said Vetter. Genetics play a role in determining whether you are an early bird, intermediate type, or night owl, with research showing 12-42 percent heritability. And some studies have already shown that certain genes (including PER2 and RORA), which influence when we prefer to rise and sleep, also influence depression risk. "Alternatively, when and how much light you get also influences chronotype, and light exposure also influences depression risk. Disentangling the contribution of light patterns and genetics on the link between chronotype and depression risk is an important next step" Vetter said. Vetter stresses that while the study does suggest that chronotype is an independent risk factor for depression, it does not mean night owls are doomed to be depressed. 
"Yes, chronotype is relevant when it comes to depression but it is a small effect," she says, noting that her study found a more modest effect than previous ones have. Her advice to night owls who want to lower their risk? "Being an early type seems to beneficial, and you can influence how early you are" she said. Try to get enough sleep, exercise, spend time outdoors, dim the lights at night, and try to get as much light by day as possible.
10.1016/j.jpsychires.2018.05.022
2018
Journal of Psychiatric Research
Prospective study of chronotype and incident depression among middle- and older-aged women in the Nurses’ Health Study II
Prior cross-sectional studies have suggested that being a late chronotype is associated with depression and depressive symptoms, but prospective data are lacking.We examined the association between chronotype and incident depression (defined as self-reported physician/clinician-diagnosed depression or antidepressant medication use) in 32,470 female participants of the Nurses' Health Study II cohort who self-reported their chronotype (early, intermediate or late) and were free of depression at baseline in 2009 (average age: 55 yrs). Women updated their depression status on biennial questionnaires in 2011 and 2013. We used multivariable (MV)-adjusted Cox proportional hazards models to estimate hazard ratios (HR) and 95% confidence intervals (95%CI) for incident depression across chronotype categories (i.e., early, intermediate, and late chronotypes).Across a follow-up period of 4 years, we observed 2,581 cases of incident depression in this cohort. Compared to intermediate chronotypes, early chronotypes had a modestly lower risk of depression after MV adjustment (MVHR = 0.88, 95%CI = 0.81-0.96), whereas late chronotypes had a similar risk of 1.06 (95%CI = 0.93-1.20); the overall trend across chronotype categories was statistically significant (ptrend<0.01). Results were similar when we restricted analyses to women who reported average sleep durations (7-8 h/day) and no history of rotating night shift work at baseline.Our results suggest that chronotype may influence the risk of depression in middle-to older-aged women. Additional studies are needed to confirm these findings and examine roles of both environmental and genetic factors to further our understanding of the role of chronotype in the etiology of mood disorders.
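A multivariable Cox proportional hazards model of the kind described in this abstract can be sketched with the lifelines package. The cohort below is simulated so that early chronotypes carry a modestly lower hazard, roughly echoing the reported HR of about 0.88 for early versus intermediate types; the variable names, effect sizes and follow-up scheme are assumptions for illustration, not the Nurses' Health Study data or model specification.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 5000

# Synthetic cohort: chronotype dummies (intermediate = reference category)
# plus one illustrative adjustment covariate.
early = rng.binomial(1, 0.37, n)
late = np.where(early == 0, rng.binomial(1, 0.10 / 0.63, n), 0)
activity = rng.normal(0, 1, n)

# Simulate event times so that early types have a modestly lower hazard.
log_hazard = -0.13 * early + 0.06 * late - 0.10 * activity
time = rng.exponential(scale=20 * np.exp(-log_hazard))
event = (time < 4).astype(int)            # depression observed within 4 years of follow-up
df = pd.DataFrame({"time": np.minimum(time, 4.0), "event": event,
                   "early": early, "late": late, "activity": activity})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                        # hazard ratios = exp(coef) for each covariate
```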
925099
A protein-based COVID-19 vaccine that mimics the shape of the virus
Even as several safe and effective COVID-19 vaccines are being administered to people worldwide, scientists are still hard at work developing different vaccine strategies that could provide even stronger or longer-lasting immunity against SARS-CoV-2 and its variants. Now, researchers reporting in ACS Central Science have immunized mice with nanoparticles that mimic SARS-CoV-2 by displaying multiple copies of the receptor binding domain (RBD) antigen, showing that the vaccine triggers robust antibody and T cell responses. Although the first vaccines to receive Emergency Use Authorization by the U.S. Food and Drug Administration were based on mRNA, more conventional protein-based vaccines have also shown promise in clinical trials. Most train the immune system to recognize the RBD, a peptide that is the portion of the SARS-CoV-2 spike protein that binds to the ACE-2 receptor on host cell surfaces. However, not all of these vaccines elicit both antibody and T cell responses, both of which are thought to be important for longer-lasting immunity. Melody Swartz, Jeffrey Hubbell and colleagues had previously developed a vaccine delivery tool called polymersomes –– self-assembling, spherical nanoparticles that can encapsulate antigens and adjuvants (helper molecules that boost the immune response) and then release them inside immune cells. Polymersomes trigger robust T cell immunity, and the researchers wondered if they could further improve the antibody response by engineering the nanoparticles to mimic viruses by displaying multiple copies of the RBD on their surfaces. So the team made polymersomes that were similar in size to SARS-CoV-2 and decorated them with many RBDs. After characterizing the nanoparticles in vitro, they injected them into mice, along with separate polymersomes containing an adjuvant, in two doses that were three weeks apart. For comparison, they immunized another group of mice with polymersomes that encapsulated the RBD, along with the nanoparticles containing the adjuvant. Although both groups of mice produced high levels of RBD-specific antibodies, only the surface-decorated polymersomes generated neutralizing antibodies that prevented SARS-CoV-2 infection in cells. Both the surface-decorated and encapsulated RBDs triggered robust T cell responses. Although the new vaccine still needs to be tested for safety and efficacy in humans, it could have advantages over mRNA vaccines with regard to widespread distribution in resource-limited areas, the researchers say. That’s because the surface-decorated polymersomes are stable and active for at least 6 months with refrigeration, in contrast to mRNA vaccines that require subzero temperature storage. The authors acknowledge funding from the Chicago Immunoengineering Innovation Center of the University of Chicago, the Chicago Biomedical Consortium COVID-19 Response Award, the National Institutes of Health, the Canadian Institutes of Health Research and the University of Chicago Comprehensive Cancer Center. The article is freely available as an ACS AuthorChoice paper here. For more of the latest news,register for our upcoming meeting, ACS Fall 2021. Journalists and public information officers are encouraged to apply for complimentary press registration by emailing us at [email protected]. The American Chemical Society (ACS) is a nonprofit organization chartered by the U.S. Congress. ACS’ mission is to advance the broader chemistry enterprise and its practitioners for the benefit of Earth and all its people. 
The Society is a global leader in promoting excellence in science education and providing access to chemistry-related information and research through its multiple research solutions, peer-reviewed journals, scientific conferences, eBooks and weekly news periodical Chemical & Engineering News. ACS journals are among the most cited, most trusted and most read within the scientific literature; however, ACS itself does not conduct chemical research. As a leader in scientific information solutions, its CAS division partners with global innovators to accelerate breakthroughs by curating, connecting and analyzing the world’s scientific knowledge. ACS’ main offices are in Washington, D.C., and Columbus, Ohio. ACS Central Science 10.1021/acscentsci.1c00596 Polymersomes Decorated with the SARS-CoV-2 Spike Protein Receptor-Binding Domain Elicit Robust Humoral and Cellular Immunity
10.1021/acscentsci.1c00596
2021
ACS Central Science
Polymersomes Decorated with the SARS-CoV-2 Spike Protein Receptor-Binding Domain Elicit Robust Humoral and Cellular Immunity
The COVID-19 pandemic underscores the need for rapid, safe, and effective vaccines. In contrast to some traditional vaccines, nanoparticle-based subunit vaccines are particularly efficient in trafficking antigens to lymph nodes, where they induce potent immune cell activation. Here, we developed a strategy to decorate the surface of oxidation-sensitive polymersomes with multiple copies of the SARS-CoV-2 spike protein receptor-binding domain (RBD) to mimic the physical form of a virus particle. We evaluated the vaccination efficacy of these surface-decorated polymersomes (RBD
720601
Laser-heated nanowires produce micro-scale nuclear fusion
Nuclear fusion, the process that powers our sun, happens when nuclear reactions between light elements produce heavier ones. It's also happening - at a smaller scale - in a Colorado State University laboratory. Using a compact but powerful laser to heat arrays of ordered nanowires, CSU scientists and collaborators have demonstrated micro-scale nuclear fusion in the lab. They have achieved record-setting efficiency for the generation of neutrons - chargeless sub-atomic particles resulting from the fusion process. Their work is detailed in a paper published in Nature Communications, and is led by Jorge Rocca, University Distinguished Professor in electrical and computer engineering and physics. The paper's first author is Alden Curtis, a CSU graduate student. Laser-driven controlled fusion experiments are typically done at multi-hundred-million-dollar lasers housed in stadium-sized buildings. Such experiments are usually geared toward harnessing fusion for clean energy applications. In contrast, Rocca's team of students, research scientists and collaborators, work with an ultra fast, high-powered tabletop laser they built from scratch. They use their fast, pulsed laser to irradiate a target of invisible wires and instantly create extremely hot, dense plasmas - with conditions approaching those inside the sun. These plasmas drive fusion reactions, giving off helium and flashes of energetic neutrons. In their Nature Communications experiment, the team produced a record number of neutrons per unit of laser energy - about 500 times better than experiments that use conventional flat targets from the same material. Their laser's target was an array of nanowires made out of a material called deuterated polyethylene. The material is similar to the widely used polyethylene plastic, but its common hydrogen atoms are substituted by deuterium, a heavier kind of hydrogen atom. The efforts were supported by intensive computer simulations conducted at the University of Dusseldorf (Germany), and at CSU. Making fusion neutrons efficiently, at a small scale, could lead to advances in neutron-based imaging, and neutron probes to gain insight on the structure and properties of materials. The results also contribute to understanding interactions of ultra-intense laser light with matter.
10.1038/s41467-018-03445-z
2018
Nature Communications
Micro-scale fusion in dense relativistic nanowire array plasmas
Nuclear fusion is regularly created in spherical plasma compressions driven by multi-kilojoule pulses from the world’s largest lasers. Here we demonstrate a dense fusion environment created by irradiating arrays of deuterated nanostructures with joule-level pulses from a compact ultrafast laser. The irradiation of ordered deuterated polyethylene nanowire arrays with femtosecond pulses of relativistic intensity creates ultra-high energy density plasmas in which deuterons (D) are accelerated up to MeV energies, efficiently driving D–D fusion reactions and ultrafast neutron bursts. We measure up to 2 × 10^6 fusion neutrons per joule, an increase of about 500 times with respect to flat solid targets, a record yield for joule-level lasers. Moreover, in accordance with simulation predictions, we observe a rapid increase in neutron yield with laser pulse energy. The results will impact nuclear science and high energy density research and can lead to bright ultrafast quasi-monoenergetic neutron point sources for imaging and materials studies.
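As a quick arithmetic illustration of the reported yield figures, the short sketch below converts neutrons per joule into neutrons per pulse and recovers the flat-target baseline implied by the roughly 500-fold enhancement; the pulse energy used is a hypothetical joule-level value, not a number from the paper.

```python
# Back-of-the-envelope check of the reported yield enhancement; the flat-target
# baseline below is inferred from the ~500x figure, not quoted from the paper.
nanowire_yield = 2e6          # D-D fusion neutrons per joule of laser energy (reported)
enhancement = 500             # improvement over flat solid targets (reported, approximate)
flat_target_yield = nanowire_yield / enhancement

pulse_energy_j = 1.6          # hypothetical joule-level pulse energy
print(f"flat target:   ~{flat_target_yield * pulse_energy_j:,.0f} neutrons per pulse")
print(f"nanowire array: ~{nanowire_yield * pulse_energy_j:,.0f} neutrons per pulse")
```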
698167
Does where students grow up influence where they go to college?
A new Population, Space and Place study explores how the ethnic composition of where students grow up is linked to where they attend university. Using detailed administrative data on all 412,000 students attending university in the United Kingdom in 2014-2015 combined with spatial census data from 2011, investigators calculated a "diversity score" for every UK university, which was then compared with the ethnic diversity of the surrounding area. These scores allowed for an analysis of factors influencing whether students move towards more or less ethnically diverse universities than where they have grown up. The researchers found that white students are more likely to move towards a university that is more diverse than their home neighbourhood, whereas ethnic-minority students tend to stay at universities that have a similar level of diversity to where they have grown up. These contrasting tendencies may shed light on how race is experienced in contemporary university life. "This research highlights the huge differences between UK universities in terms of the ethnic diversity of these universities. It shows how elite universities in large cities often do not reflect their ethnically diverse surroundings in their largely white student intake," said lead author Dr. Sol Gamsu, of the University of Bath. "This research also explores how the ethnic diversity of where students grow up is linked to the ethnic diversity of the university they attend." The paper forms part of a wider programme of research funded by the Economic and Social Science Research Council (ESRC) led by Dr. Michael Donnelly from the University of Bath.
10.1002/psp.2222
2018
Population Space and Place
The spatial dynamics of race in the transition to university: Diverse cities and White campuses in U.K. higher education
Using exceptionally detailed administrative data on all 412,000 students attending university in the United Kingdom in 2014–2015 combined with spatial census data from 2011, we explore for the first time how the ethnic composition of where students grow up is linked to where they attend university. We calculate a “diversity score” for every U.K. university, which is then compared with the ethnic diversity of the surrounding area, allowing us to explore the institutional geography of ethnicity in U.K. universities. These scores provide the basis for a multilevel analysis of factors influencing whether students move towards more or less ethnically diverse universities than where they have grown up. White students are more likely than their ethnic-minority peers to move towards a university that is more diverse than their home neighbourhood. We thus explore how students' mobility decisions for university are influenced by the uneven geography of race in U.K. cities and universities.
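The "diversity score" comparison described above can be illustrated with a short Python sketch. The inverse Simpson index used here is an assumed choice of diversity measure (the abstract does not specify the formula), and the composition shares are hypothetical.

```python
import numpy as np

def diversity_score(shares):
    """Inverse Simpson index (effective number of groups) for a composition.

    The paper computes a 'diversity score' for each university and neighbourhood;
    the specific index used here is an assumption for illustration only.
    """
    p = np.asarray(shares, dtype=float)
    p = p / p.sum()
    return 1.0 / np.sum(p ** 2)

# Hypothetical ethnic-composition shares (fractions of the student body or
# local population in each ethnic group).
university_shares = [0.78, 0.08, 0.07, 0.05, 0.02]
home_area_shares = [0.55, 0.18, 0.12, 0.10, 0.05]

uni_d, home_d = diversity_score(university_shares), diversity_score(home_area_shares)
print(f"university diversity: {uni_d:.2f}, home neighbourhood: {home_d:.2f}")
print("student moved to a", "more" if uni_d > home_d else "less", "diverse setting")
```

Comparing the two scores per student is the kind of contrast the multilevel analysis in the paper builds on, with white students tending to move toward settings more diverse than home and ethnic-minority students toward settings of similar diversity.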