Dataset Viewer
Auto-converted to Parquet

Column schema (type, min/max length or value as shown by the viewer):
eurekaalert_id: string, length 6 to 6
eurekaalert_title: string, length 0 to 254
eurekaalert_text: string, length 0 to 37.9k
doi: string, length 12 to 42
publication_year: int64, values 1.99k to 2.02k
publication_source: string, length 3 to 123
publication_title: string, length 4 to 702
publication_abstract: string, length 1 to 50.7k
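To make the schema above concrete, here is a minimal, hedged sketch of how such a Parquet export could be loaded and summarized with pandas. The file name is a placeholder, not the dataset's actual path, and the per-column summary simply mirrors the viewer's min/max statistics.

```python
# Minimal sketch: load a local Parquet export of this dataset and inspect the
# columns listed in the schema above. "eurekalert_pairs.parquet" is a
# placeholder path, not the dataset's real file name.
import pandas as pd

df = pd.read_parquet("eurekalert_pairs.parquet")

expected = [
    "eurekaalert_id", "eurekaalert_title", "eurekaalert_text", "doi",
    "publication_year", "publication_source", "publication_title",
    "publication_abstract",
]
assert set(expected).issubset(df.columns)

# Reproduce the viewer-style min/max summary per column.
for col in expected:
    if df[col].dtype == object:
        lengths = df[col].fillna("").str.len()
        print(f"{col}: string, length {lengths.min()} to {lengths.max()}")
    else:
        print(f"{col}: {df[col].dtype}, values {df[col].min()} to {df[col].max()}")
```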
955763
Ideal nodal rings of one-dimensional photonic crystals in the visible region
The photonic Dirac cone is a special kind of degenerate state with linear dispersion that is ubiquitous in two-dimensional photonic crystals. When spatial inversion symmetry is broken, the photonic Dirac cone transitions into valley states; such photonic crystals are called valley photonic crystals. In the band gap of a valley photonic crystal there are valley-protected edge states, which have been verified in silicon photonic crystal slabs. Based on these edge states, many micro-nano photonic integrated devices, such as sharp-bend waveguides and microcavity lasers, have been realized. Correspondingly, the photonic nodal ring is a band degeneracy that exists in three-dimensional structures and appears as a closed ring in the band structure. By introducing symmetry breaking, the nodal ring can transition into ridge states. Compared with the valley state, the ridge state exhibits richer optical behaviors, such as negative refraction and a surface-dependent Goos–Hänchen shift. However, the complexity of three-dimensional structures has hindered the realization of these behaviors in the optical region, let alone the development of novel functional photonic devices. Realizing the ridge state with a simple optical structure has therefore become an urgent problem in this field. In a new paper published in Light Science & Applications, a team led by Professor Jianwen Dong from the School of Physics and the State Key Laboratory of Optoelectronic Materials and Technologies, Sun Yat-sen University, China, has realized an ideal nodal ring and ridge states in the visible region using simple 1D photonic crystals. They also experimentally observed the surface states in the bandgap of the ridge photonic crystal. The research highlights of this work include the following three aspects:
1. One-dimensional nodal ring photonic crystal. It is generally believed that one-dimensional photonic crystals can only realize topological states in one-dimensional momentum space, whereas a nodal ring is a degenerate state in three-dimensional momentum space, so it seems impossible to achieve a nodal ring with a one-dimensional photonic crystal. The research team studied a one-dimensional photonic crystal composed of silicon dioxide and silicon nitride (Fig. 1(a)). By creatively taking into account the momenta along the aperiodic directions and utilizing the rotational symmetry, they realized the ideal nodal ring (Fig. 1(b)). The team prepared samples using inductively coupled plasma-enhanced chemical vapor deposition (ICP-CVD). By measuring angle-resolved reflectance spectra in the wavelength range of 500-1100 nm, they confirmed the existence of the nodal ring (Fig. 1(c)).
2. One-dimensional ridge photonic crystal. The research team then replaced a layer of silicon nitride (n=2) with silicon-rich nitride (n=3) in the unit cell of the nodal ring photonic crystal, thereby breaking the spatial inversion symmetry of the structure (Fig. 2(a)). In this case, the nodal ring degeneracy transitions into a ridge state with a bandgap; they call this kind of photonic crystal a ridge photonic crystal. Similar to the valley photonic crystal, calculations show that a toroidal-shaped Berry flux forms near the position of the original nodal ring, implying the existence of topologically protected interface states in this bandgap (Fig. 2(c)). Using the low-loss silicon-rich nitride film growth process developed earlier, the research team fabricated a one-dimensional ridge photonic crystal and observed the interface states by measuring the angle-resolved reflectance spectrum in the wavelength range of 600-1100 nm (Fig. 2(d)).
3. Intrinsic relationship between the optical Tamm state and the nodal ring. At the beginning of this century, researchers discovered that surface states may exist at the interface between a one-dimensional photonic crystal and a metal; these are called optical Tamm states. In this work, the research team found that the nodal ring lies exactly at a singularity of the photonic crystal's reflection phase, so the existence condition for the optical Tamm state is always satisfied. Therefore, there must be optical Tamm states protected by phase singularities at the interface between a metal and the nodal ring photonic crystal, which provides theoretical guidance for the deterministic design of optical Tamm states. The research team deposited a silver film on the surface of the one-dimensional nodal ring photonic crystal by electron-beam evaporation and confirmed the existence of the optical Tamm states by measuring angle-resolved reflectance spectra in the wavelength range of 600-1000 nm.
"Our work paves the way for realizing optical phenomena such as negative refraction of surface states and the surface-dependent Goos–Hänchen shift in the optical region. Furthermore, the nodal ring can also transition into Weyl point degeneracies by introducing other types of symmetry breaking. Therefore, the one-dimensional nodal ring photonic crystal proposed in this work opens the possibility of exploring applications of Weyl points and their associated topological surface states in micro-nano optics," the scientists forecast.
10.1038/s41377-022-00821-9
2022
Light Science & Applications
Ideal nodal rings of one-dimensional photonic crystals in the visible region
Three-dimensional (3D) artificial metacrystals host rich topological phases, such as Weyl points, nodal rings, and 3D photonic topological insulators. These topological states enable a wide range of applications, including 3D robust waveguides, one-way fiber, and negative refraction of the surface wave. However, these carefully designed metacrystals are usually very complex, hindering their extension to nanoscale photonic systems. Here, we theoretically proposed and experimentally realized an ideal nodal ring in the visible region using a simple 1D photonic crystal. The π-Berry phase around the ring is manifested by a 2π reflection phase's winding and the resultant drumhead surface states. By breaking the inversion symmetry, the nodal ring can be gapped and the π-Berry phase would diffuse into a toroidal-shaped Berry flux, resulting in photonic ridge states (the 3D extension of quantum valley Hall states). Our results provide a simple and feasible platform for exploring 3D topological physics and its potential applications in nanophotonics.
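As a rough, self-contained companion to the 1D photonic crystal described above, the sketch below evaluates the standard transfer-matrix (Bloch) dispersion relation for a two-layer unit cell at normal incidence, cos(K Λ) = cos(k1 d1) cos(k2 d2) - (1/2)(n1/n2 + n2/n1) sin(k1 d1) sin(k2 d2), and flags wavelengths where |cos(K Λ)| > 1 as band gaps. The refractive indices and layer thicknesses are generic assumptions for SiO2/Si3N4, not values from the paper, and the nodal-ring physics itself also involves oblique incidence (momenta along the aperiodic directions), which this normal-incidence sketch does not cover.

```python
# Hedged illustration (not the paper's actual design): Bloch band structure of a
# two-layer 1D photonic crystal at normal incidence via the standard
# transfer-matrix dispersion relation
#   cos(K*L) = cos(k1*d1)*cos(k2*d2) - 0.5*(n1/n2 + n2/n1)*sin(k1*d1)*sin(k2*d2)
# with k_i = n_i * (2*pi / wavelength). Indices and thicknesses are assumptions.
import numpy as np

n1, n2 = 1.46, 2.0          # assumed refractive indices: SiO2, Si3N4
d1, d2 = 100e-9, 100e-9     # assumed layer thicknesses (m)

wavelengths = np.linspace(400e-9, 1100e-9, 2000)
k0 = 2 * np.pi / wavelengths   # vacuum wavenumber
rhs = (np.cos(n1 * k0 * d1) * np.cos(n2 * k0 * d2)
       - 0.5 * (n1 / n2 + n2 / n1) * np.sin(n1 * k0 * d1) * np.sin(n2 * k0 * d2))

in_gap = np.abs(rhs) > 1.0     # |cos(K*L)| > 1 -> evanescent Bloch wave (band gap)
edges = np.flatnonzero(np.diff(in_gap.astype(int)))
print("band-gap edges in scanned range (nm):", np.round(wavelengths[edges] * 1e9, 1))
```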
843676
Identification of RNA editing profiles and their clinical relevance in lung adenocarcinoma
The incidence rate of lung adenocarcinoma (LUAD) is increasing gradually and its mortality remains high. Recent advances in the genomic profiling of LUAD have identified a number of driver alterations in specific genes, enabling molecular classification and corresponding targeted therapy. However, only a fraction of LUAD patients with those driver mutations can benefit from targeted therapy, and the remaining large number of patients are left unclassified. RNA editing events are nucleotide changes that occur in RNA. The role of RNA editing events in tumorigenesis and their potential clinical utility have been reported in a series of studies. However, the profiles of RNA editing events and their clinical relevance in LUAD have remained largely unknown. "We describe a comprehensive landscape of RNA editing events in LUAD by integrating transcriptomic and genomic data from our NJLCC project and the TCGA project. We find that the global RNA editing level is significantly increased in tumor tissues and is highly heterogeneous across LUAD patients. The high RNA editing level in tumors can be attributed to both RNA and DNA alterations," said Dr. Cheng Wang, the first author of this work. The results indicated that the pattern of RNA editing events could represent the global characteristics of lung adenocarcinoma. "We then define a new molecular subtype, EC3, based on the most variable RNA editing sites. Patients of this subtype show the poorest prognosis. Importantly, the subtype is independent of classic molecular subtypes based on gene expression or DNA methylation. We further propose a simplified prediction model including eight RNA editing sites to accurately distinguish the EC3 subtype," said Dr. Wang. Molecular typing based on a few RNA editing sites may have enormous potential in the clinic. "By applying the simplified model, we find that the EC3 subtype is associated with sensitivity to specific chemotherapy drugs," said Dr. Wang. "Our study comprehensively describes the general pattern of RNA editing in LUAD. More importantly, we propose a novel molecular subtyping strategy for LUAD based on RNA editing that could predict the prognosis of patients. A simplified model with a few editing sites makes the strategy potentially available in the clinic," said Professor Hongbing Shen, the corresponding author.
10.1007/s11427-020-1928-0
2021
Science China Life Sciences
Identification of A-to-I RNA editing profiles and their clinical relevance in lung adenocarcinoma
Adenosine-to-inosine (A-to-I) RNA editing is a widespread posttranscriptional modification that has been shown to play an important role in tumorigenesis. Here, we evaluated a total of 19,316 RNA editing sites in the tissues of 80 lung adenocarcinoma (LUAD) patients from our Nanjing Lung Cancer Cohort (NJLCC) and 486 LUAD patients from the TCGA database. The global RNA editing level was significantly increased in tumor tissues and was highly heterogeneous across patients. The high RNA editing level in tumors was attributed to both RNA (ADAR1 expression) and DNA alterations (mutation load). Consensus clustering on RNA editing sites revealed a new molecular subtype (EC3) that was associated with the poorest prognosis of LUAD patients. Importantly, the new classification was independent of classic molecular subtypes based on gene expression or DNA methylation. We further proposed a simplified model including eight RNA editing sites to accurately distinguish the EC3 subtype in our patients. The model was further validated in the TCGA dataset and had an area under the curve (AUC) of the receiver operating characteristic curve of 0.93 (95%CI: 0.91-0.95). In addition, we found that LUAD cell lines with the EC3 subtype were sensitive to four chemotherapy drugs. These findings highlighted the importance of RNA editing events in the tumorigenesis of LUAD and provided insight into the application of RNA editing in the molecular subtyping and clinical treatment of cancer.
832779
Focus on context diminishes memory of negative events, researchers report
In a new study, researchers report they can manipulate how the brain encodes and retains emotional memories. The scientists found that focusing on the neutral details of a disturbing scene can weaken a person's later memories - and negative impressions - of that scene. The findings, reported in the journal Neuropsychologia, could lead to the development of methods to increase psychological resilience in people who are likely to experience traumatic events - like soldiers, police officers or firefighters. Those plagued by depression or anxiety might also benefit from this kind of strategy, the researchers said. "We were interested in different properties of memories that are typically enhanced by emotion," said Florin Dolcos, a professor of psychology at the University of Illinois at Urbana-Champaign who led the study with psychology professor Sanda Dolcos. "The idea was to see whether by engaging in an emotional-regulation strategy we can influence those types of memory properties." There are two categories of memory retrieval. A person may recall a lot of details about an event or experience, a process the researchers call "recollection." Or an individual may have a sense of familiarity with the subject matter but retain no specifics. "We and others showed a while ago that emotion tends to boost our memories," Florin Dolcos said. "We have also known that emotion specifically boosts recollected memories." This memory-enhancing quality of emotion is useful, but it can be problematic for those who recall - again and again - the details of a disturbing or traumatic event, he said. "Negative memories could lead to clinical conditions such as post-traumatic stress disorder, where something that is really traumatic stays with specific details in people's minds," he said. In the study, 19 participants had their brains scanned while they looked at photos with negative or neutral content - a bloody face or a tree, for example - superimposed on a neutral background. Functional MRI signaled which brain areas were activated during the task. An eye-tracker recorded where participants looked. Before each photo, participants were asked to focus their attention either on the foreground or on the background of the image. After viewing it for four seconds, they rated how negatively the photo made them feel (not at all, very, or somewhere in between). Participants returned to the lab three to five days later to view the same photos - and a few new ones. They were asked to indicate whether the images were entirely new; familiar, but with no remembered specifics; or recollected in more detail. Not surprisingly, when they focused on the foreground of photos with negative content, participants rated the photos as more negative. When they focused instead on the neutral backgrounds of photos with negative content, they still evaluated the photos as negative, but rated them less negatively. They also retained fewer detailed memories of the negative photos a few days later, the team found. "This is the first example that we know of that focusing on the context of an emotional event while it is occurring can directly influence memory formation in the moment - and one's recall of the event a few days later," Sanda Dolcos said. The fMRI scans revealed that brain regions known to be associated with emotional memory formation were most active when participants focused on the foregrounds of negative photos. Brain activity differed, however, when participants focused on the backgrounds of negative images.
"The background-focus condition was associated with decreased activity in the amygdala, hippocampus and anterior parahippocampal gyrus," the researchers wrote. These brain regions play a role in encoding memory and processing emotional information. A statistical analysis "also showed that reduced activity in these regions predicted greater reduction in emotional recollection." "It might seem counterintuitive that we are looking for ways to reduce people's memories," Sanda Dolcos said. "Usually, people are interested in improving their memories. But we are finding that strategies like this, that can be employed when we are exposed to certain distressing situations, can help a lot." Florin and Sanda Dolcos are affiliates of the Beckman Institute for Advanced Science and Technology at the U. of I. ### Editor's notes: To reach Florin Dolcos, call 217-418-3992; email [email protected]. To reach Sanda Dolcos, call 217-418-3995; email [email protected]. The paper "The impact of focused attention on subsequent emotional recollection: A functional MRI investigation" is available online and from the U. of I. News Bureau.
10.1016/j.neuropsychologia.2020.107338
2020
Neuropsychologia
The impact of focused attention on subsequent emotional recollection: A functional MRI investigation
In his seminal works, Endel Tulving argued that functionally distinct memory systems give rise to subjective experiences of remembering and knowing (i.e., recollection- vs. familiarity-based memory, respectively). Evidence shows that emotion specifically enhances recollection, and this effect is subserved by a synergistic mechanism involving the amygdala (AMY) and hippocampus (HC). In extreme circumstances, however, uncontrolled recollection of highly distressing memories may lead to symptoms of affective disorders. Therefore, it is important to understand the factors that can diminish such detrimental effects. Here, we investigated the effects of Focused Attention (FA) on emotional recollection. FA is an emotion regulation strategy that has been proven quite effective in reducing the impact of emotional responses associated with the recollection of distressing autobiographical memories, but its impact during emotional memory encoding is not known. Functional MRI and eye-tracking data were recorded while participants viewed a series of composite negative and neutral images with distinguishable foreground (FG) and background (BG) areas. Participants were instructed to focus either on the FG or BG content of the images and to rate their emotional responses. About 4 days later, participants' memory was assessed using the R/K procedure, to indicate whether they Recollected specific contextual details about the encoded images or the images were just familiar to them - i.e., participants only Knew that they saw the pictures without being able to remember specific contextual details. First, results revealed that FA was successful in decreasing memory for emotional pictures viewed in BG Focus condition, and this effect was driven by recollection-based retrieval. Second, the BG Focus condition was associated with decreased activity in the AMY, HC, and anterior parahippocampal gyrus for subsequently recollected emotional items. Moreover, correlation analyses also showed that reduced activity in these regions predicted greater reduction in emotional recollection following FA. These results demonstrate the effectiveness of FA in mitigating emotional experiences and emotional recollection associated with unpleasant emotional events.
667182
New childhood dementia insight
Is the eye a window to the brain in Sanfilippo syndrome, an untreatable form of childhood-onset dementia? That is the question Australian researchers ask in a new publication. The findings of the NHMRC-funded project, just published in the international journal Acta Neuropathologica Communications, highlight the potential for using widely available retinal imaging techniques to learn more about brain disease and monitor treatment efficacy. Sanfilippo syndrome is one of a group of about 70 inherited conditions which collectively affect 1 in 2800 children in Australia and are, together, more common than cystic fibrosis and other better-known diseases. Around the world, 700,000 children and young people are living with childhood dementia. Researchers from Flinders University, with collaborators at the South Australian Health and Medical Research Institute (SAHMRI) and The University of Adelaide, studied Sanfilippo syndrome in mouse models, discovering for the first time that the advancement of retinal disease parallels that occurring in the brain. "This means the retina may provide an easily accessible neural tissue via which brain disease development and its amelioration with treatment can be monitored," says Associate Professor Kim Hemsley, who leads the Childhood Dementia Research Group at the Flinders Health and Medical Research Institute (FHMRI) at Flinders University. First author Helen Beard, from the Childhood Dementia Research Group at Flinders University, says there is an urgent need to find treatments and methods to monitor disease progression. Disorders that cause childhood dementia are neurodegenerative (debilitating and progressive) and impair mental function, according to the Childhood Dementia Initiative. "This study offers new hope of using the progression of lesions in the retina - which is part of the central nervous system - as a 'window to the brain'," says senior research officer Ms Beard. "We were able to show that disease lesions appear in the retina very early in the disease course, in fact much earlier than previously thought," she says. This means that in addition to ensuring potential treatments reach the brain, researchers must also confirm that they reach the retina, to give patients maximum quality of life. "Our findings suggest that retinal imaging may provide a strategy for monitoring therapeutic efficacy, given that some treatments currently being trialled in children with Sanfilippo syndrome are able to access both brain and retina," Associate Professor Hemsley says. Therapeutic strategies currently being evaluated in human clinical trials include intravenous delivery of an AAV9-based gene therapy, and a non-invasive, quantitative measure of neurodegeneration would support the development of effective treatments, she says.
10.1186/s40478-020-01070-w
2020
Acta Neuropathologica Communications
Is the eye a window to the brain in Sanfilippo syndrome?
Abstract Sanfilippo syndrome is an untreatable form of childhood-onset dementia. Whilst several therapeutic strategies are being evaluated in human clinical trials including i.v. delivery of AAV9-based gene therapy, an urgent unmet need is the availability of non-invasive, quantitative measures of neurodegeneration. We hypothesise that as part of the central nervous system, the retina may provide a window through which to ‘visualise’ degenerative lesions in brain and amelioration of them following treatment. This is reliant on the age of onset and the rate of disease progression being equivalent in retina and brain. For the first time we have assessed in parallel, the nature, age of onset and rate of retinal and brain degeneration in a mouse model of Sanfilippo syndrome. Significant accumulation of heparan sulphate and expansion of the endo/lysosomal system was observed in both retina and brain pre-symptomatically (by 3 weeks of age). Robust and early activation of micro- and macroglia was also observed in both tissues. There was substantial thinning of retina and loss of rod and cone photoreceptors by ~ 12 weeks of age, a time at which cognitive symptoms are noted. Intravenous delivery of a clinically relevant AAV9-human sulphamidase vector to neonatal mice prevented disease lesion appearance in retina and most areas of brain when assessed 6 weeks later. Collectively, the findings highlight the previously unrecognised early and significant involvement of retina in the Sanfilippo disease process, lesions that are preventable by neonatal treatment with AAV9-sulphamidase. Critically, our data demonstrate for the first time that the advancement of retinal disease parallels that occurring in brain in Sanfilippo syndrome, thus retina may provide an easily accessible neural tissue via which brain disease development and its amelioration with treatment can be monitored.
917471
Feeding 10 billion people by 2050 within planetary limits may be achievable
A global shift towards healthy and more plant-based diets, halving food loss and waste, and improving farming practices and technologies are required to feed 10 billion people sustainably by 2050, a new study finds. Adopting these options reduces the risk of crossing global environmental limits related to climate change, the use of agricultural land, the extraction of freshwater resources, and the pollution of ecosystems through overapplication of fertilizers, according to the researchers. The study, published in the journal Nature, is the first to quantify how food production and consumption affects the planetary boundaries that describe a safe operating space for humanity beyond which Earth's vital systems could become unstable. "No single solution is enough to avoid crossing planetary boundaries. But when the solutions are implemented together, our research indicates that it may be possible to feed the growing population sustainably," says Dr Marco Springmann of the Oxford Martin Programme on the Future of Food and the Nuffield Department of Population Health at the University of Oxford, who led the study. "Without concerted action, we found that the environmental impacts of the food system could increase by 50-90% by 2050 as a result of population growth and the rise of diets high in fats, sugars and meat. In that case, all planetary boundaries related to food production would be surpassed, some of them by more than twofold." The study, funded by EAT as part of the EAT-Lancet Commission for Food, Planet and Health and by Wellcome's "Our Planet, Our Health" partnership on Livestock Environment and People, combined detailed environmental accounts with a model of the global food system that tracks the production and consumption of food across the world. With this model, the researchers analysed several options that could keep the food system within environmental limits. They found: Climate change cannot be sufficiently mitigated without dietary changes towards more plant-based diets. Adopting more plant-based "flexitarian" diets globally could reduce greenhouse gas emissions by more than half, and also reduce other environmental impacts, such as fertilizer application and the use of cropland and freshwater, by a tenth to a quarter. In addition to dietary changes, improving management practices and technologies in agriculture is required to limit pressures on agricultural land, freshwater extraction, and fertilizer use. Increasing agricultural yields from existing cropland, balancing application and recycling of fertilizers, and improving water management, could, along with other measures, reduce those impacts by around half. Finally, halving food loss and waste is needed for keeping the food system within environmental limits. Halving food loss and waste could, if globally achieved, reduce environmental impacts by up to a sixth (16%). "Many of the solutions we analysed are being implemented in some parts of the world, but it will need strong global co-ordination and rapid upscale to make their effects felt," says Springmann. "Improving farming technologies and management practices will require increasing investment in research and public infrastructure, the right incentive schemes for farmers, including support mechanisms to adopt best available practices, and better regulation, for example of fertilizer use and water quality," says Line Gordon, executive director of the Stockholm Resilience Centre and an author on the report. 
Fabrice de Clerck, director of science at EAT, says, "Tackling food loss and waste will require measures across the entire food chain, from storage and transport, through food packaging and labelling, to changes in legislation and business behaviour that promote zero-waste supply chains." "When it comes to diets, comprehensive policy and business approaches are essential to make dietary changes towards healthy and more plant-based diets possible and attractive for a large number of people. Important aspects include school and workplace programmes, economic incentives and labelling, and aligning national dietary guidelines with the current scientific evidence on healthy eating and the environmental impacts of our diet," adds Springmann. ### The paper, Options for keeping the food system within environmental limits, will be published by Nature on 10th October 2018 at http://dx.doi.org/10.1038/s41586-018-0594-0 (the URL will go live after the embargo ends). Notes to the editor: EAT is a non-profit science-based global platform for food system transformation founded by the Stordalen Foundation, Stockholm Resilience Centre and Wellcome. The EAT-Lancet report will be published in January 2019. Wellcome is a global charitable foundation, both politically and financially independent, that supports scientists and researchers, takes on big problems, fuels imaginations, and sparks debate. Wellcome's "Our Planet, Our Health" partnership on Livestock Environment and People (LEAP) is a research programme based at the Oxford Martin School, University of Oxford, that aims to understand the health, environmental, social and economic effects of meat and dairy consumption in order to provide evidence and tools for decision makers to promote healthy and sustainable diets. The Oxford Martin School at the University of Oxford is a world-leading centre of pioneering research that addresses global challenges. It invests in research that cuts across disciplines to tackle a wide range of issues such as climate change, disease and inequality. The School supports novel, high-risk and multidisciplinary projects that may not fit within conventional funding channels, because breaking boundaries can produce results that could dramatically improve the wellbeing of this and future generations. Underpinning all our research is the need to translate academic excellence into impact - from innovations in science, medicine and technology, through to providing expert advice and policy recommendations. For enquiries and interviews, please contact: Dr Marco Springmann, Senior Researcher on Environmental Sustainability and Public Health and lead author of the study, Oxford Martin Programme on the Future of Food and Nuffield Department of Population Health, University of Oxford T/M: +44 7460202512 Email: [email protected] For further information, please contact: Sally-Anne Stewart, Communication and Media Manager, Oxford Martin School T: 01865 287429 M: 07972 284146 Email: [email protected] Owen Gaffney, media and communications, Stockholm Resilience Centre M: +46 734604833 Email: [email protected]
10.1038/s41586-018-0594-0
2018
Nature
Options for keeping the food system within environmental limits
The food system is a major driver of climate change, changes in land use, depletion of freshwater resources, and pollution of aquatic and terrestrial ecosystems through excessive nitrogen and phosphorus inputs. Here we show that between 2010 and 2050, as a result of expected changes in population and income levels, the environmental effects of the food system could increase by 50-90% in the absence of technological changes and dedicated mitigation measures, reaching levels that are beyond the planetary boundaries that define a safe operating space for humanity. We analyse several options for reducing the environmental effects of the food system, including dietary changes towards healthier, more plant-based diets, improvements in technologies and management, and reductions in food loss and waste. We find that no single measure is enough to keep these effects within all planetary boundaries simultaneously, and that a synergistic combination of measures will be needed to sufficiently mitigate the projected increase in environmental pressures.
515982
Systems pharmacology modelers accelerate drug discovery in Alzheimer's
Alzheimer's is a chronic neurodegenerative disease that leads to cognitive impairment and memory loss in old age. Every third person older than 70 years suffers from it. These changes are caused by functional disorders and the subsequent death of neurons, yet the triggers of the processes that result in brain cell death still remain unknown, which is why there is no effective therapy for Alzheimer's disease. At the moment, the most common hypothesis is the theory of the toxic effect of the beta-amyloid protein, which accumulates in the brain with age and aggregates into insoluble amyloid plaques. The presence of these plaques in the brain is the main marker of Alzheimer's disease (unfortunately, often found postmortem). Soluble forms of the protein (those not aggregated into plaques) are considered toxic too. All modern therapies act in one of three ways: they block the production of soluble beta-amyloid, destroy the protein before it transforms into the insoluble form, or stimulate plaque degradation. "Clinical trials for Alzheimer's therapies have one significant feature - their short duration. They last for no more than 5 years, whereas the disease can progress for decades. And early Phase I-II tests last for only a few weeks. With such an experimental design one can affect only the processes of distribution and degradation of the soluble beta-amyloid forms. Therefore we developed this part of our model to analyze and predict the dynamics of the new generation of drugs, for instance the inhibitors of amyloid production," says Tatiana Karelina, head of the neurodegenerative disease modeling group at InSysBio LLC. The first difficulty encountered by drug developers is the interpretation of the results obtained in animal tests. In general, most studies of the distribution of amyloid are carried out in mice: scientists inject a labeled protein into the mouse brain and observe the distribution of the radioactive label, or study the dynamics of amyloid in the presence of drugs. Based on the data obtained, researchers can calculate the "therapeutic window" for the medication - the range of doses from the minimum effective to the maximum non-toxic. Doses for humans or monkeys are then calculated using mass or volume scaling (the parameters are scaled by the factor by which the body's mass or volume exceeds that of the mouse). The project team collected data from the literature and derived a system of equations that fully described the existing results. The model was first calibrated (i.e., the missing parameters were estimated) for the mouse, and then for the human and the monkey. It turned out that one cannot use the scaling method to transfer results from rodents to primates (as is often done). The derived mathematical equations showed that not only does the rate of beta-amyloid production (i.e., the activity of the corresponding genes) differ, but the blood-brain barrier also differs between rodents and higher primates. At the same time, there was no significant difference between the human and the monkey, so standard scaling can be used to translate predictions between them. The next big question in Alzheimer's clinical trials is how to tell whether a drug affects its specific target in the short term. It is impossible to observe the processes that occur in the human brain directly; usually a cerebrospinal fluid sample is taken to analyze the change in the concentration of beta-amyloid.
In fact, these data differ strongly from the amyloid concentrations in the brain, since the cerebrospinal fluid is strongly influenced by processes taking place in the blood plasma, and the amyloid there follows different dynamics. "With such a large structural model calibrated on a large amount of data, one can easily match the results of cerebrospinal fluid analysis with the real processes in the patient's brain. This will greatly accelerate the development of new drugs and improve the accuracy of therapy selection," explains Tatiana Karelina. The scientists report that their model allows them to predict how these new drugs should be administered: the total daily dose can be reduced, but it should be split into several portions over the day, providing optimal efficacy in the brain. The InSysBio team is confident that systems pharmacology modeling can greatly improve the development of drugs for Alzheimer's disease and is already negotiating the introduction of the technology with partners in the pharmaceutical industry.
10.1002/psp4.12211
2017
CPT Pharmacometrics & Systems Pharmacology
A Translational Systems Pharmacology Model for Aβ Kinetics in Mouse, Monkey, and Human
A mechanistic model of amyloid beta production, degradation, and distribution was constructed for mouse, monkey, and human, calibrated and externally verified across multiple datasets. Simulations of single-dose avagacestat treatment demonstrate that the Aβ42 brain inhibition may exceed that in cerebrospinal fluid (CSF). The dose that achieves 50% CSF Aβ40 inhibition for humans (both healthy and with Alzheimer's disease (AD)) is about 1 mpk, one order of magnitude lower than for mouse (10 mpk), mainly because of differences in pharmacokinetics. The predicted maximal percent of brain Aβ42 inhibition after single-dose avagacestat is higher for AD subjects (about 60%) than for healthy individuals (about 45%). The probability of achieving a normal physiological level for Aβ42 in brain (1 nM) during multiple avagacestat dosing can be increased by using a dosing regimen that achieves higher exposure. The proposed model allows prediction of brain pharmacodynamics for different species given differing dosing regimens.
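To illustrate the kind of mechanistic reasoning described above, here is a deliberately simplified, hypothetical two-compartment sketch (brain and CSF) of amyloid-beta production, transfer, and clearance. It is not the published translational model; all rate constants, units, and the inhibition level are invented. It only shows qualitatively why CSF readouts need a model to be related back to brain concentrations, and how a production inhibitor shifts both steady states.

```python
# Toy two-compartment sketch (brain -> CSF) of amyloid-beta production,
# transfer, and clearance. NOT the published QSP model; all parameters are
# illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

k_prod = 1.0       # baseline Abeta production in brain (nM/h), assumed
k_b2c = 0.1        # brain -> CSF transfer rate (1/h), assumed
k_clr_b = 0.05     # brain clearance (1/h), assumed
k_clr_c = 0.2      # CSF clearance (1/h), assumed
inhibition = 0.5   # fractional block of production by a hypothetical drug

def rhs(t, y, inh):
    ab_brain, ab_csf = y
    d_brain = k_prod * (1.0 - inh) - (k_b2c + k_clr_b) * ab_brain
    d_csf = k_b2c * ab_brain - k_clr_c * ab_csf
    return [d_brain, d_csf]

t_span = (0.0, 200.0)  # hours, long enough to reach steady state here
sol_ctrl = solve_ivp(rhs, t_span, [0.0, 0.0], args=(0.0,))
sol_drug = solve_ivp(rhs, t_span, [0.0, 0.0], args=(inhibition,))

print("steady-state brain Abeta, control vs drug:",
      round(sol_ctrl.y[0, -1], 3), round(sol_drug.y[0, -1], 3))
print("steady-state CSF Abeta,   control vs drug:",
      round(sol_ctrl.y[1, -1], 3), round(sol_drug.y[1, -1], 3))
```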
975218
New study models the transmission of foreshock waves towards Earth
An international team of scientists led by Lucile Turc, an Academy Research Fellow at the University of Helsinki and supported by the International Space Science Institute in Bern, has studied the propagation of electromagnetic waves in near-Earth space for three years. The team studied the waves in the region where the solar wind collides with Earth's magnetic field, called the foreshock region, and how the waves are transmitted to the other side of the shock. The results of the study are now published in Nature Physics. "How the waves would survive passing through the shock has remained a mystery since the waves were first discovered in the 1970s. No evidence of those waves has ever been found on the other side of the shock," says Turc. The team used a cutting-edge computer model, Vlasiator, developed at the University of Helsinki by a group led by Professor Minna Palmroth, to recreate and understand the physical processes at play in the wave transmission. A careful analysis of the simulation revealed the presence of waves on the other side of the shock, with properties almost identical to those in the foreshock. "Once it was known what to look for and where, clear signatures of the waves were found in satellite data, confirming the numerical results," says Lucile Turc. Around our planet is a magnetic bubble, the magnetosphere, which shields us from the solar wind, a stream of charged particles coming from the Sun. Electromagnetic waves, appearing as small oscillations of the Earth's magnetic field, are frequently recorded by scientific observatories in space and on the ground. These waves can be caused by the impact of the changing solar wind or come from outside the magnetosphere. Electromagnetic waves play an important role in creating adverse space weather around our planet: they can, for example, accelerate particles to high energies, which can then damage spacecraft electronics, and cause these particles to fall into the atmosphere. On the side of Earth facing the Sun, scientific observatories frequently record oscillations at the same period as the waves that form ahead of the Earth's magnetosphere, singing a clear magnetic song in a region of space called the foreshock. This has led space scientists to think that there is a connection between the two, and that the waves in the foreshock can enter the Earth's magnetosphere and travel all the way to the Earth's surface. However, one major obstacle lies in their way: the waves must cross the shock before reaching the magnetosphere. "At first, we thought that the initial theory proposed in the 1970s was correct: the waves could cross the shock unchanged. But there was an inconsistency in the wave properties that this theory could not reconcile, so we investigated further," says Turc. "Eventually, it became clear that things were much more complicated than they seemed. The waves we saw behind the shock were not the same as those in the foreshock, but new waves created at the shock by the periodic impact of foreshock waves." When the solar wind flows through the shock, it is compressed and heated, and the shock strength determines how much compression and heating take place. Turc and her colleagues showed that foreshock waves are able to tune the shock, making it alternately stronger or weaker as wave troughs or crests arrive at the shock. As a result, the solar wind behind the shock changes periodically and creates new waves, in concert with the foreshock waves.
The numerical model also showed that these waves can only be detected in a narrow region behind the shock and that they can easily be hidden by the turbulence in this region, which likely explains why they had not been observed before. While the waves originating from the foreshock play only a limited role in space weather at Earth, they are of great importance for understanding the fundamental physics of our universe. Contact: Lucile Turc, Academy Research Fellow, University of Helsinki, [email protected], Tel. +358 50 311 9499.
10.1038/s41567-022-01837-z
2022
Nature Physics
Transmission of foreshock waves through Earth’s bow shock
The Earth's magnetosphere and its bow shock, which is formed by the interaction of the supersonic solar wind with the terrestrial magnetic field, constitute a rich natural laboratory enabling in situ investigations of universal plasma processes. Under suitable interplanetary magnetic field conditions, a foreshock with intense wave activity forms upstream of the bow shock. So-called 30 s waves, named after their typical period at Earth, are the dominant wave mode in the foreshock and play an important role in modulating the shape of the shock front and affect particle reflection at the shock. These waves are also observed inside the magnetosphere and down to the Earth's surface, but how they are transmitted through the bow shock remains unknown. By combining state-of-the-art global numerical simulations and spacecraft observations, we demonstrate that the interaction of foreshock waves with the shock generates earthward-propagating, fast-mode waves, which reach the magnetosphere. These findings give crucial insight into the interaction of waves with collisionless shocks in general and their impact on the downstream medium.
912619
Tungsten offers nano-interconnects a path of least resistance
As microchips become ever smaller and therefore faster, the shrinking size of their copper interconnects leads to increased electrical resistivity at the nanoscale. Finding a solution to this impending technical bottleneck is a major problem for the semiconductor industry. One promising possibility involves reducing the resistivity size effect by altering the crystalline orientation of interconnect materials. A pair of researchers from Rensselaer Polytechnic Institute conducted electron transport measurements in epitaxial single-crystal layers of tungsten (W) as one such potential interconnect solution. They performed first-principles simulations, finding a definite orientation-dependent effect. The anisotropic resistivity effect they found was most marked between layers with two particular orientations of the lattice structure, namely W(001) and W(110). The work is published this week in the Journal of Applied Physics, from AIP Publishing. Author Pengyuan Zheng noted that both the 2013 and 2015 International Technology Roadmap for Semiconductors (ITRS) called for new materials to replace copper as interconnect material to limit resistance increase at reduced scale and minimize both power consumption and signal delay. In their study, Zheng and co-author Daniel Gall chose tungsten because of its asymmetric Fermi surface--its electron energy structure. This made it a good candidate to demonstrate the anisotropic resistivity effect at the small scales of interest. "The bulk material is completely isotropic, so the resistivity is the same in all directions," Gall said. "But if we have thin films, then the resistivity varies considerably." To test the most promising orientations, the researchers grew epitaxial W(001) and W(110) films on substrates and conducted resistivity measurements of both while immersed in liquid nitrogen at 77 Kelvin (about -196 degrees Celsius) and at room temperature, or 295 Kelvin. "We had roughly a factor of 2 difference in the resistivity between the 001 oriented tungsten and 110 oriented tungsten," Gall said, but they found considerably smaller resistivity in the W(011) layers. Although the measured anisotropic resistance effect was in good agreement with what they expected from calculations, the effective mean free path--the average distance electrons can move before scattering against a boundary--in the thin film experiments was much larger than the theoretical value for bulk tungsten. "An electron travels through a wire on a diagonal, it hits a surface, gets scattered, and then continues traveling until it hits something else, maybe the other side of the wire or a lattice vibration," Gall said. "But this model looks wrong for small wires." The experimenters believe this may be explained by quantum mechanical processes of the electrons that arise at these limited scales. Electrons may be simultaneously touching both sides of the wire or experiencing increased electron-phonon (lattice vibrations) coupling as the layer thickness decreases, phenomena that could affect the search for another metal to replace copper interconnects. "The envisioned conductivity advantages of rhodium, iridium, and nickel may be smaller than predicted," said Zheng. Findings like these will prove increasingly important as quantum mechanical scales become more commonplace for the demands of interconnects. The research team is continuing to explore the anisotropic size effect in other metals with nonspherical Fermi surfaces, such as molybdenum. 
They found that the orientation of the surface relative to the layer orientation and transport direction is vital, as it determines the actual increase in resistivity at these reduced dimensions. "The results presented in this paper clearly demonstrate that the correct choice of crystalline orientation has the potential to reduce nanowire resistance," said Zheng. The importance of the work extends beyond current nanoelectronics to new and developing technologies, including transparent flexible conductors, thermoelectrics and memristors that can potentially store information. "It's the problem that defines what you can do in the next technology," Gall said. ### The article, "The anisotropic size effect of the electrical resistivity of metal thin films: Tungsten," is authored by Pengyuan Zheng and Daniel Gall. The article appeared in the Journal of Applied Physics on Oct. 3, 2017 (DOI: 10.1063/1.5004118) and can be accessed at: http://aip.scitation.org/doi/full/10.1063/1.5004118. ABOUT THE JOURNAL: Journal of Applied Physics features full-length reports on significant new findings in applied physics. The journal covers new experimental and theoretical research on applications of physics phenomena related to all branches of science, engineering, and modern technology. See http://jap.aip.org.
10.1063/1.5004118
2017
Journal of Applied Physics
The anisotropic size effect of the electrical resistivity of metal thin films: Tungsten
The resistivity of nanoscale metallic conductors is orientation dependent, even if the bulk resistivity is isotropic and electron scattering cross-sections are independent of momentum, surface orientation, and transport direction. This is demonstrated using a combination of electron transport measurements on epitaxial tungsten layers in combination with transport simulations based on the ab initio predicted electronic structure, showing that the primary reason for the anisotropic size effect is the non-spherical Fermi surface. Electron surface scattering causes the resistivity of epitaxial W(110) and W(001) layers measured at 295 and 77 K to increase as the layer thickness decreases from 320 to 4.5 nm. However, the resistivity is larger for W(001) than W(110) which, if describing the data with the classical Fuchs-Sondheimer model, yields an effective electron mean free path λ* for bulk electron-phonon scattering that is nearly a factor of two smaller for the 110 vs the 001-oriented layers, with λ(011)*= 18.8 ± 0.3 nm vs λ(001)* = 33 ± 0.4 nm at 295 K. Boltzmann transport simulations are done by integration over real and reciprocal space of the thin film and the Brillouin zone, respectively, describing electron-phonon scattering by momentum-independent constant relaxation-time or mean-free-path approximations, and electron-surface scattering as a boundary condition which is independent of electron momentum and surface orientation. The simulations quantify the resistivity increase at the reduced film thickness and predict a smaller resistivity for W(110) than W(001) layers with a simulated ratio λ(011)*/λ(001)* = 0.59 ± 0.01, in excellent agreement with 0.57 ± 0.01 from the experiment. This agreement suggests that the resistivity anisotropy in thin films of metals with isotropic bulk electron transport is fully explained by the non-spherical Fermi surface and velocity distribution, while electron scattering at phonons and surfaces can be kept isotropic and independent of the surface orientation. The simulations correctly predict the anisotropy of the resistivity size effect, but underestimate its absolute magnitude. Quantitative analyses suggest that this may be due to (i) a two-fold increase in the electron-phonon scattering cross-section as the layer thickness is reduced to 5 nm or (ii) a variable wave-vector dependent relaxation time for electron-phonon scattering.
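For orientation, the following sketch plugs the effective mean free paths quoted in the abstract into the classical Fuchs-Sondheimer size effect in its large-thickness approximation, rho(d) ~ rho_bulk * (1 + 3(1-p)*lambda/(8d)). The bulk resistivity value and fully diffuse surface scattering (p = 0) are assumptions for illustration, and the approximation degrades for the thinnest films, so the numbers are indicative only, not the paper's Boltzmann-transport results.

```python
# Hedged sketch: classical Fuchs-Sondheimer thin-film size effect in the
# large-thickness approximation, using the effective mean free paths quoted
# in the abstract. rho_bulk (~5.3 uOhm*cm for W near room temperature) and
# p = 0 (fully diffuse scattering) are assumptions, not values from the paper.
import numpy as np

rho_bulk = 5.3                               # uOhm*cm, assumed bulk W resistivity at 295 K
p = 0.0                                      # specularity parameter (fully diffuse), assumed
lam = {"W(110)": 18.8, "W(001)": 33.0}       # effective mean free paths (nm), from the abstract

thickness = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)  # film thickness (nm)
for orientation, l in lam.items():
    rho = rho_bulk * (1.0 + 3.0 * (1.0 - p) * l / (8.0 * thickness))
    print(orientation, np.round(rho, 2), "uOhm*cm at", thickness, "nm")
```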
504278
Sunfleck use research needs appropriate experimental leaves
"All the roads of learning begin in the darkness and go out into the light." This quote is often attributed to Hippocrates and exhibits a double level of relevance in photosynthesis research. The use of light by plant leaves to drive photosynthesis is often studied in steady state environments, but most plant leaves are required to adjust to fluctuations in incident light every day. The research into use of fluctuating light by plant leaves has expanded in recent decades. A study from the Western Pacific Tropical Research Center at the University of Guam has shown that accurate results in this subdiscipline of plant physiology can only be obtained when methods employ leaves that were grown in fluctuating light prior to experimental methods. The results have been published in a recent issue of the journal Plants (doi: 10.3390/plants9070905). The experimental results confirmed that leaves which were constructed under homogeneous shade such as commercial shade fabric did not respond to fluctuating light in a manner that was similar to leaves which were constructed under fluctuating light. To expand the applicability of the results, three model species were employed for this study. Soybean represented eudicot angiosperms, corn represented monocot angiosperms, and the native cycad species in Guam represented gymnosperms. The experimental approach called on traditional response variables to ensure applicability of the results to the established literature. One response variable was the speed of increase in photosynthesis when a leaf that is acclimated to deep shade is suddenly challenged with saturating incident light, a response that physiologists call induction. A second response variable was the influence of a short sunfleck on photosynthetic induction during a subsequent sunfleck, a response that physiologists call priming. "As expected, the leaves that developed under fluctuating light exhibited more rapid photosynthetic induction and more successful priming than the leaves that developed in homogeneous shade," said Thomas Marler, author of the paper. This new knowledge indicates a substantial percentage of the established leaf physiology literature concerning use of sunflecks includes results that are dubious because the sunfleck methods used experimental leaves that were grown under shadecloth. The study also reveals the value of off-site conservation germplasm collections. "Ubiquitous invasive insect herbivores in Guam create difficulties for research on the native cycad species," said Marler. "The ex situ germplasm collections in several countries allow scientists to sustain relevant research on this important cycad species." This study, for example, was conducted in one of these managed gardens in the Philippines where the plants are not threatened by the insects. When new knowledge illuminates a fallacy in established experimental methods, a search for an empirical approach for salvaging the published information is appropriate. If a universal conversion factor could be identified, for example, then the published data could be corrected with that conversion. Unfortunately, there were quantitative differences among the three model species with regard to how the homogeneous shade leaves behaved compared to the heterogeneous shade leaves. Therefore, the published sunfleck use literature based on methods that employed homogeneous shade-grown leaves should be interpreted with caution.
10.3390/plants9070905
2020
Plants
Artifleck: The Study of Artifactual Responses to Light Flecks with Inappropriate Leaves
Methods in sunfleck research commonly employ the use of experimental leaves which were constructed in homogeneous light. These experimental organs may behave unnaturally when they are challenged with fluctuating light. Photosynthetic responses to heterogeneous light and leaf macronutrient relations were determined for Cycas micronesica, Glycine max, and Zea mays leaves that were grown in homogeneous shade, heterogeneous shade, or full sun. The speed of priming where one light fleck increased the photosynthesis during a subsequent light fleck was greatest for the leaves grown in heterogeneous shade. The rate of induction and the ultimate steady-state photosynthesis were greater for the leaves that were grown in heterogeneous shade versus the leaves grown in homogeneous shade. The leaf mass per area, macronutrient concentration, and macronutrient stoichiometry were also influenced by the shade treatments. The amplitude and direction in which the three developmental light treatments influenced the response variables were not universal among the three model species. The results indicate that the historical practice of using experimental leaves which were constructed under homogeneous light to study leaf responses to fluctuating light may produce artifacts that generate dubious interpretations.
891834
If cancer were easy, every cell would do it
A new Scientific Reports paper puts an evolutionary twist on a classic question. Instead of asking why we get cancer, Leonardo Oña of Osnabrück University and Michael Lachmann of the Santa Fe Institute use signaling theory to explore how our bodies have evolved to keep us from getting more cancer. It isn't obvious why, when any cancer arises, it doesn't very quickly learn to take advantage of the body's own signaling mechanisms for quick growth. After all, unlike an infection, cancers can easily use the body's own chemical language. "Any signal that the body uses, an infection has to evolve to make," says Lachmann. "If a thief wants to unlock your house, they have to figure out how to pick the lock on the door. But cancer cells have the keys to your house. How do you protect against that? How do you protect against an intruder who knows everything you know, and has all the tools and keys you have?" Their answer: You make the keys very costly to use. Oña and Lachmann's evolutionary model reveals two factors in our cellular architecture that thwart cancer: the expense of manufacturing growth factors ("keys") and the range of benefits delivered to cells nearby. Individual cancer cells are kept in check when there's a high energetic cost for creating growth factors that signal cell growth. To understand the evolutionary dynamics in the model, the authors emphasize the importance of thinking about the competition between a mutant cancerous cell and surrounding cells. When a mutant cell arises and puts out a signal for growth, that signal also provides resources to adjacent, non-mutated cells. Thus, when the benefits are distributed to a radius around the signaling cell, the mutant cells have a hard time out-competing their neighbors and can't get established. The cancer loses the ability to give the signal. The work represents a novel application of evolutionary biology toward a big-picture understanding of cancer. Oña and Lachmann draw from the late biologist Amos Zahavi's handicap principle, which explains how evolutionary systems are stabilized against "cheaters" when dishonest signals are costlier to produce than the benefit they provide. The male peacock's elaborate tail is the classic example of a costly signal - an unhealthy bird would not have the energetic resources to grow an elaborate tail, and thus could not "fake" a signal of their evolutionary fitness. By the handicap principle, a cancer cell would be analogous to the unhealthy peacock that can't afford to signal for attention. So how do some cancer cells overcome these evolutionary constraints? The authors point out that their model only addresses the scenario of an individual cancer trying to invade a healthy population. Once a cancer has overcome the odds of extinction and reached a certain critical size, other dynamics prevail. "Many mechanisms seem to have evolved to prevent cancer -- from immune system control, cell death, limits on cell proliferation, to tissue architecture," the authors write. "Our model only studies the reduced chance for invasion." "Cancer is incredibly complex," Lachmann says, "and our model is relatively simple. Still, we believe it's an important step toward understanding cancer and cancer prevention in evolutionary terms."
10.1038/s41598-020-57494-w
2020
Scientific Reports
Signalling architectures can prevent cancer evolution
Abstract Cooperation between cells in multicellular organisms is preserved by an active regulation of growth through the control of cell division. Molecular signals used by cells for tissue growth are usually present during developmental stages, angiogenesis, wound healing and other processes. In this context, the use of molecular signals triggering cell division is a puzzle, because any molecule inducing and aiding growth can be exploited by a cancer cell, disrupting cellular cooperation. A significant difference is that normal cells in a multicellular organism have evolved in competition between high-level organisms to be altruistic, being able to send signals even if it is to their detriment. Conversely, cancer cells evolve their abuse over the cancer’s lifespan by out-competing their neighbours. A successful mutation leading to cancer must evolve to be adaptive, enabling a cancer cell to send a signal that results in a higher chance of being selected. Using a mathematical model of such a molecular signalling mechanism, this paper argues that a signal mechanism would be effective against abuse by cancer if it affects the cell that generates the signal as well as neighbouring cells that would receive a benefit without any cost, resulting in a selective disadvantage for a cancer signalling cell. We find that such molecular signalling mechanisms normally operate in cells, as exemplified by growth factors. In scenarios of global and local competition between cells, we calculate how this process affects the fixation probability of a mutant cell generating such a signal, and find that this process can play a key role in limiting the emergence of cancer.
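A minimal way to see the argument quantitatively is to estimate the fixation probability of a single signalling mutant whose growth factor is costly to produce but whose benefit spills over to neighbouring cells. The Python sketch below is an illustrative toy (a bare Moran-style birth-death process; the parameters cost, benefit and shared are invented for illustration), not the authors' model from the paper.

import random

def fixation_probability(N=50, cost=0.2, benefit=0.5, shared=True, trials=2000):
    """Toy Moran-style estimate of the chance that one signalling mutant takes over
    a tissue of N cells. The mutant pays `cost` to make a growth factor; if `shared`,
    resident cells receive the same `benefit` for free, so the signal gives the
    mutant no relative advantage (illustrative assumption, not the paper's model)."""
    fixed = 0
    for _ in range(trials):
        mutants = 1
        while 0 < mutants < N:
            f_mut = 1.0 + benefit - cost                 # signaller pays the cost
            f_res = 1.0 + (benefit if shared else 0.0)   # neighbours may free-ride
            total = f_mut * mutants + f_res * (N - mutants)
            birth_is_mutant = random.random() < f_mut * mutants / total
            death_is_mutant = random.random() < mutants / N
            mutants += int(birth_is_mutant) - int(death_is_mutant)
        fixed += int(mutants == N)
    return fixed / trials

if __name__ == "__main__":
    print("shared benefit: ", fixation_probability(shared=True))   # below the neutral 1/N
    print("private benefit:", fixation_probability(shared=False))  # invasion becomes likely

When the benefit stays private the mutant invades readily; once the benefit is shared while the cost is borne only by the signaller, its fixation probability falls below that of a neutral mutant, which is the intuition behind the paper's result.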
954216
New light-powered catalysts could aid in manufacturing
Chemical reactions that are driven by light offer a powerful tool for chemists who are designing new ways to manufacture pharmaceuticals and other useful compounds. Harnessing this light energy requires photoredox catalysts, which can absorb light and transfer the energy to a chemical reaction. MIT chemists have now designed a new type of photoredox catalyst that could make it easier to incorporate light-driven reactions into manufacturing processes. Unlike most existing photoredox catalysts, the new class of materials is insoluble, so it can be used over and over again. Such catalysts could be used to coat tubing and perform chemical transformations on reactants as they flow through the tube. “Being able to recycle the catalyst is one of the biggest challenges to overcome in terms of being able to use photoredox catalysis in manufacturing. We hope that by being able to do flow chemistry with an immobilized catalyst, we can provide a new way to do photoredox catalysis on larger scales,” says Richard Liu, an MIT postdoc and the joint lead author of the new study. The new catalysts, which can be tuned to perform many different types of reactions, could also be incorporated into other materials including textiles or particles. Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT, is the senior author of the paper, which appears today in Nature Communications. Sheng Guo, an MIT research scientist, and Shao-Xiong Lennon Luo, an MIT graduate student, are also authors of the paper. Hybrid materials Photoredox catalysts work by absorbing photons and then using that light energy to power a chemical reaction, analogous to how chlorophyll in plant cells absorbs energy from the sun and uses it to build sugar molecules. Chemists have developed two main classes of photoredox catalysts, which are known as homogeneous and heterogeneous catalysts. Homogeneous catalysts usually consist of organic dyes or light-absorbing metal complexes. These catalysts are easy to tune to perform a specific reaction, but the downside is that they dissolve in the solution where the reaction takes place. This means they can’t be easily removed and used again. Heterogeneous catalysts, on the other hand, are solid minerals or crystalline materials that form sheets or 3D structures. These materials do not dissolve, so they can be used more than once. However, these catalysts are more difficult to tune to achieve a desired reaction. To combine the benefits of both of these types of catalysts, the researchers decided to embed the dyes that make up homogeneous catalysts into a solid polymer. For this application, the researchers adapted a plastic-like polymer with tiny pores that they had previously developed for performing gas separations. In this study, the researchers demonstrated that they could incorporate about a dozen different homogeneous catalysts into their new hybrid material, but they believe it could work with many more. “These hybrid catalysts have the recyclability and durability of heterogeneous catalysts, but also the precise tunability of homogeneous catalysts,” Liu says. “You can incorporate the dye without losing its chemical activity, so, you can more or less pick from the tens of thousands of photoredox reactions that are already known and get an insoluble equivalent of the catalyst you need.” The researchers found that incorporating the catalysts into polymers also helped them to become more efficient. One reason is that reactant molecules can be held in the polymer’s pores, ready to react. 
Additionally, light energy can easily travel along the polymer to find the waiting reactants. “The new polymers bind molecules from solution and effectively preconcentrate them for reaction,” Swager says. “Also, the excited states can rapidly migrate throughout the polymer. The combined mobility of the excited state and partitioning of the reactants in the polymer make for faster and more efficient reactions than are possible in pure solution processes.” Higher efficiency The researchers also showed that they could tune the physical properties of the polymer backbone, including its thickness and porosity, based on what application they want to use the catalyst for. As one example, they showed that they could make fluorinated polymers that would stick to fluorinated tubing, which is often used for continuous flow manufacturing. During this type of manufacturing, chemical reactants flow through a series of tubes while new ingredients are added, or other steps such as purification or separation are performed. Currently, it is challenging to incorporate photoredox reactions into continuous flow processes because the catalysts are used up quickly, so they have to be continuously added to the solution. Incorporating the new MIT-designed catalysts into the tubing used for this kind of manufacturing could allow photoredox reactions to be performed during continuous flow. The tubing is clear, allowing light from an LED to reach the catalysts and activate them. “The idea is to have the catalyst coating a tube, so you can flow your reaction through the tube while the catalyst stays put. In that way, you never get the catalyst ending up in the product, and you can also get a lot higher efficiency,” Liu says. The catalysts could also be used to coat magnetic beads, making them easier to pull out of a solution once the reaction is finished, or to coat reaction vials or textiles. The researchers are now working on incorporating a wider variety of catalysts into their polymers, and on engineering the polymers to optimize them for different possible applications.
10.1038/s41467-022-29811-6
2022
Nature Communications
Solution-processable microporous polymer platform for heterogenization of diverse photoredox catalysts
In contemporary organic synthesis, substances that access strongly oxidizing and/or reducing states upon irradiation have been exploited to facilitate powerful and unprecedented transformations. However, the implementation of light-driven reactions in large-scale processes remains uncommon, limited by the lack of general technologies for the immobilization, separation, and reuse of these diverse catalysts. Here, we report a new class of photoactive organic polymers that combine the flexibility of small-molecule dyes with the operational advantages and recyclability of solid-phase catalysts. The solubility of these polymers in select non-polar organic solvents supports their facile processing into a wide range of heterogeneous modalities. The active sites, embedded within porous microstructures, display elevated reactivity, further enhanced by the mobility of excited states and charged species within the polymers. The independent tunability of the physical and photochemical properties of these materials affords a convenient, generalizable platform for the metamorphosis of modern photoredox catalysts into active heterogeneous equivalents.
726114
New genetic knowledge on the causes of severe COVID-19
Worldwide, otherwise healthy adolescents and young people without underlying conditions are sometimes severely affected by COVID-19, with the viral infection in the worst cases quickly becoming life-threatening. But why is this happening? A world-wide consortium of researchers is determined to investigate this - and they have now made so much progress that Science has just published two scientific articles describing some of their results. Professor Trine Mogensen from the Department of Biomedicine at Aarhus University is co-author on the two research articles in Science. She conducts research into rare immunodeficiencies that lead to increased susceptibility to viral infections and, together with her research group, participates in the steering committee of the research consortium COVID Human Genetic Effort (covidhge) as the only Danish representative. She explains that in the vast majority of people, infection with the COVID-19-causing coronavirus leads to an anti-viral response in which interferon plays a crucial role. Interferon is an important immune signaling hormone that slows the replication of the virus and prevents it from penetrating the surrounding cells. In the event of a viral infection, the body normally quickly begins producing interferon, and the virus can be brought under control within a few hours. In popular terms, interferon is our first safeguard against an infection. "However, if there are defects in the interferon signalling pathways, there is nothing to inhibit the virus dividing, and while the coronavirus usually remains in the cells in the throat, it can in this case also infect other parts of the body such as the lungs, kidneys and perhaps even the brain," explains Trine Mogensen, who is also a Medical Specialist at the Department of Infectious Diseases, Aarhus University Hospital, Denmark. Genetic and immunological analyses of blood samples from 650 patients from all over the world with severe COVID-19 show that some of these patients have an inherited immunodeficiency which leads to the anti-viral interferon either not being produced or not working on the body's cells. Blood samples from 1,226 healthy individuals have functioned as a control group - with all of the samples being taken prior to the COVID-19 pandemic. The researchers have obtained consent to collect blood samples and carry out a genetic analysis from hospitalized and severely ill COVID-19 patients. From the blood samples, the researchers have purified immune cells from the 650 patients and subsequently infected these immune cells with coronavirus, which enabled them to ascertain that the immune system was not properly activated. In addition, a genetic sequencing of DNA from the 650 patients has been carried out, with some of this work being carried out at Aarhus University Hospital. "Our DNA consists of approximately 20,000 genes, and we have found defects in thirteen different genes. This means that the proteins which the genes encode become defective and therefore cannot perform their role in the immune system. We're already aware of some of these genetic defects from patients affected by severe influenza, but some are new and specific to COVID-19," says Trine Mogensen. The next task for the international research consortium is to translate - i.e. transfer - the basic immunological findings to the treatment of patients, and the first clinical trials are on the way. 
Medical doctors will be able to measure whether the patients have autoantibodies in their blood, as these are relatively easy to measure, and if they are present, they can be filtered out of the blood. It will also be possible to screen for the thirteen critical genes identified and in this way identify particularly vulnerable individuals. This group will then be able to receive preventative medical treatment and a vaccine once one is available. "The goal is to prevent the very severe cases of COVID-19 with high mortality rates," summarizes Trine Mogensen, who is optimistic and hopes that the clinical trials will demonstrate positive results - perhaps already within a year. Not least, she bases her optimism on the unique international collaboration in the COVID Human Genetic Effort, as the international research consortium is named. "I've never experienced anything like it before in my field of immunology and infectious diseases. We share knowledge and work together in a very altruistic spirit," she adds. The consortium comprises more than 250 researchers under the overall leadership of Professor Jean-Laurent Casanova from The Rockefeller University in the United States - with the professor also serving as an Honorary Skou Professor at Aarhus University since 2019. ###
10.1126/science.abd4570
2020
Science
Inborn errors of type I IFN immunity in patients with life-threatening COVID-19
The genetics underlying severe COVID-19 The immune system is complex and involves many genes, including those that encode cytokines known as interferons (IFNs). Individuals that lack specific IFNs can be more susceptible to infectious diseases. Furthermore, the autoantibody system dampens IFN response to prevent damage from pathogen-induced inflammation. Two studies now examine the likelihood that genetics affects the risk of severe coronavirus disease 2019 (COVID-19) through components of this system (see the Perspective by Beck and Aksentijevich). Q. Zhang et al. used a candidate gene approach and identified patients with severe COVID-19 who have mutations in genes involved in the regulation of type I and III IFN immunity. They found enrichment of these genes in patients and conclude that genetics may determine the clinical course of the infection. Bastard et al. identified individuals with high titers of neutralizing autoantibodies against type I IFN-α2 and IFN-ω in about 10% of patients with severe COVID-19 pneumonia. These autoantibodies were not found either in infected people who were asymptomatic or had a milder phenotype, or in healthy individuals. Together, these studies identify a means by which individuals at highest risk of life-threatening COVID-19 can be identified. Science, this issue p. eabd4570, p. eabd4585; see also p. 404
864005
RUDN mathematicians confirmed the possibility of data transfer via gravitational waves
RUDN mathematicians analyzed the properties of gravitational waves in a generalized affine-metric space (an algebraic construction operating with the notions of a vector and a point) by analogy with the properties of electromagnetic waves in Minkowski space-time. It turned out that nonmetricity waves can carry information and transfer it through space without distortion. The discovery could help scientists develop new means of data transfer in space, e.g. between space stations. The scientists' article was published in the journal Classical and Quantum Gravity. The recently discovered gravitational waves are waves of curvature of space-time, which according to Einstein's general relativity theory is completely determined by the space-time itself. However, there are currently reasons to consider space-time as a more complex structure with additional geometrical characteristics such as torsion and nonmetricity. In this case, geometrically speaking, space-time turns from the Riemannian space envisaged by General Relativity (GR) into a generalized affine-metric space. The respective gravitational field equations that generalize Einstein's equations show that torsion and nonmetricity can also propagate in the form of waves (in particular, plane waves at a great distance from the wave sources). To describe gravitational waves, the RUDN researchers used a mathematical abstraction - an affine space, i.e. an ordinary vector space but without an origin of coordinates. They proved that in such a mathematical representation of gravitational waves there are functions that remain invariant as the wave propagates. These functions can be chosen arbitrarily, so that they encode any information - in approximately the same way that electromagnetic waves carry a radio signal. This means that if a way is found to set these constructions at the wave source, they will reach any point of space unchanged. Thus, gravitational waves could be used for data transfer. The study consisted of three stages. In the first, the RUDN mathematicians calculated the Lie derivative - a function that relates the properties of bodies in two different spaces: an affine space and a Minkowski space. It allowed them to pass from the description of waves in real space to their mathematical interpretation. In the second stage, the researchers determined five arbitrary functions of time, i.e. constructions that do not change as the wave propagates. With their help, the characteristics of a wave can be set at the source, thereby encoding any information; at another point of space this information can be decoded. Together, these two results make information transfer possible. Finally, in the third stage, the researchers proved a theorem on the structure of plane nonmetricity gravitational waves. It turned out that of the four dimensions of a wave (three spatial ones and one time dimension), three can be used to encode an information signal using only one function, and the fourth with the use of two functions. "We found out that waves of this type (nonmetricity waves) are able to transmit data, similarly to the recently discovered curvature waves, because their description contains arbitrary functions of delayed time which can be encoded in the source of such waves (in a perfect analogy to electromagnetic waves). 
A possible prospect of our research is connected with this circumstance; it can, however, be realized only if nonmetricity is discovered as a physical phenomenon, and not just as a mathematical generalization of Einstein's theory of relativity," says Nina V. Markova, a co-author of the work, candidate of physical and mathematical sciences, assistant professor at the C.M. Nikolsky Mathematical Institute, and a staff member of RUDN. ### The work was carried out in collaboration with scientists from MCPU and MARCU.
10.1088/1361-6382/aace79
2018
Classical and Quantum Gravity
Structure of plane gravitational waves of nonmetricity in affine-metric space
A definition of an affine-metric space of the plane wave type is given using the analogy with the properties of plane electromagnetic waves in Minkowski space. The action of the Lie derivative on the 40 components of the nonmetricity 1-form in the 4-dimensional affine-metric space leads to the conclusion that the nonmetricity of a plane wave type is determined by five arbitrary functions of delayed time. A theorem is proved that parts of the nonmetricity 1-form irreducible with respect to the Lorentz transformations of the tangent space, such as the Weyl 1-form, the trace 1-form, and the spin 3 1-form, are defined by one arbitrary function each, and the spin 2 1-form is defined by two arbitrary functions. This proves the possibility of transmitting information with the help of nonmetricity waves.
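As a rough illustration of the electromagnetic analogy invoked here (schematic only; the symbols f_i and Theta below are not the paper's notation), a plane wave propagating along z depends only on the retarded, or "delayed", time, and its free profile functions are what can carry a signal:

\[
u = t - \frac{z}{c}, \qquad A_\mu = A_\mu(u)
\]
for an electromagnetic plane wave, and, schematically, for the plane-wave nonmetricity 1-form
\[
Q_{\alpha\beta} \;\sim\; \sum_{i=1}^{5} f_i(u)\, \Theta^{(i)}_{\alpha\beta},
\]
where the five arbitrary functions \(f_i(u)\) play the role of signal profiles and the \(\Theta^{(i)}_{\alpha\beta}\) stand for the fixed irreducible structures (the Weyl, trace, spin-3 and spin-2 parts) named in the abstract.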
781128
Compression garments reduce strength loss after training
Regular training enhances your strength, but recovery is equally important. Elastic bandages and compression garments are widely used in sports to facilitate recovery and prevent injuries. Now, a research team from Tohoku University has determined that compression garments also reduce strength loss after strenuous exercise. Their research findings were published in the European Journal of Applied Physiology. The team - led by assistant professor János Négyesi and professor Ryoichi Nagatomi from the Graduate School of Biomedical Engineering - used a computerized dynamometer to train healthy subjects until they became fatigued. The same equipment was used to detect changes in the maximal strength and knee joint position sense straight after, 24 hours after and one week after the training. Their results revealed that using a below-knee compression garment during training compensates for fatigue effects on maximal strength immediately following the exercise and once 24 hours has elapsed. In other words, one can begin the next maximal intensity strength training earlier if one has used a below-knee compression garment in the previous workout. Although compression garments reduce strength loss, their findings reaffirmed that they afford no protection against knee joint position sense errors. "Our previous studies focused only on the effects of compression garments on joint position sense," said Dr. Négyesi. "The present study found such garments to have the potential to reduce strength loss after a fatiguing exercise, which may help us better understand how applying a compression garment during exercise can decrease the risk of musculoskeletal injuries during sports activities." The researchers believe wearing a below-knee compression garment during regular workouts is beneficial because of the mechanical support and tissue compression it provides. Looking ahead, the team aims to detect whether maximal intensity programs that last for weeks produce different outcomes than the current findings to determine the longitudinal effects of compression garments.
10.1007/s00421-020-04507-1
2020
European Journal of Applied Physiology
A below-knee compression garment reduces fatigue-induced strength loss but not knee joint position sense errors
We examined the possibility that wearing a below-knee compression garment (CG) reduces fatigue-induced strength loss and joint position sense (JPS) errors in healthy adults. Subjects (n = 24, age = 25.5 ± 4 years) were allocated to either one of the treatment groups that performed 100 maximal isokinetic eccentric contractions at 30° s⁻¹ with the right-dominant knee extensors: (1) with (EXPCG) or (2) without CG (EXP) or to (3) a control group (CONCG: CG, no exercise). Changes in JPS errors, and maximal voluntary isometric contraction (MVIC) torque were measured immediately post-, 24 h post-, and 1 week post-intervention in each leg. All testing was done without the CG. CG afforded no protection against JPS errors. Mixed analysis of variance (ANOVA) revealed that absolute JPS errors increased post-intervention in EXPCG and EXP not only in the right-exercised (52%, p = 0.013; 57%, p = 0.007, respectively) but also in the left non-exercised (55%, p = 0.001; 58%, p = 0.040, respectively) leg. Subjects tended to underestimate the target position more in the flexed vs. extended knee positions (75-61°: −4.6 ± 3.6°, 60-50°: −4.2 ± 4.3°, 50-25°: −2.9 ± 4.2°), irrespective of group and time. Moreover, MVIC decreased in EXP but not in EXPCG and CONCG at immediately post-intervention (p = 0.026, d = 0.52) and 24 h post-intervention (p = 0.013, d = 0.45) compared to baseline. Altogether, a below-knee CG reduced fatigue-induced strength loss at 80° knee joint position but not JPS errors in healthy younger adults.
941664
Simpler and more reliable ALS diagnosis with blood tests
Blood tests may enable more accurate diagnosis of ALS at an earlier stage of the disease. As described in a study by researchers at the University of Gothenburg and Umeå University, it involves measuring the blood level of a substance that, as they have also shown, varies in concentration depending on which variant of ALS the patient has. The study, published in Scientific Reports, includes Fani Pujol-Calderón, postdoctoral fellow at Sahlgrenska Academy, University of Gothenburg, and Arvin Behzadi, doctoral student at Umeå University and medical intern at Örnsköldsvik Hospital, as shared first authors. Currently, it is difficult to diagnose amyotrophic lateral sclerosis (ALS), the most common form of motor neuron disease, early in the course of the disease. Even after a prolonged investigation, there is a risk of misdiagnosis due to other diseases that may resemble ALS in early stages. Much would be gained from earlier correct diagnosis and, according to the researchers, the current findings look promising. Neurofilaments — proteins with a special role in the cells and fibers of nerves — are the substances of interest. When the nervous system is damaged, neurofilaments leak into the cerebrospinal fluid (CSF) and, at lower concentrations than in CSF, into the blood. In their study, scientists at Umeå University and the University Hospital of Umeå, as well as at the University of Gothenburg and Sahlgrenska University Hospital in Gothenburg, demonstrated that CSF and blood levels of neurofilaments can differentiate ALS from other diseases that may resemble early ALS. More sensitive methods of analysis Compared with several other neurological diseases, previous studies have shown higher concentrations of neurofilaments in CSF in ALS. Measuring neurofilament levels in the blood has previously been difficult since they occur at much lower concentrations compared to CSF. In recent years, however, new and more sensitive analytical methods have generated new scope for doing so. The current study shows a strong association, in patients with ALS, between the quantity of neurofilaments in the blood and in CSF. The study is based on blood and CSF samples collected from 287 patients who had been referred to the Department of Neurology at the University Hospital of Umeå for investigation of possible motor neuron disease. After extensive investigation, 234 of these patients were diagnosed with ALS. These had significantly higher levels of neurofilaments in CSF and blood compared to patients who were not diagnosed with ALS. Higher concentrations Differences among various subgroups of ALS were also investigated and detected. Patients whose pathological symptoms started in the head and neck region had higher neurofilament concentrations in the blood and worse survival than patients whose disease onset began in an arm or a leg. The study has also succeeded in quantifying differences in blood levels of neurofilaments and survival for the two most common mutations associated with ALS. “Finding suspected cases of ALS through a blood test opens up completely new opportunities for screening, and measuring neurofilaments in blood collected longitudinally enables easier quantification of treatment effects in clinical drug trials compared to longitudinal collection of CSF. Finding ALS early in the disease course may facilitate earlier administration of pharmaceutical treatment, before the muscles have atrophied,” Arvin Behzadi says. 
ALS is a neurodegenerative syndrome that leads to loss of nerve cells in both the brain and the spinal cord, resulting in muscle weakness and atrophy. Most of these patients die within two to four years after symptom onset, but roughly one in ten survives more than ten years after the symptoms first appeared. Several genetic mutations have been associated with ALS. At present, there is no curative treatment. Nevertheless, the drug currently available has been shown to prolong survival in some ALS patients if it is administered in time.
10.1038/s41598-021-01499-6
2021
Scientific Reports
Neurofilaments can differentiate ALS subgroups and ALS from common diagnostic mimics
Abstract Delayed diagnosis and misdiagnosis are frequent in people with amyotrophic lateral sclerosis (ALS), the most common form of motor neuron disease (MND). Neurofilament light chain (NFL) and phosphorylated neurofilament heavy chain (pNFH) are elevated in ALS patients. We retrospectively quantified cerebrospinal fluid (CSF) NFL, CSF pNFH and plasma NFL in stored samples that were collected at the diagnostic work-up of ALS patients (n = 234), ALS mimics (n = 44) and controls (n = 9). We assessed the diagnostic performance, prognostication value and relationship to the site of onset and genotype. CSF NFL, CSF pNFH and plasma NFL levels were significantly increased in ALS patients compared to patients with neuropathies & myelopathies, patients with myopathies and controls. Furthermore, CSF pNFH and plasma NFL levels were significantly higher in ALS patients than in patients with other MNDs. Bulbar onset ALS patients had significantly higher plasma NFL levels than spinal onset ALS patients. ALS patients with C9orf72HRE mutations had significantly higher plasma NFL levels than patients with SOD1 mutations. Survival was negatively correlated with all three biomarkers. Receiver operating characteristics showed the highest area under the curve for CSF pNFH for differentiating ALS from ALS mimics and for plasma NFL for estimating ALS short and long survival. All three biomarkers have diagnostic value in differentiating ALS from clinically relevant ALS mimics. Plasma NFL levels can be used to differentiate between clinical and genetic ALS subgroups.
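For readers who want to see what such a receiver operating characteristic (ROC) analysis looks like in practice, here is a small self-contained sketch with synthetic numbers (the concentrations, group sizes and scikit-learn workflow are illustrative assumptions, not the study's data or code):

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic biomarker values (pg/mL); distributions are invented for illustration.
rng = np.random.default_rng(1)
als_pnfh   = rng.lognormal(mean=7.5, sigma=0.6, size=234)   # ALS patients
mimic_pnfh = rng.lognormal(mean=6.0, sigma=0.6, size=44)    # ALS mimics

values = np.concatenate([als_pnfh, mimic_pnfh])
labels = np.concatenate([np.ones(als_pnfh.size), np.zeros(mimic_pnfh.size)])  # 1 = ALS

auc = roc_auc_score(labels, values)
fpr, tpr, thresholds = roc_curve(labels, values)
best_cutoff = thresholds[np.argmax(tpr - fpr)]   # Youden's J picks the best separating threshold
print(f"AUC = {auc:.2f}, optimal cut-off ≈ {best_cutoff:.0f} pg/mL")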
951771
Quantum mechanics could explain why DNA can spontaneously mutate
The molecules of life, DNA, replicate with astounding precision, yet this process is not immune to mistakes and can lead to mutations. Using sophisticated computer modelling, a team of physicists and chemists at the University of Surrey have shown that such errors in copying can arise due to the strange rules of the quantum world. The two strands of the famous DNA double helix are linked together by subatomic particles called protons – the nuclei of atoms of hydrogen – which provide the glue that bonds molecules called bases together. These so-called hydrogen bonds are like the rungs of a twisted ladder that makes up the double helix structure discovered in 1953 by James Watson and Francis Crick based on the work of Rosalind Franklin and Maurice Wilkins. Normally, these DNA bases (called A, C, T and G) follow strict rules on how they bond together: A always bonds to T and C always to G. This strict pairing is determined by the molecules' shape, fitting them together like pieces in a jigsaw, but if the nature of the hydrogen bonds changes slightly, this can cause the pairing rule to break down, leading to the wrong bases being linked and hence a mutation. Although predicted by Crick and Watson, it is only now that sophisticated computational modelling has been able to quantify the process accurately. The team, part of Surrey's research programme in the exciting new field of quantum biology, have shown that this modification in the bonds between the DNA strands is far more prevalent than has hitherto been thought. The protons can easily jump from their usual site on one side of an energy barrier to land on the other side. If this happens just before the two strands are unzipped in the first step of the copying process, then the error can pass through the replication machinery in the cell, leading to what is called a DNA mismatch and, potentially, a mutation. In a paper published this week in the journal Communications Physics, the Surrey team based in the Leverhulme Quantum Biology Doctoral Training Centre used an approach called open quantum systems to determine the physical mechanisms that might cause the protons to jump across between the DNA strands. But, most intriguingly, it is thanks to a well-known yet almost magical quantum mechanism called tunnelling – akin to a phantom passing through a solid wall – that they manage to get across. It had previously been thought that such quantum behaviour could not occur inside a living cell's warm, wet and complex environment. However, the Austrian physicist Erwin Schrödinger had suggested in his 1944 book What is Life? that quantum mechanics can play a role in living systems since they behave rather differently from inanimate matter. This latest work seems to confirm Schrödinger's theory. In their study, the authors determine that the local cellular environment causes the protons, which behave like spread-out waves, to be thermally activated and encouraged through the energy barrier. In fact, the protons are found to be continuously and very rapidly tunnelling back and forth between the two strands. Then, when the DNA is cleaved into its separate strands, some of the protons are caught on the wrong side, leading to an error. Dr Louie Slocombe, who performed these calculations during his PhD, explains: “The protons in the DNA can tunnel along the hydrogen bonds in DNA and modify the bases which encode the genetic information. 
The modified bases are called "tautomers" and can survive the DNA cleavage and replication processes, causing "transcription errors" or mutations.” Dr Slocombe's work at Surrey's Leverhulme Quantum Biology Doctoral Training Centre was supervised by Prof Jim Al-Khalili (Physics, Surrey) and Dr Marco Sacchi (Chemistry, Surrey) and published in Communications Physics. Prof Al-Khalili comments: “Watson and Crick speculated about the existence and importance of quantum mechanical effects in DNA well over 50 years ago, however, the mechanism has been largely overlooked.” Dr Sacchi continues: “Biologists would typically expect tunnelling to play a significant role only at low temperatures and in relatively simple systems. Therefore, they tended to discount quantum effects in DNA. With our study, we believe we have proved that these assumptions do not hold.”
10.1038/s42005-022-00881-8
2022
Communications Physics
An open quantum systems approach to proton tunnelling in DNA
Abstract One of the most important topics in molecular biology is the genetic stability of DNA. One threat to this stability is proton transfer along the hydrogen bonds of DNA that could lead to tautomerisation, hence creating point mutations. We present a theoretical analysis of the hydrogen bonds between the Guanine-Cytosine (G-C) nucleotide, which includes an accurate model of the structure of the base pairs, the quantum dynamics of the hydrogen bond proton, and the influence of the decoherent and dissipative cellular environment. We determine that the quantum tunnelling contribution to the proton transfer rate is several orders of magnitude larger than the classical over-the-barrier hopping. Due to the significance of the quantum tunnelling even at biological temperatures, we find that the canonical and tautomeric forms of G-C inter-convert over timescales far shorter than biological ones and hence thermal equilibrium is rapidly reached. Furthermore, we find a large tautomeric occupation probability of 1.73 × 10 −4 , suggesting that such proton transfer may well play a far more important role in DNA mutation than has hitherto been suggested. Our results could have far-reaching consequences for current models of genetic mutations.
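As a generic back-of-the-envelope comparison (standard textbook expressions, not the open-quantum-systems calculation reported in the paper), the classical over-the-barrier hopping rate and the quantum tunnelling transmission through a barrier V(x) scale roughly as

\[
k_{\text{classical}} \;\propto\; e^{-E_b / k_B T},
\qquad
T_{\text{tunnel}} \;\approx\; \exp\!\left(-\frac{2}{\hbar}\int_{x_1}^{x_2}\sqrt{2m\,[V(x)-E]}\;\mathrm{d}x\right),
\]

so for a particle as light as a proton crossing a barrier as thin as a hydrogen bond, the tunnelling factor can rival or exceed the thermal factor at around 300 K, which is the qualitative reason the quantum contribution can dominate the proton-transfer rate.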
691436
Dynamic model helps understand healthy lakes to heal sick ones
Development of a dynamic model for microbial populations in healthy lakes could help scientists understand what's wrong with sick lakes, prescribe cures and predict what may happen as environmental conditions change. Those are among the benefits expected from an ambitious project to model the interactions of some 18,000 species in a well-studied Wisconsin lake. The research produced what is believed to be the largest dynamic model of microbial species interactions ever created. Analyzing long-term data from Lake Mendota near Madison, Wisconsin, a Georgia Tech research team identified and modeled interactions among 14 sub-communities, that is, collections of different species that become dominant at specific times of the year. Key environmental factors affecting these sub-communities included water temperature and the levels of two nutrient classes: ammonia/phosphorus and nitrates/nitrites. The effects of these factors on the individual species were, in general, more pronounced than those of species-species interactions. Beyond understanding what's happening in aquatic microbial environments, the model might also be used to study other microbial populations - perhaps even human microbiomes. The research was reported on March 24 in the journal Systems Biology and Applications, a Nature partner journal. The work was sponsored by the National Science Foundation's Dimensions of Biodiversity program. "Ultimately, we want to understand why some microbial populations are declining and why some are increasing at certain times of the year," said Eberhard Voit, the paper's corresponding author and The David D. Flanagan Chair Professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University. "We want to know why these populations are changing - whether it is because of environmental conditions alone, or interactions between the different species. Importantly, we also look at the temporal development: how interactions change over time." Because of the large number of different microorganisms involved, creating such a model was a monumental task. To make it more manageable, the researchers segmented the most abundant species into groups that had significant interactions at specific times of the year. Georgia Tech Research Scientist Phuongan Dam created 14 such categories or sub-communities - corresponding to roughly one per month - and mapped the relationships between them during different times of the year. Two of the 14 groups had two population peaks per year. "The exciting part about this work is that we are now able to model hundreds of species," said Kostas Konstantinidis, a co-author on the paper and the Carlton S. Wilder Associate Professor in Georgia Tech's School of Civil and Environmental Engineering. "The ability to dynamically model microbial communities containing hundreds or even thousands of species as those interactions change over time or after environmental perturbations will have numerous implications and applications for other research areas." In the past, researchers have created static models of interactions between large numbers of microorganisms, but those provided only snapshots in time and couldn't be used to model interactions as they change throughout the year. Scientists might want to know, for example, what would happen if a community lost one species, if a flood of nutrients hit the lake or if the temperature rose. 
As with many communities, the lake includes organisms from different species and families that are highly interconnected, playing a variety of interrelated roles, such as fixing nitrogen, carrying out photosynthesis, degrading pollutants and providing metabolic services used by other organisms. Information about the microbes came from a long-term data set compiled by other scientists who study the lake on a regular basis. Voit, a bio-mathematician, said the model, although itself nonlinear, uses algorithms based on linear regression, which can be analyzed using standard computer clusters. Using their 14 sub-communities, the researchers found 196 interactions that could describe the species interactions - a far easier task than analyzing the 300 million potential interactions between the full 18,642 species in the lake. Reducing the number of potential interactions was possible only due to the strategy of defining sub-communities and a clever modeling approach. The researchers initially tried to organize the microbes into genetically related organisms, but that strategy failed. "At any time of the year, the lake needs species that can do certain tasks," said Voit. "Closely-related species tend to play essentially the same roles, so that putting them all together into the same group results in having many organisms doing the same things - but not executing other tasks that are needed at a specific time. By looking at the 14 sub-communities, we were able to get a smorgasbord of every task that needed to be done using different combinations of the microorganisms at each time." By looking at sub-communities present at specific times of the year, the research team was able to study interactions that occurred naturally - and avoided having to study interactions that rarely took place. The model examines interactions at two levels: among the 14 sub-communities, and between the sub-communities and individual species. The research depended heavily on metagenomics, the use of genomic analysis to identify the microorganisms present. Only 1 percent of microbial species can be cultured in the laboratory, but metagenomics allows scientists to obtain the complete inventory of species present by identifying specific sections of their DNA. Because they are not fully characterized species, the components of genomic data are termed "operational taxonomic units" (OTUs), which the team used as a "proxy" for species. The next step in the research will be to complete a similar study of Lake Lanier, located north of Atlanta. In addition to the information studied for Lake Mendota, that study will gather data about the enzymatic and metabolic activities of the microorganism communities. Lake Lanier feeds the Chattahoochee River and a series of other lakes, and the researchers hope to study the entire river system to assess how different environments and human activities affect the microbial populations. The work could lead to a better understanding of what interactions are necessary for a healthy lake, which may help scientists determine what might be needed to address problems in sick lakes. The modeling technique might also help scientists with other complex microbial systems. "Our work right now is with the lake community, but the methods could be applicable to other microbial communities, including the human microbiome," said Konstantinidis. 
"As with sick lakes, understanding what is healthy might one day allow scientists to diagnose microbiome-related disease conditions and address them by adjusting the populations of different microorganism sub-communities." ### This material is based upon work supported by the National Science Foundation under Grant No. DEB-1241046. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. CITATION: Phuongan Dam, Luis L. Fonseca, Konstantinos T. Konstantinidis and Eberhard O. Voit, "Dynamic models of the complex microbial metapopulation of Lake Mendota," (Nature Partner Journal Systems Biology and Applications, 2016). http://dx.doi.org/10.1038/npjsba.2016.7
10.1038/npjsba.2016.7
2016
npj Systems Biology and Applications
Dynamic models of the complex microbial metapopulation of Lake Mendota
Like many other environments, Lake Mendota, WI, USA, is populated by many thousand microbial species. Only about 1,000 of these constitute between 80 and 99% of the total microbial community, depending on the season, whereas the remaining species are rare. The functioning and resilience of the lake ecosystem depend on these microorganisms, and it is therefore important to understand their dynamics throughout the year. We propose a two-layered set of dynamic mathematical models that capture and interpret the yearly abundance patterns of the species within the metapopulation. The first layer analyzes the interactions between 14 subcommunities (SCs) that peak at different times of the year and together contain all species, whereas the second layer focuses on interactions between individual species and SCs. Each SC contains species from numerous families, genera, and phyla in strikingly different abundances. The dynamic models quantify the importance of environmental factors in shaping the dynamics of the lake's metapopulation and reveal positive or negative interactions between species and SCs. Three environmental factors, namely temperature, ammonia/phosphorus, and nitrate+nitrite, positively affect almost all SCs, whereas by far most interactions between SCs are inhibitory. As far as the interactions can be independently validated, they are supported by literature information. The models are quite robust and permit predictions of species abundances over many years, both under the assumption that conditions do not change drastically and in response to environmental perturbations. A lake microbe population model developed by US researchers reveals how environmental factors affect community dynamics. Metagenomic sequencing now provides huge datasets on the abundances of species in an ecosystem, but computational models are needed to understand how all the species interact. Eberhard Voit and co-workers at Georgia Institute of Technology used 11 years' worth of data collected from Lake Mendota, Wisconsin, to inform a new model based on the famous Lotka-Volterra equations for predator-prey interactions. To make their task manageable, they grouped the 1140 most abundant microbe species into 14 sub-communities. Their model can predict the effects of annual cycles of temperature and nutrients on community dynamics, as well as quantify interactions between sub-communities. It will be a useful tool for assessing the health of lake ecosystems now and in the future.
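To make the structure of such a two-layered, environmentally forced Lotka-Volterra model concrete, here is a minimal Python sketch (the dimensions mirror the paper's 14 sub-communities and three drivers, but every coefficient and the seasonal forcing functions are invented for illustration; this is not the fitted Lake Mendota model):

import numpy as np
from scipy.integrate import solve_ivp

n_sc, n_env = 14, 3                      # sub-communities and environmental drivers
rng = np.random.default_rng(0)

r = rng.normal(0.1, 0.05, n_sc)                       # intrinsic growth rates (made up)
A = -np.abs(rng.normal(0.02, 0.01, (n_sc, n_sc)))     # mostly inhibitory SC-SC interactions
np.fill_diagonal(A, -0.1)                             # self-limitation
B = np.abs(rng.normal(0.05, 0.02, (n_sc, n_env)))     # positive environmental effects

def environment(t):
    """Seasonal forcing (normalized): temperature, ammonia/phosphorus, nitrate+nitrite."""
    temp    = 0.5 + 0.5 * np.sin(2 * np.pi * (t - 120) / 365)
    nh4_p   = 0.5 + 0.4 * np.sin(2 * np.pi * (t - 60) / 365)
    no3_no2 = 0.5 + 0.4 * np.cos(2 * np.pi * t / 365)
    return np.array([temp, nh4_p, no3_no2])

def glv(t, x):
    """Generalized Lotka-Volterra with drivers: dx_i/dt = x_i (r_i + sum_j A_ij x_j + sum_k B_ik E_k(t))."""
    return x * (r + A @ x + B @ environment(t))

sol = solve_ivp(glv, (0, 365), np.full(n_sc, 0.1), t_eval=np.linspace(0, 365, 365))
print(sol.y[:, -1])                      # SC abundances at the end of the simulated year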
589555
Quality of life of those with advanced cancer improved through walking
Walking for just 30 minutes three times per week could improve the quality of life for those with advanced cancer, a new study published in the BMJ Open journal has found. Researchers from the University of Surrey collaborated with those from the Florence Nightingale Faculty of Nursing & Midwifery at King's College London to explore the impact of walking on the quality of life and symptom severity in patients with advanced cancer. Despite growing evidence of significant health benefits of exercise to cancer patients, physical activity commonly declines considerably during treatment and remains low afterwards. Initiatives in place to promote physical activity for those suffering with cancer are normally supervised and require travel to specialist facilities, placing an additional burden on patients. During this study 42 cancer patients were split into two groups. Group one received coaching from an initiative by Macmillan Cancer which included a short motivational interview, the recommendation to walk for at least 30 minutes on alternate days and attend a volunteer-led group walk weekly. The health benefits of walking are well documented, with improved cardiovascular strength and increased energy levels. Group two were encouraged to maintain their current level of activity. Researchers found that those in group one reported an improvement in physical, emotional and psychological wellbeing after completing the programme. Many participants noted that walking provided an improved positive attitude towards their illness and spoke of the social benefits of participating in group walks. One of the participants commented: "The impact has been immense! It gave me the motivation to not only increase walking activity from minutes to 3-4 hours per week but also to reduce weight by altering diet, reducing sweets/sugars. Great boost to morale. No longer dwell on being terminal - I'm just on getting on with making life as enjoyable as possible, greatly helped by friends made on regular 'walks for life'." Professor Emma Ream, co-author of the paper and Professor of Supportive Cancer Care and Director of Research in the School of Health Sciences at the University of Surrey, said: "The importance of exercise in preventing cancer recurrence and managing other chronic illnesses is becoming clear. "Findings from this important study show that exercise is valued by, suitable for, and beneficial to people with advanced cancer. "Rather than shying away from exercise, people with advanced disease should be encouraged to be more active and incorporate exercise into their daily lives where possible." Dr Jo Armes, lead researcher and Senior Lecturer at the Florence Nightingale Faculty of Nursing & Midwifery, King's College London, said: "This study is a first step towards exploring how walking can help people living with advanced cancer. Walking is a free and accessible form of physical activity, and patients reported that it made a real difference to their quality of life. "Further research is needed with a larger number of people to provide definitive evidence that walking improves both health outcomes and social and emotional wellbeing in this group of people." ### This study was funded by Dimbleby Cancer Care.
10.1136/bmjopen-2016-013719
2017
BMJ Open
CanWalk: a feasibility study with embedded randomised controlled trial pilot of a walking intervention for people with recurrent or metastatic cancer
Objectives Walking is an adaptable, inexpensive and accessible form of physical activity. However, its impact on quality of life (QoL) and symptom severity in people with advanced cancer is unknown. This study aimed to assess the feasibility and acceptability of a randomised controlled trial (RCT) of a community-based walking intervention to enhance QoL in people with recurrent/metastatic cancer. Design We used a mixed-methods design comprising a 2-centre RCT and nested qualitative interviews. Participants Patients with advanced breast, prostate, gynaecological or haematological cancers randomised 1:1 between intervention and usual care. Intervention The intervention comprised Macmillan's ‘Move More’ information, a short motivational interview with a recommendation to walk for at least 30 min on alternate days and attend a volunteer-led group walk weekly. Outcomes We assessed feasibility and acceptability of the intervention and RCT by evaluating study processes (rates of recruitment, consent, retention, adherence and adverse events), and using end-of-study questionnaires and qualitative interviews. Patient-reported outcome measures (PROMs) assessing QoL, activity, fatigue, mood and self-efficacy were completed at baseline and 6, 12 and 24 weeks. Results We recruited 42 (38%) eligible participants. Recruitment was lower than anticipated (goal n=60), the most commonly reported reason being unable to commit to walking groups (n=19). Randomisation procedures worked well with groups evenly matched for age, sex and activity. By week 24, there was a 45% attrition rate. Most PROMs while acceptable were not sensitive to change and did not capture key benefits. Conclusions The intervention was acceptable, well tolerated and the study design was judged acceptable and feasible. Results are encouraging and demonstrate that exercise was popular and conveyed benefit to participants. Consequently, an effectiveness RCT is warranted, with some modifications to the intervention to include greater tailoring and more appropriate PROMs selected. Trial registration number ISRCTN42072606 .
646613
New research shows effectiveness of laws for protecting imperiled species, remaining gaps
New research from the Center for Conservation Innovation (CCI) at Defenders of Wildlife, published in the journal Nature Communications, shows for the first time the importance of expert agencies to protecting imperiled species. This paper, "Data Indicate the Importance of Expert Agencies in Conservation Policy," empirically supports the need for strong oversight of federal activities. It also suggests data-driven ways to improve efficiency without sacrificing protections. This is critical at a time when conservation laws and policies are under attack: understanding what works in conservation is essential in combatting the global biodiversity crisis. The data analyzed by Defenders of Wildlife included every Endangered Species Act section 7 consultation between federal agencies and the National Marine Fisheries Service (NMFS) from 2000 through 2017. The analysis showed that agencies and NMFS agreed on how proposed federal projects would affect listed species most of the time, and that the consultation process rarely stops projects. Importantly, however, federal agencies underestimated the effects of their actions on listed species in 15% of consultations, relative to what species experts at NMFS concluded. This included 22 extreme cases where NMFS concluded the action would jeopardize the very existence of 14 species after the agency had determined its action would do no harm. In 6% of cases, agencies overestimated the effects of their actions, which meant additional resources may have been unnecessarily spent in analyses. "This study emphasizes the critical role that the expert biologists at the Services play in assessing the impacts of proposed federal actions on threatened and endangered species," said Michael Evans, CCI Senior Conservation Data Scientist and lead author on the study. "Our findings show that limiting or removing the Services from the consultation process could have disastrous consequences for imperiled species. And at the same time, we were able to identify areas where the consultation process could be made more efficient, without sacrificing protections to listed species." "Recent proposals to 'streamline' consultations by removing the species experts in the National Marine Fisheries Service from the process could be devastating to the species who need protection the most," said Jacob Malcom, Director of the Center for Conservation Innovation at Defenders of Wildlife and an author on the study. "Rather than try to cut protections, Congress should be strengthening and fully funding the expert agencies--National Marine Fisheries Service and the U.S. Fish and Wildlife Service--who ensure the protections for threatened and endangered species." ### The data and analyses can be explored using an interactive web app hosted on the CCI webpage: https://defenders-cci.org/shiny/open/NMFS_s7/ Background The U.S. Endangered Species Act (ESA), passed with overwhelming bipartisan support under the Nixon administration in 1973, is widely considered the strongest wildlife protection law in the world. The law is incredibly successful: more than 95% of listed species are still with us today and hundreds are on the path to recovery. Section 7 of the ESA requires federal agencies to conserve listed species by not taking, funding, or authorizing any actions that would jeopardize their existence. They consult with either the U.S. Fish and Wildlife Service or National Marine Fisheries Services on any proposed actions that may affect listed species to fulfill this obligation. 
If the Services determine an action may jeopardize a listed species, the Services must suggest "reasonable and prudent alternatives" that agencies can implement to reduce or offset harm caused by the proposed action. If adopted, the agencies may legally proceed with the action. By asking "How often do federal agencies overestimate or underestimate the effects of their actions on listed species?" this research evaluates whether proposals to reduce the role of the Services in consultations are justified. Future research measuring the outcomes of consultation in terms of actions taken and species status would help determine the effectiveness of the program. The 14 species for which NMFS issued jeopardy determinations after federal agencies determined their actions would not detrimentally affect the species were: boulder star coral, elkhorn coral, lobed star coral, mountainous star coral, pillar coral, rough cactus coral, staghorn coral, Nassau grouper, Chinook salmon, chum salmon, coho salmon, sockeye salmon, steelhead and southern resident killer whale. Figures from the report can be downloaded for reporter use at https://defenders.org/newsroom/new-research-shows-effectiveness-of-laws-protecting-imperiled-species-remaining-gaps: U.S. Endangered Species Act section 7 consultation outcomes, Frequencies of determinations proposed by action agencies vs. final determinations made by NMFS, and pairs of species that received jeopardy determinations from the same proposed federal action more or less than expected. Defenders of Wildlife is dedicated to the protection of all native animals and plants in their natural communities. With over 1.8 million members and activists, Defenders of Wildlife is a leading advocate for innovative solutions to safeguard our wildlife heritage for generations to come. For more information, visit defenders.org/newsroom and follow us on Twitter @DefendersNews.
10.1038/s41467-019-11462-9
2019
Nature Communications
Novel data show expert wildlife agencies are important to endangered species protection
Abstract To protect biodiversity, conservation laws should be evaluated and improved using data. We provide a comprehensive assessment of how a key provision of the U.S. Endangered Species Act (ESA) is implemented: consultation to ensure federal actions do not jeopardize the existence of listed species. Data from all 24,893 consultations recorded by the National Marine Fisheries Service (NMFS) from 2000–2017 show federal agencies and NMFS frequently agreed (79%) on how federal actions would affect listed species. In cases of disagreement, agencies most often (71%) underestimated effects relative to the conclusions of species experts at NMFS. Such instances can have deleterious consequences for imperiled species. In 22 consultations covering 14 species, agencies concluded that an action would not harm species while NMFS determined the action would jeopardize species’ existence. These results affirm the importance of the role of NMFS in preventing federal actions from jeopardizing listed species. Excluding expert agencies from consultation compromises biodiversity conservation, but we identify approaches that improve consultation efficiency without sacrificing species protections.
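A rough consistency check (an editorial illustration, not a figure from the paper): the abstract's 79% agreement rate and 71% underestimate share among disagreements line up with the press release's statement that agencies underestimated effects in about 15% of all consultations,

$$(1 - 0.79) \times 0.71 \approx 0.15.$$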
968908
Study shows inexpensive, readily available chemical may limit impact of COVID-19
Preclinical studies in mice that model human COVID-19 suggest that an inexpensive, readily available amino acid might limit the effects of the disease and provide a new off-the-shelf therapeutic option for infections with SARS-CoV-2 variants and perhaps future novel coronaviruses. A team led by researchers at the David Geffen School of Medicine at UCLA report that an amino acid called GABA, which is available over-the-counter in many countries, reduced disease severity, viral load in the lungs, and death rates in SARS-CoV-2-infected mice. This follows up on their previous finding that GABA consumption also protected mice from another lethal mouse coronavirus called MHV-1. In both cases, GABA treatment was effective when given just after infection or several days later near the peak of virus production. The protective effects of GABA against two different types of coronaviruses suggest that GABA may provide a generalizable therapy to help treat diseases induced by new SARS-CoV-2 variants and novel beta-coronaviruses.   “SARS-CoV-2 variants and novel coronaviruses will continue to arise, and they may not be efficiently controlled by available vaccines and antiviral medications. Furthermore, the generation of new vaccines is likely to be much slower than the spread of new variants,” said senior author Daniel L. Kaufman, a researcher and professor in Molecular and Medical Pharmacology at the David Geffen School of Medicine at UCLA. Accordingly, new therapeutic options are needed to limit the severity of these infections. Their previous studies showed that GABA administration protected mice from developing severe disease after infection with a mouse coronavirus called MHV-1. To more stringently test the potential of GABA as a therapy for COVID-19, they studied transgenic mice that when infected with SARS-CoV-2 develop severe pneumonia with a high mortality rate. “If our observations of the protective effects of GABA therapy in SARS-CoV-2-infected mice are confirmed in clinical trials, GABA could provide an off-the-shelf treatment to help ameliorate infections with SARS-CoV-2 variants. GABA is inexpensive and stable at room temperature, which could make it widely and easily accessible, and especially beneficial in developing countries.” The researchers said that GABA and GABA receptors are most often thought of as a major neurotransmitter system in the brain. Years ago, they, as well as other researchers, found that cells of the immune system also possessed GABA receptors and that the activation of these receptors inhibited the inflammatory actions of immune cells. Taking advantage of this property, the authors reported in a series of studies that GABA administration inhibited autoimmune diseases such as type 1 diabetes, multiple sclerosis, and rheumatoid arthritis in mouse models of these ailments. Other scientists who study gas anesthetics have found that lung epithelial cells also possess GABA receptors and that drugs that activate these receptors could limit lung injuries and inflammation in the lung. The dual actions of GABA in inflammatory immune cells and lung epithelial cells, along with its safety for clinical use, made GABA a theoretically appealing candidate for limiting the overreactive immune responses and lung damage due to coronavirus infection. 
Working with colleagues at the University of Southern California, the UCLA research team in this study administered GABA to the mice just after infection with SARS-CoV-2, or two days later when the virus levels are near their peak in the mouse lungs. While the vast majority of untreated mice did not survive this infection, those given GABA just after infection, or two days later, had lower illness severity and a lower mortality rate over the course of the study. Treated mice also displayed reduced levels of virus in their lungs and changes in circulating immune signaling molecules, known as cytokines and chemokines, toward patterns that were associated with better outcomes in COVID-19 patients. Thus, GABA receptor activation had multiple beneficial effects in this mouse model that are also desirable for the treatment of COVID-19. The authors hope that their new findings will provide a springboard for testing the efficacy of GABA treatment in clinical trials with COVID-19 patients. Since GABA has an excellent safety record, is inexpensive and available worldwide, clinical trials of GABA treatment for COVID-19 can be initiated rapidly. The authors also suspect that the anti-inflammatory properties of GABA-receptor activating drugs may be useful for limiting inflammation in the central nervous system that is associated with long-COVID. Indeed, this approach was very successful in their previous studies of therapeutics for multiple sclerosis in mice, a disease which is caused by an inflammatory autoimmune response in the brain. The authors speculate that such drugs may reduce both the deleterious effects of coronavirus infection in the periphery and limit inflammation in the central nervous system. Unfortunately, there has been no pharmaceutical interest in pursuing GABA therapy for COVID-19, presumably because it is not patentable and is widely available as a dietary supplement. The authors hope for federal funding to continue this line of study. The researchers emphasize that unless clinical trials are conducted and GABA is approved for treating COVID-19 by relevant governing bodies, it should not be consumed for the treatment of COVID-19 since it could pose health risks, such as dampening beneficial immune or physiological responses. Article: A GABA-receptor agonist reduces pneumonitis severity, viral load, and death rate in SARS-CoV-2-infected mice Front. Immunol. Sec. Viral Immunology. DOI: 10.3389/fimmu.2022.1007955 Additional authors include Jide Tian, Barbara Dillion, High Containment Program at UCLA, and Jill Henley and Lucio Comai, Keck School of Medicine at USC. Funding: This work was supported by a grant to DLK from the UCLA DGSOM-Broad Stem Cell Research Center, the Department of Molecular and Medical Pharmacology, and the Immunotherapeutics Research Fund. Work at USC was supported by a grant from the COVID-19 Keck Research Fund to LC. Conflicts of interest: DLK and JT are inventors of GABA-related patents. DLK serves on the Scientific Advisory Board of Diamyd Medical. BD, LC and JH have no financial conflicts of interest.
10.3389/fimmu.2022.1007955
2022
Frontiers in Immunology
A GABA-receptor agonist reduces pneumonitis severity, viral load, and death rate in SARS-CoV-2-infected mice
Gamma-aminobutyric acid (GABA) and GABA-receptors (GABA-Rs) form a major neurotransmitter system in the brain. GABA-Rs are also expressed by 1) cells of the innate and adaptive immune system and act to inhibit their inflammatory activities, and 2) lung epithelial cells and GABA-R agonists/potentiators have been observed to limit acute lung injuries. These biological properties suggest that GABA-R agonists may have potential for treating COVID-19. We previously reported that GABA-R agonist treatments protected mice from severe disease induced by infection with a lethal mouse coronavirus (MHV-1). Because MHV-1 targets different cellular receptors and is biologically distinct from SARS-CoV-2, we sought to test GABA therapy in K18-hACE2 mice which develop severe pneumonitis with high lethality following SARS-CoV-2 infection. We observed that GABA treatment initiated immediately after SARS-CoV-2 infection, or 2 days later near the peak of lung viral load, reduced pneumonitis severity and death rates in K18-hACE2 mice. GABA-treated mice had reduced lung viral loads and displayed shifts in their serum cytokine/chemokine levels that are associated with better outcomes in COVID-19 patients. Thus, GABA-R activation had multiple effects that are also desirable for the treatment of COVID-19. The protective effects of GABA against two very different beta coronaviruses (SARS-CoV-2 and MHV-1) suggest that it may provide a generalizable off-the-shelf therapy to help treat diseases induced by new SARS-CoV-2 variants and novel coronaviruses that evade immune responses and antiviral medications. GABA is inexpensive, safe for human use, and stable at room temperature, making it an attractive candidate for testing in clinical trials. We also discuss the potential of GABA-R agonists for limiting COVID-19-associated neuroinflammation.
514157
Rare congenital heart defect rescued by protease inhibition
Greenwood, SC (October 15, 2020) - A research team at the Greenwood Genetic Center (GGC) has successfully used small molecules to restore normal heart and valve development in an animal model for Mucolipidosis II (ML II), a rare genetic disorder. Progressive heart disease is commonly associated with ML II. The study is reported in this month's JCI Insight. The small molecules included the cathepsin K protease inhibitor odanacatib and an inhibitor of TGFβ growth factor signaling. Cathepsin proteases have been associated with later-onset heart disease including atherosclerosis, cardiac hypertrophy, and valvular stenosis, but their role in congenital heart defects has been unclear. The current study offers new insight into how mislocalizing proteases like cathepsin K alter embryonic heart development in a zebrafish model of ML II. "Mutations in GNPTAB, the gene responsible for ML II, alter the localization and increase the activity of cathepsin proteases. This disturbs growth factor signaling and disrupts heart and valve development in our GNPTAB-deficient zebrafish embryos," said Heather Flanagan-Steet, PhD, Director of the Hazel and Bill Allin Aquaculture Facility and Director of Functional Studies at GGC. "By inhibiting this process, normal cardiac development was restored. This finding highlights the potential of small molecules and validates the need for further studies into their efficacy." Flanagan-Steet noted that she hopes the current work with ML II zebrafish will provide the basis to move one step closer to a treatment.
10.1172/jci.insight.133019
2020
JCI Insight
Inappropriate cathepsin K secretion promotes its enzymatic activation driving heart and valve malformation
Although congenital heart defects (CHDs) represent the most common birth defect, a comprehensive understanding of disease etiology remains unknown. This is further complicated since CHDs can occur in isolation or as a feature of another disorder. Analyzing disorders with associated CHDs provides a powerful platform to identify primary pathogenic mechanisms driving disease. Aberrant localization and expression of cathepsin proteases can perpetuate later-stage heart diseases, but their contribution toward CHDs is unclear. To investigate the contribution of cathepsins during cardiovascular development and congenital disease, we analyzed the pathogenesis of cardiac defects in zebrafish models of the lysosomal storage disorder mucolipidosis II (MLII). MLII is caused by mutations in the GlcNAc-1-phosphotransferase enzyme (Gnptab) that disrupt carbohydrate-dependent sorting of lysosomal enzymes. Without Gnptab, lysosomal hydrolases, including cathepsin proteases, are inappropriately secreted. Analyses of heart development in gnptab-deficient zebrafish show cathepsin K secretion increases its activity, disrupts TGF-β-related signaling, and alters myocardial and valvular formation. Importantly, cathepsin K inhibition restored normal heart and valve development in MLII embryos. Collectively, these data identify mislocalized cathepsin K as an initiator of cardiac disease in this lysosomal disorder and establish cathepsin inhibition as a viable therapeutic strategy.
644655
Half of vision impairment in first world is preventable
Around half of vision impairment in Western Europe is preventable, according to a new study published in the British Journal of Ophthalmology. The study was carried out by the Vision Loss Expert Group, led by Professor Rupert Bourne of Anglia Ruskin University, and shows the prevalence and causes of vision loss in high-income countries worldwide as well as other European nations in 2015, based on a systematic review of medical literature over the previous 25 years. A comparison of countries in the study shows that, based on the available data, the UK has the fifth lowest prevalence of blindness in the over 50s out of the 50 countries surveyed, with 0.52% of men and women in that age group affected. Belgium had the lowest prevalence at 0.46%. However, in terms of the percentage of population with moderate to severe vision impairment (MSVI), the UK ranked in the bottom half of the table with 6.1%, a higher prevalence than non-EU countries such as Andorra, Serbia and Switzerland. Cataract was found to be the most common cause of blindness in Western Europe in 2015 (21.9%), followed by age-related macular degeneration (16.3%) and glaucoma (13.5%), but the main cause of MSVI was uncorrected refractive error - a condition that can be treated simply by wearing glasses. This condition made up 49.6% of all MSVI in Western Europe. Cataract was the next main cause in this region, with 15.5%, followed by age-related macular degeneration. The research also predicts that the surveyed countries' contribution to the world's vision-impaired population will lessen slightly by 2020, although the number of people in these nations with impaired sight will rise to 69 million due to overall population growth. Professor Bourne, Professor of Ophthalmology at Anglia Ruskin University's Vision and Eye Research Unit, said: "Vision impairment is of great importance for quality of life and for the socioeconomics and public health of societies and countries. "Overcoming barriers to services which would address uncorrected refractive error could reduce the burden of vision impairment in high-income countries by around half. This is an important public health issue even in the wealthiest of countries and more research is required into better treatments, better implementation of the tools we already have, and ongoing surveillance of the problem. "This work has exposed gaps in the global data, given that many countries have not formally surveyed their populations for eye disease. That is the case for the UK and a more robust understanding of people's needs would help bring solutions." The work by the study team contributes to the wider Global Burden of Disease (GBD) Study, a comprehensive regional and global research program of disease burden that assesses mortality and disability from major diseases, injuries, and risk factors.
10.1136/bjophthalmol-2017-311258
2018
British Journal of Ophthalmology
Prevalence and causes of vision loss in high-income countries and in Eastern and Central Europe in 2015: magnitude, temporal trends and projections
Background Within a surveillance of the prevalence and causes of vision impairment in high-income regions and Central/Eastern Europe, we update figures through 2015 and forecast expected values in 2020. Methods Based on a systematic review of medical literature, prevalence of blindness, moderate and severe vision impairment (MSVI), mild vision impairment and presbyopia was estimated for 1990, 2010, 2015, and 2020. Results Age-standardised prevalence of blindness and MSVI for all ages decreased from 1990 to 2015 from 0.26% (0.10–0.46) to 0.15% (0.06–0.26) and from 1.74% (0.76–2.94) to 1.27% (0.55–2.17), respectively. In 2015, the number of individuals affected by blindness, MSVI and mild vision impairment ranged from 70 000, 630 000 and 610 000, respectively, in Australasia to 980 000, 7.46 million and 7.25 million, respectively, in North America and 1.16 million, 9.61 million and 9.47 million, respectively, in Western Europe. In 2015, cataract was the most common cause for blindness, followed by age-related macular degeneration (AMD), glaucoma, uncorrected refractive error, diabetic retinopathy and cornea-related disorders, with declining burden from cataract and AMD over time. Uncorrected refractive error was the leading cause of MSVI. Conclusions While continuing to advance control of cataract and AMD as the leading causes of blindness remains a high priority, overcoming barriers to uptake of refractive error services would address approximately half of the MSVI burden. New data on burden of presbyopia identify this entity as an important public health problem in this population. Additional research on better treatments, better implementation with existing tools and ongoing surveillance of the problem is needed.
562952
Neurons: 'String of lights' indicates excitation propagation
A type of novel molecular voltage sensor makes it possible to watch nerve cells at work. The principle of the method has been known for some time. However, researchers at the University of Bonn and the University of California in Los Angeles have now succeeded in significantly improving it. It allows the propagation of electrical signals in living nerve cells to be observed with high temporal and spatial resolution. This enables investigations into completely new questions that were previously closed to research. The study has now been published in the journal PNAS. When we smell a bottle of suntan lotion, electrical pulses are generated in the sensory cells of the nose. Via the olfactory bulb in the brain, they enter the primary olfactory cortex, which then distributes them to various brain centers. Memories such as summer vacations by the sea long ago are then conjured up in the hippocampus and other regions. In recent decades, brain researchers have gained an increasingly precise idea of how stimuli are processed in the brain and which path the electrical excitation takes in the process. However, in many aspects these insights are still very approximate. The method now presented by researchers at the University of Bonn and the University of California in Los Angeles may help solve this problem. Nerve cells transmit electrical signals to other nerve cells via biological "cables" known as axons. Each nerve cell is encased in a thin membrane that separates it from its environment. In the resting state, there are many positively charged ions on the outside of this membrane, significantly more than on the inside. There is therefore an electrical voltage between the inside and the outside. Neuroscientists also speak of a membrane potential. Light chain for nerve cells When a signal passes a certain point on the axon, this potential changes there for a short time. "And we can make this change visible," explains Prof. Dr. Istvan Mody of the Institute for Experimental Epileptology and Cognition Research (IEECR) at the University of Bonn Medical Center. To do this, the researchers drape a chain of lights around the nerve cells, so to speak. The special thing about it: Each lamp of this chain carries a voltage-dependent dimmer. This means that it gets darker when the membrane potential at the location of the lamp changes. This makes excitation propagation visible as a kind of "dark drop" running along the axon. The researchers use fluorescent proteins as a light chain. "We introduced the gene for this into the cells," Mody explains. The researchers also tagged the genetic makeup with a kind of shipping label. "This label ensures that the fluorescent dyes are transported to the outside of the membrane immediately after they are produced. A kind of anchor then ensures they stay put." The dimmer is not part of the nano lamp, but another molecule: a so-called "dark quencher". This is normally located on the inside of the membrane. However, due to the voltage change during signal forwarding, it changes to the outside. There it meets the fluorescent proteins and shields them. The nano lamp becomes darker as a result. As soon as the potential normalizes, the dark quencher moves back to the inside, and the luminosity increases again. "This method is not really anything new," Mody says. "However, we have fundamentally improved it in two respects." Until now, the fluorescent proteins were integrated directly into the membrane, which significantly disrupted the function of the neurons. 
The new nano lamps, in contrast, sit outside the membrane. They also do not fade as quickly, but retain their luminosity for 40 minutes, four times as long as conventional fluorescent dyes. Highly explosive dimmer The second change concerns the dark quencher: The compound normally used for this purpose is toxic and also highly combustible. It was even used as an explosive during the Second World War. "Our quencher, on the other hand, is completely harmless," Mody emphasizes. "It also reacts even faster and more sensitively to the smallest changes in potential. This allows our method to visualize up to 100 electrical pulses per second." The method permits the function of nerve cells to be observed without disturbing them. This makes it possible, for instance, to gain a more precise insight into the associated malfunctions in certain neuronal diseases. It is ultimately a promising new tool to better understand the workings of the brain.
10.1073/pnas.2020235118
2021
Proceedings of the National Academy of Sciences
A dark quencher genetically encodable voltage indicator (dqGEVI) exhibits high fidelity and speed
Significance Voltage sensing with genetically expressed optical probes is highly desirable for large-scale recordings of neuronal activity and detection of localized voltage signals in single neurons. Here we describe a method for two-component (hybrid) genetically encodable fluorescent voltage sensing in neurons. The approach uses a glycosylphosphatidylinositol-tagged fluorescent protein (enhanced green fluorescent protein) that ensures the fluorescence to be specifically confined to the outside of the plasma membrane and D3, a voltage-dependent quencher. Previous hybrid genetically encoded voltage sensing approaches relied on a single quenching molecule, dipicrylamine (DPA), which is toxic, increases membrane capacitance, interferes with neurotransmitters, and is explosive. Our method uses a nontoxic and nonexplosive compound that performs better than DPA in all aspects of fluorescent voltage sensing.
974327
Producing ‘green’ energy — literally — from living plant ‘bio-solar cells’
Though plants can serve as a source of food, oxygen and décor, they’re not often considered to be a good source of electricity. But by collecting electrons naturally transported within plant cells, scientists can generate electricity as part of a “green,” biological solar cell. Now, researchers reporting in ACS Applied Materials & Interfaces have, for the first time, used a succulent plant to create a living “bio-solar cell” that runs on photosynthesis. In all living cells, from bacteria and fungi to plants and animals, electrons are shuttled around as part of natural, biochemical processes. But if electrodes are present, the cells can actually generate electricity that can be used externally. Previous researchers have created fuel cells in this way with bacteria, but the microbes had to be constantly fed. Instead, scientists, including Noam Adir’s team, have turned to photosynthesis to generate current. During this process, light drives a flow of electrons from water that ultimately results in the generation of oxygen and sugar. This means that living photosynthetic cells are constantly producing a flow of electrons that can be pulled away as a “photocurrent” and used to power an external circuit, just like a solar cell. Certain plants — like the succulents found in arid environments — have thick cuticles to keep water and nutrients within their leaves. Yaniv Shlosberg, Gadi Schuster and Adir wanted to test, for the first time, whether photosynthesis in succulents could create power for living solar cells using their internal water and nutrients as the electrolyte solution of an electrochemical cell. The researchers created a living solar cell using the succulent Corpuscularia lehmannii, also called the “ice plant.” They inserted an iron anode and platinum cathode into one of the plant’s leaves and found that its voltage was 0.28 V. When connected into a circuit and exposed to light, it produced a photocurrent density of up to 20 µA/cm2 and could continue producing current for over a day. Though these numbers are less than those of a traditional alkaline battery, they are representative of just a single leaf. Previous studies on similar organic devices suggest that connecting multiple leaves in series could increase the voltage. The team specifically designed the living solar cell so that protons within the internal leaf solution could be reduced to form hydrogen gas at the cathode, and this hydrogen could be collected and used in other applications. The researchers say that their method could enable the development of future sustainable, multifunctional green energy technologies. The authors acknowledge funding from a “Nevet” grant from the Grand Technion Energy Program (GTEP) and a Technion VPR Berman Grant for Energy Research and support from the Technion’s Hydrogen Technologies Research Laboratory (HTRL).
10.1021/acsami.2c15123
2022
ACS Applied Materials & Interfaces
Self-Enclosed Bio-Photoelectrochemical Cell in Succulent Plants
Harvesting an electrical current from biological photosynthetic systems (live cells or isolated complexes) is typically achieved by immersion of the system into an electrolyte solution. In this study, we show that the aqueous solution found in the tissues of succulent plants can be used directly as a natural bio-photo electrochemical cell. Here, the thick water-preserving outer cuticle of the succulent Corpuscularia lehmannii serves as the electrochemical container, the inner water content as the electrolyte into which an iron anode and platinum cathode are introduced. We produce up to 20 μA/cm2 bias-free photocurrent. When 0.5 V bias is added to the iron anode, the current density increases ∼10-fold, and evolved hydrogen gas can be collected with a Faradaic efficiency of 2.1 and 3.5% in dark or light, respectively. The addition of the photosystem II inhibitor 3-(3,4-dichlorophenyl)-1,1-dimethylurea inhibits the photocurrent, indicating that water oxidation is the primary source of electrons in the light. Two-dimensional fluorescence measurements show that NADH and NADPH serve as the major mediating electron transfer molecules, functionally connecting photosynthesis to metal electrodes. This work presents a method to simultaneously absorb CO2 while producing an electrical current with minimal engineering requirements.
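As a rough, editorial back-of-the-envelope estimate (not a figure reported by the authors), multiplying the quoted cell voltage by the bias-free photocurrent density gives an upper bound on the areal power of a single leaf; the true output at the maximum power point would be lower, since the maximum voltage and maximum current are not delivered simultaneously:

$$P \lesssim V \times J = 0.28\ \mathrm{V} \times 20\ \mu\mathrm{A/cm^2} \approx 5.6\ \mu\mathrm{W/cm^2}.$$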
477513
Potential new therapy takes aim at a lethal esophageal cancer's glutamine addiction
Researchers at the Medical University of South Carolina (MUSC) have found a way to target drug-resistant esophageal cancer cells by exploiting the different energy needs of cancerous versus healthy cells. This breakthrough is now opening the doorway to new treatments for an otherwise lethal cancer. The findings of the National Institutes of Health (NIH)-funded study are reported in Nature Communications. Only about 20 percent of patients diagnosed with esophageal squamous cell carcinoma (ESCC) are still alive five years later, according to the American Cancer Society. Unfortunately, this disease is usually found at a late or advanced stage, meaning that, for many patients with ESCC, the cancer has already spread to other parts of their bodies. The severity of the disease is compounded by its high rate of recurrence. "[It's] an aggressive, lethal cancer," says Shuo Qie, M.D., Ph.D., a postdoctoral fellow at MUSC Hollings Cancer Center and first author on the article. "[S]urgery is the only and the best choice. But some patients, especially patients with metastasis, need chemotherapy or other additional treatments." For the study, Qie aimed to further characterize and ideally address the cancer-driving pathway previously discovered by J. Alan Diehl, Ph.D., his mentor and the senior author on the article. Diehl is the SmartState Endowed Chair in Lipidomics and Pathobiology and Associate Director of Basic Science at MUSC Hollings Cancer Center. This pathway, the Cyclin D1 axis, is an intersection at which several cancer-promoting changes occur. The protein Fbxo4, which usually prevents cancer by controlling cyclin D1 degradation, no longer exerts its protective effects. This allows cells to spiral out of control. Qie discovered that the axis activates a metabolic switch that causes ESCC cells to depend much more on glutamine than glucose. Healthy cells break down both glucose and glutamine for their energy needs, but ESCC cells are virtually addicted to glutamine. "The cancer cells have to have glutamine. You can bathe them in glucose and they're still going to die without glutamine," explains Diehl. These findings point to a vulnerability in these cancer cells and suggest a new therapeutic possibility--the use of glutaminase inhibitors. Glutaminase is an enzyme required for the cellular digestion of glutamine. Inhibiting it effectively blocks the cell's ability to process glutamine. The MUSC researchers tested the efficacy of a combination regimen that included a glutaminase inhibitor (Telaglenastat; Calithera, San Francisco, CA) and metformin in cancer cell lines and mice. They found that the combination regimen effectively treated tumors with the molecular signature that Diehl had previously described. Importantly, the treatment was effective even against tumors that had developed resistance to CDK4/6 inhibitors. Indeed, the resistant cancer cells were even more vulnerable to this treatment than non-resistant ones. "It's quite remarkable that the tumor cells that we have that are resistant to CDK4/6 inhibitors are actually five-, six-fold more sensitive to this combination therapy than they were before they developed resistance," says Diehl. The promising findings for this combination regimen in both cellular and animal models suggest that it could have therapeutic potential for patients diagnosed with this traditionally dangerous and difficult cancer.
Having moved this treatment from concept to reality in the laboratory, Qie and Diehl hope to move forward with clinical trials for their combination treatment and are currently seeking funding to do so. The MUSC researchers' curiosity about a biological pathway has led to a potential new therapeutic approach for patients with ESCC. "You'll hear the term 'an Achilles heel,'" explains Diehl. "Can you find the Achilles heel that's in the cancer but not in the normal cell? And that's what Qie has done. Just from trying to understand the biology of the pathway, he and I have identified a unique therapeutic opportunity." The content of the article summarized by this release is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
10.1038/s41467-019-09179-w
2019
Nature Communications
Targeting glutamine-addiction and overcoming CDK4/6 inhibitor resistance in human esophageal squamous cell carcinoma
Abstract The dysregulation of Fbxo4-cyclin D1 axis occurs at high frequency in esophageal squamous cell carcinoma (ESCC), where it promotes ESCC development and progression. However, defining a therapeutic vulnerability that results from this dysregulation has remained elusive. Here we demonstrate that Rb and mTORC1 contribute to Gln-addiction upon the dysregulation of the Fbxo4-cyclin D1 axis, which leads to the reprogramming of cellular metabolism. This reprogramming is characterized by reduced energy production and increased sensitivity of ESCC cells to combined treatment with CB-839 (glutaminase 1 inhibitor) plus metformin/phenformin. Of additional importance, this combined treatment has potent efficacy in ESCC cells with acquired resistance to CDK4/6 inhibitors in vitro and in xenograft tumors. Our findings reveal a molecular basis for cancer therapy through targeting glutaminolysis and mitochondrial respiration in ESCC with dysregulated Fbxo4-cyclin D1 axis as well as cancers resistant to CDK4/6 inhibitors.
578270
Low fitness may indicate poor arterial health in adolescents
A recent Finnish study conducted at the University of Jyväskylä showed that adolescents with better aerobic fitness have more compliant arteries than their less fit peers do. The study also suggests that a higher anaerobic threshold is linked to better arterial health. The results were published in the European Journal of Applied Physiology. Arterial stiffness is one of the first signs of cardiovascular disease, and adults with increased arterial stiffness are at higher risk of developing clinical cardiovascular disease. However, arterial stiffening may have its origin already in childhood and adolescence. "In our study we showed for the first time that the anaerobic threshold is also related to arterial stiffness," says Dr Eero Haapala, PhD, from the University of Jyväskylä. Anaerobic threshold describes the exercise intensity that can be sustained for long periods of time without excess accumulation of lactic acid. The study showed that adolescents with a higher anaerobic threshold also had lower arterial stiffness than other adolescents did. "The strength of determining anaerobic threshold is that it does not require maximal effort," Haapala explains. "The results of our study can be used to screen for increased arterial stiffness in adolescents who cannot perform maximal exercise tests." Fitness and arterial health can be improved The results showed that both peak oxygen uptake and anaerobic threshold were related to arterial stiffness in adolescents between the ages of 16 and 19 years. Genetics may explain part of the observed associations, but moderate and especially vigorous physical activity improve fitness and arterial health already in adolescence. "Because the development of cardiovascular disease is a long process, sufficiently intense physical activity starting in childhood may be the first line in prevention of early arterial aging." The study investigated the associations of directly measured peak oxygen uptake and anaerobic threshold with arterial stiffness among 55 Finnish adolescents between the ages of 16 and 19 years. Peak oxygen uptake and anaerobic threshold were assessed using a maximal exercise test on a cycle ergometer. Arterial stiffness was measured using pulse wave analysis based on non-invasive oscillometric tonometry. Various confounding factors, including body fat percentage and systolic blood pressure, were controlled for in the analyses.
10.1007/s00421-018-3963-3
2018
European Journal of Applied Physiology
Peak oxygen uptake, ventilatory threshold, and arterial stiffness in adolescents
To investigate the associations of peak oxygen uptake (VO2peak) and VO2 at ventilatory threshold (VO2 at VT) with arterial stiffness in adolescents. The participants were 55 adolescents (36 girls, 19 boys) aged 16-19 years. Aortic pulse wave velocity (PWVao) and augmentation index (AIx%) were measured by non-invasive oscillometric device from right brachial artery level. VO2peak was directly measured during a maximal ramp test on a cycle ergometer. VO2 at VT was determined using the equivalents for ventilation (VE/VO2 and VE/VCO2). VO2peak and VO2 at VT were normalised for body mass (BM) and lean mass (LM). Data were analysed using linear regression analyses and analysis of covariance adjusted for age and sex. VO2peak normalised for BM (β = -0.445, 95% CI -0.783 to -0.107) and VO2peak normalised for LM (β = -0.386, 95% CI -0.667 to -0.106) were inversely associated with PWVao. A higher VO2 at VT normalised for BM (β = -0.366, 95% CI -0.646 to -0.087) and LM (β = -0.321, 95% CI -0.578 to -0.064) was associated with lower PWVao. Adolescents in the lowest third of VO2peak by LM (6.6 vs. 6.1 m/s, Cohen's d = 0.33) and VO2 at VT by LM (6.6 vs. 6.0 m/s, Cohen's d = 0.33) had a higher PWVao than those in the highest third of VO2peak or VO2 at VT by LM. Higher VO2peak and VO2 at VT by BM and LM were related to lower arterial stiffness in adolescents. Normalising VO2peak and VO2 at VT for LM would provide the most appropriate measure of cardiorespiratory fitness in relation to arterial stiffness.
923928
A Highly-Accurate and Broadband Terahertz Counter Eyes "Beyond 5G / 6G"
[Abstract] The National Institute of Information and Communications Technology (NICT, President: TOKUDA Hideyuki, Ph.D.) has developed a broadband and high-precision terahertz (THz) frequency counter based on a semiconductor-superlattice harmonic mixer. It showed a measurement uncertainty of less than 1 × 10⁻¹⁶ from 0.12 THz to 2.8 THz. This compact and easy-to-handle THz counter, which operates at room temperature, is suited to the various THz applications supported by the next-generation ICT infrastructure, “Beyond 5G / 6G”. This achievement was published in Metrologia as an open-access paper on July 19, 2021. [Achievements] A highly accurate THz frequency counter with a wide measurable band will become a key metrological instrument for allocating THz spectrum among a huge range of users on the next-generation information and communications infrastructure, or Beyond 5G / 6G. It is also indispensable for high-resolution spectroscopy of ultracold molecules. NICT developed the counter by generating a THz frequency comb from a semiconductor-superlattice harmonic mixer, which is simple, easy to use and operational at room temperature. Consequently, the counter is more compact and easier to handle than previously reported ones, which require an ultrashort pulse laser or a bulky cryogenic refrigerator. To evaluate the precision of the system over a four-octave range from 0.12 THz to 2.8 THz, NICT set up test benches carefully designed to push down the measurement limit. This revealed a measurement uncertainty of less than 1 × 10⁻¹⁶, which corresponds, for instance, to determining the frequency of a 1 THz signal with an accuracy of 100 μHz. We believe that the THz counter developed has the world’s top performance in terms of measurable range and precision. [Future Prospects] NICT endeavors to lead R&D on Beyond 5G / 6G as well as cutting-edge THz technology. A high-precision and broadband THz frequency counter that is compact and easy to operate could become a vital metrological tool to accelerate the exploitation of the THz frequency domain. The THz counter developed here will be used for the projected extension of NICT’s calibration service, which currently supports only microwave atomic clocks. Using the counter, we will also pursue the verification of fundamental physics by means of an ultra-accurate THz frequency standard based on ultracold molecules, or a THz molecular clock.
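The 100 μHz figure follows directly from the fractional uncertainty quoted above:

$$\Delta f \le 1 \times 10^{-16} \times 1\ \mathrm{THz} = 10^{-16} \times 10^{12}\ \mathrm{Hz} = 10^{-4}\ \mathrm{Hz} = 100\ \mu\mathrm{Hz}.$$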
10.1088/1681-7575/ac0712
2021
Metrologia
Terahertz frequency counter based on a semiconductor-superlattice harmonic mixer with four-octave measurable bandwidth and 16-digit precision
Abstract We have developed a broadband and high-precision terahertz (THz) frequency counter based on a semiconductor-superlattice harmonic mixer (SLHM). Comparison of two THz frequencies determined using two independent counters and direct measurement of frequency-stabilized THz-quantum cascade lasers by a single counter showed a measurement uncertainty of less than 1 × 10⁻¹⁶ over a four-octave range from 120 GHz to 2.8 THz. Further extension of this measurable range was indicated by the research regarding the higher-harmonics generation of a local oscillator for the SLHM. This compact and easy-to-handle THz counter operating at room temperature is available for high-resolution spectroscopy of ultracold molecules proposed for detecting temporal changes in physics constants as well as many THz applications requiring a wide measurement range without a bulky cryogenic apparatus.
665216
Evidence-based patient-psychotherapist matching improves mental health care
In first-of-its kind research led by a University of Massachusetts Amherst psychotherapy researcher, mental health care patients matched with therapists who had a strong track record of treating the patients' primary concerns had better results than patients who were not so matched. In addition, this "match effect" was even more beneficial and pronounced for patients with more severe problems and for those who identified as racial or ethnic minorities. The findings are published in JAMA Psychiatry and the Journal of Consulting and Clinical Psychology. "One of the things we've been learning in our field is that who the therapist is matters," says lead author Michael Constantino, professor of clinical psychology and director of the Psychotherapy Research Lab, who seeks to understand the variability of outcomes among patients receiving mental health treatment. "We've become very interested in this so-called therapist effect. Earlier on, there was a heavier emphasis on what the treatment was as opposed to who was delivering it." Constantino and colleagues have discovered, for example, that psychotherapists possess relative strengths and weaknesses in treating different types of mental health problems. Such performance "report cards" hold promise, then, for personalizing treatment toward what therapists do well. The researchers conducted a randomized clinical trial involving 48 therapists and 218 outpatients at six community clinics in a health care system in Cleveland, Ohio. They used a matching system based on how well a therapist has historically treated patients with the same concerns. The matching relied on a multidimensional outcomes tool called the Treatment Outcome Package (TOP), which assesses 12 symptomatic or functional domains: depression, quality of life, mania, panic or somatic anxiety, psychosis, substance misuse, social conflict, sexual functioning, sleep, suicidality, violence and work functioning. The matched group was compared to a group of patients who were case-assigned as usual, such as by therapist availability or convenience of office location. "By collecting TOP data from enough patients treated by a given therapist, this outcomes tool can establish the domains in which that therapist is stably effective (historically, on average, their patients' symptoms reliably improved), neutral (historically, on average, their patients' symptoms neither reliably improved nor deteriorated), or ineffective (historically, on average, their patients' symptoms reliably deteriorated)," the paper states. To qualify for matching, the therapists had to have completed a minimum of 15 cases with patients who had completed the TOP before and after treatment. For the trial, neither the patients nor the therapists knew if they had been matched or were case-assigned as usual. "We think there would be an even stronger positive impact if the patients knew they were empirically well-matched versus assigned by chance," Constantino says. "Such knowledge might cultivate more positive expectations, which are generally associated with better therapy outcomes." Post-therapy reports by patients showed that those in the matched group experienced significantly greater reductions in general impairment compared with those who were randomly assigned a therapist. "We showed that with this matching system you can get a big bump in improvement rates," Constantino says. 
The finding that the improvement in the matched group was even greater among people who identified as racial or ethnic minorities may provide a way to address and improve mental health care access and quality in traditionally underserved populations, Constantino says. The JAMA Psychiatry paper concludes, "Notably, the good fit in this study came not from changing what the therapists did in their treatment, but rather who they treated. Capitalizing on whatever it is that a therapist historically does well when treating patients with certain mental health problems, the current data indicate that our match system can improve the effectiveness of that care, even with neither therapist nor patient being aware of their match status."
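A minimal illustrative sketch of the kind of outcome-based matching described above; the scoring rule, domain labels, and therapist names are hypothetical placeholders, not the study's actual algorithm or data:

```python
# Illustrative sketch only: route each patient to a therapist with the best
# historical "report card" on the patient's most elevated TOP domains.
# Effectiveness labels and the scoring rule are hypothetical placeholders.
from typing import Dict, List

SCORE = {"effective": 1, "neutral": 0, "ineffective": -1}

def match_therapist(patient_elevated_domains: List[str],
                    therapist_report_cards: Dict[str, Dict[str, str]]) -> str:
    """Return the therapist whose historical labels best cover the
    patient's elevated domains (ties broken by dictionary order)."""
    def score(card: Dict[str, str]) -> int:
        return sum(SCORE.get(card.get(domain, "neutral"), 0)
                   for domain in patient_elevated_domains)
    return max(therapist_report_cards,
               key=lambda name: score(therapist_report_cards[name]))

# Example with two hypothetical therapists and a patient elevated on
# depression and sleep (two of the 12 TOP domains).
cards = {
    "Therapist A": {"depression": "effective", "sleep": "neutral"},
    "Therapist B": {"depression": "ineffective", "sleep": "effective"},
}
print(match_therapist(["depression", "sleep"], cards))  # -> "Therapist A"
```

In the trial itself the match was computed from historical TOP outcome data across all 12 domains; this toy version only conveys the idea of routing a patient toward a therapist with a strong track record on that patient's elevated domains.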
10.1037/ccp0000644
2021
Journal of Consulting and Clinical Psychology
For whom does a match matter most? Patient-level moderators of evidence-based patient–therapist matching.
A double-blind, randomized controlled trial tested the effectiveness of a personalized Match System in which patients are assigned to therapists with a "track record" of effectively treating a given patient's primary concern(s) (e.g., anxiety). Matched patients demonstrated significantly better outcomes than those assigned through usual pragmatic means. The present study examined patient-level moderators of this match effect. We hypothesized that the match benefits would be especially pronounced for patients who presented with (a) greater overall problem severity, and (b) greater problem complexity (i.e., number of elevated problem domains). We also explored if patient racial/ethnic minority status moderated the condition effect.Patients were 218 adults randomized to the Match or as-usual assignment condition, and then treated naturalistically by 48 therapists. The primary outcome was the Treatment Outcome Package (TOP), a multidimensional assessment tool that also primed the Match algorithm (based on historical, therapist-level effectiveness data), and assessed trial patients' symptoms/functioning and demographic information at baseline. Moderator effects were tested as patient-level interactions in three-level hierarchical linear models.The beneficial match effect was significantly more pronounced for patients with higher initial severity (-0.03, 95% CI -0.05, -0.01) and problem complexity (-0.01, 95% CI -0.02, -0.004), yet the high correlation between severity and complexity called into question the uniqueness of the complexity moderator effect. Moreover, the match effect was more pronounced for racial/ethnic minority patients (i.e., nonwhite; -0.05, 95% CI -0.09, -0.01).Measurement-based matching is especially effective for patients with certain characteristics, which further informs mental health treatment personalization. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
781599
Stressed lemurs have worse chances of survival
High hair cortisol concentrations--a sign of long-term stress--are associated with reduced survival in wild grey mouse lemurs (Microcebus murinus), according to a study published in the open access journal BMC Ecology. Researchers at the German Primate Centre and Georg-August University Göttingen, Germany, found that grey mouse lemurs with high levels of the stress hormone cortisol in their fur were less likely to survive both long-term and over the reproductive season. Dr Josué Rakotoniaina, the corresponding author, said: "Despite the wide use of stress hormone levels as an index of health and condition, this study is among the first to correlate an index of chronic stress with survival in a wild population of lemurs. This was only possible by combining hair cortisol levels with several years of life history data that was gathered from a long-term monitoring project of mouse lemurs." Lemurs with low hair cortisol levels had on average a 13.9% higher chance of surviving than those with high levels of hair cortisol. Lemurs with very good body condition--that is, optimal body mass and size--survived on average 13.7% better than lemurs with poor body condition, and females survived, on average, better than males. Variations in parasitism, such as the number of parasite infections, were not linked to survival. Dr Rakotoniaina added: "Our findings indicate that hair cortisol concentrations are a much better predictor of survival, and thus a better index of health, than other commonly used health indicators. Cortisol is taken up by hair as it grows so its concentration in a hair sample allows assessment of overall cortisol levels over time rather than--as single samples of blood, saliva or urine do--at one time point." To test their hypothesis that high hair cortisol concentration as a measure of long-term stress is related to individual survival, the researchers studied a population of grey mouse lemurs in Kirindy Forest, Madagascar from 2012 to 2014. They assessed the relationship between hair cortisol concentration and long-term survival in 171 lemurs, while the effect of body condition on long-term survival was assessed in a sub-sample of 149, and the link between all health indicators (hair cortisol level, body condition and parasitism) and survival during the mating season was assessed in a group of 48 lemurs. The researchers suggest that the benefits of having low stress levels may be even more pronounced prior to the mating season. Individuals that are more affected by challenging conditions may not be able to cope with the additional stress during the mating season, which is particularly challenging for male mouse lemurs. Although the exact mechanism by which cortisol is built into hair is not yet fully understood and the observational nature of the study does not allow conclusions about the causes of mortality, the findings suggest that hair cortisol concentration may be a valid indicator of health in wild lemur populations. Dr. Rakotoniaina said: "This important information could facilitate conservation decisions as it provides conservationists with an essential tool that could be used to detect issues emerging at the population level and ultimately predict wild populations' responses to environmental challenges."
10.1186/s12898-017-0140-1
2017
BMC Ecology
Hair cortisol concentrations correlate negatively with survival in a wild primate population
Glucocorticoid hormones are known to play a key role in mediating a cascade of physiological responses to social and ecological stressors and can therefore influence animals' behaviour and ultimately fitness. Yet, how glucocorticoid levels are associated with reproductive success or survival in a natural setting has received little empirical attention so far. Here, we examined links between survival and levels of glucocorticoid in a small, short-lived primate, the grey mouse lemur (Microcebus murinus), using for the first time an indicator of long-term stress load (hair cortisol concentration). Using a capture-mark-recapture modelling approach, we assessed the effect of stress on survival in a broad context (semi-annual rates), but also under a specific period of high energetic demands during the reproductive season. We further assessed the power of other commonly used health indicators (body condition and parasitism) in predicting survival outcomes relative to the effect of long-term stress. We found that high levels of hair cortisol were associated with reduced survival probabilities both at the semi-annual scale and over the reproductive season. Additionally, very good body condition (measured as scaled mass index) was related to increased survival at the semi-annual scale, but not during the breeding season. In contrast, variation in parasitism failed to predict survival. Altogether, our results indicate that long-term increased glucocorticoid levels can be related to survival and hence population dynamics, and suggest differential strength of selection acting on glucocorticoids, body condition, and parasite infection.
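The abstract above describes a capture-mark-recapture (CMR) modelling approach in which survival is related to hair cortisol concentration. As a rough illustration of how such a model can be parameterized — not the authors' actual analysis, which involved additional covariates and model-selection steps — here is a minimal Cormack–Jolly–Seber likelihood in Python with survival on a logit link of a standardized cortisol covariate; the toy data, link function, and constant recapture probability are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def inv_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def cjs_neg_loglik(params, histories, cortisol):
    """Negative log-likelihood of a minimal Cormack-Jolly-Seber model.

    Survival (phi) follows a logit-linear function of a standardized
    hair-cortisol covariate; recapture probability (p) is constant.
    `histories` is an (individuals x occasions) 0/1 capture-history array.
    """
    b0, b1, logit_p = params
    p = inv_logit(logit_p)
    n_ind, n_occ = histories.shape
    loglik = 0.0
    for i in range(n_ind):
        phi = inv_logit(b0 + b1 * cortisol[i])   # per-interval survival
        caught = np.flatnonzero(histories[i])
        if caught.size == 0:
            continue                             # never captured: no information
        first, last = caught[0], caught[-1]
        # Intervals between first and last capture: the animal survived each one
        # and was either recaptured or missed on the following occasion.
        for t in range(first, last):
            loglik += np.log(phi)
            loglik += np.log(p if histories[i, t + 1] else 1.0 - p)
        # chi = probability of never being seen again after the last capture
        chi = 1.0
        for _ in range(n_occ - 1 - last):
            chi = (1.0 - phi) + phi * (1.0 - p) * chi
        loglik += np.log(chi)
    return -loglik

# Hypothetical toy data: 3 animals, 4 semi-annual capture occasions.
histories = np.array([[1, 1, 0, 1],
                      [1, 0, 0, 0],
                      [0, 1, 1, 1]])
cortisol = np.array([-0.5, 1.2, 0.1])            # standardized covariate
fit = minimize(cjs_neg_loglik, x0=np.zeros(3),
               args=(histories, cortisol), method="Nelder-Mead")
print(fit.x)  # [b0, b1, logit_p]; a negative b1 means higher cortisol, lower survival
```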
804711
Plant-based diets high in carbs improve type 1 diabetes, according to new case studies
10.35248/2155-6156.20.11.847
2020
Journal of Diabetes & Metabolism
Plant-Based Diets for Type 1 Diabetes
Type 1 diabetes is a chronic autoimmune disease characterized by hyperglycemia resulting from the destruction of insulin-producing pancreatic beta-cells. The increasing incidence (at a worldwide rate of 3-5% a year) suggests that in addition to the genetic component, the risk may be influenced by environmental factors, including the diet. A plant-based diet has been shown to improve glycemic control in individuals with type 2 diabetes and to improve beta-cell function in overweight people, but has not been thoroughly tested in type 1 diabetes due to its high carbohydrate content. We present two case studies of individuals with type 1 diabetes who adopted a plant-based diet and experienced a significant increase in insulin sensitivity, reductions in insulin dose, and improvements in cardiovascular risk factors.
489496
'Biggest loser' study reveals how dieting affects long-term metabolism
While it's known that metabolism slows when people diet, new research indicates that metabolism remains suppressed even when people regain much of the weight they lost while dieting. The findings come from a study of contestants in "The Biggest Loser" television series. Despite substantial weight regain in the 6 years following participation, resting metabolic rate remained at the same low level that was measured at the end of the weight loss competition. The average rate was approximately 500 calories per day lower than expected based on individuals' body composition and age. "Long-term weight loss requires vigilant combat against persistent metabolic adaptation that acts to proportionally counter ongoing efforts to reduce body weight," wrote the authors of the Obesity study.
10.1002/oby.21538
2016
Obesity
Persistent metabolic adaptation 6 years after “The Biggest Loser” competition
Objective: To measure long-term changes in resting metabolic rate (RMR) and body composition in participants of "The Biggest Loser" competition. Methods: Body composition was measured by dual-energy X-ray absorptiometry, and RMR was determined by indirect calorimetry at baseline, at the end of the 30-week competition, and 6 years later. Metabolic adaptation was defined as the residual RMR after adjusting for changes in body composition and age. Results: Of the 16 "Biggest Loser" competitors originally investigated, 14 participated in this follow-up study. Weight loss at the end of the competition was (mean ± SD) 58.3 ± 24.9 kg (P < 0.0001), and RMR decreased by 610 ± 483 kcal/day (P = 0.0004). After 6 years, 41.0 ± 31.3 kg of the lost weight was regained (P = 0.0002), while RMR was 704 ± 427 kcal/day below baseline (P < 0.0001) and metabolic adaptation was −499 ± 207 kcal/day (P < 0.0001). Weight regain was not significantly correlated with metabolic adaptation at the competition's end (r = −0.1, P = 0.75), but those subjects maintaining greater weight loss at 6 years also experienced greater concurrent metabolic slowing (r = 0.59, P = 0.025). Conclusions: Metabolic adaptation persists over time and is likely a proportional, but incomplete, response to contemporaneous efforts to reduce body weight.
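The abstract defines metabolic adaptation as the residual resting metabolic rate (RMR) after adjusting for body composition and age. A minimal sketch of that calculation, assuming an ordinary linear regression of RMR on fat-free mass, fat mass, and age fitted to baseline measurements; the paper's exact regression specification is not reproduced here, and the variable names, coefficients, and data below are hypothetical.

```python
import numpy as np

def fit_baseline_rmr_model(rmr, ffm, fm, age):
    """Least-squares fit of RMR (kcal/day) on fat-free mass, fat mass (kg), and age (y)."""
    X = np.column_stack([np.ones_like(rmr), ffm, fm, age])
    coefs, *_ = np.linalg.lstsq(X, rmr, rcond=None)
    return coefs                                    # [intercept, b_ffm, b_fm, b_age]

def metabolic_adaptation(coefs, rmr_measured, ffm, fm, age):
    """Residual RMR: measured minus predicted from body composition and age."""
    predicted = coefs[0] + coefs[1] * ffm + coefs[2] * fm + coefs[3] * age
    return rmr_measured - predicted                 # negative => suppressed metabolism

# Hypothetical follow-up measurement for one participant:
baseline_coefs = np.array([500.0, 22.0, 3.0, -2.0])   # illustrative coefficients only
print(metabolic_adaptation(baseline_coefs, rmr_measured=1900.0,
                           ffm=75.0, fm=45.0, age=46.0))   # ~ -293 kcal/day
```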
875016
MIPT biophysicists found a way to take a peek at how membrane receptors work
In a study published in Current Opinion in Structural Biology, MIPT biophysicists explained ways to visualize membrane receptors in their different states. Detailed information on the structure and dynamics of these proteins will enable the development of effective and safe drugs for many kinds of conditions. Every second, living cells receive myriads of signals from their environment, usually transmitted through dedicated signaling molecules such as hormones. Most of these molecules cannot penetrate the cell membrane, so such signals are, for the most part, detected at the membrane itself. For that purpose, the cell membrane is equipped with cell-surface receptors that receive outside signals and "interpret" them into a language the cell can understand. Cell-surface receptors are vital for the proper functioning of the cell and of the organism as a whole. Should the receptors stop working as intended, communication between cells becomes disrupted and the organism develops a medical condition. GPCRs (G protein-coupled receptors) are a large family of membrane receptors that share a common structure: seven protein helices that cross the membrane and couple the receptor to a G protein on the inside of the cell. Interaction of the signaling molecule with the receptor triggers a change in the receptor's 3D structure, or conformation, which activates the G protein. The activated G protein, in turn, triggers a signaling cascade inside the cell, which results in a response to the signal. GPCR-family membrane proteins have been linked to many neurodegenerative and cardiovascular diseases and certain types of cancer. GPCRs have also been shown to contribute to conditions such as obesity, diabetes, and mental disorders (fig. 2). As a result, GPCRs have become a popular drug target, and a large number of drugs currently on the market act on this family of receptors. One modern approach to drug development involves analyzing the 3D structures of GPCR molecules. But membrane receptor analysis is a slow and extremely laborious process, and even when successful it does not completely reveal the molecule's behavior inside the cell. "Currently, scientists have two options when it comes to studying proteins. They can either 'freeze' a protein and obtain its precise static snapshot, or study its dynamics at the cost of losing details. The former approach uses methods such as crystallography and cryogenic electron microscopy; the latter uses spectroscopic techniques," comments Anastasia Gusach, a research fellow at the MIPT Laboratory of Structural Biology of G-protein Coupled Receptors. The authors of the study demonstrated how combining the structural and spectroscopic approaches yields "the best of both worlds": precise information on how GPCRs function (fig. 3). For instance, the double electron-electron resonance (DEER) and Förster resonance energy transfer (FRET) techniques act as "atomic rulers", providing precise measurements of the distances between individual atoms or groups of atoms within the protein. Nuclear magnetic resonance makes it possible to visualize the overall shape of the receptor molecule, while modified mass spectrometry methods (MRF-MS, HDX-MS) help trace how exposed particular groups of atoms in the protein are to the solvent, indicating which parts of the molecule face outwards.
"Studying GPCR dynamics relies on cutting-edge methods of experimental biophysical analysis such as nuclear magnetic resonance (NMR) spectroscopy, electron paramagnetic resonance (EPR) spectroscopy, and advanced fluorescence microscopy techniques, including single-molecule microscopy," says Alexey Mishin, deputy head of the MIPT Laboratory for Structural Biology of G-protein Coupled Receptors. "Biophysicists who use different methods to study GPCRs have been organizing collaborations that have already borne fruit. We hope that this review will help scientists specializing in different methods find new common ground and work together towards a better understanding of how receptors function," adds Anastasia Gusach. Precise information on how membrane receptors function and transition between states will greatly expand the capabilities of structure-based drug design. ### The study was carried out with support from the Russian Foundation for Basic Research and the Russian Science Foundation.
10.1016/j.sbi.2020.03.004
2020
Current Opinion in Structural Biology
Beyond structure: emerging approaches to study GPCR dynamics
G protein-coupled receptors (GPCRs) constitute the largest superfamily of membrane proteins that are involved in regulation of sensory and physiological processes and implicated in many diseases. The last decade revolutionized the GPCR field by unraveling multiple high-resolution structures of many different receptors in complexes with various ligands and signaling partners. A complete understanding of the complex nature of GPCR function is, however, impossible to attain without combining static structural snapshots with information about GPCR dynamics obtained by complementary spectroscopic techniques. As illustrated in this review, structure and dynamics studies are now paving the way for understanding important questions of GPCR biology such as partial and biased agonism, allostery, oligomerization, and other fundamental aspects of GPCR signaling.
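The press release above describes FRET as an "atomic ruler". The standard textbook relation behind that metaphor converts a measured transfer efficiency E into a donor–acceptor distance r via the Förster radius R0 of the dye pair. A minimal sketch follows; this is the generic relation, not code from the study, and the example values are made up.

```python
def fret_distance(efficiency, r0_nm):
    """Donor-acceptor distance (nm) from FRET efficiency.

    Uses E = 1 / (1 + (r / R0)**6), i.e. r = R0 * (1/E - 1)**(1/6).
    """
    if not 0.0 < efficiency < 1.0:
        raise ValueError("efficiency must lie strictly between 0 and 1")
    return r0_nm * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)

# Example: a dye pair with a 5.4 nm Foerster radius and a measured efficiency of 0.7
print(f"{fret_distance(0.7, 5.4):.2f} nm")   # ~4.7 nm, i.e. closer than R0
```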
934580
Competing quantum interactions enable single molecules to stand up
Nanoscale machinery has many uses, including drug delivery, single-atom transistor technology, and memory storage. However, the machinery must be assembled at the nanoscale, which is a considerable challenge for researchers. For nanotechnology engineers, the ultimate goal is to assemble functional machinery part by part at the nanoscale. In the macroscopic world, we can simply grab items to assemble them. It is now possible to “grab” single molecules, but their quantum nature makes their response to manipulation unpredictable, limiting the ability to assemble molecules one by one. This prospect is now a step closer to reality thanks to an international effort led by Research Centre Jülich of the Helmholtz Association in Germany, including researchers from the Department of Chemistry at the University of Warwick. In the paper ‘The stabilization potential of a standing molecule’, published on 10 November 2021 in the journal Science Advances, an international team of researchers reveals the generic stabilisation mechanism of a single standing molecule, which can be used in the rational design and construction of three-dimensional molecular devices at surfaces. The scanning probe microscope (SPM) has brought the vision of molecular-scale fabrication closer to reality because it offers the capability to rearrange atoms and molecules on surfaces, thereby allowing the creation of metastable structures that do not form spontaneously. Using SPM, Dr Christian Wagner and his team were able to interact with a single standing molecule, perylene-tetracarboxylic dianhydride (PTCDA), on a surface to study its thermal stability and the temperature at which the molecule ceases to be stable and drops back into its natural state, in which it adsorbs flat on the surface. This temperature is -259.15 °C, only 14 degrees above absolute zero. Quantum chemical calculations performed in collaboration with Dr Reinhard Maurer from the Department of Chemistry at the University of Warwick revealed that the subtle stability of the molecule stems from the competition of two strong counteracting quantum forces: the long-range attraction from the surface and the short-range restoring force arising from the anchor point between the molecule and the surface. Dr Reinhard Maurer, from the Department of Chemistry at the University of Warwick, comments: “The balance of interactions that keeps the molecule from falling over is very subtle and a true challenge for our quantum chemical simulation methods. In addition to teaching us about the fundamental mechanisms that stabilise such unusual nanostructures, the project also helped us to assess and improve the capabilities of our methods.” Dr Christian Wagner from the Peter Grünberg Institute for Quantum Nanoscience (PGI-3) at Research Centre Jülich comments: “To make technological use of the fascinating quantum properties of individual molecules, we need to find the right balance: they must be immobilized on a surface, but without fixing them too strongly, otherwise they would lose these properties. Standing molecules are ideal in that respect.
To measure how stable they actually are, we had to stand them up over and over again with a sharp metal needle and time how long they survived at different temperatures.” Now that the interactions that give rise to a stable standing molecule are known, future research can work towards designing better molecules and molecule-surface links to tune those quantum interactions. This can help increase the stability of standing molecules and raise the temperature at which they can be switched into standing arrays towards workable conditions, bringing the nanoscale fabrication of machinery closer to reality. You can also read Jülich's press release here: https://www.fz-juelich.de/SharedDocs/Pressemitteilungen/UK/EN/2021/notifications/2021-11-11-nanodomino.html
10.1126/sciadv.abj9751
2021
Science Advances
The stabilization potential of a standing molecule
The part-by-part assembly of functional nanoscale machinery is a central goal of nanotechnology. With the recent fabrication of an isolated standing molecule with a scanning probe microscope, the third dimension perpendicular to the surface will soon become accessible to molecule-based construction. Beyond the flatlands of the surface, a wealth of structures and functionalities is waiting for exploration, but issues of stability are becoming more critical. Here, we combine scanning probe experiments with ab initio potential energy calculations to investigate the thermal stability of a prototypical standing molecule. We reveal its generic stabilization mechanism, a fine balance between covalent and van der Waals interactions including the latter’s long-range screening by many-body effects, and find a remarkable agreement between measured and calculated stabilizing potentials. Beyond their relevance for the design and construction of three-dimensional molecular devices at surfaces, our results also indicate that standing molecules may serve as tunable mechanical gigahertz oscillators.
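The experiment described above times how long a standing molecule survives before falling over at different temperatures, the classic signature of thermally activated escape over an energy barrier. A back-of-the-envelope Arrhenius estimate of such a lifetime is sketched below; the barrier height and attempt frequency are placeholder values chosen for illustration, not numbers taken from the paper.

```python
import numpy as np

K_B_EV = 8.617e-5   # Boltzmann constant in eV/K

def arrhenius_lifetime(barrier_ev, temperature_k, attempt_freq_hz=1e12):
    """Mean lifetime (s) of a metastable state: tau = exp(E_b / (k_B * T)) / f0."""
    return np.exp(barrier_ev / (K_B_EV * temperature_k)) / attempt_freq_hz

# Illustrative only: a ~40 meV barrier evaluated around 14 K (the temperature at
# which the standing PTCDA molecule was reported to lose stability) and at 10 K.
for T in (10.0, 14.0):
    print(f"T = {T:4.1f} K  ->  tau ~ {arrhenius_lifetime(0.040, T):.1e} s")
```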
End of preview.