Mike DeWine: From Holocaust to Nazis at Columbus drag brunch, history teaches, can repeat
"It is our responsibility as Americans, as Ohioans, and just as human beings to never let this dark chapter of history be forgotten — or repeated," Mike DeWine
Mike DeWine is governor of Ohio.
Louise Gips of Cleveland will turn 90 in December. She was almost six years old when her family fled to Ukraine following the Nazi invasion of her native Poland; they eventually ended up in Siberia as Jewish prisoners of war.
Louise told her family’s story at the Governor’s annual Holocaust Remembrance Ceremony in April. She also told the story of her late husband, Harry. While she and her family were fortunate to have survived the Holocaust, most of her husband’s family did not.
Harry was 10 years old when the Nazis captured his family in Poland. They were sent to a Nazi ghetto, where countless Jews died of disease or starvation or were loaded onto trucks daily and taken directly to death camps, where they were executed.
Harry had six siblings. He and his three oldest siblings managed to escape the ghetto. Harry, then about 12 years old, went to a farm, told the owner that he was a Christian boy, and offered to work for him.
A fight for survival
The farmer looked out for Harry until someone recognized him and alerted the farmer that Harry was Jewish.
Knowing he and his family could be killed for harboring a Jew, the farmer told Harry to hide in the woods. It was there, in the forest, that Harry spent the next 18 months fighting to survive.
When the War was over, he went back to his hometown in Poland, only to learn that his family had all been killed except for two of his sisters. Nazis executed his youngest sister just three months before the War’s end.
Louise told these stories because, in her words, “If all of us don’t speak, history will definitely repeat itself.” She is right.
History is a teacher
It is both incomprehensible and scary to me that there are people today in America who think that discrimination, hate, and bigotry are OK. According to the Anti-Defamation League, there were over 204 recorded antisemitic incidents of vandalism or propaganda in Ohio in 2022, often perpetrated by either neo-Nazi or white supremacist groups. Antisemitic incidents alone increased by 22% in Ohio between 2021 and 2022.
Most recently, protestors purporting to belong to the neo-Nazi group “Blood Tribe” gathered outside a Columbus brewery hosting a drag brunch fundraiser for the Kaleidoscope Youth Center. Donning red shirts and black ski masks to hide their cowardly faces, they waved a black flag with a Nazi swastika and a banner with the words "there will be blood."
History is a teacher. We must use it, as Louise Gips has so eloquently done, to educate our young people and remind everyone what Hitler and the Nazis stood for and the atrocities they committed.
'Some bastard will say this never happened.'
I told the story of my dad, Dick DeWine, here in the Dispatch a few years ago, following the violent demonstrations of white supremacists in Charlottesville. It bears repeating.
Dad was a Private in the Army during World War II. He used to tell me about what he and his Army company saw when they arrived at the Dachau concentration camp just days after it had been liberated. What my dad and his fellow soldiers witnessed in Hitler’s Germany was burned on their brains for the rest of their lives.
Dachau was a camp where more than 30,000 people perished at the hands of the Nazis. When Dad was there, he saw the ovens that the Nazis used to burn the bodies of so many of the prisoners, many still containing ashes and skeletal remains.
Even into his 80s, Dad still vividly pictured the devices the Nazis used to slide the bodies into the ovens. He told me about going into a room next to the ovens and seeing fixtures on the walls that looked like showerheads. Those at the camp told him that prisoners were taken into these rooms and told they were going to take showers. Instead of water coming out of the nozzles, deadly poisonous gas was emitted.
Dad also remembered walking down the road near the camp and encountering a very weak, emaciated man who had just been a prisoner. My dad and his buddies talked to the man and gave him food and cigarettes. They asked him if they could take his picture. He said yes — if it was with an American soldier. So, they did.
Recollections of the Holocaust are difficult to read and even more difficult to imagine. It’s one of the reasons that when the concentration camps were being liberated, Gen. Dwight Eisenhower instructed that photos be taken and films be made of the prisoners and of the unimaginable conditions in which they lived and in which so many were murdered.
He wanted to make sure that we told these stories because, in his words, “Some bastard will say this never happened.”
There will come a time, in the not-too-distant future, when there will be no more Holocaust survivors — no more American liberators. If we don’t continue to tell their stories, who will?
It is our responsibility as Americans, as Ohioans, and just as human beings to never let this dark chapter of history be forgotten — or repeated.
Mike DeWine is governor of Ohio.
By Shivam Patel
NEW DELHI (Reuters) – A Himalayan glacial lake in India probably burst its banks this week after chunks of ice fell into it in an apparent avalanche following heavy rains, triggering deadly flash floods in a small mountain state, experts told Reuters.
At least 18 people have died and nearly 100 remain missing as heavy rains and water gushing out of Lhonak Lake unleashed flash floods on Wednesday in the Teesta river in Sikkim state, which borders Nepal, Bhutan and China.
It is the latest in a series of extreme weather events that have caused widespread damage in the Himalayas of South Asia in recent years and have been blamed by scientists and officials on climate change.
WHAT CAUSED THE FLOODING?
The primary reason for the destruction in Sikkim was excess rainfall and a glacial lake outburst flood (GLOF) in Lhonak Lake at an altitude of 5,200 metres (17,060 ft), India’s National Disaster Management Authority said.
Glacial lakes are formed when a glacier melts and fills the depression left behind. There are 503 glacial lakes in the Indian Himalayas that are larger than 10 hectares – close to the size of 10 rugby fields put together, according to a 2011 assessment by India’s Central Water Commission.
Satellite images released by India’s space agency on Thursday showed that Lhonak Lake more than halved in size after it flooded.
The images also show a lot of ice floating on the lake surface, said Jakob F. Steiner, a fellow at the Kathmandu-based International Centre for Integrated Mountain Development (ICIMOD), part of the Himalayan University Consortium.
“On the south of the Lhonak Lake there are steep slopes, glaciers as well, there is still snow. It could be that a chunk of either ice or unstable slopes has detached and dropped into the lake water,” Steiner said, adding that the images indicate something like this happened.
This can create shockwaves across the water surface that would be enough to topple the lake’s dam, Steiner added, since the lake is up to 120 metres (395 ft) deep.
Finu Shrestha, a remote sensing and geo-information analyst at ICIMOD, said a combination of factors including rainfall, a snow avalanche and increasing pressure on the lake due to its growing size could have caused it to flood.
Sharad Chandra, director of the flood forecasting division at India’s Central Water Commission, said two earthquakes that struck Nepal on Tuesday, of magnitudes 6.3 and 5.3, could also be one of the probable reasons behind the GLOF event.
WERE THERE WARNINGS AND SIMILAR EVENTS IN THE PAST?
Indian scientists at the National Remote Sensing Centre had said in a 2013 report that there was a 42% chance of Lhonak Lake bursting its banks.
It said the lake had grown from 17.54 hectares (43.34 acres) in 1977 to 98.73 hectares (243.97 acres) in 2008. The latest satellite imagery showed the lake covering approximately 162.7 hectares before it burst and shrinking to 60.3 hectares afterward.
Deadly flash floods have previously been triggered multiple times due to GLOF events in the Himalayas, including a 2013 disaster in Kedarnath in Uttarakhand state that killed about 5,000 people.
Floods caused by days of heavy rains in Sikkim in 1968 killed an estimated 1,000 people.
(Editing by YP Rajesh; editing by Mark Heinrich)
The recent bank problems associated with Silicon Valley Bank and Credit Suisse have highlighted the important distinction between a bank's liquidity and its capital. Liquidity and capital are two crucial aspects of a bank's financial health, and both play a vital role in its ability to withstand unexpected shocks. However, the two are routinely conflated, which leads to a lot of misconceptions.
Liquidity refers to a bank's ability to meet its short-term obligations. It is the ability of a bank to pay its liabilities as they come due. For a bank, this means having enough cash or easily sellable assets on hand to meet customer demands for withdrawals or other cash needs of the bank, such as making debt payments.
Capital, on the other hand, refers to the bank's net worth. It is the value of the bank’s assets above and beyond what the bank owes. Much like a homeowner may have a house worth $300,000 while still owing $200,000 on a mortgage, leaving $100,000 in net equity, bank capital constitutes investors’ equity in the firm. Capital is critical because it serves as a buffer against losses, ensuring that the bank can absorb losses without risking insolvency (i.e., having negative net worth).
The recent banking problems have highlighted the importance of these two aspects of a bank's financial health—bank liquidity and bank capital—and also how the two concepts are related. Silicon Valley Bank's liquidity problems, for example, began when it experienced large losses on its bond portfolio. These were only paper losses at first: as interest rates rose rapidly in recent months, the value of SVB’s bond holdings fell well below what the bank had paid for them.
However, the paper losses led to a loss of confidence and a run on the bank, with customers withdrawing funds, leading to a liquidity crisis. Had SVB been able to hold its bonds to maturity, it might never have had an issue. However, problems arose when it had to sell its bonds on the open market at a loss in order to obtain the cash it needed to meet customer withdrawals. This is how a liquidity problem can quickly morph into a capital problem.
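A minimal numerical sketch may help make that chain concrete. The figures below are hypothetical, chosen only to illustrate the mechanism described above (they are not SVB's actual balance sheet), and the snippet is a toy, not a model of any real bank:

```python
# Toy example with made-up numbers: how selling assets below book value
# to meet withdrawals turns a liquidity gap into a capital loss.

book_value_of_bonds = 200.0   # bonds carried at what the bank paid for them
deposits_owed = 180.0         # liabilities to depositors
capital = book_value_of_bonds - deposits_owed   # net worth = 20.0

withdrawals = 50.0            # cash depositors want back now
market_price_ratio = 0.85     # after rates rise, bonds fetch 85 cents per dollar of book value

# To raise the cash, the bank must sell more book value than the cash it receives.
book_value_sold = withdrawals / market_price_ratio   # about 58.8
realized_loss = book_value_sold - withdrawals        # about 8.8 (the paper loss becomes real)

capital_after_sale = capital - realized_loss         # about 11.2
print(f"Capital before the run: {capital:.1f}")
print(f"Loss realized in the sale: {realized_loss:.1f}")
print(f"Capital after meeting withdrawals: {capital_after_sale:.1f}")
```

In the sketch the bank starts out solvent on paper; it is the forced sale at depressed prices that converts an unrealized loss into a real reduction of capital.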
Credit Suisse's problems were similar although they have taken longer to unfold. Credit Suisse’s share price has fallen more than 75% over the last year, largely as a result of some bad investment decisions, as well as turnover in top management. The company has been trying to turn things around, but the recent banking problems in America seem to have scared investors, leading some to think Credit Suisse may be the next domino to fall.
As Credit Suisse began facing withdrawals, Swiss authorities stepped in to help engineer a takeover of the company by its long-time rival, UBS. Note that UBS would be unlikely to agree to such a takeover if it thought Credit Suisse was insolvent (although the Swiss government has agreed to share in some of Credit Suisse’s losses). Presumably UBS executives believe Credit Suisse’s net worth exceeds the $3.2 billion they have agreed to pay for the company; otherwise they would not sign off on the sale.
In response to the liquidity problems faced by European banks, the Federal Reserve and other central banks have ramped up swap lines to provide access to U.S. dollars. The swap lines allow foreign banks to borrow dollars from the Federal Reserve or their own central bank in exchange for their own currency or in exchange for some other asset, such as U.S. Treasury bonds. This provides the foreign banks with access to liquidity in U.S. dollars, which they can use to meet their short-term obligations.
Some have criticized the Federal Reserve in recent weeks for “printing money” or bailing out irresponsible investors. However, it is essential to note that providing liquidity to banks is not the same as a bailout. A bailout involves the injection of capital into an insolvent bank to keep it afloat, while liquidity provision aims to prevent a liquidity crisis from spiraling out of control.
The Federal Reserve acts as a lender of last resort, providing liquidity to banks during times of financial stress. This function is essential to ensuring that the banking system remains stable. Without the central bank acting as lender of last resort, banks might not have access to the funds they need to meet their obligations. This could lead to a self-fulfilling prophecy whereby customers lose confidence, banks sell assets at a loss to meet customer withdrawals, and customers lose further confidence as bank capital is eroded.
So the next time someone tells you that a bank has been bailed out, ask yourself if it is bank liquidity or bank capital that is being provided. The distinction is critical, and the ability to distinguish between the two is an important test of the seriousness of any financial pundit.
Exercise as a treatment for severe depression is at least as effective as standard drugs or psychotherapy and by some measures better, according to the largest study to date of exercise as “medicine” for depression.
“We found large, significant results,” said Andreas Heissel, an exercise scientist at the University of Potsdam in Germany, who led the study.
For people struggling with depression, he said, the findings show you don’t have to run marathons or otherwise train strenuously to benefit. “Something is better than nothing,” Heissel said.
The effects were robust enough that the study’s authors hope the finding will spur a move to make exercise a standard, prescribed therapy for depression.
That approach would represent a notable shift. The American Psychological Association’s clinical practice guidelines, updated in 2019, recommend seven types of psychotherapy and several antidepressants for the treatment of depression, but they do not mention exercise. The World Health Organization promotes exercise for mental health as an add-on to traditional treatments — not on its own.
But the study’s authors are confident. “We expect this review to lead to updated guidelines and recommendations for exercise as a first-line treatment option,” Heissel said.
Some depression experts are hesitant. “I think that exercise should be prescribed for everyone with depression,” said Murray B. Stein, a professor and vice chair for clinical research in the department of psychiatry at the University of California at San Diego who was not involved in the study. “I still feel the evidence is weak, though, that exercise be considered a first-line treatment for depression.”
The research behind exercise and depression
Scientists and clinicians have known for some time that exercise protects us against developing depression. In large-scale epidemiological studies, active men and women become depressed at much lower rates than sedentary people, even if they exercise for only a few minutes a day or a few days a week.
But it’s trickier to test exercise as a treatment for existing depression. You have to study it like any medicine, by recruiting people with the condition and randomly assigning them to the intervention — in this case, exercise — or a control group and scrupulously tracking what happens.
Because these experiments tend to be complicated and expensive, past studies using exercise against depression have been small, typically involving only a few dozen people, making it difficult to draw firm conclusions about whether and how well exercise treats depression.
In the study published in February in the British Journal of Sports Medicine, a global group of researchers pulled together every recent experiment using physical activity as depression therapy. They wound up with data from 41 studies of about 2,265 volunteers, representing the largest sample yet on this topic.
The studies’ exercise programs included walking, running and weight training. Some consisted of group classes, others solo workouts, some supervised, some not. But all featured people with depression getting up and moving more.
Exercise of any kind treats depression
Pooled, the effects were potent. Overall, people with depression who exercised in any way improved their symptoms by almost five points, using one widely recognized diagnostic scale, and by about 6.5 points using another. For both scales, an improvement of three points or more is considered clinically meaningful, the study’s authors write.
In practical terms, these numbers suggest that, for every two people with depression who start to exercise, one of them should experience “a large-magnitude reduction in depressive symptoms,” Heissel said.
Those statistics represent “somewhat better” outcomes than those seen in recent studies of psychotherapy and drug treatment for depression, said Felipe Schuch, a professor researching exercise and mental health at the University of Santa Maria in Brazil and senior author of the study.
In general, the effects were best if people exercised moderately, such as by walking, although more vigorous workouts, including running, cycling and weight training, were almost as effective, and even light activities such as gardening eased symptoms.
Overall, the study “shows that exercise is a further, effective, stand-alone treatment option” for depression, Heissel said.
The study did not look at how exercise might be improving mental health. In past research with depressed mice, as well as with people, exercise raised levels in the brain and bloodstreams of various biochemicals known to be involved in mood enhancement. It also often elevated people’s self-efficacy, which is the sense that you are capable of more than you once believed, a change typically associated with better mental health.
But the precise mechanisms by which bodily movements alter brain functions to improve moods remain unclear, as do the differences in people’s responses. In every study the researchers included, some people’s depression lifted, while others’ symptoms remained stubbornly unchanged. (Similar disparities are seen in almost all studies of psychotherapy and drug treatments, too.)
What is the right dose of exercise?
“To formulate exercise as a prescription for medicine, we still need more research to understand the optimal type, frequency and amount of exercise for different people,” said Karmel Choi, a clinical psychologist and assistant professor at the Center for Precision Psychiatry at Harvard Medical School and Massachusetts General Hospital. She has studied exercise and depression but was not involved in the new review.
“Depression is not one size fits all,” she said, “so treatment should always be tailored to the individual.”
Murray Stein of UCSD agrees. “Exercise has so many health benefits that it should be prescribed for virtually anything that ails humankind,” he said.
But he remains unconvinced that it should be a primary treatment for depression. “I would prescribe exercise as an adjunct” approach for patients with moderate to severe depression, he said, with psychotherapy or an antidepressant as the main therapy.
Certainly, more research is needed. “We do not know enough about dose, intensity and type of exercise,” Heissel said, or whether early improvements related to activity last. Future studies should dive into these questions and compare exercise head-to-head against psychotherapy and antidepressants, he said.
Perhaps most important, researchers and clinicians need to recognize how daunting exercise can be for someone who is depressed.
“Many people struggle to become active,” Heissel said. “So the first step is trying to understand what is the exercise that is more likely to be accepted and to generate enjoyment.”
If you or someone you care about feels depressed, Heissel said, talk with a doctor about exercise, either as a first therapy or in addition to other treatments. Experiment with different activities until you settle on one you enjoy.
“The best exercise is the one that is actually done,” Heissel said, “and that means exercise that is rewarding or pleasant.”
Videos show 'disgusting' slicks in creek water near Ohio train derailment. What's going on?
Videos appear to show shimmering chemical contamination on creeks near the site of the East Palestine, Ohio, train derailment and chemical leak.
Experts tell USA TODAY the rainbow-colored material is likely vinyl chloride, a heavier-than-water chemical that both leaked and burned following the Feb. 3 derailment of a Norfolk Southern freight train. The videos mark yet another example of heightened health and environmental concerns in the wake of the disaster.
Authorities say about 3,500 small fish were killed in the creeks surrounding the derailment site shortly after the crash, leak and burn, but they have not reported significant subsequent deaths. Meanwhile, a new federal lawsuit claims fish and wild animals are dying as far as 20 miles away from the site of the derailment.
Here's what to know about the videos:
What do the videos show?
The videos, posted by several people including Ohio Republican Sen. J.D. Vance, show rainbow-colored slicks spreading across the surface of small streams in the area after people poked the creek beds with sticks or threw rocks in.
"This is disgusting," Vance declared as sheen spread across what he said was Leslie Run creek.
What is going on in the videos?
John Senko, a professor of geosciences and biology at the University of Akron, said the videos depict what appears to be vinyl chloride, which would sink to the bottom of a lake or stream because it's denser than water.
"It looks like what's happening is you got some of that stuff on the bottom of the creek, you stir it up a little bit, it starts to come up and then it's just going to sink again," he said. "So that stuff's behaving like I would expect vinyl chloride to behave.”
What are the health risks of the creek contamination?
The videos are evidence that groundwater contamination has occurred, experts told the USA TODAY Network. But contamination does not necessarily mean there's a health risk.
The U.S. Environmental Protection Agency sets limits for what's deemed acceptable exposure to many chemicals, and says short-term exposure to high levels of vinyl chloride in the air can make people dizzy or give them headaches, while long-term exposure can cause liver damage.
Dr. Kari Nadeau, the chair of Harvard's Environmental Health Department, said the oily sheen was likely left by burned chemicals that drifted back down to the ground and into the water.
"The information that I know as a public health expert, as well as from what the EPA is telling us right now, the EPA is letting us know that there are not dangerous levels of toxins in the water or the air at the current time," she said.
What health concerns are there after Ohio train derailment?
Ohio Gov. Mike DeWine has asked CDC doctors and experts to help screen area residents for illness, and state and federal environmental experts are overseeing monitoring and cleanup efforts.
- Ground water contamination: The crash and subsequent fire released chemicals into the air and onto the ground and a stream nearby. Experts say the ground and water contamination likely pose the biggest risk now.
- Air quality: Federal authorities have tested more than 450 homes for volatile organic compounds, which could pose a health risk.
- Private wells: Ohio Department of Health Director Bruce Vanderhoff said Tuesday that the air and water quality around East Palestine is generally safe, but private wells are in the process of being tested. Until those results are in, Vanderhoff encouraged residents with a private water supply to drink and use bottled water.
What's being done to clean up?
The spill happened closest to Sulfur Run creek, and authorities have dammed it above and below the spill area. They are currently pumping the clean creek water around the contaminated area and remediating any contaminated water flowing into the short section of dry creek bed.
Norfolk Southern has said it will install wells to monitor groundwater. Officials will also sample soil in key areas, including near where the cars filled with vinyl chloride burned.
EPA controversy explained
Many conservative lawmakers have complained the EPA has not responded aggressively enough to the spill. The EPA says Ohio and other federal agencies are better suited to assist.
Vance in particular has attacked the EPA and challenged officials to drink the water in the streams in East Palestine.
Underlying the discussion: The EPA has 20% fewer employees today than it did at its peak in 1999, when about 18,100 people worked there.
The EPA's annual budget hit a high of $10.3 billion in 2010, and today sits at $9.5 billion. If the budget had kept up with inflation, it would be $14 billion. In 2017, then-President Trump proposed a 31% cut to the EPA's annual budget, although Congress ultimately rejected most of his cuts.
President Biden has proposed a 2023 EPA budget of $11.8 billion, including hiring an extra 1,900 workers.
The 2021 Bipartisan Infrastructure Law also provided billions in additional funding for programs overseen by the EPA, including environmental justice and cleanups. Most of the EPA's funding actually gets passed through to states and local governments, according to the agency.
Ohio is among 24 states suing the federal government over the EPA's plans to toughen environmental regulations and pollution limits in small streams and wetlands over a long-disputed "Waters of the United States" rule. That lawsuit was filed Thursday.
Contributing: Kelly Byer, The Repository
Videos show 'disgusting' slicks in creek water near Ohio train derailment. What's going on?
Videos appear to show shimmering chemical contamination on creeks near the site of the East Palestine, Ohio, train derailment and chemical leak.
Experts tell USA TODAY the rainbow-colored material is likely vinyl chloride, a heavier-than-water chemical that both leaked and burned following the Feb. 3 derailment of a Norfolk Southern freight train. The videos mark yet another example of heightened health and environmental concerns in the wake of the disaster.
Authorities say about 3,500 small fish were killed in the creeks surrounding the derailment site shortly after the crash, leak and burn, but they have not reported significant subsequent deaths. Meanwhile, a new federal lawsuit claims fish and wild animals are dying as far as 20 miles away from the site of the derailment.
Here's what to know about the videos:
What do the videos show?
The videos posted by several people, including Ohio Republican Sen. J.D. Vance show rainbow-colored slicks spreading across the surface of small streams in the area after people poked the creek beds with sticks or threw rocks in.
"This is disgusting," Vance declared as sheen spread across what he said was Leslie Run creek.
FACT CHECK:Ohio train derailment fact check: What's true and what's false?
BACKGROUND: Is the Ohio River contaminated? East Palestine train derailment sparks concerns over water
What is going on in the videos?
John Senko, a professor of geosciences and biology at the University of Akron, said the videos depict what appears to be vinyl chloride, which would sink to the bottom of a lake or stream because it's denser than water.
"It looks like what's happening is you got some of that stuff on the bottom of the creek, you stir it up a little bit, it starts to come up and then it's just going to sink again," he said. "So that stuff's behaving like I would expect vinyl chloride to behave.”
What are the health risks of the creek contamination?
The videos are evidence that groundwater contamination has occurred, experts told the USA TODAY Network. But contamination does not necessarily mean there's a health risk.
The U.S. Environmental Protection Agency sets limits for what's deemed acceptable exposure to many chemicals, and says short-term exposure to high levels of vinyl chloride in the air can make people dizzy or give
|
them headaches, while long-term exposure can cause liver damage.
Dr. Kari Nadeau, the chair of Harvard's Environmental Health Department, said the oily sheen was likely left by burned chemicals that drifted back down to the ground and into the water.
WATCH:Another Norfolk Southern train derails in Michigan after massive derailment in Ohio
FACT CHECK:Baseless claim that Ohio train derailment was a 'false flag' operation
"The information that I know as a public health expert, as well as from what the EPA is telling us right now, the EPA is letting us know that there are not dangerous levels of toxins in the water or the air at the current time," she said.
What health concerns are there after Ohio train derailment?
Ohio Gov. Mike DeWine has asked CDC doctors and experts to help screen area residents for illness, and state and federal environmental experts are overseeing monitoring and cleanup efforts.
- Ground water contamination: The crash and subsequent fire released chemicals into the air and onto the ground and a stream nearby. Experts say the ground and water contamination likely pose the biggest risk now.
- Air quality: Federal authorities have tested more than 450 homes for volatile organic compounds, which could pose a health risk.
- Private wells: Ohio Department of Health Director Bruce Vanderhoff said Tuesday that the air and water quality around East Palestine is generally safe, but private wells are in the process of being tested. Until those results are in, Vanderhoff encouraged residents with a private water supply to drink and use bottled water.
What's being done to clean up?
The spill happened closest to Sulfur Run creek, and authorities have damned it above and below the spill area. They're currently pumping the clean creek around the contamination area, and then remediating any contaminated water flowing into the short section of the dry creek bed.
Norfolk Southern has said it will install wells to monitor groundwater. Officials will also sample soil in key areas, including near where the cars filled with vinyl chloride burned.
WATCH:Ohio Gov. DeWine briefs on East Palestine train derailment
WATCH:Residents seek answers over Ohio train derailment
EPA controversy explained
Many conservative lawmakers have complained the EPA has not responded aggressively enough to the spill. The EPA says Ohio and other federal agencies are better suited to assist.
Vance in particular has attacked the EPA and challenged officials to drink the water in the streams in East Palestine.
Underlying the discussion: The EPA has 20% fewer employees today than it did at its peak in 1999, when about 18,100 people worked there.
The EPA's annual budget hit a high of $10.3 billion in 2010, and today sits at $9.5 billion. If the budget had kept up with inflation, it would be $14 billion. In 2017, then-President Trump proposed a 31% cut to the EPA's annual budget, although Congress ultimately rejected most of his cuts.
President Biden has proposed a 2023 EPA budget of $11.8 billion, including hiring an extra 1,900 workers.
The 2021 Bipartisan Infrastructure Law also provided billions in additional funding for programs overseen by the EPA, including environmental justice and cleanups. Most of the EPA's funding actually gets passed through to states and local governments, according to the agency.
Ohio is among 24 states suing the federal government over the EPA's plan to toughen environmental regulations and pollution limits for small streams and wetlands under the long-disputed "Waters of the United States" rule. That lawsuit was filed Thursday.
Contributing: Kelly Byer, The Repository
Sloshing of Earth’s core may spike major earthquakes

By Paul Voosen, Oct. 30, 2017, 1:45 PM

The number of major earthquakes, like the magnitude-7 one that devastated Haiti in 2010, seems to be correlated with minute fluctuations in day length.

SEATTLE—The world doesn’t stop spinning. But every so often, it slows down. For decades, scientists have charted tiny fluctuations in the length of Earth’s day: Gain a millisecond here, lose a millisecond there. Last week at the annual meeting of the Geological Society of America here, two geophysicists argued that these minute changes could be enough to influence the timing of major earthquakes—and potentially help forecast them.

During the past 100 years, Earth’s slowdowns have correlated surprisingly well with periods with a global increase in magnitude-7 and larger earthquakes, according to Roger Bilham of the University of Colorado (CU) in Boulder and Rebecca Bendick at the University of Montana in Missoula. Usefully, the spike, which adds two to five more quakes than typical, happens well after the slowdown begins. “The Earth offers us a 5-year heads-up on future earthquakes, which is remarkable,” says Bilham, who presented the work.

Most seismologists agree that earthquake prediction is a minefield. And so far, Bilham and Bendick have only fuzzy, hard-to-test ideas about what might cause the pattern they found. But the finding is too provocative to ignore, other researchers say. “The correlation they’ve found is remarkable, and deserves investigation,” says Peter Molnar, a geologist also at CU.

The research started as a search for synchrony in earthquake timing. Individual oscillators, be they fireflies, heart muscles, or metronomes, can end up vibrating in synchrony as a result of some kind of cross-talk—or some common influence. To Bendick, it didn’t seem a far jump to consider the faults that cause earthquakes, with their cyclical buildup of strain and violent discharge, as “really noisy, really crummy oscillators,” she says. She and Bilham dove into the data, using the only complete earthquake catalog for the past 100 years: magnitude-7 and larger earthquakes. In work published in August in Geophysical Research Letters, they reported two patterns: First, major quakes appeared to cluster in time—although not in space. And second, the number of large earthquakes seemed to peak at 32-year intervals.

The earthquakes could be somehow talking to each other, or an external force could be nudging the earth into rupture. Exploring such global forces, the researchers eventually discovered the match with the length of day. Although weather patterns such as El Niño can drive day length to vary back and forth by a millisecond over a year or more, a periodic, decades-long fluctuation of several milliseconds—in particular, its point of peak slowdown about every three decades or so—lined up with the quake trend perfectly.

“Of course that seems sort of crazy,” Bendick says. But maybe it isn’t. When day length changes over decades, Earth’s magnetic field also develops a temporary ripple. Researchers think slight changes in the flow of the molten iron of the outer core may be responsible for both effects. Just what happens is uncertain—perhaps a bit of the molten outer core sticks to the mantle above. That might change the flow of the liquid metal, altering the magnetic field, and transfer enough momentum between the mantle and the core to affect day length.

Seismologists aren’t used to thinking about the planet’s core, buried 2,900 kilometers beneath the crust where quakes happen. But they should, Bilham said during his talk here. The core is “quite close to us. It’s closer than New York from here,” he said. At the equator, Earth spins at 460 meters per second. Given this high velocity, it’s not absurd to think that a slight mismatch in speed between the solid crust and mantle and the liquid core could translate into a force somehow nudging quakes into synchrony, Molnar says. Of course, he adds, “It might be nonsense.” But the evidence for some kind of link is compelling, says geophysicist Michael Manga of the University of California, Berkeley. “I’ve worked on earthquakes triggered by seasonal variation, melting snow. His correlation is much better than what I’m used to seeing.”

One way or another, says James Dolan, a geologist at the University of Southern California in Los Angeles, “we’re going to know in 5 years.” That’s because Earth’s rotation began a periodic slowdown 4-plus years ago. Beginning next year, Earth should expect five more major earthquakes a year than average—between 17 and 20 quakes, compared with the anomalously low four so far this year. If the pattern holds, it will put a new spin on earthquake forecasting.

doi:10.1126/science.aar3598
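At its core, the analysis described above compares two time series: annual counts of magnitude-7-and-larger earthquakes and slow, multidecadal changes in the length of day, with the quake rate expected to rise about five years after a slowdown. The sketch below is only a toy illustration of that kind of lagged comparison using synthetic data; it is not Bilham and Bendick's catalog, code, or method.

```python
# Toy lagged-correlation illustration with synthetic data.
# Not the catalog or analysis from the study described above.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2018)

# Synthetic multidecadal day-length fluctuation (milliseconds), ~32-year period.
day_length_ms = 2.0 * np.sin(2 * np.pi * (years - 1900) / 32.0)

# Build in the hypothesized effect: the quake rate rises ~5 years after a slowdown.
# Baseline of ~15 big quakes per year, plus a few extra when the lagged signal is high.
lag_built_in = 5
lagged_signal = np.roll(day_length_ms, lag_built_in)  # wrap-around at the start is ignored here
rate = 15 + 2.5 * np.clip(lagged_signal, 0.0, None)
quake_counts = rng.poisson(rate)

def lagged_corr(x, y, k):
    """Pearson correlation between x[t] and y[t + k]."""
    if k == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-k], y[k:])[0, 1]

for k in range(9):
    print(f"lag {k} yr: r = {lagged_corr(day_length_ms, quake_counts, k):+.2f}")
# With this synthetic setup, the correlation peaks near the built-in five-year lag.
```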
Twenty years ago this month, the US was rushing headlong into war with Iraq, one of the most consequential travesties in modern American history. Here is how one congressman and I tried and failed to get the Democratic Party on record opposing that war.
After 9/11, neoconservatives began their campaign to invade Iraq. Their arguments included: that Saddam Hussein was linked to the 9/11 terrorists; that Iraq had stockpiles of chemical and biological weapons and was secretly buying components to build a nuclear bomb; that the US was attacked because our enemies saw us as weak and to demonstrate our strength and resolve we needed a decisive victory somewhere (anywhere); and that a complete victory in Iraq would be quick, easy, require few troops, be welcomed by the Iraqi people and result in the establishment of a friendly stable democracy.
These outright fabrications or, at the very least, matters that demanded vigorous debate were not challenged. The mainstream media largely served as an echo chamber for the war-hawks and most leading politicians were shy to criticise.
In advance of the February 2003 meeting of the Democratic National Committee (DNC), Representative Jesse Jackson Junior and I submitted a resolution to encourage debate on the impending war. Using temperate and respectful language, it called on our party to urge the Bush administration “to pursue diplomatic efforts to achieve disarmament of Iraq, to clearly define for the American people and Congress the objectives, costs, consequences, terms and length of commitment envisioned by any US engagement or action in Iraq and to continue to operate in the context of and seek the full support of the United Nations in any effort to resolve the current crisis in Iraq.”
Polling indicated that the majority of Americans and a supermajority of Democrats supported these positions. And we knew that if Democrats failed to challenge the rush to war, we would not only risk losing the support of voters, but also shirk our responsibility to avert a war that would prove devastating to our country and the Middle East region.
At the DNC meeting, party leaders subjected me to intense pressure to withdraw the resolution. They argued that we needed to defer to the Democratic presidential candidates. With only one major candidate, Howard Dean, vigorously opposed to the war, they claimed that such a resolution would imply support for his candidacy. And, in their view, opposing the war would make it appear that the party was weak on national defence.
I refused to withdraw the resolution and insisted on my right to introduce it and be heard.
In my remarks to the committee, I warned that it was unconscionable that we send young men and women to war in a country whose history, culture and social composition we did not understand. I observed that the administration’s miscalculations about Iraq risked beginning “a war without end” and that going to war without UN authorisation jeopardised US legitimacy. I concluded by noting that “raising the right questions, demanding answers and winning allies to our case is not being weak on defence. It’s being smart on defence.”
After my presentation, the chair ruled that there would be no vote and the resolution died without debate or discussion.
Twenty years later, it gives me no satisfaction to say that we were right to oppose that disastrous war. Thousands of Americans and hundreds of thousands of Iraqis were killed; countless others’ lives were shattered by the war’s consequences. While the neoconservatives told Congress that the war would cost $2 billion, the price tag is in the trillions and still growing. Instead of extinguishing extremism, the war fuelled it, metastasising into ever more virulent forms. And America emerged from the war weaker and less respected, while Iran emerged emboldened to project its menacing, meddlesome behaviour into the broader region.
Passing our resolution would not have stopped the Bush administration’s march to war. At least, however, the Democrats would have been on record in opposition, potentially strengthening the resolve of members of Congress to speak out more forcefully and voice their dissent. That is how a democracy is supposed to work. And when it does not, we all pay a steep price.
That is the alarming claim reported by the Daily Star on Wednesday after a Russian broadcast threatened to “obliterate” the UK and permanently submerge the nation underwater.
According to the broadcast, which was uploaded to Twitter via an account called Terror Alarm, Putin’s chief propaganda reporter Dmitry Kiselyov claimed two Russian super-nukes launched from Moscow could “wipe the British Isles off the map.”
“Russia could obliterate the UK with its new hypersonic Satan-2 missile,” Kiselyov said before adding that Russia is poised to “plunge Britain into the depths of the sea using underwater robotic drone Poseidon”.
“It would only take a minute,” Putin allegedly told Boris Johnson when the then-prime minister told the Russian leader that a war against Ukraine would be an “utter catastrophe.”
Russia’s recent threat to nuke the UK also comes shortly after Ukraine’s Western allies agreed to send tanks and other military arms to the invaded nation – something Russia called “extremely dangerous.”
“Red lines are now a thing of the past,” a Kremlin spokesperson cryptically said at the time.
The Admiral Gorshkov, which was scheduled to sail to the Black Sea before abruptly diverting towards the U.S. and Bermuda last week, is reportedly outfitted with nuclear-capable Zircon missiles that move at speeds up to 6,670 MPH and have a maximum range of 625 miles.
Putin’s navy has also reportedly been running missile tests involving the Admiral Gorshkov and the Mach 9 Zircon missiles, with the ship’s commander – Captain Igor Krokhmal – indicating in a recent video the weapons are allegedly working as expected.
“The electronic launch and the work by the shipborne combat team confirmed the missile system’s designed characteristics demonstrated during preliminary and state trials,” Krokhmal said last week.
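As simple context for the speed and range figures quoted above, time of flight is just distance divided by speed; the sketch below uses only the numbers given in the article and makes no claim beyond that arithmetic.

```python
# Time-of-flight arithmetic using the speed and range figures quoted above.
speed_mph = 6670        # reported top speed of the Zircon missile
range_miles = 625       # reported maximum range

minutes = range_miles / speed_mph * 60
print(f"Time to cover {range_miles} miles at {speed_mph} mph: about {minutes:.1f} minutes")
# Roughly 5.6 minutes at maximum range; shorter distances scale proportionally.
```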
Iranian Supreme Leader Ali Khamenei, Iranian President Ebrahim Raisi and the influential Iraqi Shiite cleric Muqtada Al-Sadr in Tehran, Iran on Sept. 11, 2019. (Photo via Iran’s supreme leader’s website)
In this context, many of the ingredients for another political crisis are already stacking up. While the victory in the Gulf Cup saw Iraqis come together in celebration, the country has been beset with a currency crisis. The government has also yet to present a draft for a new election law, as Sudani had promised he would within three months of forming his cabinet. Of further note, the prime minister—whose appointment last autumn ended the deadlock between his supporters in the Iran-backed Shiite Coordination Framework and the Sadrists after the Oct. 2021 elections—also vowed that he would call early legislative polls towards the end of this summer, less than a year after taking charge.
These dynamics—and their domestic and international complications—portend the possibility of Sadr’s gradual return to the political stage. While the mercurial Shiite cleric is known for his unexpected moves, the manner and timing of his official return to Iraqi politics can reasonably be expected to only occur once the stars simultaneously align on the domestic, regional, and international scenes.
Sadr and Iran
Having won more than 70 of the 329 seats in Iraq’s legislature in the Oct. 2021 polls, the Sadrists initially sought to form a “national majority government” together with Sunni Arab blocs and the Kurdistan Democratic Party (KDP). The endeavor could have sidelined Iran’s allies in the Shiite Coordination Framework from the executive branch of power.
But the tenacity of the Coordination Framework parties ultimately paid off. As a political deadlock dragged on, Sadr in June 2022 ordered his MPs to resign, and after violent clashes in Baghdad’s Green Zone two months later announced his “final retirement” from politics. The move ultimately paved the way for Sudani to take office last autumn, with the backing of the Coordination Framework.
Though Iran’s allies have advanced amid the Sadrist withdrawal from the political process, Amwaj.media has learned that there are serious attempts to restore “warmth” to the line of communication between Tehran and Hanana, Sadr’s headquarters in Najaf Governorate. Speaking on condition of anonymity, an informed political source in Iraq asserted that these efforts have entailed talk of a “necessary” trip to Iran by Sadr in the near future. The head of the Sadrist Movement would in such a scenario be expected to directly meet with Iranian officials in charge of the Iraq dossier to discuss political developments.
Talk of Sadr possibly visiting Iran—where members of his extended family live, and where he himself was previously based during parts of the US occupation of Iraq—first emerged on the third anniversary of the US assassination of the former commander of the expeditionary Quds Force of the Islamic Revolutionary Guard Corps (IRGC). Qasem Soleimani was on Jan. 3, 2020 killed in an American drone strike near Baghdad’s international airport along with the deputy chief of Iraq’s Popular Mobilization Units (PMU), Abu Mahdi Al-Muhandis.
Amid rumors of a possibly impending visit to Iran by Sadr, members of his movement have been quick to deny such an eventuality. However, speaking on condition of anonymity, an Iranian diplomat asserted to Amwaj.media that there was a basis for the speculations, without further elaboration.
Lebanon appears central to the apparent efforts to jumpstart a rapprochement between Sadr and Iran. Arabic-language outlet ‘Jadeh Iran’ in Nov. 2022 reported that Sadrist leaders and representatives from the Lebanese Hezbollah movement had gathered in Beirut for talks. The two sides are said to have discussed political developments which surfaced prior to the formation of the Sudani government in Oct. 2022, and the repercussions—including the different scenarios that could unfold.
Preferring to speak anonymously given the sensitivity of the topic, a regional political source indicated to Amwaj.media that the reported meetings in Lebanon late last year were a starting point. In subsequent talks, the source claimed, “A deeper and more candid” dialogue brought together Sadrist leaders and prominent Iranian figures in the Lebanese capital. The two sides are said to have discussed political developments and possible means to restore the political equilibrium in Iraq that was lost when the Coordination Framework parties pushed for the formation of a government without Sadr and also moved forward with measures seen as provocative by some political forces.
The notion that the discussions in Beirut have supposedly led to a joint conclusion that it is vital for Iraq to regain its lost political balance was echoed by an informed Iraqi source in Baghdad. The source told Amwaj.media that there is an emerging understanding of the need for arm-twisting and exclusionary tactics to be abandoned—without going against Sadr’s concept of a “national majority government.” If accurate, this would be a major step forward since the Coordination Framework parties interpreted the Sadrist alliance-building after the Oct. 2021 polls as an attempt to exclude them. At the same time, the first informed political source in Iraq said the efforts of the “wise” in Tehran to promote “dialogue and reconciliation” with Sadr are being complicated by provocative reactions by some players.
The head of the Sadrist Movement has yet to express a clear stance on the anti-establishment protests in Iran that first erupted last September. But in early November, he slammed attacks on turbaned clerics amid the unrest. His condemnation has been interpreted by some observers as indicating that he wishes to maintain a certain distance from the political establishment in Tehran. Ultimately, he has his own calculations in managing and fine-tuning his feud with the Iranians. Sadr’s interests and vision call for preserving a broad Muslim identity while insisting that each country maintain its own identity, without exporting or imposing its culture on other countries.
Speaking on condition of anonymity, a prominent Sadrist figure told Amwaj.media that Sadr wants Iran to respect Iraq’s specificities and deal with it as an independent and sovereign state rather than a country on which agendas can be imposed. It therefore should come as no surprise that Iran’s ambassador to Baghdad recently came under fire by Sadrists for stating that he wished that Sadr would have “consulted” with him on the decision to withdraw from the political process.
The key questions ahead pertain to the mechanisms required to restore the relative political equilibrium in Iraq, and the nature of the Sadrist Movement’s return to the political process. There are many scenarios that could unfold, particularly considering the internal disputes within the Coordination Framework.
Another important dimension to consider is Sudani’s political positioning. His lack of action to restore balance to the political process is greatly due to pressure from some Coordination Framework leaders who are keen on excluding Sadr. Facing ambiguous stances rather than explicit support from regional and international players—with the notable exception of Iran—the prime minister is vulnerable to pressure from his domestic supporters.
To achieve its aims without entering the formal political process, the Sadrist Movement could resort to street politics with the backing of actors who are also rivals of the Coordination Framework and the Sudani government. Iran and its Iraqi allies fear this worst-case scenario the most, as they do not want Sudani’s premiership to end prematurely and under pressure, like that of Adil Abdul Mahdi (2018–20).
There is another scenario to consider, too: for Sadr’s anticipated trip to Iran to be followed by a new political formula among Iraqi parties with the help of Iranian mediation. This would leave no actor embarrassed or broken by the other, and most of all, help Iran achieve its aim to reconcile the divided ‘Shiite House’ in Iraqi politics.
While the balloon was, the Pentagon said, “traveling at an altitude well above commercial air traffic” and did “not present a military or physical threat to people on the ground”, its presence sparked outrage.
Former US President Donald Trump was among those calling for the US military to shoot it down.
On Friday, China finally acknowledged the balloon was its property, saying that it was a civilian airship used for meteorological research, which deviated from its route because of bad weather.
A statement from China’s Foreign Ministry said that it regretted the incident and would work with the US to resolve the issue.
However, the state department official said that while the US acknowledged China’s claim about the balloon’s purpose, it stood by its assessment that it was being used for surveillance.
Another trip by Mr Blinken to China would be planned “at the earliest opportunity”, the official said, adding that Washington planned to maintain “open lines of communication” about the incident.
Mr Blinken had been expected to visit China on 5 and 6 February.
A US official quoted by the Associated Press said that the decision to abruptly halt the trip was made by Mr Blinken and President Joe Biden.
Mr Biden did not take questions about the balloon following remarks about the US economy on Friday morning.
According to US officials, the balloon flew over Alaska and Canada before appearing in the US state of Montana, which is home to a number of sensitive military missile sites.
Although fighter planes were alerted, the US decided not to shoot the object down due to the dangers of falling debris, officials said.
Several Republican lawmakers – as well as former President Donald Trump – have criticised the decision and urged the US to down the balloon.
“Shoot down the balloon,” Mr Trump said in a short message on his Truth Social social media platform.
While US officials have not commented on the size and details of the suspected spy balloon, Chinese officials have previously publicly expressed interest in the military and intelligence-gathering potential of balloons.
“Technological advances have opened a new door for the use of balloons,” an article in the military-run Liberation Army Daily said last year.
In 2022, Taiwan’s defence ministry said it detected Chinese balloons over its territory.
While the nuclear threat may or may not materialise, the Doomsday Clock has in recent years also been tracking the climate crisis with growing alarm, writes John Gibbons
Wed, 01 Feb, 2023 – 17:56
Today, a century and a half later, Charles Dickens’s opening lines from ‘A Tale of Two Cities’ still ring true. There are more people alive now than at any other time in history, and more are living free from the shackles of abject poverty, hunger, disease and early death than ever before.
By many objective measures, especially for those of us in prosperous, stable countries like Ireland, these are indeed the very best of times. We enjoy levels of personal freedom, material wealth, comfort and physical well-being almost unimaginable even to our grandparents’ generation.
Paradoxically, we also live in an age of foolishness and incredulity that threatens to propel humanity into an endless winter of despair. Last week, the panel of international scientists who maintain the so-called Doomsday Clock moved its hands ominously forward, to 90 seconds to midnight.
The clock was first established by the Bulletin of the Atomic Scientists in 1947, just two years after the devastating killing power of nuclear weapons was first unleashed at Hiroshima and Nagasaki.
For the first time in history, humanity was now a planetary force, capable of triggering a global catastrophe on a par with an asteroid strike. Such god-like power should come with commensurate responsibility, and the Doomsday Clock was set up as a stark visual reminder of the limits of our power.
In the three-quarters of a century since then, the clock has ticked back and forth in synch with the ebb and flow of world events. In 1953, it moved to two minutes to midnight following the test detonation of the devastatingly powerful hydrogen bomb. That had, in the assessment of the scientific panel, been our most dangerous moment — until now.
Russia’s brutal invasion of Ukraine is the principal reason for the unprecedented pessimism in 2023. Vladimir Putin’s bellicose rhetoric since then has included repeated thinly veiled threats to use nuclear weapons. This was repeated in recent days by former Russian president, Dmitry Medvedev, who warned darkly that his country’s defeat in Ukraine could lead to a nuclear strike by Russia.
The Russian invasion has also severely damaged international efforts at nuclear non-proliferation. Ukraine handed over its entire Soviet-era nuclear arsenal to the Russian Federation under a 1994 treaty signed in Budapest in which Russia, the US and Britain solemnly agreed to “respect the independence and sovereignty and the existing borders of Ukraine”.
Many in Ukraine and beyond are wondering if the only true deterrent to an aggressive neighbour is to have your own nuclear weapons. Russia’s recklessness extends to what the scientists call its “violation of international protocols and risking of the widespread release of radioactive materials” in its capture of the nuclear reactor sites at Zaporizhzhia and Chernobyl.
With the heightened risk of an intentional or accidental nuclear incident, “the possibility that the conflict could spin out of anyone’s control remains high”, the report warned.
While the nuclear risk remains binary – it may or may not happen – the Doomsday Clock has in recent years also been tracking the climate crisis with growing alarm.
The war in Ukraine has occurred at the worst possible moment as it “undermines global efforts to combat climate change…and has led to expanded investment in natural gas exactly when such investment should have been shrinking”, the report warned.
As a uniquely global crisis, effective efforts to tackle the climate emergency “require faith in multilateral governance”, which the scientists say has been weakened by the “geopolitical fissure opened by the invasion of Ukraine”.
Having stumbled in the past into dangerous nuclear stand-offs, the hope is that once again sense will prevail and a nuclear disaster will be avoided. However, the climate crisis is an altogether different threat. Here, for a cataclysm to unfold simply requires that the international community fails to act in line with the science.
Division, disinformation, social media-fuelled polarisation and the resurgence of political extremism all undermine our faith in science and reason at the very moment in human history when we need to come together like never before.
Southern Asia’s strategic stability is getting harder to manage because of geopolitical changes and evolving military technologies.
Wednesday, February 1, 2023, by Daniel Markey, Ph.D.
Analysis and Commentary
Southern Asia — India, Pakistan and China — is the only place on earth where three nuclear-armed states have recently engaged in violent confrontations along their contested borders. As a USIP senior study group report concluded last year, the problem of nuclear stability in Southern Asia is getting harder to manage because of geopolitical changes, such as rising India-China border tensions, as well as evolving military technologies, including growing nuclear arsenals and more capable delivery systems. Unfortunately, in the time since that senior study group completed its work, little has happened to revise its worrisome conclusion or to prevent the most likely triggering causes of a nuclearized crisis in Southern Asia. To the contrary, there are some good reasons to fear that the situation in Southern Asia has even deteriorated over the past year.
To be clear, just because states invest in nuclear weapons and delivery systems does not mean that a crisis or war is imminent. Leaders in China, India and Pakistan have always viewed their nuclear arsenals primarily as tools of deterrence, less for practical warfighting than to convince adversaries of the extraordinary costs that a war would risk. Nor do any of the region’s leaders take their nuclear programs lightly; all feel tremendous incentives to keep their arsenals safe and secure and to build systems of command, control and communications intended to prevent accidents, unauthorized use or theft.
Nevertheless, because even a single nuclear detonation could be massively destructive, U.S. policymakers have an obligation not to accept these sorts of logical assurances passively or uncritically. Accidents do happen. India’s accidental launch of a Brahmos missile into Pakistan last year proved that point perfectly. No matter how well designed, nuclear systems are complicated and involve the potential for human or technical error. When something does go wrong, overreaction by opposing forces is less likely when they have a greater degree of confidence in, and knowledge of, the other side. Reliable and secure communications — in the form of hotlines — can help, but only to the point that they are actually used in a timely manner. Apparently, India failed to do so during the Brahmos incident.
Fear, hatred and other emotions can cloud human judgment, especially in the heat of a crisis when information is imperfect and communication difficult. Reflecting on his own experience of crisis management in Southern Asia, former secretary of state Mike Pompeo recently wrote that he does “not think the world properly knows just how close the India-Pakistan rivalry came to spilling over into a nuclear conflagration in February 2019.” The question — for Pompeo and current U.S. policymakers — is what more they are doing now to prepare for the next crisis.
Fortunately, a February 2021 cease-fire agreement between India and Pakistan holds, supplemented at times by a widely rumored “backchannel” dialogue between New Delhi and Islamabad. Then again, it is a measure of the low level of our collective expectations for India-Pakistan relations that the bare agreement not to actively shoot artillery shells across their border and to participate in sporadic, secret talks is considered progress.
The Terrorism Tinderbox
A return to serious India-Pakistan crisis could be just one terrorist attack away. Not even when Pakistan suffered devastating floods last summer could leaders in Islamabad and New Delhi create sufficient political space to open basic commodity trade. Hostile rhetoric is high, and there is reason to anticipate it could get far worse over the coming year as national leaders on both sides prepare for elections. Indian Prime Minister Narendra Modi has learned he can whip up domestic political support from tough talk and cross-border retaliation. In Pakistan, neither civilian nor army leaders can afford to look weak in the face of Indian attacks, especially when they face jingoistic (if transparently opportunistic) criticism from ousted prime minister Imran Khan.
The prospect of anti-Indian terrorism is also growing. The Taliban regime in Afghanistan shows no greater commitment to eliminating terrorist safe havens than it did in the 1990s, and Pakistan’s will (and capacity) for keeping a lid on cross-border terrorism will be tested as it faces heightened security and economic pressures at home. In addition, India’s repression of its Muslim minority community, especially in Kashmir, is simultaneously a reaction to past anti-state militancy and nearly guaranteed to inspire new acts of violence.
No matter the specific cause or circumstances of anti-Indian militancy, Modi’s government is likely to attribute culpability to Pakistan. That, in turn, raises the potential for an emotionally charged crisis that could, under the wrong circumstances, spiral into another India-Pakistan war.
Nor can Pakistan afford only to worry about its border with India. Relations between Islamabad and Kabul have deteriorated drastically ever since the Taliban swept back into power. Rather than controlling Afghanistan through its favored militant proxies, Pakistan is suffering a surge in violence on its own soil, most recently the devastating bombing of a police mosque in Peshawar claimed by the anti-state Tehreek-e-Taliban Pakistan. Such violence, along with national political turmoil, environmental calamity and economic crisis, will raise concerns among some in the United States about threats to the safety and security of Pakistan’s nuclear enterprise. Sadly, that will probably lead Pakistan’s Strategic Plans Division — the guardians of its nuclear arsenal — and other Pakistani military leaders to fear a phantom threat of American military intervention rather than to address actual causes of the Pakistani state’s fragility.
India-China Tensions Rise
Events along the contested border between India and China hardly inspire confidence that New Delhi and Beijing have found a path back to normal relations after their bloody border skirmishes of 2020. To the contrary, the prospects of rapid military escalation have grown, principally because both sides have positioned greater numbers of more lethal forces close to the border. Before 2020, relatively small, unarmed Chinese and Indian patrols routinely risked coming into contact as they pressed territorial claims on the un-demarcated border. This was dangerous, but extremely unlikely to escalate rapidly into a serious military encounter. In early December 2022 hundreds of Chinese troops attacked an Indian camp in what could not possibly have been an unplanned operation. With tens of thousands of troops stationed not far away, conventional military escalation is far more plausible than it was just a few years ago.
Although there is still a long way between remote mountain warfare and a nuclear crisis, at least some Indian security officials anticipate a future of more routine border violence as troops on both sides become more entrenched. China and India are also jockeying in the Indian Ocean, where China’s increasing naval presence and influence with India’s smaller neighbors feed Indian insecurities and encourage New Delhi to seek countervailing defense ties with Quad partners (Japan, Australia and the United States) as well as other naval powers, like France.
Against this backdrop of tensions, China’s growing nuclear, missile and surveillance capabilities will look more threatening to Indian nuclear defense planners. New Delhi may even come to fear that China is developing a first strike so devastating that it would effectively eliminate India’s retaliatory response and, as a consequence, diminish the threat of its nuclear deterrent. In response, India could seek to demonstrate that it has thermonuclear weapons capable of destroying Chinese cities in one blow as well as more nuclear submarines capable of evading China’s first strike.
A ‘Cascading Security Dilemma’
Not only would those Indian moves raise serious policy questions for the United States, but they would demonstrate the region’s “cascading security dilemma,” by which military capabilities intended to deter one adversary tend to inspire dangerous insecurities in another. When India arms itself to deter China, Pakistan perceives new threats from India and will likely pursue enhanced capabilities of its own. In a worst-case scenario, Southern Asia could be entering an accelerated nuclear arms race in which uneven waves of new investments in capabilities and delivery systems will alter perceptions of deterrence and stability in dangerously unpredictable ways.
All told, U.S. policymakers have at least as many reasons for concern about strategic stability in Southern Asia as when USIP launched its report last spring. Old triggers for escalation, like terrorist attacks against India, persist, while newer storms are brewing. As that earlier report explained, Washington cannot solve Southern Asia’s troubles alone, but neither can it afford to stand aloof or to downplay their seriousness.
This story is part of The Salt Lake Tribune’s ongoing commitment to identify solutions to Utah’s biggest challenges through the work of the Innovation Lab.
Utah FORGE, the nation’s most advanced effort to produce energy from the earth’s core, reached a significant milestone this week when it successfully pushed water through an underground reservoir 1½ miles below the surface.
The University of Utah-managed project north of Milford in Beaver County, funded by more than $200 million from the U.S. Department of Energy, is DOE’s foremost experiment in what is termed “enhanced geothermal systems.” Geothermal energy from hot springs has been captured to produce electricity for decades, but that can only be used where hot springs are available.
Enhanced geothermal systems require drilling deep into hot, dry rock to pull some of that heat out. If such systems can be made practical and affordable, there are locations worldwide where the geology is favorable. Ten miles north of Milford is one of the best.
Utah FORGE researchers drilled two deep wells and then broke up the rock between the two wells using fracking technology from the oil and gas industry.
After fracturing rock from both wells, the scientists for the first time this week were able to push water down one well into the fractures and see water come up through the second well.
“We do have some connectivity between the two wells,” said John McLennan, a University of Utah chemical engineering professor and a co-principal investigator of Utah FORGE, during a tour of the site Wednesday.
The news brings the project one step closer to the goal of providing a continuous flow of hot water that can produce electricity.
Utah FORGE sits in an area of Beaver County that has become a hotbed of renewable energy. Farther up the hill is PacifiCorp’s Blundell plant, a conventional geothermal project fed by a hot spring. Surrounding the site are solar farms and windmills, and a private company, Fervo Energy, is also pursuing an enhanced geothermal project a short distance away. Even the nearby pig farms produce energy by extracting methane from the waste.
“I don’t think there’s another place like this in the world,” said Joseph N. Moore, a U of U civil and environmental engineering professor who oversees Utah FORGE.
The site was chosen six years ago after a national competition because it had lots of dry granite sitting above a large pool of magma. Nearby are long-dead volcanoes and Roosevelt Hot Springs. It also has access to water that, because of its mineral content, can’t be consumed by humans or animals but can be used to pump into the ground and capture heat.
Moore describes FORGE as a “de-risking” laboratory, and all experimental data is made public in an effort to encourage more development. There are no large-scale working enhanced geothermal systems, and that is because of huge costs involved in drilling into granite with diamond-tipped drill rigs that cost $70,000 a day to operate.
Because the temperatures exceed those normally seen in an oil and gas drilling project, Utah FORGE also has to develop new equipment and materials that can handle temperatures reaching more than 400 degrees Fahrenheit. The heat has destroyed the plugs and seals normally used in drilling projects.
“Fracking” is shorthand for hydraulic fracturing, a process that has dramatically increased U.S. production of oil and gas, turning the nation into an oil and gas exporter instead of importer. Fracking means pumping high-pressure fluids into wells to fracture the rock. The drilling crew working on the FORGE site this week will be moving to an oil and gas drilling operation elsewhere in Utah in a few weeks.
But fracking for heat is substantially cleaner than oil and gas fracking, which produces massive amounts of wastewater and can lubricate underground faults, causing earthquakes. Seismicity is closely monitored at the site, but only tiny movements have been recorded.
Fracking is especially fruitful when combined with another recent technology, horizontal drilling. Instead of just going straight down, drill rigs can now go down and turn sideways, meaning a well can pull more oil and gas from a shallow but wide underground deposit.
Both technologies are in use at Utah FORGE. The two wells go down 5,000 feet, and then bend 65 degrees and continue down to 8,500 feet. The whole length of the wells is about 11,000 feet. The “reservoir” is the extensive system of cracks produced from the fracking but still isolated from any outside water by surrounding rock. And the water pumped through can be reused continuously in a closed-loop system.
De-risking has also meant learning how to drill faster. They have tested various drill bits, and are now chewing through granite five times faster than they were when they started, McLennan said.
In this week’s test, the scientists pushed a few thousand gallons of water in. For the system to be commercially viable, they will need to get to hundreds of gallons per minute, Moore said. Current DOE funding continues through 2025, and Moore expects they will apply for more to continue after that.
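To put those flow-rate targets in rough perspective, here is a minimal back-of-envelope sketch (not a Utah FORGE calculation) of how much heat a water loop carries at a given flow rate and temperature drop. The 150 C drop between the hot production well and the reinjection well, and the specific flow rates, are illustrative assumptions; the article only says the project needs to reach hundreds of gallons per minute.

```python
# Back-of-envelope heat carried by a geothermal water loop.
# The flow rates and the 150 C temperature drop are illustrative
# assumptions, not Utah FORGE measurements.

GALLON_LITERS = 3.785   # liters per US gallon
WATER_KG_PER_L = 1.0    # approximate density of water, kg per liter
WATER_CP = 4186.0       # specific heat of water, J/(kg*K)

def thermal_power_mw(gallons_per_minute: float, delta_t_c: float) -> float:
    """Thermal power carried by the loop, in megawatts."""
    kg_per_second = gallons_per_minute * GALLON_LITERS * WATER_KG_PER_L / 60.0
    watts = kg_per_second * WATER_CP * delta_t_c
    return watts / 1e6

if __name__ == "__main__":
    for gpm in (30, 300, 1000):
        print(f"{gpm:5d} gal/min -> {thermal_power_mw(gpm, 150.0):5.1f} MW thermal")
```

At a few hundred gallons per minute and a large temperature drop, the loop carries on the order of ten megawatts of heat, and a power plant can convert only a fraction of that into electricity, which is why the flow-rate target matters so much.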
|
Unpeeling history: How an orange debunked Zionism | Omar Hammuda | ISLAM21C | 8 Jan 2023
In this delectable piece, Omar Hammuda shows how the history of the simple yet emblematic Palestinian Jaffa orange reveals the myths of Zionism
There is more to an orange than what meets the eye! Recent online discussions reignited the lie that Palestine was a barren and uninhabited land before the emergence of the Zionist colony. While this article touches on Palestine’s rich agricultural history, it will show how the simple orange alone can debunk this concocted myth.
An introduction to the Zionist propaganda machine
From the very early stages of Zionism to the present day, Zionists and their sympathisers have promulgated baseless claims that one of earth’s most historically significant lands (Palestine) was, once upon a time, romantically empty and destitute.
This politically convenient rhetoric holds that it wasn’t until Jewish migration and subsequent settlement in the land that Palestine flourished and its ‘arid’ deserts were made to bloom. To facilitate such outright lies and misinformation, the Zionists adopted a phrase now widely cited in Zionist literature:
“A land without a people for a people without a land.”
While the historicity of the phrase appears to be a matter of contention, this slogan persists. In reality, it was never intended to be literal, but rather, purely ideological. It was invented to provide a legal and moral basis for the seizure of the land, the murder and displacement of the native population, and the establishment of a Zionist colony.
An important feature of early Zionist political discourse was its portrayal of the native population of Palestine as no more than a marginal segment of people who existed only as scattered individuals or, at best, as occasional communities.
This arrogance and disdain towards the native population becomes exceedingly clear when reading the pronouncements of the early Zionists who wielded this slogan.
After Palestine had been promised to the Zionist movement by the Balfour Declaration, British author and staunch Zionist Israel Zangwill wrote,
“…for there is no Arab people living in intimate fusion with the country, utilising its resources and stamping it with a characteristic impress; there is at best an Arab encampment.”
In a similar vein, when Chaim Weizmann – the Zionist leader who would later become Israel’s first president – was asked about the Palestinian Arabs and the Balfour Declaration, he condescendingly mirrored the racist attitudes of his European overlords and responded by stating that,
“The British told us that there are some hundred thousand Negroes (Kushim in Hebrew) and for those, there is no value.”
In other words, these Zionist leaders did not mean that there were no people in Palestine in terms of territorial emptiness, but rather that there were no people who were worth considering, as they did not constitute a ‘developed’ nation in the most Eurocentric and arbitrary sense of the term.
Apparently, in the minds of these colonialists, this automatically disqualified its inhabitants from being its rightful owners as they did not transform the land into a fully functioning ‘modern’ nation-state.
For a group of colonisers who have cried wolf for decades, the irony here is extremely stark. The influence and the internalisation of racist European sentiment permeates the Zionist mind to this very day. It is this very attitude that would later form the foundations of the nascent ideological and political movement known as Zionism.
Palestine as a global commercial hub
All it takes is a brief glance at Palestine’s agricultural and economic history to entirely dispel these constructed myths of a dead and desolate land.
As one of the most significant land bridges in human history, Palestine has always served as an important producer of key agricultural commodities across the centuries.
In a detailed description of its land and fertility, 10th century geographer, Shams al-Din al-Maqdisi, testified to Palestine’s agricultural activity and manufactured goods:
“Within the Province of Palestine may be found gathered together 36 products that are not found thus united in any other land … From Palestine comes olives, dried figs, raisins, the carob-fruit, stuffs of mixed silk and cotton, soap, and kerchiefs.”
As far as cotton is concerned, between the 10th and 13th centuries CE, the fibre formed the bulk of exports that found their way to European shores.
In fact, its value as a global commodity is clearly reflected in the account of al-Maqdisi, who stressed the importance of Palestine’s cotton production.
Even the British Consul in Jerusalem, James Finn, could not help but praise the cotton plantations that he witnessed during his travels in Palestine:
“The cotton plantations are beautifully clean and orderly, and the fields from which grain crops had been reaped, are well defined and carefully cleaned.”
Cotton cultivation and international trade continued all throughout the Mamluk period and eventually reached its peak during the late Ottoman period. And it is worth mentioning that cotton – a commodity long cultivated in Palestine – formed the backbone of the European industrial revolution, centuries later. One can even go as far as to say that as a source of raw materials, Palestine played a vital role in driving the European industrial revolution.
As of today, Sūq al-Qaṭṭānīn (Market of the Cotton Merchants), which is located on the west side of the Haram al-Sharif, continues to serve as a living reminder of Palestine’s historically monumental cotton industry.
The mighty Jaffa orange
Throughout the ages, the Jaffa orange, otherwise known by its Arabic name, the Shamouti or Abu Surrah (navel), eventually came to outpace all other commodities.
The Jaffa orange – which later emerged as a global brand – was a new variety of orange developed by Palestinian farmers during the first half of the 19th century. The fruit originated as a mutation on a Baladi tree in a city near Jaffa.
With the decline of its cotton industry, Palestine came to hold a monopoly over the orange trade, and European economic interest in Jaffa grew in the aftermath of the Crimean War (1853–1856). In turn, this led to the expansion of orange groves in the area surrounding the ancient port city.
This was reflected in the exponential growth of orange groves in Jaffa’s vicinities, thanks to their “good quality”, namely the distinct thick peel of the Shamouti that provided it with more protection from disease and rot than its other Mediterranean competitors.
These groves produced a staggering 33 million oranges per year – one sixth of which were consumed locally – with the rest exported on Greek ships. By 1880, Europe had become the leading destination for orange exports.
A 1902 study produced by two Zionist officials documented the impressive growth of the orange industry and its international reach, yet it nevertheless described traditional Palestinian cultivation methods as “primitive”.
Embarrassingly, an in-depth discussion of the costs associated with Palestinian and European proprietors showed that Palestinian cultivation methods were much more cost-efficient than the so-called ‘modern’ Zionist-European ones that were introduced two decades later by Zionist agronomist Yitzhak Elazari-Volcani.
However, the notion of a “primitive” Palestinian society is still echoed by clueless contemporary Israeli ‘academics’ such as Benny Morris, whose analysis of Palestinian agriculture is teeming with terms that evoke backwardness and sub-normality.
As has been demonstrated thus far, the least that is required of these militant Zionist pseudo-intellectuals is to take a brief glance at the historical record in order to learn that Palestinian farmers were the pioneers of the Jaffa orange industry. As a matter of fact, the sub-par nature of early Zionist agriculture was bleakly described by one of the leading pre-state Zionist writers, Ahad Ha’am, who said,
“There are now about ten [Jewish] colonies standing for some years, and no one of them is able to support itself … wherever I strived to look, I did not manage to see even one man living solely from the fruit of his land.”
“Why then? The real answer, that any clever man in Palestine knows, is that the first colonists brought with them substantial idealism, but they all lack the qualifications necessary for agriculture and cannot be simple farmers.”
Then, the same cannot be said about the entrepreneurship demonstrated by the Palestinians, who singlehandedly transformed the Jaffa port into a thriving economic, social, and cultural centre through the production and export of its oranges.
Until the very end of the 19th century, this industry was entirely exclusive to the native Palestinians, who had dominated the orange trade since its inception in the 1870s. The freeloading Zionist colonists only began to join the industry at the turn of the 20th century, beginning in the later-depopulated Palestinian village of Fajja – today a part of the Israeli city of Petah Tikva.
Despite the use of Western agricultural methods in their colonies, they nevertheless relied on the knowledge of Palestinian farmers.
The first few years of the 20th century saw the economic growth of many Zionist colonies, powered by the citrus industry. Then, during the Mandate period, the British sought to facilitate co-operation between the Palestinian and Jewish sectors of the citrus industry.
Finally, with the beginning of the Nakba, Palestinian-owned orange groves were completely usurped and annexed as part of the newly established terrorist state of ‘Israel’. Many of the industry’s leading Palestinians were kicked out of their lands.
Regardless of whether or not Palestinian contemporaries would have thought so at the time, one can interpret the gradual weakening of the Palestinians’ grip on the citrus industry as a stepping stone towards the appropriation of the Jaffa orange as the national symbol of the Zionist state.
Of course, this reality eventually manifested itself in the wake of the Nakba, in which the Jaffa orange became no more than a faint memory living in the minds of Palestinians.
N. Masalha (2007) The Bible and Zionism: Invented Traditions, Archaeology and Post-Colonialism in Palestine – Israel, Volume 1. Bloomsbury Academic, p. 45
Ibid; G. Kramer (2008) A History of Palestine: From the Ottoman Conquest to the Founding State of Israel. Princeton University Press, p. 165-6
Ibid, p. 173
B. Doumani (1995) Rediscovering Palestine: Merchants and Peasants in Jabal Nablus, 1700-1900. University of California Press, p. 97
N. Masalha (2022) Palestine: A Four Thousand Year History. Bloomsbury Publishing, p. 230
L. Kamel (2015) Imperial Perceptions of Palestine: British Influence and Power in Late Ottoman Times. Bloomsbury Academic, p. 78
N. Masalha (2022), op. cit., p. 230
H. Gerber (1982) Modernization in Nineteenth-Century Palestine: The Role of Foreign Trade. Taylor & Francis, p. 251
N. Masalha (2022), op. cit., p. 230
C. Issawi (1982) An Economic History of the Middle East and North Africa. Columbia University Press, p. 127; C. Ward, et al. (2021) The History of Water in the Land Once Called Palestine: Scarcity, Conflict and Loss in Middle East Water Resources. Bloomsbury Publishing, p. 26
M. LeVine (2005) Overthrowing Geography: Jaffa, Tel-Aviv, and the Struggle for Palestine, 1880 – 1948. University of California Press, p. 34
Ibid, p. 35
M. Kabha & N. Karlinsky (2021) The Lost Orchard: The Palestinian-Arab Citrus Industry, 1850 – 1950. Syracuse University Press, p. 12
M. Levine (2005), op. cit., p. 35
Ibid, p. 34
B. Morris (2004) The Birth of the Palestinian Refugee Problem Revisited. Cambridge University Press, pp. 17-20
H. Gerber (1982) Zionism, Orientalism and the Palestinians in the Journal of Palestine Studies. University of California Press, p. 33
Ibid, pp. 33-4
M. Kabha & N. Karlinsky (2021), op. cit., p. 18
Ibid, p. 19
N. Karlinsky (2000) California Dreaming: Adapting the “California Model” to the Jewish Citrus Industry in Palestine, 1917 – 1939 in Israel Studies. Indiana University Press, p. 26
N. Karlinsky (2012) California Dreaming: Ideology, Society, and Technology in the Citrus Industry of Palestine, 1890-1939. State University of New York Press, p. 56
M. Kabha & N. Karlinsky (2021), op. cit., pp. 75-101
Ibid, p. 118
Open letter to Tony Greenstein, obsessive hunter of pro-Palestinian activists | Redress Information & Analysis | 9 Jan 2023
On 6 January Tony Greenstein published a massive attack on Pete Gregson, the chair of One Democratic Palestine, on both his blogs (here and here). Below is Gregson’s response:
Tony, contrary to your assertion, I am not an anti-Semite. Just like yourself, I am an anti-Zionist.
You misrepresent me and even contradict yourself. It feels as if you have never heard of the Campaign Against Antisemitism, the Community Security Trust, the Jewish Labour Movement (JLM), Hope not Hate (a misnomer), Labour Against Antisemitism, the Board of Deputies of British Jews, the Jewish Leadership Council, the three Jewish newspapers, UK Lawyers for Israel, Friends of Israel, Britain-Israel Communications and Research Centre (BICOM) – and, of course, the Israeli government. These are the agencies that are run by Zionists who are also, interestingly, run by people who call themselves Jews – and they are the people who brought down Jeremy Corbyn because of his support for Palestine. But according to your twisted logic, it was the capitalists who destroyed him because he was a lefty and the Zionist Jews were just their pawns. You are oblivious to anything but your pseudo-Marxist analysis.
You published a lie about me on your blogs. You said “Peter has to his credit expulsion from a number of organisations. I initiated the first from Labour Against the Witchhunt when he put up a petition on his website linking to an article by holocaust denier Nick Kollerstrom who had written a ‘literature review’ on ‘‘The Auschwitz gas chamber illusion’.”
This is quite untrue. This is what I actually said four years ago: “Also see the article ‘UK’s Labour Antisemitism Split’ by Ian Fantom here. Ian organised the Keep Talking group that filmed my talk in the last but one update. (But I must say I think the Kollerstrom article he mentions is quite toxic.)”
“Hypocrisy, truth-twisting and witch-hunting seem to be your specialty.”
Because you are so obsessed by guilt-by-association – you hate Fantom and you are furious that I referenced him – you persuaded 60 per cent of your pals at Labour Against the Witchhunt to hunt me out; this was a body you had basically set up, after all.
But you can see from the above that it’s not me that gives out links to Kollerstrom’s nonsense about Auschwitz – it’s you! Hypocrisy, truth-twisting and witch-hunting seem to be your specialty.
I never said the Zionists supported the actual gassing of Jews. I said they supported the holocaust by stopping any boycott of Nazi Germany, by making financial deals with the Nazis, by choosing not to warn Jews about the gas chambers, by thwarting efforts to save Jews, by being leading lights in the Judenrat, and by saying that Jewish blood must be spilt if Jews were to get their hands on Palestine. This is all in your book.
“… you throw slurs at pro-Palestine activists because the Zionists do: they stab from the front, you stab from the back.”
The problem is that you’ve become a willing pawn for the Zionists. You live in an “anti-Semitic echo-chamber”, seeing anti-Semitism where there is none, attacking those whom you should be supporting, so keen are you to find Jew-hate. Like the Zionists, like so many “victim-obsessed” people who think themselves Jewish, not much different in that respect to David Baddiel, whom you recently (correctly) took to task. I make observations about many influential Jews in the UK, but I do not harbour any prejudice against Jews in general. You, however, see anti-Semitism everywhere. So, you throw slurs at pro-Palestine activists because the Zionists do: they stab from the front, you stab from the back. The sad thing is that you yourself have not suffered in the storm of imagined Jew-hate that has beset this country. (Yes, you were kicked out of Labour, but not because of bogus anti-Semitism.) I was forced out of the Labour Party, booted out of my union, have been suspended from our local scout group, been ostracised from many causes I care about because I have criticised Israel. And the people who have done this to me, folk like Rea Wolfson, who calls herself Jewish, were behind these attacks on my freedom of speech (I know it was Wolfson – a GMB senior officer was the source). I’ve seen good people brought low, like Chris Williamson and Corbyn, by the same Zionists that have attacked me and who have beaten our politicians, the BBC and the rest of the media into submission, all on the back of our collective holocaust guilt.
We need to be able to have an intelligent discussion about Zionist power in the UK. In 2011 Jeremy Newmark implored, in the “Big Tent for Israel” event, Jews and anybody supporting Israel (i.e. Zionists) to join trade unions and their local political party to mobilise. Then, as soon as Corbyn was elected leader in 2015, Newmark rebadged a virtually redundant Paole Zion, an affiliate body of the Labour Party from the party’s early years, into the JLM and set about recruiting Jews and others who supported Israel. This group was then able to command the debate from virtually inside the party.
Many JLM members secured posts in Labour HQ and in the trade unions. Zionist activists then set about convincing major bodies that Jews were at risk and that the only way they could be protected was by adopting the International Holocaust Remembrance Alliance (IHRA) definition of anti-Semitism. From 2015 to 2018, they largely succeeded. This was achieved through cultivating a hysteria in the media that Jews felt vulnerable without it – Rhea Wolfson, in the Scotsman in July 2018, for example. Wolfson served on Labour’s National Executive Committee (NEC) and she and Jon Lansman were key in getting Labour to adopt the IHRA definition (more on this here).
Did you never notice that up until 2012 the BBC and the rest of the mainstream media were increasingly fair in their coverage of the Palestinians, a trend that has since been sharply reversed? Did you watch The Promise, broadcast by Channel 4 in February 2011, a series about a young woman who goes to present-day Israel and Palestine, determined to find out about her soldier grandfather’s involvement in the final years of Palestine under the British mandate? Can you imagine such a TV series being made now? No, of course not. Jeremy Newmark introduces his 2011 “Big Tent for Israel” event by arguing for the urgent need to stop criticisms of Israel, for Jews throughout the UK to come to the defence of their ‘homeland’. And he has spectacularly succeeded in this, with a great deal of help from the Israeli embassy, as the TV series The Lobby, proves.
Are you aware that 38 per cent of the BBC’s governing body are now declared pro-Israel or pro-Zionists? The BBC’s political editor was in place at exactly the right time to do the maximum damage. Corbyn was repeatedly skewered by Laura Kuensberg, one of the Glasgow Zionists; she brought him down with help from fellow Zionist, Jonathan Freedland at the Guardian. Kuensberg and other Zionists, calling themselves Jews, rose to their positions of influence in the six years following the “Big Tent for Israel” event. Why on earth would a toff like Rhea Wolfson want to work in a trade union like the GMB, yet she was there at the top in 2018? What on earth was a property magnate like Lansman doing running Momentum? These are the people that drove Labour to adopt the IHRA definition – and they weren’t doing it because they loved the ‘Establishment’. They were doing it because they loved Israel. And they call themselves Jews, too.
You claim I think Jews act as one homogenous mass. I don’t. But 70 per cent of those calling themselves Jews in the UK support Israel. And Israel supports these people. The Lobby makes clear who was directing the attacks on British politicians who were deemed pro-Palestinian. It wasn’t MI5 – it was the Israeli embassy, in league with the Zionists agencies I listed earlier. The fact of the Big Tent for Israel event undermines everything you say. Over the past 10 years there has been a massively successful campaign to entrench Zionism in Westminster – we now have 80 per cent of the Tories and 40 per cent of Labour MPs who are Friends of Israel.
“Your abuse of the word ‘anti-Semitism’ has, alongside the Zionists, made the term increasingly meaningless… You now use it against me because I accuse Zionists of supporting the Nazis, yet you know that the Haavara Agreement showed Nazis and Zionists working together very well.”
Your abuse of the word ‘anti-Semitism’ has, alongside the Zionists, made the term increasingly meaningless. You don’t accept the Oxford English Dictionary definition of the term – “hostility to or prejudice against Jewish people”. You now use it against me because I accuse Zionists of supporting the Nazis, yet you know that the Haavara Agreement showed Nazis and Zionists working together very well. The Zionists did nothing to avert the holocaust and declared that “only through the bloodshed of Jews will we get the land, because when the victors will divide the land after the war, then they will give us [the Zionists] the land in return for Jewish bloodshed, and that is what we want”. In my book, this counts as quiet support for the holocaust. They were content to see Jews gassed if they were not heading towards Palestine.
In your self-righteous ranting, you attack campaigners like myself as anti-Semites – but I don’t show “hostility to or prejudice against Jewish people”, I merely observe that most of those who call themselves Jews in the UK are wealthy, influential and they support Israel. I am prejudiced against Zionists – who happen to be Jews, but not because they are Jews. And you are too. But you wouldn’t call yourself anti-Semitic, would you? You think you are above the mudslinging because you think yourself Jewish. You reserve that slander against people like me, who don’t declare themselves Jewish.
I have to disagree with you as well on the Zionist project. It was imperialist and colonial at the outset, but these days it is primarily a Jewish project. If we reflect on the parallels with South Africa, it quickly becomes clear as to why attempts to mount an effective campaign against Israel have so consistently failed. Zionists declare Jews to be a race, and they declare that those who’ve emigrated there since 1917 belong there, that they were ‘returning to their roots’. This is what makes it so difficult to change the status quo, with we anti-apartheid campaigners being branded as ‘anti-Semites’.
Israel calls itself the Jewish state and people in the UK have largely accepted this. That is why BDS efforts are undermined and any move by local authorities to support BDS is soon to be made illegal, as it will be deemed ‘anti-Semitic’. So, matters as to whether Israel really is the Jewish state are fundamental to any question of Palestine’s liberation.
Rabbi Weiss shares my view that Israel is not the Jewish state. He comes from the religious perspective (that they cannot be Jewish for they have ignored the Torah’s ruling on Aliyah) – but I am declaring this from the genetic perspective too: the ‘Jewish’ occupants of the Knesset are Europeans, not Semites. (By the way, you say you are going to write to Rabbi Weiss and ask him to dissociate himself from me. But he has read your attack on his organisation, so I’m not sure he’ll listen.)
“You have become part of the problem, Tony, because you have lost sight of what anti-Semitism is…Can you not see what you are doing with your obsessive attacks on fellow campaigners?… You do nothing but harm to the Palestinian cause with your pointless attacks…”
The Ashkenazis in Israel, who comprise around 50 per cent of the population and are generally mostly in power, originally hail from Kazakstan, and are fundamentally Europeans – it is they who have given Israel its European flavour, so much so that the European Union virtually accepts Israel as a member state. They do this whilst claiming they are of the land, that Jews were given the land by God – sadly, this is an argument that has been generally accepted. In order to fight this, we need to challenge their claim to a Jewish heritage, to their rights to occupy the land. If we do not, we will never win. As far as the dictionary definition of Jews applies, their claim falls flat. The dictionary says a Jew is “a member of the people and cultural community whose traditional religion is Judaism and who trace their origins through the ancient Hebrew people of Israel to Abraham”. But present-day Israelis have rejected the key Judaic precept that there can be no return to ‘Israel’ until the Messiah comes. So, their claim to be followers of Judaism are false. And genetically, they cannot trace their origins to the ancient Hebrew people either; the Ashkenazis rule the roost and they are Europeans, not Hebrews. They ignore key Judaic teachings, with the religious right there just picking out the bits of the Torah they like, that they think justify their occupation. Their claim that God gave them the land is entirely bogus. The true descendants of the ancient Hebrews are the Palestinians, who mostly converted to Islam in the 7th Century – something you yourself have observed.
You have become part of the problem, Tony, because you have lost sight of what anti-Semitism is. I have no problem with Jews, as you well know. However, I do have a problem with Zionists who call themselves Jews, who have done everything possible to cancel my life. And now you have joined them. Can you not see what you are doing with your obsessive attacks on fellow campaigners? Folk like me and Williamson have lost so much for complaining about bogus anti-Semitism, at the hands of people who would do anything to protect their racist colony. You do nothing but harm to the Palestinian cause with your pointless attacks. The only people you are helping are the Israelis. It’s time you woke up and realised that.
Hands off Peter Gregson – For Workers Democracy and Free Debate, not Bundist Heresy-Hunts! – Consistent Democrats
|
Human genetic engineering has come a long way since its inception, and gene editing is one of the newest and most promising technologies in this field. Gene editing allows scientists to precisely alter the DNA sequence of an organism, including human beings. With the advent of gene editing, it has become possible to treat genetic diseases, create genetically modified organisms, and even enhance human traits. In this article, we will review the role of gene editing in human genetic engineering and the current research in this field.
Gene editing is a technique that uses molecular tools to make precise and specific changes in the DNA sequence of an organism. This technique allows scientists to alter, insert, or delete genes in a genome, making it possible to treat genetic disorders or create new traits. One of the most popular methods for gene editing is the CRISPR-Cas9 system. This system uses RNA molecules to guide a Cas9 enzyme to a specific location in the genome, where it makes a precise cut in the DNA. This cut can then be repaired by the cell’s natural DNA repair mechanisms, or new genes can be inserted into the cut site.
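As a toy illustration of the targeting step described above, the sketch below scans a DNA string for candidate SpCas9 sites: a 20-nucleotide protospacer followed by the ‘NGG’ PAM that the Cas9 enzyme requires, with the guide RNA designed to match the protospacer. The example sequence is made up, and a real design tool would also scan the reverse strand and score GC content and off-target risk.

```python
import re

def find_cas9_targets(dna: str) -> list[tuple[int, str, str]]:
    """Return (position, 20-nt protospacer, PAM) for candidate SpCas9 sites.

    SpCas9 requires an 'NGG' PAM immediately downstream of the protospacer;
    the guide RNA is designed to match the 20 nt just upstream of the PAM.
    Only the forward strand is scanned to keep this sketch short.
    """
    sites = []
    # Lookahead so overlapping candidate sites are all reported.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", dna.upper()):
        sites.append((m.start(), m.group(1), m.group(2)))
    return sites

example = "TTGACGCATGCTAGCTAGCTAGGTACGATCGATCGGTACGTAGCTAGGCT"  # made-up sequence
for pos, protospacer, pam in find_cas9_targets(example):
    print(pos, protospacer, pam)
```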
The most significant role of gene editing in human genetic engineering is the treatment of genetic diseases. Many genetic disorders are caused by mutations in a single gene, which makes them ideal targets for gene editing. With gene editing, it is possible to correct these mutations and restore the normal function of the affected gene. Gene editing has already shown promise in treating several genetic disorders, including sickle cell anemia and Huntington’s disease.
Another potential application of gene editing is in the creation of genetically modified organisms. Scientists can use gene editing to create crops that are more resistant to disease, produce higher yields, or have better nutritional value. They can also use gene editing to create animals with specific traits, such as leaner meat or increased milk production. However, the creation of genetically modified organisms has raised ethical concerns, particularly in the case of humans.
Gene editing has also opened up the possibility of enhancing human traits. With gene editing, it is possible to alter genes associated with physical and mental traits, such as height, intelligence, or athletic ability. While the idea of enhancing human traits may seem appealing, it has raised ethical concerns about creating a genetic elite and widening the gap between the haves and have-nots. There are also concerns about unintended consequences, such as effects on other traits or harm to health.
Despite the potential benefits of gene editing, there are still many technical and ethical challenges that need to be addressed. One of the most significant is off-target effects: gene editing can cause unintended changes elsewhere in the genome, with potentially harmful consequences for the organism’s health. Scientists need to develop better methods for predicting and preventing these off-target effects.
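To make the off-target problem concrete, the sketch below ranks candidate genomic sites by how few mismatches they have against a guide sequence; sites that differ by only one or two bases are the ones most likely to be cut unintentionally. The guide and candidate sequences here are invented, and real prediction tools weight the position and type of each mismatch rather than simply counting them.

```python
def mismatches(guide: str, site: str) -> int:
    """Count mismatched bases between a guide and a same-length site."""
    return sum(1 for g, s in zip(guide.upper(), site.upper()) if g != s)

def rank_off_targets(guide: str, candidates: list[str], max_mm: int = 3):
    """Return (mismatch count, site) pairs within max_mm mismatches, closest first."""
    hits = [(mismatches(guide, s), s) for s in candidates if len(s) == len(guide)]
    return sorted(hit for hit in hits if hit[0] <= max_mm)

guide = "GACGCATGCTAGCTAGCTAG"      # hypothetical 20-nt guide
candidates = [
    "GACGCATGCTAGCTAGCTAG",       # perfect match: the intended target
    "GACGCATGCTAGCTAGCTAA",       # 1 mismatch: a plausible off-target
    "GACGAATGCTAGCTAGATAG",       # 2 mismatches
    "TTTTTTTTTTTTTTTTTTTT",       # unrelated sequence, filtered out
]
for mm, site in rank_off_targets(guide, candidates):
    print(mm, site)
```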
Another challenge is the ethical considerations surrounding the use of gene editing in humans. Many people are concerned about the potential misuse of gene editing, such as creating a genetic elite or allowing parents to choose their children’s traits. There are also concerns about the long-term effects of gene editing on human evolution and the potential for unintended consequences.
In conclusion, gene editing is a powerful tool in human genetic engineering that has the potential to revolutionize medicine, agriculture, and human evolution. However, there are still many technical and ethical challenges that need to be addressed before gene editing can be widely used. As researchers continue to develop and refine gene editing techniques, it is essential to consider the ethical implications of this technology and ensure that it is used for the benefit of all.
|
<urn:uuid:90987468-a967-4aa2-87a5-3048e0475ed7>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649105.40/warc/CC-MAIN-20230603032950-20230603062950-00613.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9443796277046204,
"pii_count": 0,
"score": 3.96875,
"token_count": 725,
"url": "https://thetattyjournal.wordpress.com/2023/03/26/the-role-of-gene-editing-in-human-genetic-engineering-a-review-of-current-research/"
}
|
Human genetic engineering has come a long way since its inception, and gene editing is one of the newest and most promising technologies in this field. Gene editing allows scientists to precisely alter the DNA sequence of an organism, including human beings. With the advent of gene editing, it has become possible to treat genetic diseases, create genetically modified organisms, and even enhance human traits. In this article, we will review the role of gene editing in human genetic engineering and the current research in this field.
Gene editing is a technique that uses molecular tools to make precise and specific changes in the DNA sequence of an organism. This technique allows scientists to alter, insert, or delete genes in a genome, making it possible to treat genetic disorders or create new traits. One of the most popular methods for gene editing is the CRISPR-Cas9 system. This system uses RNA molecules to guide a Cas9 enzyme to a specific location in the genome, where it makes a precise cut in the DNA. This cut can then be repaired by the cell’s natural DNA repair mechanisms, or new genes can be inserted into the cut site.
The most significant role of gene editing in human genetic engineering is the treatment of genetic diseases. Many genetic disorders are caused by mutations in a single gene, which makes them ideal targets for gene editing. With gene editing, it is possible to correct these mutations and restore the normal function of the affected gene. Gene editing has already shown promise in treating several genetic disorders, including sickle cell anemia and Huntington’s disease.
Another potential application of gene editing is in the creation of genetically modified organisms. Scientists can use gene editing to create crops that are more resistant to disease, produce higher yields, or have better nutritional value. They can also use gene editing to create animals with specific traits, such as leaner meat or increased milk production. However, the creation of genetically modified organisms has raised ethical concerns, particularly in the case of humans.
Gene editing has also opened up the possibility of enhancing human traits. With gene editing, it is possible to alter the genes responsible for physical and mental traits, such as height, intelligence, or athletic ability. While the idea of enhancing human traits may seem appealing, it has raised ethical concerns about creating a genetic elite and widening the gap between the haves and have-nots. There are also concerns about unintended consequences, such as unintended effects on other traits or negative effects on health.
Despite the potential benefits of gene editing, there are still many technical and ethical challenges that need to be addressed. One of the most significant challenges
|
is the off-target effects of gene editing. Gene editing can cause unintended changes in the genome, which can have unintended consequences on the organism’s health. Scientists need to develop better methods for predicting and preventing off-target effects.
Another challenge is the ethical considerations surrounding the use of gene editing in humans. Many people are concerned about the potential misuse of gene editing, such as creating a genetic elite or allowing parents to choose their children’s traits. There are also concerns about the long-term effects of gene editing on human evolution and the potential for unintended consequences.
In conclusion, gene editing is a powerful tool in human genetic engineering that has the potential to revolutionize medicine, agriculture, and human evolution. However, there are still many technical and ethical challenges that need to be addressed before gene editing can be widely used. As researchers continue to develop and refine gene editing techniques, it is essential to consider the ethical implications of this technology and ensure that it is used for the benefit of all.
|
Off the top of your head, name any marine animal, and we can bet that climate change has put them through the wringer in recent times. Sharks have been moving polewards to escape the warming oceans, inadvertently altering entire fragile ecosystems in their anxious scurry. Dolphins are tragically seeing lowered birth rates as heat waves strike their habitat, putting entire species in undeserved jeopardy.
We already knew that the warming oceans have been making the going tougher and tougher for the extremely sensitive marine life that resides in them. And things have only gotten worse in 2022, with this year being the hottest the oceans have ever been!
While the abundance of water on Earth might make the blue planet look incredibly snazzy from space, it does little to prevent heat from being trapped within it. In fact, the large surface area covered by water means our oceans absorb about 90% of the excess heat from greenhouse gas emissions.
This helps us land-dwellers, but unfortunately spells tragedy of Atlantic proportions for our marine neighbours. Just last year, heat content in our oceans exceeded 2021 levels by one zettajoule (1,000,000,000,000,000,000,000 J), which beats out current worldwide electricity production by a hundred-fold.
"Until we reach net zero emissions, that heating will continue, and we'll continue to break ocean heat content records, as we did this year," explains Michael Mann, an author of the study. "Better awareness and understanding of the oceans are a basis for the actions to combat climate change."
In addition, the ongoing acidification of our oceans is robbing them of the crucial oxygen that allows life to survive underwater, leaving countless species on literally some of their last breaths.
"Deoxygenation itself is a nightmare for not only marine life and ecosystems, but also for humans and our terrestrial ecosystems," the researchers said in a statement.
Even above-water, countries worldwide are regularly falling prey to climate change. Places where droughts have become more common face repeated risk of wildfires, while torrential floods wreak havoc in others.
Many of these impacts can indeed be traced back to the increasingly warming oceans, which in turn, brings about significant changes to the planet’s hydrological cycles, cascading into deadlier and more frequent disasters around the planet.
The findings of this research have been published in Atmospheric Sciences and can be accessed here.
For weather, science, space, and COVID-19 updates on the go, download The Weather Channel App (on Android and iOS store). It's free!
|
<urn:uuid:ae52f205-618e-43f1-abc8-6b81a1d8babe>
|
{
"dump": "CC-MAIN-2023-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500058.1/warc/CC-MAIN-20230203154140-20230203184140-00453.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9399064779281616,
"pii_count": 0,
"score": 3.578125,
"token_count": 528,
"url": "https://weather.com/en-IN/india/climate-change/news/2023-01-12-worlds-oceans-were-at-their-hottest-ever-in-2022"
}
|
Off the top of your head, name any marine animal, and we can bet that climate change has put them through the wringer in recent times. Sharks have been moving polewards to escape the warming oceans, inadvertently altering entire fragile ecosystems in their anxious scurry. Dolphins are tragically seeing lowered birth rates as heat waves strike their habitat, putting entire species in undeserved jeopardy.
We already knew that the warming oceans have been making the going tougher and tougher for the extremely sensitive marine life that resides in them. And things have only gotten worse in 2022, with this year being the hottest the oceans have ever been!
While the abundance of water on Earth might make the blue planet look incredibly snazzy from space, it does little to prevent heat from being trapped within it. In fact, the large surface area covered by water means our oceans absorb about 90% of the excess heat from greenhouse gas emissions.
This helps us land-dwellers, but unfortunately spells tragedy of Atlantic proportions for our marine neighbours. Just last year, heat content in our oceans exceeded 2021 levels by one zettajoule (1,000,000,000,000,000,000,000 J), which beats out current worldwide electricity production by a hundred-fold.
"Until we reach net zero emissions, that heating will continue, and we'll continue to break ocean heat content records, as we did this year," explains Michael Mann, an author of the study. "Better awareness and understanding of the oceans are a basis for the actions to combat climate change."
In addition, the ongoing acidification of our oceans is robbing them of the crucial oxygen that allows life to survive underwater, leaving countless species on literally some of their last breaths.
"Deoxygenation itself is a nightmare for not only marine life and ecosystems, but also for humans and our terrestrial ecosystems," the researchers said in a statement.
Even above-water, countries worldwide are regularly falling prey to climate change. Places where droughts have become more common face repeated risk of wildfires, while torrential floods wreak havoc in others.
Many of these impacts can indeed be traced back to the increasingly warming oceans, which in turn, brings about significant changes to the planet’s hydrological cycles, cascading into deadlier and more frequent disasters around the planet.
The findings of this research have been published in Atmospheric Sciences and can be accessed here.
For weather, science
|
, space, and COVID-19 updates on the go, download The Weather Channel App (on Android and iOS store). It's free!
|
History of equalization of property rights by members of Agua Caliente Band of the Cahuilla Indians
The struggle for the property rights of the individual members of the Agua Caliente Band of the Cahuilla Indians has been long and difficult. In 1958, Tribal Council Chairman Vyola Olinger (later Ortner) presided over “one of the biggest real estate deals in the country” in championing the rights of her people.
The heart of the reservation of the Agua Caliente Band of Cahuilla Indians is Section 14, a 640-acre parcel in the center of downtown Palm Springs. It was at the center of a possible multimillion-dollar development opportunity that was crucial for the Tribe’s survival.
Two issues needed to be adjudicated before any development was possible. The first was to finalize the equalization of values of individual allotments, parcels of land given to each tribal member, and the second was to increase the time that tribal members could lease their land in order to attract developers.
Allotment of the land on the Palm Springs reservation, originally established under 1891 law, to individual members of the Agua Caliente Band was begun in 1923. Because of various complications in legislation, administration and litigation, the first allotment schedule was not approved by the Secretary of the Interior until 1949.
A decade-long battle began when an attempt to “equalize” allotments of land to individual members of the tribe was made by the federal government. Making the values of allotments equal and fair to all tribal members was a daunting task. Allotments varied considerably according to Native American expert Dr. Lowell Bean in his book From Time Immemorial.
In 1954, Clemente Segundo et al vs. United States was settled with a ruling that each plaintiff in the case was to be allotted “total lands of as nearly equal value to the lands allotted to each of the other members.” This decision by the court ended the legal back-and-forth of individual tribal members and the federal government, setting up the framework for both individual members’ property rights and the Tribe’s future.
The question of how to equalize the values of real estate was studied by the federal government through the Bureau of Indian Affairs (BIA) and the all-female Tribal Council led by Vyola Ortner. According to an article published in the Desert Sun on Feb. 13, 1957, four solutions were proposed: 1) Dividing the tribal lands 2) Selling the tribal lands and distributing the money 3) Using tribal land income as a dividend to allottees 4) Combining two or more of the above-mentioned plans.
On June 14, 1957, the Office of the Secretary of the United States Department of the Interior, commenting through the Bureau of Indian Affairs, proposed that, to simplify the management of tribal and individual assets, a corporation be formed and run by appointed people familiar with corporate ways. The allotted lands would be turned over to the corporation and tribal members would receive stock in the corporation. The reservation would be sold off and the proceeds distributed to tribal members.
At a hearing held on Oct. 2, 1957, Tribal Chairman Vyola Olinger (Ortner) said in her testimony, “that the Indians felt that the bill established a liquidating corporation, one that would sell as rapidly as possible their tribal lands…and was not acceptable to the members of the Tribe.” She proposed that except for tribal reserves all other lands would be allotted and Congress should pass a bill to complete the equalization.
Rex Lee of the BIA announced on June 25, 1958, that tribal lands (31,610 acres valued at approximately $12,000,000 by the Secretary of the Interior in 1957) would be divided up with the goal of bringing each member’s land value up to $350,000, and that any member whose allotments were already valued between $87,000 and $629,000 would not benefit from the changes.
On July 19, 1958, the BIA received a letter from the Commissioner of Indian Affairs telling him to proceed with the process of allotting the land including the reserves (land held by the tribe collectively.) The Tribal Council responded by declaring that they would wage an all-out war with the federal government’s equalization program saying that it would “liquidate” and destroy the tribe itself.
Ortner said that the government’s plan “creates an economic and culture crisis for both the City of Palm Springs and the Agua Caliente Band of Cahuilla Indians.”
Later in the summer, Associated Press reporter John Beckler reported that “BIA Director, Leonard Hill, admitted that plans for developing Section 14 as a single package had fallen through.” Beckler added, “Real Estate people now are faced with the problem of dealing with the Indians on an individual basis and developing in a piecemeal fashion.”
In an article in the Desert Sun published on July 29, 1958, Ortner asked the City of Palm Springs for help. She stated that even though tribal members were allotted over half of the real estate within the city limits, they were “land rich but cash poor.”
In a combined lobbying effort between Congressman D.S. Saund, the Agua Caliente Tribal Council, and the City of Palm Springs, Public Law 86-327 and Public Law 105-308 were passed by the House of Representatives and the Senate, and signed into law by President Dwight Eisenhower on Sept. 21, 1959.
The Equalization Bill (Public Law 105-308) authorized the allotments to be equalized with the target of the value set at $350,000 and the Land Lease Act (Public Law 86-327) authorized increasing the leasing period to a maximum of 99 years. Anyone not alive before Sept. 21, 1959, was excluded from the equalization process and would not be allotted land.
In accordance with this law, most of the reservation land was allotted to 92 individual members (31 adults and 61 minors) except for the lands held by the tribe in common which included two cemeteries, the Catholic Church located on Section 14, Our Lady of Guadalupe, the hot mineral spring including the area around it, and the lands in Tahquitz, Palm, Murray, and Andreas canyons.
The tribal members were able to stop the incorporation and complete land liquidation proposal, but the associated notion of appointment of guardians and conservators for all minors and adults judged as “incapable to handle their own affairs” was passed by Congress and created another struggle for the native people of Palm Springs.
|
<urn:uuid:b6a51005-a7d0-48f3-bde6-fb334738b6b8>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644867.89/warc/CC-MAIN-20230529141542-20230529171542-00413.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9677209854125977,
"pii_count": 0,
"score": 3.8125,
"token_count": 1386,
"url": "https://www.desertsun.com/story/life/history/2023/04/30/history-property-rights-struggle-by-members-of-agua-caliente-band-of-the-cahuilla-indians-has-been-d/70165045007/"
}
|
History of equalization of property rights by members of Agua Caliente Band of the Cahuilla Indians
The struggle for the property rights of the individual members of the Agua Caliente Band of the Cahuilla Indians has been long and difficult. In 1958, Tribal Council Chairman Vyola Olinger (later Ortner) presided over “one of the biggest real estate deals in the country” in championing the rights of her people.
The heart of the reservation of the Agua Caliente Band of Cahuilla Indians is Section 14, a 640-acre parcel in the center of downtown Palm Springs. It was at the center of a possible multimillion-dollar development opportunity that was crucial for the Tribe’s survival.
Two issues needed to be adjudicated before any development was possible. The first was to finalize the equalization of values of individual allotments, parcels of land given to each tribal member, and the second was to increase the time that tribal members could lease their land in order to attract developers.
Allotment of the land on the Palm Springs reservation, originally established under 1891 law, to individual members of the Agua Caliente Band was begun in 1923. Because of various complications in legislation, administration and litigation, the first allotment schedule was not approved by the Secretary of the Interior until 1949.
A decade-long battle began when an attempt to “equalize” allotments of land to individual members of the tribe was made by the federal government. Making the values of allotments equal and fair to all tribal members was a daunting task. Allotments varied considerably according to Native American expert Dr. Lowell Bean in his book From Time Immemorial.
In 1954, Clemente Segundo et al vs. United States was settled with a ruling that each plaintiff in the case was to be allotted “total lands of as nearly equal value to the lands allotted to each of the other members.” This decision by the court ended the legal back-and-forth of individual tribal members and the federal government, setting up the framework for both individual members’ property rights and the Tribe’s future.
The question of how to equalize the values of real estate was studied by the federal government through the Bureau of Indian Affairs (BIA) and the all-female Tribal Council led by Vyola Ortner. According to an article published in the Desert Sun on Feb. 13
|
, 1957, four solutions were proposed: 1) Dividing the tribal lands 2) Selling the tribal lands and distributing the money 3) Using tribal land income as a dividend to allottees 4) Combining two or more of the above-mentioned plans.
On June 14, 1957, the Office of the Secretary of the United States Department of the Interior, commenting through the Bureau of Indian Affairs, proposed that, to simplify the management of tribal and individual assets, a corporation be formed and run by appointed people familiar with corporate ways. The allotted lands would be turned over to the corporation and tribal members would receive stock in the corporation. The reservation would be sold off and the proceeds distributed to tribal members.
At a hearing held on Oct. 2, 1957, Tribal Chairman Vyola Olinger (Ortner) said in her testimony, “that the Indians felt that the bill established a liquidating corporation, one that would sell as rapidly as possible their tribal lands…and was not acceptable to the members of the Tribe.” She proposed that except for tribal reserves all other lands would be allotted and Congress should pass a bill to complete the equalization.
Rex Lee of the BIA announced on June 25, 1958, that tribal lands (31,610 acres valued at approximately $12,000,000 by the Secretary of the Interior in 1957) would be divided up with the goal of bringing each member’s land value up to $350,000, and that any member whose allotments were already valued between $87,000 and $629,000 would not benefit from the changes.
On July 19, 1958, the BIA received a letter from the Commissioner of Indian Affairs telling him to proceed with the process of allotting the land including the reserves (land held by the tribe collectively.) The Tribal Council responded by declaring that they would wage an all-out war with the federal government’s equalization program saying that it would “liquidate” and destroy the tribe itself.
Ortner said that the government’s plan “creates an economic and culture crisis for both the City of Palm Springs and the Agua Caliente Band of Cahuilla Indians.”
Later in the summer, Associated Press reporter John Beckler reported that “BIA Director, Leonard Hill, admitted that plans for developing Section 14 as a single package had fallen through.” Beckler added, “Real Estate people now are faced with the problem of dealing with the Indians on an individual basis and developing in a piecemeal fashion.”
In an article in the Desert Sun published on July 29, 1958, Ortner asked the City of Palm Springs for help. She stated that even though tribal members were allotted over half of the real estate within the city limits, they were “land rich but cash poor.”
In a combined lobbying effort between Congressman D.S. Saund, the Agua Caliente Tribal Council, and the City of Palm Springs, Public Law 86-327 and Public Law 105-308 were passed by the House of Representatives and the Senate, and signed into law by President Dwight Eisenhower on Sept. 21, 1959.
The Equalization Bill (Public Law 105-308) authorized the allotments to be equalized with the target of the value set at $350,000 and the Land Lease Act (Public Law 86-327) authorized increasing the leasing period to a maximum of 99 years. Anyone not alive before Sept. 21, 1959, was excluded from the equalization process and would not be allotted land.
In accordance with this law, most of the reservation land was allotted to 92 individual members (31 adults and 61 minors) except for the lands held by the tribe in common which included two cemeteries, the Catholic Church located on Section 14, Our Lady of Guadalupe, the hot mineral spring including the area around it, and the lands in Tahquitz, Palm, Murray, and Andreas canyons.
The tribal members were able to stop the incorporation and complete land liquidation proposal, but the associated notion of appointment of guardians and conservators for all minors and adults judged as “incapable to handle their own affairs” was passed by Congress and created another struggle for the native people of Palm Springs.
|
How much do we care about global warming? In theory, 93% of European Union citizens believe climate change is a serious threat, and 67% believe national governments are not doing enough to tackle it. In practice, however, European voters are punishing governments for their efforts to reduce emissions. Anti-green parties in Germany and the Netherlands are siphoning votes by opposing mandated emission cuts. Meanwhile, Sweden’s ruling coalition is cutting fossil fuel taxes to appease voters.
Europeans seem to want a solution to climate change, but only if they don’t have to pay for it. Indeed, Ipsos polling suggests that less than a third of EU citizens would pay more in income taxes to prevent climate change. For comparison, a University of Chicago poll finds that just 38% of Americans would be willing to pay a $1 monthly carbon fee to reduce emissions, down 14 percentage points from two years ago.
Clearly, political parties that wish to remain in power will have to back off decarbonization by decree. Instead, they will have to rely on market-based solutions like emissions trading systems (ETSs).
But are ETSs up to the job of holding climate change to 1.5° C? While carbon markets have immense potential to curb emissions, current iterations suffer from a litany of perverse incentives and unintended consequences. Let’s consider the limitations of the world’s posterchild ETS—the European Union’s Emissions Trading System—and then identify refinements that will equip ETSs to win the war against climate change. With COP28 starting November 30 in Dubai, now is the time to build global consensus for bold reforms in our carbon markets.
Fit for nothing?
For context, ETSs use a cap-and-trade policy: emission limits are set, free credits are allocated and fines are issued if a company emits more carbon than they’re allotted. Companies that reduce carbon usage avoid having to buy credits and profit by selling excess credits to companies that have exceeded their allowance. Ideally, this incentivizes companies to adopt cleaner technologies. In practice, however, it seems ETSs often enable firms to foist carbon costs onto their customers while carrying on as usual.
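As a rough sketch of the cap-and-trade bookkeeping just described (the firms and tonnages are invented, and the €84 price simply echoes the allowance figure cited later in the piece), the snippet below computes each firm's position against its free allocation and the resulting cost of buying, or revenue from selling, allowances:

```python
# Minimal cap-and-trade bookkeeping: each firm compares actual emissions with
# its free allocation; a shortfall must be covered by buying allowances, a
# surplus can be sold. All figures are invented for illustration.

ALLOWANCE_PRICE_EUR = 84.0  # euros per tonne of CO2 (illustrative)

firms = {
    # name: (allocated tonnes, emitted tonnes)
    "SteelCo":  (100_000, 130_000),
    "UtilityA": (200_000, 160_000),
    "CementCo": (50_000, 50_000),
}

for name, (allocated, emitted) in firms.items():
    balance = allocated - emitted              # positive = surplus, negative = shortfall
    cash_flow = balance * ALLOWANCE_PRICE_EUR  # positive = sale revenue, negative = purchase cost
    if balance > 0:
        status = "sells a surplus of"
    elif balance < 0:
        status = "must buy"
    else:
        status = "breaks even at"
    print(f"{name}: {status} {abs(balance):,} t (cash flow EUR {cash_flow:,.0f})")
```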
And the reach of these systems is growing. When the EU’s ETS debuted in 2005, it only covered emissions from heat and electricity generation, energy-intensive industries, aviation and maritime transport. But now, the EU’s Fit for 55 initiative, adopted in April 2022, aims to secure a 55% reduction in emissions by 2030 through adding cap-and-trade for buildings, transportation and fuels.
Cambridge Econometrics, a policy analysis firm, finds that Fit for 55 will increase gas-fueled household heating prices by 30% on average and raise the cost to fill a petrol vehicle by 16%. That is precisely the type of outcome that leads voters to reject state-led climate policies.
To mollify critics, EU members are creating a $150+ billion Social Climate Fund that will “cut bills for vulnerable households and small businesses” according to European Commission President Ursula von der Leyen. Effectively, the EU is taxing emitters and giving the proceeds to consumers to pay back to the emitters. In October, Canada did likewise by exempting households in Atlantic Canada from the carbon tax on their home heating bills. This is a pointless circular exchange of funds. Moreover, why would fossil fuel companies invest in decarbonization when governments subsidize the purchase of their dirty products?
A hole in the bucket
Another unintended consequence of carbon pricing and cap and trade is carbon leakage, which occurs when carbon prices rise so high that heavy emitters relocate abroad to avoid the costs. Of course, jobs and tax revenue leak out alongside the emitters.
For a while, fears of leakage from the EU ETS were mild, especially after a 2019 analysis found no evidence for it. However, in 2019 when the paper was published, the cost of an EU allowance peaked at €29 per metric ton. Since then, allowances have nearly tripled to €84. The higher the cost of an allowance, the more incentivized companies are to relocate to nations without ETSs or carbon taxes.
To curb leakage, the EU hands out free carbon credits to sectors at the greatest risk of leaving the EU. These sectors include oil refining, mining, petrochemicals, cement and coke oven products like steel. In other words, cleaner sectors pay for every cent of carbon they emit while the dirtiest industries are coddled. The free allowance will wind down to zero in 2034, by which point a 1.5° C world will be far gone. And at that juncture, why wouldn’t companies relocate to unregulated nations to take advantage of the lower costs, turning an emissions reduction plan into a not-in-my-backyard plan?
Is this thing even on?
Officially, emissions in sectors covered by the EU’s ETS have dropped by 43% since 2005. Yet, it is unclear whether the ETS even caused these emission reductions. Other incentives including feed-in tariffs, green certificates and the Large Combustion Plant Directive may deserve more of the credit. Because so many climate policies operate simultaneously, it’s difficult if not impossible to attribute the impact to any single initiative.
If we’re not even certain ETSs work, why are they still growing in popularity? The answer is efficiency. Government agencies are not investment firms. They struggle to find and fund the most efficient carbon-reducing technologies throughout the economy. The inherent greed of the free market is more efficient. It can drive capital to the technologies that result in the cheapest ton of CO2, saving taxpayer dollars and expediting the green transition.
ETSs distribute carbon costs based on a firm’s performance rather than government stewardship, incentivizing innovation. This enables investments like point-source carbon capture, which may provide little benefit to a firm’s bottom line but could have a tremendous impact on the environment. While decrees offer only sticks, ETSs provide carrots. No wonder that ETSs remain the most compelling way to incentivize decarbonization.
We need an air-tight carbon market
Despite ETSs’ current popularity, they need several refinements to prevent industries from leaking into less regulated markets or getting subsidies for dirty products. Heavy emitters will have to be whipped into shape and incentivized to adopt the most direct and measurable forms of decarbonization.
To achieve that, governments must collaborate on either a global ETS scheme or a minimum global carbon tax to disincentivize leakage. Recall that in 2021, 140 nations rallied around a plan to close off tax havens and institute a global minimum corporate tax rate of 15%. An excellent example! It’s time we close the carbon havens. Where better than COP28 to advocate for these improvements?
Heavy emitters should pay fully for the planetary damage they cause. And given that credits are prone to corruption, ETSs should provide stronger incentives to reduce or prevent emissions instead of merely buying credits. For example, a coal-fired power plant that invests in point-source carbon capture does more to mitigate emissions than a power plant that pays an organization to plant trees (which might burn down anyway). Paying someone else to build a green business isn’t equivalent to eliminating your business’s emissions.
Also important is to ensure that carbon costs are not merely passed onto consumers of inelastic goods like power and heating. Governments need to stop subsidizing consumer carbon emissions and instead subsidize adoption of technologies like electric vehicles, geothermal heating and heat pumps that reduce dependence on fossil fuels.
Without such incentives, the war on carbon may soon be lost. Although the odds are daunting, we have a responsibility to try everything possible, everywhere, to avoid the earth-shattering consequences of unchecked climate change. The dramatic climate events we have seen in recent years may just be a harbinger of bigger calamities to come, especially if global temperatures rise beyond 2° C.
The public has spoken. Though concerned about climate change, citizens will reject policies that spare heavy emitters from accountability while raising the cost of living for households. ETSs remain the best possible alternative that could have real impact. Let’s recreate carbon markets to achieve what decarbonization by decree has not: a good future for our children.
|
<urn:uuid:0f233750-d224-4417-88a4-8ddc759fe89e>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475422.71/warc/CC-MAIN-20240301161412-20240301191412-00186.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9343565702438354,
"pii_count": 0,
"score": 2.828125,
"token_count": 1730,
"url": "https://www.forbes.com/sites/walvanlierop/2023/11/08/decarbonization-by-decree-fails-can-carbon-markets-fill-the-void/"
}
|
How much do we care about global warming? In theory, 93% of European Union citizens believe climate change is a serious threat, and 67% believe national governments are not doing enough to tackle it. In practice, however, European voters are punishing governments for their efforts to reduce emissions. Anti-green parties in Germany and the Netherlands are siphoning votes by opposing mandated emission cuts. Meanwhile, Sweden’s ruling coalition is cutting fossil fuel taxes to appease voters.
Europeans seem to want a solution to climate change, but only if they don’t have to pay for it. Indeed, Ipsos polling suggests that less than a third of EU citizens would pay more in income taxes to prevent climate change. For comparison, a University of Chicago poll finds that just 38% of Americans would be willing to pay a $1 monthly carbon fee to reduce emissions, down 14 percentage points from two years ago.
Clearly, political parties that wish to remain in power will have to back off decarbonization by decree. Instead, they will have to rely on market-based solutions like emissions trading systems (ETSs).
But are ETSs up to the job of holding climate change to 1.5° C? While carbon markets have immense potential to curb emissions, current iterations suffer from a litany of perverse incentives and unintended consequences. Let’s consider the limitations of the world’s posterchild ETS—the European Union’s Emissions Trading System—and then identify refinements that will equip ETSs to win the war against climate change. With COP28 starting November 30 in Dubai, now is the time to build global consensus for bold reforms in our carbon markets.
Fit for nothing?
For context, ETSs use a cap-and-trade policy: emission limits are set, free credits are allocated and fines are issued if a company emits more carbon than they’re allotted. Companies that reduce carbon usage avoid having to buy credits and profit by selling excess credits to companies that have exceeded their allowance. Ideally, this incentivizes companies to adopt cleaner technologies. In practice, however, it seems ETSs often enable firms to foist carbon costs onto their customers while carrying on as usual.
And the reach of these systems is growing. When the EU’s ETS debuted in 2005, it only covered emissions from heat and electricity generation, energy-intensive industries, aviation and maritime transport. But now, the EU’s Fit for 55 initiative, adopted in April
|
2022, aims to secure a 55% reduction in emissions by 2030 through adding cap-and-trade for buildings, transportation and fuels.
Cambridge Econometrics, a policy analysis firm, finds that Fit for 55 will increase gas-fueled household heating prices by 30% on average and raise the cost to fill a petrol vehicle by 16%. That is precisely the type of outcome that leads voters to reject state-led climate policies.
To mollify critics, EU members are creating a $150+ billion Social Climate Fund that will “cut bills for vulnerable households and small businesses” according to European Commission President Ursula von der Leyen. Effectively, the EU is taxing emitters and giving the proceeds to consumers to pay back to the emitters. In October, Canada did likewise by exempting households in Atlantic Canada from the carbon tax on their home heating bills. This is a pointless circular exchange of funds. Moreover, why would fossil fuel companies invest in decarbonization when governments subsidize the purchase of their dirty products?
A hole in the bucket
Another unintended consequence of carbon pricing and cap and trade is carbon leakage, which occurs when carbon prices rise so high that heavy emitters relocate abroad to avoid the costs. Of course, jobs and tax revenue leak out alongside the emitters.
For a while, fears of leakage from the EU ETS were mild, especially after a 2019 analysis found no evidence for it. However, in 2019 when the paper was published, the cost of an EU allowance peaked at €29 per metric ton. Since then, allowances have nearly tripled to €84. The higher the cost of an allowance, the more incentivized companies are to relocate to nations without ETSs or carbon taxes.
To curb leakage, the EU hands out free carbon credits to sectors at the greatest risk of leaving the EU. These sectors include oil refining, mining, petrochemicals, cement and coke oven products like steel. In other words, cleaner sectors pay for every cent of carbon they emit while the dirtiest industries are coddled. The free allowance will wind down to zero in 2034, by which point a 1.5° C world will be far gone. And at that juncture, why wouldn’t companies relocate to unregulated nations to take advantage of the lower costs, turning an emissions reduction plan into a not-in-my-backyard plan?
Is this thing even on?
Officially, emissions in sectors covered by the EU’s ETS have dropped by 43% since 2005. Yet, it is unclear whether the ETS even caused these emission reductions. Other incentives including feed-in tariffs, green certificates and the Large Combustion Plant Directive may deserve more of the credit. Because so many climate policies operate simultaneously, it’s difficult if not impossible to attribute the impact to any single initiative.
If we’re not even certain ETSs work, why are they still growing in popularity? The answer is efficiency. Government agencies are not investment firms. They struggle to find and fund the most efficient carbon-reducing technologies throughout the economy. The inherent greed of the free market is more efficient. It can drive capital to the technologies that result in the cheapest ton of CO2, saving taxpayer dollars and expediting the green transition.
ETSs distribute carbon costs based on a firm’s performance rather than government stewardship, incentivizing innovation. This enables investments like point-source carbon capture, which may provide little benefit to a firm’s bottom line but could have a tremendous impact on the environment. While decrees offer only sticks, ETSs provide carrots. No wonder that ETSs remain the most compelling way to incentivize decarbonization.
We need an air-tight carbon market
Despite ETSs’ current popularity, they need several refinements to prevent industries from leaking into less regulated markets or getting subsidies for dirty products. Heavy emitters will have to be whipped into shape and incentivized to adopt the most direct and measurable forms of decarbonization.
To achieve that, governments must collaborate on either a global ETS scheme or a minimum global carbon tax to disincentivize leakage. Recall that in 2021, 140 nations rallied around a plan to close off tax havens and institute a global minimum corporate tax rate of 15%. An excellent example! It’s time we close the carbon havens. Where better than COP28 to advocate for these improvements?
Heavy emitters should pay fully for the planetary damage they cause. And given that credits are prone to corruption, ETSs should provide stronger incentives to reduce or prevent emissions instead of merely buying credits. For example, a coal-fired power plant that invests in point-source carbon capture does more to mitigate emissions than a power plant that pays an organization to plant trees (which might burn down anyway). Paying someone else to build a green business isn’t equivalent to eliminating your business’s emissions.
Also important is to ensure that carbon costs are not merely passed onto consumers of inelastic goods like power and heating. Governments need to stop subsidizing consumer carbon emissions and instead subsidize adoption of technologies like electric vehicles, geothermal heating and heat pumps that reduce dependence on fossil fuels.
Without such incentives, the war on carbon may soon be lost. Although the odds are daunting, we have a responsibility to try everything possible, everywhere, to avoid the earth-shattering consequences of unchecked climate change. The dramatic climate events we have seen in recent years may just be a harbinger of bigger calamities to come, especially if global temperatures rise beyond 2° C.
The public has spoken. Though concerned about climate change, citizens will reject policies that spare heavy emitters from accountability while raising the cost of living for households. ETSs remain the best possible alternative that could have real impact. Let’s recreate carbon markets to achieve what decarbonization by decree has not: a good future for our children.
|
Parts of Europe are baking in an unusually warm October, raising further concerns of the impact of rising greenhouse gas emissions on the climate while delaying the start of the continent’s heating season.
(Bloomberg) — Parts of Europe are baking in an unusually warm October, raising further concerns of the impact of rising greenhouse gas emissions on the climate while delaying the start of the continent’s heating season.
Portugal has issued yellow alerts for high temperatures over the next few days. In Spain, highs in Madrid will clock in above 30C (86F) on both Saturday and Sunday, at least 6C above normal, according to a forecast by Maxar Technologies.
London, which endured some cold and bleak weather during the summer, is forecast to reach at least 22C on three of the next five days, while Paris climbs to 26C on Tuesday. Temperatures are also expected to be above average in Germany and Italy.
“Climate change is already significantly affecting the UK’s weather, with more and more events becoming ever more extreme,” said Grahame Madge, a spokesman for the UK’s Met Office. He said Great Britain is expected to see temperatures about 6C above average for early October.
Read more: September Broke the Global Heat Record by an Absurd Margin
Portugal’s Institute for Sea and Atmosphere said on Wednesday that the current weather pattern may extend into next week.
Spain experienced six consecutive days of extraordinarily high temperatures from Sept. 28 to Oct. 4, and summer-like readings could persist for an unprecedented 16 days, according to Aemet, the country’s meteorological agency.
“It’s definitely quite unseasonably warm and it’s probably pretty close to record warmth,” said Steven Silver, senior meteorologist at Maxar’s Weather Desk.
Read more: Europe Set for Mild October, Delaying Start of Heating Season
On the island of Madeira in the Atlantic, south of Portugal, a record high for October was recorded on Thursday in the city of Funchal. The temperature reached 34.7C, exceeding the prior high of 34.3C set in October 2014.
Beaches in the greater Lisbon area were packed on Thursday, a national holiday, and may fill up again over the weekend as locals and tourists seek some autumn swim-time.
The Portuguese Federation of Lifeguards issued a statement last week warning about the risk of drowning since many beaches are no longer monitored by lifeguards after the main summer season ends, the news agency Lusa reported.
Earlier this week, scientists confirmed that the September global average temperature broke records by an “extraordinary” margin. The temperature anomaly for the month was roughly 1.7C above pre-industrial levels, above the symbolic 1.5C mark set as the stretch goal in the Paris Agreement.
Carbon dioxide emissions from burning fossil fuels are regarded as the main driver of rising temperatures. The global average temperature this year has also seen a boost from El Niño, a natural climate shift in the Pacific. Other factors, such as a decline in cooling aerosol pollution from ships, may also be pushing temperatures up incrementally.
–With assistance from Laura Millan and Ellie Harmsworth.
More stories like this are available on bloomberg.com
©2023 Bloomberg L.P.
|
<urn:uuid:9a6909ff-7f5d-4e72-a480-67258f604e1a>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100603.33/warc/CC-MAIN-20231206194439-20231206224439-00765.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9422169327735901,
"pii_count": 0,
"score": 2.875,
"token_count": 695,
"url": "https://ec2-15-188-152-128.eu-west-3.compute.amazonaws.com/2023/10/07/climate-change-blamed-as-europes-cities-see-autumn-heat-waves/"
}
|
Parts of Europe are baking in an unusually warm October, raising further concerns of the impact of rising greenhouse gas emissions on the climate while delaying the start of the continent’s heating season.
(Bloomberg) — Parts of Europe are baking in an unusually warm October, raising further concerns of the impact of rising greenhouse gas emissions on the climate while delaying the start of the continent’s heating season.
Portugal has issued yellow alerts for high temperatures over the next few days. In Spain, highs in Madrid will clock in above 30C (86F) on both Saturday and Sunday, at least 6C above normal, according to a forecast by Maxar Technologies.
London, which endured some cold and bleak weather during the summer, is forecast to reach at least 22C on three of the next five days, while Paris climbs to 26C on Tuesday. Temperatures are also expected to be above average in Germany and Italy.
“Climate change is already significantly affecting the UK’s weather, with more and more events becoming ever more extreme,” said Grahame Madge, a spokesman for the UK’s Met Office. He said Great Britain is expected to see temperatures about 6C above average for early October.
Read more: September Broke the Global Heat Record by an Absurd Margin
Portugal’s Institute for Sea and Atmosphere said on Wednesday that the current weather pattern may extend into next week.
Spain experienced six consecutive days of extraordinarily high temperatures from Sept. 28 to Oct. 4, and summer-like readings could persist for an unprecedented 16 days, according to Aemet, the country’s meteorological agency.
“It’s definitely quite unseasonably warm and it’s probably pretty close to record warmth,” said Steven Silver, senior meteorologist at Maxar’s Weather Desk.
Read more: Europe Set for Mild October, Delaying Start of Heating Season
On the island of Madeira in the Atlantic, south of Portugal, a record high for October was recorded on Thursday in the city of Funchal. The temperature reached 34.7C, exceeding the prior high of 34.3C set in October 2014.
Beaches in the greater Lisbon area were packed on Thursday, a national holiday, and may fill up again over the weekend as locals and tourists seek some autumn swim-time.
The Portuguese Federation of Lifeguards issued a statement last week warning about the risk of drowning since many beaches are no longer monitored by lifeguards
|
after the main summer season ends, the news agency Lusa reported.
Earlier this week, scientists confirmed that the September global average temperature broke records by an “extraordinary” margin. The temperature anomaly for the month was roughly 1.7C above pre-industrial levels, above the symbolic 1.5C mark set as the stretch goal in the Paris Agreement.
Carbon dioxide emissions from burning fossil fuels are regarded as the main driver of rising temperatures. The global average temperature this year has also seen a boost from El Niño, a natural climate shift in the Pacific. Other factors, such as a decline in cooling aerosol pollution from ships, may also be pushing temperatures up incrementally.
–With assistance from Laura Millan and Ellie Harmsworth.
More stories like this are available on bloomberg.com
©2023 Bloomberg L.P.
|
Canned tuna is known to contain low levels of mercury, but a new Consumer Reports investigation has found spikes of the neurotoxin in some cans.
The organization tested five popular tuna brands, CBS News reported.
While the mercury levels were all within U.S. Food and Drug Administration standards, Consumer Reports urged pregnant women to "avoid canned tuna altogether."
FDA guidelines say pregnant women can eat canned tuna in limited quantities.
"While canned tuna, especially light varieties, has relatively low average levels of mercury, individual cans can sometimes have much higher levels," Consumer Reports said.
"From can to can, mercury levels can spike in unpredictable ways that might jeopardize the health of a fetus," said James Rogers, director of food safety research and testing at the independent nonprofit.
Mercury can affect neurodevelopment, said CBS News medical contributor Dr. David Agus.
The effects may include impaired brain function and developmental delays in children. If a fetus is exposed to high levels of mercury, it may lead to thinking and memory issues later on, he said.
"Young children and pregnant women especially need to keep mercury away from those neurons that are developing," Agus told CBS News.
Consumer Reports tested 10 tuna products from five brands: Bumble Bee, Chicken of the Sea, Safe Catch, StarKist and Wild Planet. The tests included about 30 samples of both albacore and light tuna from each brand, all packed in water, CBS News reported.
Light tuna tended to have less mercury than albacore, the tests showed.
"But you can't tell by just looking how much mercury a specific can has," said Consumer Reports senior scientist Michael Hansen.
Consumer Reports found six individual spikes in mercury content among the 30 samples "that would change the FDA's recommendation about how often someone should eat that particular tuna," CBS News reported.
Consumer Reports said kids should eat only light and skipjack tuna in limited quantities and adults who aren't pregnant should aim for 8 to 12 ounces per week of fish that is lower in mercury.
"That could include up to three servings of light or skipjack tuna. ... You can eat albacore, but only one 4-ounce serving per week," Consumer Reports said.
The tests provide information on what consumers "may experience at a moment in time when eating these brands of tuna, and underscore the importance of making safer choices in their daily routines," Consumer Reports said.
"One big takeaway is that albacore has much more mercury than light or skipjack tuna, regardless of the brand," the organization said. "That's not surprising, since albacore is larger and lives longer than the tunas that make up the light tuna or skipjack tunas. But the disparity was quite wide: The albacore products had three times more mercury, on average, than the others."
Mercury is an impurity in coal. Elemental mercury goes to the clouds when coal is burned and then comes down when it rains, Agus told CBS News. Marine animals consume it when it ends up in the ocean as methylmercury.
It can't be removed through cooking.
While higher levels of mercury appear in shark and swordfish, trout and salmon contain little.
Four of the five tuna companies told CBS News their products are safe and within FDA standards. Bumble Bee did not immediately respond, but told Consumer Reports that "health benefits of consuming seafood far outweigh any potential risk, including concerns about mercury," CBS News reported.
|
<urn:uuid:c4e3204e-1f76-4d42-8a94-7e54c4201839>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652494.25/warc/CC-MAIN-20230606082037-20230606112037-00636.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9580734968185425,
"pii_count": 0,
"score": 2.59375,
"token_count": 719,
"url": "https://cloudflarepoc.newsmax.com/health/diet-and-fitness/tuna-canned-mercury/2023/02/13/id/1108388/"
}
|
Canned tuna is known to contain low levels of mercury, but a new Consumer Reports investigation has found spikes of the neurotoxin in some cans.
The organization tested five popular tuna brands, CBS News reported.
While the mercury levels were all within U.S. Food and Drug Administration standards, Consumer Reports urged pregnant women to "avoid canned tuna altogether."
FDA guidelines say pregnant women can eat canned tuna in limited quantities.
"While canned tuna, especially light varieties, has relatively low average levels of mercury, individual cans can sometimes have much higher levels," Consumer Reports said.
"From can to can, mercury levels can spike in unpredictable ways that might jeopardize the health of a fetus," said James Rogers, director of food safety research and testing at the independent nonprofit.
Mercury can affect neurodevelopment, said CBS News medical contributor Dr. David Agus.
The effects may include impaired brain function and developmental delays in children. If a fetus is exposed to high levels of mercury, it may lead to thinking and memory issues later on, he said.
"Young children and pregnant women especially need to keep mercury away from those neurons that are developing," Agus told CBS News.
Consumer Reports tested 10 tuna products from five brands: Bumble Bee, Chicken of the Sea, Safe Catch, StarKist and Wild Planet. The tests included about 30 samples of both albacore and light tuna from each brand, all packed in water, CBS News reported.
Light tuna tended to have less mercury than albacore, the tests showed.
"But you can't tell by just looking how much mercury a specific can has," said Consumer Reports senior scientist Michael Hansen.
Consumer Reports found six individual spikes in mercury content among the 30 samples "that would change the FDA's recommendation about how often someone should eat that particular tuna," CBS News reported.
Consumer Reports said kids should eat only light and skipjack tuna in limited quantities and adults who aren't pregnant should aim for 8 to 12 ounces per week of fish that is lower in mercury.
"That could include up to three servings of light or skipjack tuna. ... You can eat albacore, but only one 4-ounce serving per week," Consumer Reports said.
The tests provide information on what consumers "may experience at a moment in time when eating these brands of tuna, and underscore the importance of making safer choices in their daily routines," Consumer Reports said.
"One big takeaway is
|
that albacore has much more mercury than light or skipjack tuna, regardless of the brand," the organization said. "That's not surprising, since albacore is larger and lives longer than the tunas that make up the light tuna or skipjack tunas. But the disparity was quite wide: The albacore products had three times more mercury, on average, than the others."
Mercury is an impurity in coal. Elemental mercury goes to the clouds when coal is burned and then comes down when it rains, Agus told CBS News. Marine animals consume it when it ends up in the ocean as methylmercury.
It can't be removed through cooking.
While higher levels of mercury appear in shark and swordfish, trout and salmon contain little.
Four of the five tuna companies told CBS News their products are safe and within FDA standards. Bumble Bee did not immediately respond, but told Consumer Reports that "health benefits of consuming seafood far outweigh any potential risk, including concerns about mercury," CBS News reported.
|
On November 30, 2022, OpenAI announced the public release of ChatGPT, a large language model (LLM) that can engage in astonishingly human-like conversations and answer an incredible variety of questions. Three weeks later, Google's management — wary that they had been publicly eclipsed by a competitor in the artificial intelligence technology space — issued a "Code Red" to staff.
Google's core business is its search engine, which currently accounts for 84% of the global search market. Their search engine is so dominant that searching the internet is generically called "googling." When a user poses a search request, Google's search engine returns dozens of helpful links along with targeted advertisements based on its knowledge of the user (and it knows much more than it should about us). The links are generally helpful, but it can take several minutes to sift through the links to find what one is searching for.
GPT, in contrast, provides direct answers to user questions. No more jumping from link to link in a treasure hunt for information. Microsoft, which has its own competing search engine, thinks that's a big deal. It invested $1 billion in OpenAI in 2019 and has now reportedly agreed to invest $10 billion more over several years with the hope that GPT will supercharge its languishing search engine, Bing, and wrench market share away from Google. Google thinks it's a big deal too. Hence, the "Code Red."
Google has been working on its own large language model, named LaMDA, though it has not made it available for public testing, nor has it allowed Google employees to disclose information about how well it performs. Prodded by the avalanche of excitement over GPT, Google responded with a live public demonstration of LaMDA's prowess on February 7.
Unfortunately, LaMDA bombed its job interview. When asked, "What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?," LaMDA gave two correct responses but then incorrectly stated that the Webb telescope had taken the first pictures of a planet outside our solar system. (The European Southern Observatory's Very Large Telescope had been the first.) Google's stock price promptly dropped 8%, representing a $100 billion loss in the company's market value.
That astonishing and sudden dip speaks to the degree to which AI has become the latest obsession for investors. Yet their confidence in AI — indeed, their very understanding of and definition of it — is misplaced.
GPT is not as great as many think, and LaMDA is not woefully far behind. LaMDA's bad answer is simply a reflection of the inherent flaw in all LLMs, including GPT. Indeed, no one initially noticed but Bing also gave some wrong answers when Microsoft demonstrated it to reporters.
LLMs are mere text generators. Trained on unimaginable amounts of text, they string together words in coherent sentences based on statistical probability of words following other words. But they are not "intelligent" in any real way — they are just automated calculators that spit out words. These models are programmed to assert their answers with great confidence, but they do not know what words mean and consequently have no way of assessing the truth of their confident assertions. Here is one example:
Human: Who was the first female president of California?
GPT: The first female President of California was Erin Cruz, who took office on April 02, 2021.
Erin Cruz is a California Republican who ran for Congress in 2020. She lost. We asked GPT for more information about Ms. Cruz and the program informed us that she is 46 years old, has a Bachelor of Arts degree in Political Science from Stanford, and is married to Derek Stone. None of that is true.
We asked GPT to write a biography of Gary and here is what we got:
Human: Please write a biography of Gary Smith, professor of economics at Pomona College
GPT: Gary Smith is a professor of economics at Pomona College and holds a distinguished position within the economics department. He earned his PhD in economics from Stanford University in 1986 and has taught at Pomona College since then. He is renowned for his work on labor economics, public finance, the economics of education, and industrial organization. He is widely published in books and journals and has served as a consultant to the World Bank, the International Monetary Fund, and United Nations Development Program. Smith is also highly involved in local communities, giving lectures and teaching seminars whenever possible. He has been awarded numerous honors for his work in economics, including the Founder's Award from the Claremont Graduate University in 2010.
The only correct fact in this bio was given by the prompt, that Gary is a professor of economics at Pomona College.
A reliable way of demonstrating GPT-3's unreliability is to ask it to write a biography of yourself. Do try this at home!
AI optimists might respond to GPT-3's shortcomings by proposing that it has simply not taken in enough information. But training it on larger databases will not solve the inherent problem: LLMs are unreliable because they do not know what words mean. Period. In fact, training on future databases that increasingly include the BS spouted by LLMs will make them even less trustworthy.
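To make the "statistical text generator" point concrete, here is a deliberately tiny sketch of the core idea: pick each next word by sampling from the words that followed the current word in the training text. Real LLMs use neural networks over far larger contexts, but even the toy version shows how fluent-sounding output can be produced with no notion of truth; the miniature corpus below intentionally mixes a false sentence (echoing the example above) with a true one, and the model treats them identically.

```python
import random
from collections import defaultdict


def train_bigram_model(text):
    """Count, for each word, which words followed it in the training text."""
    words = text.split()
    following = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        following[current].append(nxt)
    return following


def generate(model, start, length=12):
    """Generate text by repeatedly sampling a next word: fluent, not factual."""
    word, output = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:
            break
        word = random.choice(choices)  # chance proportional to observed frequency
        output.append(word)
    return " ".join(output)


# Toy corpus: one false claim (like GPT's invented "president of California")
# and one true claim. The model has no way to tell them apart.
corpus = (
    "the first female president of california was elected in 2021 "
    "the first female senator of california was elected in 1992"
)
model = train_bigram_model(corpus)
print(generate(model, "the"))
```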
LLMs are the latest wave of artificial intelligence hype. IBM's Dr. Watson was supposed to revolutionize health care. Ten years and $15 billion later, it was sold for parts. Radiologists were supposed to be obsolete by 2021; there are now more radiologists than ever. Fully self-driving cars were supposed to be zipping back and forth between Los Angeles and New York by 2017; we're still waiting for a car that can drive down a street reliably avoiding pedestrians, bicyclists and construction crews.
Now Bill Gates says GPT "will change our world." That may well be true, but not in the ways that most people think.
LLMs can be used for search queries, but people who know that LLMs can't be trusted won't rely on them. People who don't know that LLMs are unreliable will learn the hard way. LLMs can be used to handle customer service queries, but how many companies will be willing to jeopardize their reputations by giving their customers incorrect information? LLMs will certainly be used to fuel a firehose of internet falsehoods, and we count the coming disinformation tsunami as a very big negative.
We also count their impact on electricity usage and carbon emissions as a negative. When we asked GPT, "Who won the Super Bowl this year?," it responded, "The Tampa Bay Buccaneers defeated the Kansas City Chiefs in Super Bowl LV, which was held on February 7, 2021." To keep current, LLMs will have to be retrained frequently, which is enormously expensive. It has also been estimated that involving LLMs in the search process will require "at least four or five times more computing per search."
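The quoted "four or five times more computing per search" estimate is easier to appreciate with a rough back-of-envelope calculation. The query volume and per-search cost below are assumptions chosen for illustration, not figures reported in the article:

```python
# Back-of-envelope sketch of the quoted "4-5x more computing per search" claim.
# All inputs are assumptions for illustration, not reported figures.

searches_per_day = 8.5e9          # assumed global daily search volume
baseline_cost_per_search = 0.002  # assumed compute cost per ordinary search, USD
llm_compute_multiplier = 5        # upper end of the quoted estimate

baseline_daily_cost = searches_per_day * baseline_cost_per_search
llm_daily_cost = baseline_daily_cost * llm_compute_multiplier
extra_per_year = (llm_daily_cost - baseline_daily_cost) * 365

print(f"Baseline compute: ~${baseline_daily_cost / 1e6:.0f}M per day")
print(f"With LLMs:        ~${llm_daily_cost / 1e6:.0f}M per day")
print(f"Extra per year:   ~${extra_per_year / 1e9:.1f}B")
```

Under these assumed numbers, the extra compute bill runs into the tens of billions of dollars a year, which is the point the authors are making about costs versus payoffs.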
Against these enormous costs, where are the big payoffs? As a Financial Times headline blared: "Artificial intelligence stocks soar on ChatGPT hype." The undeniable magic of the human-like conversations generated by GPT will undoubtedly enrich many who peddle the false narrative that computers are now smarter than us and can be trusted to make decisions for us. The AI bubble is inflating rapidly.
That's our code red.
|
<urn:uuid:5363a7a2-dd70-4b42-9809-4bee56ce50cb>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474659.73/warc/CC-MAIN-20240226094435-20240226124435-00420.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.971164345741272,
"pii_count": 0,
"score": 2.6875,
"token_count": 1565,
"url": "https://www.salon.com/2023/02/21/ai-chatbots-are-having-their-tulip-mania-moment/"
}
|
Some of the most brilliant minds in science will be catapulted from academic obscurity this week when the Nobel Prizes in physics, chemistry, and medicine or physiology are announced. The honors, established by Swedish industrialist Alfred Nobel more than a century ago, represent the pinnacle of scientific achievement, celebrating transformative breakthroughs that are often decades in the making.
In addition to the huge publicity, the prizes also draw their share of flak, sometimes triggering controversy and resentment over who gets chosen and who is left out, said Martin Rees, British cosmologist and physicist and former president of the Royal Society, the world’s oldest scientific society.
Rees said one challenge for the Nobel committees is the increasingly collaborative nature of most scientific research. The image of the lone genius having a eureka moment is long gone, if it ever truly existed. Additionally, discoveries can be made simultaneously by different teams. However, the Nobel selection committees, according to the rules laid down by Alfred Nobel in 1895, can only honor up to three people per prize. This requirement can prove to be a headache, Rees said.
“It may be a project where several people have done work in parallel, and they single out some and not others. It may be that there’s a team, and it’s not obvious that the ones they’ve singled from the team are the dominant figures,” said Rees, who is the UK’s astronomer royal and author of “If Science Is to Save Us.”
For example, the 2017 Nobel in physics recognized the detection of gravitational waves — “ripples” in space generated by colliding black holes 1 billion or more light-years away. The key papers reporting this discovery had almost 1,000 authors, Rees noted. However, only three were awarded the prize — Rainer Weiss, Barry Barish and Kip Thorne. Similarly, one often-discussed candidate for the medicine or chemistry Nobel Prize is the mapping of the human genome, a transformative project that was only fully completed in 2022 and involved hundreds of people.
David Pendlebury, head of research analysis at Clarivate’s Institute for Scientific Information, who identifies “Nobel worthy” individuals by analyzing how often fellow scientists cite their key scientific papers throughout the years, agrees that the three-person rule is a constraint.
“It really has become a huge transformation in science that it’s more and more team science — huge groups tackling more difficult problems, international collaborative networks,” Pendlebury said. “This rule of three does seem to be an impediment if they wanted to recognize a team.”
The rule that a prize can only be awarded to three people comes from the statutes of the Nobel Foundation, which is responsible for fulfilling the intentions of Nobel’s will, according to the Nobel Prize website.
Peter Brzezinski, the secretary of the committee for the Nobel chemistry prize, said there were no plans to change the rule. However, he said that the committee follows a detailed process once the nominations have been made by the end of January.
“We start the process by asking a number of experts from around the world to write reports describing the field in which the discovery has been made, to outline the main discoveries in this field and also to mention individuals who have made the most important contributions,” he explained via email.
“We read all relevant literature, attend conferences and write reports also within the committee,” Brzezinski added. “With time, we often succeed in identifying a limited number of scientists who have made the discovery. If this is not possible, we are not able to propose a Prize to the Academy.”
Retrospective view
The Nobel committees typically single out work that happened decades earlier — a retrospective view that’s often needed given that it can take time for the significance of some scientific research to become clear. The Nobels also focus on three scientific disciplines, as designated in the will of Alfred Nobel. Fields including mathematics, computer science, earth and climate science and oceanography are excluded.
Even within the fields of chemistry, physics and medicine and physiology, just five areas out of 114 different scientific subdisciplines account for more than half of Nobel Prizes awarded from 1995 to 2017, according to one 2020 study. These are particle physics, atomic physics, cell biology, neuroscience and molecular chemistry.
Rees, however, noted that taking the long view and giving greater recognition to certain fields can, at times, make the Nobel committees seem out of touch with the scientific priorities of the day. One example is artificial intelligence, or AI, which is transforming people’s lives at an unprecedented pace.
Two hot names in the field are Demis Hassabis and John Jumper, the Google DeepMind inventors of AlphaFold — an AI program that decodes the 3D structures of proteins from amino acid sequences. They won the $250,000 Lasker Prize this year and the Breakthrough Prize a year earlier. Since their key paper was published just over two years ago, it has been cited more than 8,500 times, Pendlebury said.
“That is, in my experience, just incredible in terms of the speed at which the citations have accrued, so obviously, it’s a huge, important intellectual discovery,” said Pendlebury, who has been compiling his list of “citation laureates” since 2002.
The Nobel committees have on occasion awarded accolades to recent breakthroughs — such as when the chemistry prize went to Emmanuelle Charpentier and Jennifer Doudna in 2020, less than 10 years after their key 2012 paper on the CRISPR-Cas9 gene editing technique — but Pendlebury thinks a Nobel Prize for AI this year is still a long shot. He said the Nobel Prize committees, at least for science prizes, are “innately conservative.”
Diversity
Other criticism leveled at the Nobel Prizes includes the lack of diversity among winners. More female scientists have gotten the call from Stockholm in recent years, but it’s been a trickle rather than a torrent. Last year, Carolyn Bertozzi, who won the chemistry prize, was the only female winner of a science prize. There were no female science recipients in 2021 or in 2019, when the Nobel committee asked nominators to consider diversity in gender, geography and field. Astrophysicist Andrea Ghez shared the physics prize in 2020, the same year as Doudna and Charpentier’s chemistry win.
Pendlebury said he believes the lack of diversity on the Nobel stage is essentially a pipeline problem. “They’re looking at work typically published 20 or 30 years ago, when the number of women in science at elite levels was not as much as it is today,” he said. “And so I think as time goes forward, you see more and more women being selected.”
Others point to the issue as more evidence of systemic bias in science, with women already less likely to be given credit or named as lead author on scientific papers. “There are several women who made Nobel-level contributions to science, contributions for which male colleagues were awarded, but they were not,” said Naomi Oreskes, the Henry Charles Lea Professor of the History of Science and an affiliated professor of Earth and planetary sciences at Harvard University. “These examples prove that even when there were qualified women, they were systematically passed over.”
Rees attributes the diversity problem to a lack of transparency. The Nobel short list is secret, as are the nominators, and documents revealing the details of the selection process are sealed from public view for 50 years. Of course, these flaws and gaps only matter because the Nobels are far better known than other science prizes, Rees added. He prefers so-called challenge prizes, such as the XPrize, which incentivize future efforts to tackle an important problem, rather than rewarding past success.
The Nobel Prize in physiology or medicine will be announced on Monday, followed by the physics prize on Tuesday and the Nobel Prize in chemistry on Wednesday. The Nobel Prize for literature and the Nobel Peace Prize will be announced on Thursday and Friday, respectively.
|
<urn:uuid:0c3d5d6f-8651-4008-b3b8-fff4143a1691>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476592.66/warc/CC-MAIN-20240304232829-20240305022829-00895.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9617148637771606,
"pii_count": 0,
"score": 2.671875,
"token_count": 1711,
"url": "https://www.cnn.com/2023/09/29/world/nobel-prize-rule-of-three-diversity-scn/index.html"
}
|
Old age often is seen as adulthood-minus, a situation in which normal losses in appearance, capacity and resources create an overriding impression that agers become less than they were as adults. At first glance that seems logical. But when one looks at the situation in greater depth, it is possible to see that adults who survive into advanced aging while losing some capacities have also gained wisdom and skills they didn’t have as adults.
When the young move into adulthood, they gain new skills and build on previous knowledge and skills as they learn from living with adulthood. They also leave behind some parts of childhood that are no longer appropriate in adulthood. The same process holds true when we as adults move into the years of advanced aging. For both groups this type of learning tends to emerge quietly, day by day and experience by experience. It is noticed only when looking back over time.
When we adults moved into post-adulthood, we brought with us the capacities developed in adult years that then were available to use and build on as we moved ever more deeply into aging. The significant difference is that here we’ve had to manage daily living requirements and overlying conditions with:
- ongoing, normal age related changes (ARCs) to our capacities that are not only continually emerging and progressing (each in its own way)
- pairs or clusters of ARCs each in their current status aiding and abetting each other. E.g. vision, strength, proprioception each affecting the ARC of balance; loss of sense of smell (anosmia) and aging taste buds et al.
Have you ever felt as though daily living is like trying to walk on an unstable, wavy, slack tightrope?
A concomitant outcome is that, day by day and situation by changing situation, we’re having to learn how to fine-tune:
- our awareness of each presenting situation and then
- alterations in present strategies or creation of new ones to deal with them.
The consequences can range from maddening inconvenience to life-threatening risks. But through both failures and successes, learning goes on.
It’s not unusual for youngsters and adults to acknowledge and take pride when they manage a new age-related achievement, and for others to do so as well. Somehow this acknowledgement of achievement seems to be less prevalent for old folks. Perhaps it is because of its focus on tiny details and its constancy. Whatever the reason, agers’ achievements in growth in knowledge and skill tend to be less recognized, or not recognized at all. But recognized or not, post-adulthood offers a remarkable, multidimensional Adulthood Plus.
I love the term ‘adulthood-plus’. Though not as old nor as experienced at aging as you, I feel many age-related changes now that I’m in my late 70s. I also watch my husband in his late 80s as he adapts his daily habits to cope with physical changes. He suffers from mobility limitations due to peripheral neuropathy. Thank goodness for his custom-made leg braces that help him continue walking. We often discuss observations from your experiences with aging on your blog. It is a wonderful resource!
Agree with all you say here, as I so often do. I wish our society’s tendency to believe that they will never age wasn’t such a huge factor in the prevalence of the kind of ageism that doesn’t recognize the triumphs of agers in all aspects of life.
I love reading your blog. So insightful and well put.
It is true that so many hard-won accomplishments are achieved in Post-Adulthood! I am thinking of a lovely 80-something-year-old lady in my neighborhood, who is losing her eyesight and having mobility problems but who still can be seen walking with two canes and the assistance of her daughter or a nearby neighbor. Now THERE’S a picture of resilience, ingenuity, and courage!!
Thank you for a new blog post, even as your 101st birthday is only a few days away. I’m almost 20 years younger than you, but I already notice some of the plusses of old age. For example, my short term memory is shot, my long term memory was never good, and yet certain other cognitive functions seem to be stronger than ever. Creativity is booming. I know you’ll have an explanation for this 🙂
Hello, hope you are okay. I’ve been following your blog and hope that by making successful adaptations I too can stay in my home as long as possible.
Just stopping to say hello
|
<urn:uuid:f99ce08a-1a40-46a9-b29a-b39cdeb10782>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949573.84/warc/CC-MAIN-20230331051439-20230331081439-00085.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9726274609565735,
"pii_count": 0,
"score": 2.6875,
"token_count": 975,
"url": "https://engagingwithagingblog.wordpress.com/2023/01/09/aging-as-adulthood-plus/"
}
|
What happens if we can’t access money from the ATM or our credit card? What if hackers take down the US Treasury? Why do we scrupulously protect ourselves in the offline world with locks, rules, borders, police, and armies, but fall short in cyberspace? It takes a former financial regulator and futurist to ask such probing questions. Thomas P. Vartanian, founder of the Financial Technology & Cybersecurity Center, explores this in his new book The Unhackable Internet: How Rebuilding Cyberspace Can Build Real Security and Prevent Financial Collapse, which follows his American Financial Panics: Crashes, Recessions and Depressions and the Technology That Will Change It All.
Despite reams of regulatory policies from dozens of federal and state authorities, severe and devastating cyberattacks still occur. Hackers outsmart our policies and security upgrades, and the internet is the medium of choice for child sexual abuse material, drugs, weapons, human trafficking, espionage, money laundering, and terrorism. Vartanian says we need to shift toward building systems which are engineered for a security first environment, impose tougher standards on intermediaries, and require more oomph from end users.
In some ways the problem of internet security was predictable. The ARPANET of 1969 was conceived to serve academic and research purposes (and perhaps email), had a finite set of known users, and had the assumption of a secure environment. No one expected that we would be buying stocks and transferring money with our internet-connected smartphones. To be sure, the financial industry invests significantly to secure its systems. While financial firms innovate a variety of important security enhancements, they have limited ability to fix the vulnerabilities of the end user’s setup, much less deter malicious state-based and other actors that want to compromise their systems. This is presumably the job of the US military.
The problem of upgrading the internet is similar to the Global Positioning System (GPS), another important US military system that was never intended to become a commercial platform. The debate is whether to build a new GPS (very expensive), or to identify the priority PNT (positioning, navigation, and timing) services and back them up accordingly, preferably by a reliable, non-government actor. For example, mobile wireless standards are driven by 3GPP (the 3rd Generation Partnership Project), and they are more resilient than GPS. An exhaustive RAND report concludes that a single alternative to GPS is not only impractical but cost-prohibitive. It is smarter to build out dedicated PNT capabilities through a piecemeal approach based on function and application, with private sector actors funding the effort. In fact, companies are already doing this.
Vartanian suggests we could pursue something similar with key internet applications. Indeed, this is already underway with private and offline networks: build the network from the ground up with security as the key requirement; block malicious ports, content, and actors outright; and enforce strict rules on access and behavior. One would enter this online system just as one enters a secure physical location: supervision over access, identification, screening, removal of non-essential items, and entry only for specific purposes. This is very different from the notion of a public internet in which the user enjoys access to all possible end points and applications.
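That "secure building" analogy maps naturally onto a default-deny design: nothing is admitted unless it is explicitly identified, screened, and entering for an approved purpose. The sketch below is a hedged illustration of that posture, not code from Vartanian's book; the users, ports, and purposes are invented for the example.

```python
# Minimal default-deny gatekeeper illustrating a "security first" posture:
# every request must name an authenticated user, an allowed port, and an
# approved purpose, or it is rejected. Policy contents are invented examples.

from dataclasses import dataclass

ALLOWED_PORTS = {443}                              # e.g., TLS only
APPROVED_PURPOSES = {"payments", "statements"}
AUTHENTICATED_USERS = {"alice": "hw-token-1234"}   # strong, non-anonymous identity


@dataclass
class Request:
    user: str
    credential: str
    port: int
    purpose: str


def admit(req: Request) -> bool:
    """Default deny: admit only when every check passes."""
    if AUTHENTICATED_USERS.get(req.user) != req.credential:
        return False  # unknown or unverified identity
    if req.port not in ALLOWED_PORTS:
        return False  # port not explicitly opened
    if req.purpose not in APPROVED_PURPOSES:
        return False  # entry only for specific, pre-approved purposes
    return True


print(admit(Request("alice", "hw-token-1234", 443, "payments")))  # True
print(admit(Request("mallory", "guess", 443, "payments")))        # False
```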
An unhackable internet likely requires enhanced authentication (no more anonymous users) as well as upgrades to more secure hardware and software. The transition to clean hardware is afoot; many service providers now rip and replace malign equipment from Chinese government-aligned manufacturers, though this effort falls short of covering the billions of connected Internet of Things devices running questionable code. Notably, states like Florida and Georgia prohibit the use of such products in state government (think TikTok and laptops from Chinese government-owned Lenovo), but protections for consumers are limited. The Federal Communications Commission, which authorizes equipment for use on US networks, restricts only a handful of entities, even though dozens of sketchy manufacturers have been identified by the US government. Moreover, there is nothing to stop malicious actors from using white-label products or from embedding compromised chips into consumer products (hence the concerns about Apple working with Chinese military-aligned chip fab YMTC).
Vartanian’s tour de force analyzes a continuum of solutions to make the Internet safer and less hackable. It lays out a sober assessment of the financial Armageddon, however unthinkable, that could be avoided but for the demoralizing and dispiriting failure of leadership, which has followed a predictable cycle of attack, rinse, and repeat. The Unhackable Internet offers comprehensive and insightful cybersecurity solutions that just may change the Internet forever. It should be required reading for every executive and policymaker.
|
<urn:uuid:c0c46b2c-7bc4-4a00-bd04-0968f31e640c>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100057.69/warc/CC-MAIN-20231129073519-20231129103519-00200.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9415674209594727,
"pii_count": 0,
"score": 2.703125,
"token_count": 952,
"url": "https://www.forbes.com/sites/roslynlayton/2023/02/08/toward-an-unhackable-internet/?sh=4976686517e0"
}
|
- Since President Joe Biden signed the historic Inflation Reduction Act into law last August, companies have announced more than 100,000 clean energy jobs across the country.
- The wind, solar and EV manufacturing sectors are creating the new positions, according to a report by the nonprofit Climate Power.
- The IRA provides tax credits for zero-carbon energy production and is projected to cut domestic carbon emissions by about 40% by 2030.
Since President Joe Biden signed the historic Inflation Reduction Act into law last August, companies have announced more than 100,000 clean energy jobs across the country, according to a report by the nonprofit Climate Power.
As of the end of January, companies have announced more than 90 new clean energy projects in small towns and larger cities in the U.S., totaling $89.5 billion in new investments, the report found. The wind, solar and EV manufacturing sectors are creating the new positions, which include electricians, mechanics, construction workers and technicians.
Most of the jobs are located in Arizona, Georgia, Michigan, Ohio, South Carolina, Tennessee and Texas.
The IRA is the biggest climate bill ever passed by Congress and provides $369 billion in funding for initiatives like cutting emissions, manufacturing clean energy products and advancing environmental justice initiatives. The bill also includes tax credits for zero-carbon energy production.
The bill's climate provisions are also projected to cut domestic carbon emissions by about 40% by 2030. Early in his presidency, Biden pledged to cut U.S. emissions by at least half from 2005 levels by 2030 and to achieve net-zero emissions by 2050.
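For a sense of scale, here is a rough worked example of what those two 2030 targets imply, assuming both are measured against 2005 levels. The 2005 baseline used below is an approximate figure supplied for illustration, not a number from the report:

```python
# Rough illustration of the two 2030 targets mentioned above. The 2005 baseline
# is an assumed approximate figure (billions of metric tons of CO2-equivalent),
# not a number taken from the article.

baseline_2005_gt = 6.6          # assumed US greenhouse gas emissions in 2005

ira_projection = baseline_2005_gt * (1 - 0.40)   # ~40% cut projected under the IRA
biden_pledge = baseline_2005_gt * (1 - 0.50)     # at-least-half pledge for 2030

print(f"2005 baseline (assumed):   {baseline_2005_gt:.1f} Gt CO2e")
print(f"~40% cut (IRA projection): {ira_projection:.1f} Gt CO2e")
print(f"50% cut (2030 pledge):     {biden_pledge:.1f} Gt CO2e")
print(f"Gap still to close:        {ira_projection - biden_pledge:.2f} Gt CO2e")
```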
Plans include 40 new battery manufacturing sites in states like Arizona, Michigan and South Carolina, according to Climate Power. So far, 22 companies have unveiled plans for new or expanded EV manufacturing in Alabama, Oklahoma and Michigan. And an additional 24 companies have released plans to expand wind and solar manufacturing in Colorado, Ohio and Texas.
The legislation is projected to create more than 9 million clean energy and climate-related jobs over the next decade, according to an analysis by the Political Economy Research Institute at the University of Massachusetts Amherst.
"Businesses are investing in manufacturing like never before, and planning to create good paying jobs in every corner of the country," Lori Lodes, executive director of Climate Power, said in a statement.
"This is only the beginning — we're months after the passage of the Inflation Reduction Act and we're already at the precipice of a renewed manufacturing, Made in America boom that will create opportunities for millions of Americans," Lodes said.
|
<urn:uuid:6ae576e6-cb1b-48ee-b339-7731266a252f>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644855.6/warc/CC-MAIN-20230529105815-20230529135815-00382.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9529736042022705,
"pii_count": 0,
"score": 2.59375,
"token_count": 521,
"url": "https://www.cnbc.com/2023/02/07/inflation-reduction-act-which-states-have-most-new-green-jobs-so-far.html?&doc=107191411"
}
|
(CNN)When archaeologists uncovered the burial site of two brothers who lived during the 15th century BC in Israel, they were surprised to discover that one of them had brain surgery shortly before he died.
The finding marks the earliest example of trephination, a type of cranial surgery, found in the ancient Near East.
Trephination, also known as trepanation, involves cutting a hole in the skull — and there are examples of the medical procedure dating back thousands of years.
The remains of the brothers, who lived during the Bronze Age between 1550 BC and 1450 BC, were found during an excavation of a tomb in the ancient city of Tel Megiddo.
The older brother, estimated to be between 20 and 40 years old, had angular notched trephination. His scalp was cut, and a sharp, beveled-edge instrument was then used to carve four intersecting lines into the frontal bone of his skull, producing a 30-millimeter (1.2-inch) square-shaped hole.
A study detailing the findings was published Wednesday in the journal PLOS ONE.
"We have evidence that trephination has been this universal, widespread type of surgery for thousands of years," said study author Rachel Kalisher in a statement. She is a doctoral candidate at Brown University's Joukowsky Institute for Archaeology and the Ancient World in Providence, Rhode Island.
"But in the Near East, we don't see it so often — there are only about a dozen examples of trephination in this entire region. My hope is that adding more examples to the scholarly record will deepen our field's understanding of medical care and cultural dynamics in ancient cities in this area."
Oddly enough, the bone pieces removed from the skull were included in the grave — but that wasn't the only unusual discovery researchers made about the brothers as they studied their bones.
Bronze Age brothers
The city of Tel Megiddo was part of the Via Maris 4,000 years ago. This crucial land route connected Egypt, Syria, Mesopotamia and Anatolia, according to study coauthor Israel Finkelstein, director of the University of Haifa's School of Archaeology and Maritime Cultures.
Tel Megiddo controlled part of this trade route, making it a wealthy and cosmopolitan city full of palaces, temples and fortifications.
"It's hard to overstate Megiddo's cultural and economic importance in the late Bronze Age," Finkelstein said.
The tomb was found in an area adjacent to a late Bronze Age palace in Tel Megiddo, leading researchers to believe the two men were either high-ranking elite members of society or perhaps even royals. DNA testing revealed the two were related and likely brothers.
The men were buried with Cypriot pottery, food and other valuable possessions similar to those found in other local high-status tombs.
Like many people, Yindjibarndi man Glen Aubrey wants to know what impact major gas and fertiliser plants are having on ancient rock art within a World Heritage-nominated area in Western Australia's north.
"I don't want to see any rock art getting destroyed through emissions or anything like that," he says.
"I'd just like to protect it."
The Murujuga region near Karratha, 1,500 kilometres north of Perth, takes in a vast expanse of land and sea country, and is believed to be home to more than one million petroglyphs.
The region is also home to large industry, including gas and fertiliser plants, which has been a source of controversy and concern within the community.
Mr Aubrey is a local ranger and is being trained in rock art monitoring as part of the Murujuga Rock Art Monitoring Program (MRAMP).
The program was started by the state government, together with the Murujuga Aboriginal Corporation (MAC), which represents five language groups with ongoing cultural connections to the area.
The program has been described as the largest and most comprehensive rock art monitoring effort ever implemented anywhere in the world, and its research will be used to set new emissions standards.
"It's important to be looking after the country," Mr Aubrey says.
"Some of these rock art have been here over 50,000 years and to see the amount of rock art in the area, it's something you've got to see for yourself."
Search for definitive answer on emissions
Ben Mullins from Curtin University is the program's scientific lead and says his job is to determine whether there are any measurable anthropogenic impacts on the rock art.
"So basically if there is anything that humans are doing that is causing the rock art to degrade or have accelerated weathering," he says.
"The dominant theory up until now has been an acid rain kind of hypothesis — that you have these chemicals that are released that can form acids and they can then accelerate the weathering.".
Professor Mullins says one of the long-standing theories is that gases from industry such as nitrogen dioxide, sulphur dioxide and ammonia are impacting rock art.
Despite significant research into whether emissions are damaging rock art in the Murujuga region, opinion remains divided.
Professor Mullins says this latest research project aims to provide a "definitive answer".
"The previous work, whoever has done it, either hasn't had the statistical power to make any determination of what's happening, or has been purely anecdotal and not of any sound scientific basis," he says.
"It's obviously a challenge when we have media and other anecdotal papers coming out trying to say the answer is already there when it is clear that it's not."
Twenty-two new air quality monitors have been installed across Murujuga to collect more data for the project.
The surface of the rocks is also being analysed.
"We're using colour measurement instruments that measure a spectrum that goes beyond human vision and we're able to even pick up chemical changes within that spectrum," fine art conservator Andrew Thorn says.
"The primary measure is a change in colour on the surface, but within that we can also see precursors to colour change, which are chemical reactions possibly.
"We have no guarantees of what we're going to find."
Significance of research not lost on scientists
A team of 55 people is working on the project, and Professor Mullins says the fieldwork will be combined with laboratory work to create a complete understanding of the impact of emissions on the rock art.
The MRAMP will cost about $27 million over six years and run until early 2026.
Woodside, Yara Pilbara and Rio Tinto have all contributed funding to the project through the state government.
"Certainly there are a lot of people watching it, we do feel that to some extent," Professor Mullins says.
"I think this is a world-first, that a scientific research study is going to directly impact industry licences.
"We feel the pressure, but we've assembled a team of some of the world's leading scientists in their respective fields, so I think everyone is fine with the pressure."
Industry supports expanded research
Woodside, Yara Pilbara and Rio Tinto say they take their responsibility to protect and manage cultural heritage seriously and support expanded testing through the MRAMP.
"Woodside recognises the need for further research…which we believe will provide the necessary certainty to guide management and protection of the rock art," a Woodside spokeswoman says.
"The data and analysis derived from continuing and increasing the Murujuga Rock Art Monitoring Program will be critical to inform both public and industry discourse," a Rio Tinto spokeswoman says.
"We continually focus on limiting the impact of our operations, which have been estimated by the Western Australian government as a part of the Murujuga Rock Art Strategy to contribute less than 5 per cent of the total emissions of concern to the Murujuga air shed," Yara Pilbara general manager Laurent Trost says.
In the coming years, MAC rangers such as Mr Aubrey will ramp up their involvement in the project and will be responsible for the ongoing monitoring that will be used by the WA government to regulate industry.
That ongoing monitoring will also be used to help secure World Heritage listing, by showing a plan is in place to monitor and protect the culturally significant region.
"It's still a long process to actually know that anything is actually affecting the rock art," ranger Mr Aubrey says.
"From my point of view, does anybody really know what is happening?
"Until we can actually get some proper results about the rock art and to see this is happening, we've got no answers really."
Why it’s crucial for SA supermarkets to cut prices of 10 key foods by a fifth
The food industry will get a tax break to ease the effects of load shedding on the cost of groceries. But there’s more that the industry can do to keep a basic basket of foods affordable.
Makhulu Klaas, 74, from Mdantsane, a township near East London in the Eastern Cape, is grateful for the R90 extra she’ll get each month as part of her old-age pension, announced in Finance Minister Enoch Godongwana’s budget speech last week.
Klaas lives with her daughter, who works as a cleaner, and two grandchildren, of two and 10 years, in a corrugated iron shack. On the one side, there’s a tarred road, on the other, just behind the tiny backyard of her small unfenced plot, a stream.
The R90 added to her current R1,980 pension from April will help Klaas to buy an extra 5kg bag of mealie meal and half a kilogram more sugar beans. These items are daily staples, which analysts include when tracking the prices of a basket of basic foods.
But while the government gives with one hand, it takes with the other.
From 1 April, electricity will cost 18.65% more, after the National Energy Regulator of South Africa approved a price hike by Eskom. Klaas’s household spends about R500 a month on electricity, which means R93 extra for lights and heating — her entire grant increase.
Mealie meal and dried beans in Klaas’s grocery basket give her family carbohydrates and some proteins — what nutrition scientists call macronutrients. But to stay healthy, people also need foods like meat or eggs, dairy products and vegetables and fruit, say the South African food-based dietary guidelines. These add important micronutrients — vitamins and minerals — to someone’s diet.
But about one in five South African households don’t have enough food on the table, 2021 statistics show, let alone spare cash to spend on things like dairy and vegetables.
Klaas at least is somewhat better off than many others in townships like hers because she can grow beans, spinach and butternut in her small backyard vegetable patch.
In many densely packed informal settlements, though, such home gardens are not viable. These areas easily become “food deserts” — places where it’s difficult to find nutritious foods but high-calorie, low-protein fast foods such as slap chips, sugary drinks and white bread are much easier to buy.
So, how should South Africa tackle the problem of getting people to eat healthy things if they have too little money to spend on food in the first place? Treasury — and the food industry — can take one more step to make food more affordable.
The quick fix
The government’s lifebuoys — such as upping the old-age pension — offer some relief.
In his budget speech, the finance minister now also included food manufacturers in the list of industries that can benefit from the diesel refund, which is already in place for primary producers such as farming, forestry and fishing businesses. This will apply until 2025 to help ease the pressure of inflation on the price of basic nutritious food items, such as maize meal, rice, eggs and amasi.
Child support grants — R480 per child per month — can fund about three-quarters of a child’s basic food needs. This support is for a parent who earns less than R4,400 a month and, according to the national budget for 2023, will be upped to R500 from April and R510 a month from October.
Covid-19 relief grants (R350 a month for people older than 18 who have no income and no other financial help from government) have proved a critical safety net over the past two years. This extra money, which will be available until March next year, spared 1.1-million children from slipping below the food poverty line — a boundary that affected 6.9-million kids in 2020. The food poverty line, which sits at R663 a month at the moment, is the least amount of money someone needs for buying food to give them enough energy for a day.
But these financial buffers are not enough. Stats SA estimates that in 2022 each person in a household needed R945 to live on, taking into account food, clothing and other costs. At R500 a month, this year’s child support grant will only cover about half of what is needed.
The long game
The long-term solution is to shift to a national food and agricultural policy that is designed to make healthy foods more affordable. The United Nations Food and Agriculture Organisation (FAO) says that government subsidies work best if they help consumers to buy more food.
But if primary producers such as farmers simply have to accept the prices food manufacturers and large retailers offer for their products, the plan could fail, FAO says, because it would undermine their livelihood.
South Africa’s Competition Commission has found that food manufacturers and big shopping chains have a big say in setting food prices in the country. While the profit margins on foods such as meat, dairy and chicken are fairly low, the Essential Food Price Monitoring Reports of the Competition Commission find that they still contribute substantially to retail prices. For example, in 2020, only about 30% of what retailers charged for a litre of fresh milk ended up in the farmer’s pocket, despite the farmer doing far more work to produce the milk than the store to sell it. Similarly, the August 2022 report notes that the price consumers pay for bread, and to some extent also maize, has risen faster than what retailers pay the manufacturers, which, the Commission says, suggests price-setting behaviour along the chain.
The responsibility of big manufacturers and supermarkets is not only a moral one, but a financial one too: public funding for social grants is a massive contributor to the revenue of the food retail market in South Africa, equal to 30% of total turnover. This means that taxpayers effectively subsidise part of the profits of the big supermarket chains like Shoprite, Spar and Pick n Pay.
A bold move from food manufacturers and retailers is needed to help South Africans buy nutritious food with the money they have. If they were willing to waive their mark-ups on an essential basket of foods rated by the Grow Great zero-stunting campaign as the “10 best buys” — including eggs, speckled beans, pilchards and peanut butter — and government finances matched that commitment, it would go a long way.
Based on the reports of the Competition Commission, a reasonable estimate is that such discounting by manufacturers and retailers would let the cost of the specified basket of goods drop by about 20%. A matching subsidy by government would make the basket go one-fifth further still, helping to close the gap between what households need and what they earn. Combined with these items’ existing VAT-exempt status, the basket price could end up at roughly half of what the market might otherwise dictate.
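As a rough check on that arithmetic, the sketch below works through a hypothetical basket. The notional R1,000 basket price, the 15% standard VAT rate and the reading of “one-fifth further” as a second 20% reduction are illustrative assumptions, not figures taken from the Competition Commission reports.

```python
# Hedged illustration of the "roughly half price" arithmetic described above.
# The basket price, VAT rate and discount sizes are assumptions for this sketch.

def discounted_basket_price(price_excl_vat: float,
                            markup_waiver: float = 0.20,
                            matching_subsidy: float = 0.20) -> float:
    """Price of a VAT-exempt basket after the industry waiver and a matching subsidy."""
    after_waiver = price_excl_vat * (1 - markup_waiver)    # industry forgoes ~20% mark-up
    return after_waiver * (1 - matching_subsidy)           # state matches with a further fifth off

base = 1000.0                                  # hypothetical basket, in rand, excluding VAT
market_with_vat = base * 1.15                  # what a comparable non-exempt price would include
final = discounted_basket_price(base)

print(f"Market-dictated price incl. VAT: R{market_with_vat:.0f}")
print(f"Discounted, subsidised, VAT-exempt basket: R{final:.0f}")
print(f"Ratio: {final / market_with_vat:.2f}")  # about 0.56 — close to half
```

Under these assumptions the basket lands at about 56% of a VAT-inclusive market price, which is broadly consistent with the “half of what the market might dictate” estimate above.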
It’s not business as usual
The initiative for a basket of discounted foods will have to come from industry. In Egypt, India and the Philippines, targeted food subsidies increased household access to food and reduced the prevalence of underweight children. In field studies in several other countries, including South Africa, retail subsidies helped people to buy healthier foods.
But it doesn’t always work. In China and Iran, for example, the support had little effect because the subsidy — provided by the government to reduce the cost of food — was kept by the retailer rather than passed on to the consumer.
If this idea is going to work, it can’t be done grudgingly. Unless food retailers readily share information about how they determine margins on items, the potential of a discounted food basket to improve nutrition over the next five years may be overestimated. Moreover, times are also tough for these businesses, and the items people choose to put in their grocery baskets may not all be healthy.
These are risks that need to be considered and for which the answers are not yet clear. But what is clear is that it’s no longer acceptable for the food industry to excuse itself from public debate about how we confront the challenge of a nutritionally underpowered nation. MC
David Harrison is the chief executive officer of the DG Murray Trust. The Trust sees itself as a public innovator through strategic investment in early child development, education and support for young people entering the labour market.
Though dogs are so close genetically to wolves that many taxonomists consider them to be a subspecies, most people wouldn't let a wolf lick their hand as readily as a Shih Tzu. When animals are domesticated, as the dog was, their traits change; an artificial selection occurs over many generations, which, in the case of the dog, probably happened through unconscious selection bias among ancient humans and their canid hangers-on.
Other animals, too, saw similar phenotype changes through the process of domestication. When wild boars were first domesticated in areas of both modern Turkey and China, the farmers who bred them preferred animals with less fur, more meat and a tamer disposition. Similarly, a famous 2020 study on wild foxes in urban areas found that the city-dwelling creatures had distinctly different physical traits, such as "a noticeably shortened wider snout with a reduced maxillary [jaw] region" and with "a braincase appeared to be smaller in the urban habitat" — which suggested they were becoming semi-domesticated.
Indeed, there are so many physical traits that humans observe as distinguishing domesticated animals from wild ones that the process is sometimes called "domestication syndrome." As famously laid out by Russian zoologist Dmitry Belyayev in the 1960s and 1970s during his "silver fox experiments," domestication syndrome posits that there are certain common physical traits that emerge among most species as they get domesticated by humans.
"If there isn't a single trait that is common to all these different definitions [of domesticated], what is there?" Larson told Salon.
Yet what if there is no pattern between which traits correspond with domestication? What if those alterations are in fact species-specific, and arguments to the contrary are simply scientists succumbing to apophenia (a tendency to wrongly discern patterns in unrelated things)?
Some researchers believe this may be the case — which, in turn, means we have been thinking about domestication all wrong.
* * *
According to Dr. Greger Larson — an archaeology professor at the University of Oxford — advocates of domestication syndrome simply have not provided enough evidence to support their theory. In 2020, he and several other scientists wrote a paper for the journal Trends in Ecology & Evolution called "The History of Farm Foxes Undermines the Animal Domestication Syndrome" that made precisely this point. It started by deconstructing Belyayev's famous experiment with farm foxes, the one that seemed to prove domestication theory. Belyayev found that after multiple generations of controlled breeding of silver foxes descended from Prince Edward Island fur-farm stock, the animals began to develop traits associated with domesticity: docility, floppy ears, spotted coats and curled tails.
Yet there are two problems with concluding that this proves domestication theory, as the 2020 paper pointed out: First, it ignores that Belyayev used foxes bought from a fur farm, which therefore may have already carried preselected traits. Additionally, an analysis of the different domesticated animals does not show any consistent patterns in terms of evolutionary traits.
"These are the general categories of things that people have used to distinguish a domesticated animal from a wild animal," Larson told Salon. He had pulled out a chart with a list of nine domesticated animals: Dogs, cats, goats, pigs, rabbits, rats, mice, foxes, and the original Russian farm-foxes. These were then cross-referenced with traits associated with domestication such as changes in their skeletons, coats, ears, tails, brain sizes and seasonalities. When placed on a grid, it became patently obvious that no patterns existed. For example, on some occasions both types of foxes along with dogs, cats and goats experienced increased variation in their coat coloring due to domestication — but that did not happen all the time, and it was by far the most prevalent "yes" category for showing supposed signs of "domestication syndrome." Quite often there was simply not enough data, and when it came to traits like skeletal changes and tail evolution, no meaningful trends or patterns existed at all.
"If there isn't a single trait that is common to all these different definitions, what is there?" Larson told Salon. "If you don't have a single characteristic amongst what, 30 separate characteristics here, that is the same across 10 different, separate definitions of it, what the hell is it? Everybody assumes they know what it is, but as soon as you start looking at it, it just vanishes before your fingertips."
Kathryn Lord, a postdoctoral associate in the Karlsson Lab who works at the Broad Institute of MIT and Harvard (and whom Larson described as "brilliant"), was instrumental in co-authoring the 2020 paper. She argued that their research undermined Belyayev's thesis, even though his experiment is still impressive.
"While [Belyayev]'s did indeed successfully select for increased tameability in his foxes, all of the traits that supposedly came along with that selection previously existed in the population from Prince Edwards Island decades before the experiment," Lord told Salon by email. "Therefore, [Belyayev]'s experiment, while still fantastically interesting for changes in tameability, does not provide support for the idea of the domestication syndrome."
After all, Lord pointed out, there is no getting around the facts from their own paper: "We found that none of the traits show up across the 7 very common domestic mammals we looked into," Lord wrote to Salon. "In most cases where domestication syndrome traits were reported they were appearing in specific modern breeds. The problem with this is that modern animal breeds only came into existence in the 19th century and are the result of selection on already domestic animals."
* * *
So can science definitively determine whether domestication syndrome is a real thing?
Larson explained that the community must first come to a more concrete understanding of what it would actually look like. For now, it is more difficult to study domestication because the benchmarks can move around. Larson compared this to his experience working in a very different industry — music.
"There is increasing evidence to suggest that many domestic animals simply adapted to an environment we created and that we may have rarely, if ever, had a direct hand in the process."
"I was kind of a consultant on the Pearl Jam film 'Pearl Jam Twenty' that came out in 2011," Larson recalled. "There was a group of us then who were meeting, and one of the guys suggested that maybe what we should do is try and figure out how many live shows they'd actually played because it appeared to all of us said they might be getting close to a thousand live shows." That seemed like a cool idea — until the experts realized that "live show" had a rather fluid definition.
"Very much like the domestication syndrome, you've got to define it," Larson recalled. "Well, what constitutes a live show? Did the three songs on "Saturday Night Live" count? What about when they only played in front of small audiences? If they appeared on a radio with only two members of the band, did that count?
"It's the same thing with how you define domestication syndrome," Larson told Salon. "It definitely depends on what you're looking for and what you're counting. So all we are saying in that paper is that everybody has just assumed that it's a real thing, but nobody's actually gone through and tried to define it. If everybody had first said, 'Look, it's a thing, now we require an explanation to describe the thing,' that can work, but nobody ever actually tested whether or not this thing exists. So if the thing doesn't exist, why would you ever expect a single unifying cause for it?"
Even the one trait that seemed to be most prevalent among all different types of domesticated animals — their tameability — becomes murkier when placed in a scientific context.
"Increased tameabillity may be a common theme across domestic animals," Lord wrote, but "the definition of domestication often includes increased tameability so it is a bit of a circularity problem. We will only know if increased tameability is really a thing across domestic animals if we agree upon a definition of domestication that doesn't require it." Similarly, when trying to determine how animals are changed by humans, "many definitions require human control precluding our ability to study how much humans were actively involved in the process. There is increasing evidence to suggest that many domestic animals simply adapted to an environment we created and that we may have rarely if ever had a direct (let alone intentional) hand in the process."
Milder winters mean Connecticut's ticks are now active all year long
Milder, shorter winters in Connecticut have all but eliminated tick “seasons” as state officials now say ticks are active year-round.
The state’s Active Tick Surveillance Program used to get 50 tick specimens from December to March. But in recent months, it’s received 800, according to Dr. Goudarz Molaei, who leads the program and is with the Connecticut Agricultural Experiment Station.
When temperatures drop below freezing, ticks burrow under leaves or snow, and that cold weather can actually kill tick populations. But more mild winters can keep ticks from going dormant and also reduce mortality, leading to a population increase.
It also means they can present a bigger danger to people.
“Once the temperature increases to above freezing, temperature close to 40 degrees [Fahrenheit], they come out and actively seek hosts,” Molaei said.
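As a rough illustration of the thresholds Molaei describes, the sketch below encodes them as a simple rule of thumb. The cutoffs and category labels are assumptions based only on the temperatures quoted in this article (sub-freezing dormancy, host-seeking near 40 degrees Fahrenheit); they are not part of any model the surveillance program actually uses.

```python
# Illustrative only: a toy rule based on the temperatures quoted in this article,
# not an actual model from the Connecticut Active Tick Surveillance Program.

def tick_activity(temp_f: float) -> str:
    """Classify likely tick behaviour at a given air temperature in degrees Fahrenheit."""
    if temp_f <= 32:
        return "dormant: ticks shelter under leaf litter or snow"
    if temp_f < 40:
        return "low activity: above freezing but below the ~40°F host-seeking range"
    return "active: ticks may be out seeking hosts"

for t in (25, 35, 42, 55):
    print(t, "->", tick_activity(t))
```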
Average air temperatures in Connecticut have risen by more than two degrees Fahrenheit in recent decades, which can often be the difference between freezing and not freezing, according to the state Department of Energy and Environmental Protection. Winter seasonal temperature averages have experienced the biggest jump, according to the Connecticut Institute for Resilience & Climate Adaptation.
Jessica Spaccio, a climatologist at the Northeast Regional Climate Center, said higher temperatures are being seen year-round in Connecticut and in the rest of New England.
"Winter is just another part of that," Spaccio said. "Where we are seeing the long-term trends showing that overall, that the temperatures are continuing to climb due to anthropogenic climate change."
NRCC projects that temperatures in the state will remain mild for the rest of January.
Long term, people in the Northeast will be infected with more tick-borne diseases, Molaei added, noting that Lyme disease has been detected equally among tick specimens in the winter and summer. Lyme, primarily carried by blacklegged ticks, has already been steadily rising in the U.S., and its prevalence nearly doubled between 1991 and 2018.
In addition, invasive tick species like the Asian longhorned tick and Gulf Coast tick have been detected in the state, particularly in Fairfield and New Haven counties.
Molaei said it’s time for people to realize that Connecticut is a state with pervasive tick areas and that it’s incorrect to think that ticks go away during the winter – or that they’re not dangerous.
“There is no such thing that [in] wintertime we are dealing with less ticks, or the ticks that we are having in winter times may not be able to transmit Lyme disease or other tick-borne diseases,” Molaei said.
To prevent tick bites, the CDC recommends:
- Avoiding potential tick-filled places like high grassy or wooded areas
- Tucking pant legs into socks
- Walking along the center of wooded trails
- Checking your clothing, gear and pets for ticks upon return
- For pets, talking to your vet about prevention products
- Showering within two hours of coming indoors
- Doing a full body check to search for ticks that may have bitten you
For information on submitting a tick to test for diseases or identification, visit the Connecticut Agricultural Experiment Station website.
|
<urn:uuid:2cfe437f-3eb3-4cb5-9f35-080d8c05d6c4>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945376.29/warc/CC-MAIN-20230325222822-20230326012822-00218.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9453860521316528,
"pii_count": 0,
"score": 3.109375,
"token_count": 693,
"url": "https://www.ctpublic.org/news/2023-01-18/milder-winters-mean-connecticuts-ticks-are-now-active-all-year-long"
}
|
Milder winters mean Connecticut's ticks are now active all year long
Milder, shorter winters in Connecticut have all but eliminated tick “seasons” as state officials now say ticks are active year-round.
The state’s Active Tick Surveillance Program used to get 50 tick specimens from December to March. But in recent months, it’s received 800, according to Dr. Goudarz Molaei, who leads the program and is with the Connecticut Agricultural Experiment Station.
When temperatures drop below freezing, ticks burrow under leaves or snow, and that cold weather can actually kill tick populations. But milder winters can keep ticks from going dormant and also reduce mortality, leading to a population increase.
It also means they can present a bigger danger to people.
“Once the temperature increases to above freezing, temperature close to 40 degrees [Fahrenheit], they come out and actively seek hosts,” Molaei said.
Average air temperatures in Connecticut have risen by more than two degrees Fahrenheit in recent decades, which can often be the difference between freezing and not freezing, according to the state Department of Energy and Environmental Protection. Winter seasonal temperature averages have experienced the biggest jump, according to the Connecticut Institute for Resilience & Climate Adaptation.
Jessica Spaccio, a climatologist at the Northeast Regional Climate Center, said higher temperatures are being seen year-round in Connecticut and in the rest of New England.
“Winter is just another part of that,” Spaccio said. “Where we are seeing the long-term trends showing that overall, that the temperatures are continuing to climb due to anthropogenic climate change.”
NRCC projects that temperatures in the state will remain mild for the rest of January.
Long term, people in the Northeast will be infected with more tick-borne diseases, Molaei added, noting that Lyme disease has been detected equally among tick specimens in the winter and summer. Lyme, primarily carried by blacklegged ticks, has already been steadily rising in the U.S., and its prevalence nearly doubled between 1991 and 2018.
In addition, invasive tick species like the Asian longhorn tick and Gulf Coast tick have been detected in the state, particularly in Fairfield and New Haven counties.
Molaei said it’s time for people to realize that Connecticut is a state with pervasive tick areas and that it’s incorrect to think that ticks go away during the winter – or that they’re not dangerous.
“There is no such thing that [in] wintertime we are dealing
|
with less ticks, or the ticks that we are having in winter times may not be able to transmit Lyme disease or other tick-borne diseases,” Molaei said.
To prevent tick bites, the CDC recommends:
- Avoiding potential tick-filled places like high grassy or wooded areas
- Tucking pant legs into socks
- Walking along the center of wooded trails
- Checking your clothing, gear and pets for ticks upon return
- For pets, talking to your vet about prevention products
- Showering within two hours of coming indoors
- Doing a full body check to search for ticks that may have bitten you
For information on submitting a tick to test for diseases or identification, visit the Connecticut Agricultural Experiment Station website.
|
Lung cancer survival rates rise, but low screening rates leave many people at risk
Survival rates for lung cancer are improving, especially among historically marginalized communities of color, according to a new survey from the American Lung Association released Tuesday.
The findings are a bright note amid deepening racial disparities in many areas in health care.
The five-year lung cancer survival rate increased by 22% over the five years from 2015 to 2019. It currently stands at 26.6% across all racial and ethnic groups. Among people of color, the survival rate increased by 17% in just two years (2017-2019), and now stands at 23.7%.
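If the 22% and 17% figures are relative increases rather than percentage-point gains, the earlier rates can be back-calculated from the current ones. The short Python sketch below is only an illustrative check of that arithmetic under that assumption; the baseline figures are not stated in the report itself.

```python
# Back-calculate the implied earlier five-year survival rate, assuming the
# reported increases are relative (not percentage-point) changes.
def baseline_rate(current_rate: float, relative_increase: float) -> float:
    return current_rate / (1 + relative_increase)

overall_2019 = 26.6   # % survival across all groups, per the report
poc_2019 = 23.7       # % survival among people of color, per the report

print(round(baseline_rate(overall_2019, 0.22), 1))  # ~21.8% implied earlier rate
print(round(baseline_rate(poc_2019, 0.17), 1))      # ~20.3% implied 2017 rate
```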
The survey results were "unexpected," says Zach Jump, director of epidemiology and statistics for the American Lung Association, adding that the speed with which racial disparities appear to be closing is remarkable.
"We are encouraged by the work being done to eliminate lung cancer stigma, increase lung cancer screening and improve lung cancer treatment," said Harold Wimmer, president and CEO of the American Lung Association in a statement.
Lung cancer is still the cancer that kills the most Americans, with 127,000 deaths last year. People of color tend to be diagnosed at later stages than their white counterparts and are less likely to get access to treatments like surgery, disparities that have historically reduced their likelihood of survival.
Survival improvements are not equal across all races and ethnicities, and some disparities still exist. The white survival rate is 25%, but the survival rate is 21% for Black Americans, 22% for Indigenous peoples, and 23% for Hispanics. These rates are an improvement over data from two years earlier, when the survival rates were only 18% for Black Americans, and 19% for Indigenous peoples and Hispanics.
Asian Americans survive lung cancer at higher rates than whites, and their survival rate jumped from 23.4% to 29% over two years.
Jump says he hopes these improvements can be continued, and replicated across other racial disparities in health care. "Honestly, that is our next question: Trying to find out what the driving factor is behind it."
The report also notes some stark geographic disparities in lung cancer survival rates. Patients in Rhode Island had a 33% survival rate, while Oklahoma's was 21%.
Overall lung cancer five-year survival rates are markedly lower than many other cancers. Breast cancer, for instance, has a 91% five-year survival rate, and colorectal cancer's rate is around 65%.
Survival rates for lung cancer could be higher, Jump says, if more people at high risk got annual low-dose CT scans, which are an effective way to catch the disease early. When caught at an early stage, lung cancer's five-year survival rate is much higher at 63%.
But last year only 4.5% of those eligible were screened for lung cancer — a rate far below that for breast or colorectal cancers.
In fact, just over a quarter of lung cancer cases are diagnosed at an early stage, according to the report, and 44% of cases are not caught until a late stage when the survival rate is only 8%.
Jump says lung cancer does not have to be the same dire diagnosis it once was, thanks to recent new treatments that are proving very effective, especially when used at an early stage. "Suddenly you started getting these targeted immunotherapies, and it was a paradigm shift," he says.
Jump says he hopes screening rates will improve, pushing survival rates higher.
It's rare to see such dramatic improvements in cancer care and survival rates over such a short time, especially in ways that benefit disadvantaged communities.
"So often, cancer care in general and lung cancer especially moves at a pretty slow pace," Jump says. "So being able to see significant progress over a couple of years has been very exciting and definitely a cause for optimism."
Copyright 2024 NPR. To see more, visit https://www.npr.org.
|
<urn:uuid:1ed96fbb-635d-4587-804c-713ec54503cc>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474595.59/warc/CC-MAIN-20240225103506-20240225133506-00029.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9687252044677734,
"pii_count": 0,
"score": 2.765625,
"token_count": 808,
"url": "https://www.kgou.org/health/2023-11-13/lung-cancer-survival-rates-rise-but-low-screening-rates-leave-many-people-at-risk"
}
|
Lung cancer survival rates rise, but low screening rates leave many people at risk
Survival rates for lung cancer are improving, especially among historically marginalized communities of color, according to a new survey from the American Lung Association released Tuesday.
The findings are a bright note amid deepening racial disparities in many areas in health care.
The five-year lung cancer survival rate increased by 22% over the five years from 2015 to 2019. It currently stands at 26.6% across all racial and ethnic groups. Among people of color, the survival rate increased by 17% in just two years (2017-2019), and now stands at 23.7%.
The survey results were "unexpected," says Zach Jump, director of epidemiology and statistics for the American Lung Association, adding that the speed with which racial disparities appear to be closing is remarkable.
"We are encouraged by the work being done to eliminate lung cancer stigma, increase lung cancer screening and improve lung cancer treatment," said Harold Wimmer, president and CEO of the American Lung Association in a statement.
Lung cancer is still the cancer that kills the most Americans, with 127,000 deaths last year. People of color tend to be diagnosed at later stages than their white counterparts and are less likely to get access to treatments like surgery, disparities that have historically reduced their likelihood of survival.
Survival improvements are not equal across all races and ethnicities, and some disparities still exist. The white survival rate is 25%, but the survival rate is 21% for Black Americans, 22% for Indigenous peoples, and 23% for Hispanics. These rates are an improvement over data from two years earlier, when the survival rates were only 18% for Black Americans, and 19% for Indigenous peoples and Hispanics.
Asian Americans survive lung cancer at higher rates than whites, and their survival rate jumped from 23.4% to 29% over two years.
Jump says he hopes these improvements can be continued, and replicated across other racial disparities in health care. "Honestly, that is our next question: Trying to find out what the driving factor is behind it."
The report also notes some stark geographic disparities in lung cancer survival rates. Patients in Rhode Island had a 33% survival rate, while Oklahoma's was 21%.
Overall lung cancer five-year survival rates are markedly lower than many
|
other cancers. Breast cancer, for instance, has a 91% five-year survival rate, and colorectal cancer's rate is around 65%.
Survival rates for lung cancer could be higher, Jump says, if more people at high risk got annual low-dose CT scans, which are an effective way to catch the disease early. When caught at an early stage, lung cancer's five-year survival rate is much higher at 63%.
But last year only 4.5% of those eligible were screened for lung cancer — a rate far below that for breast or colorectal cancers.
In fact, just over a quarter of lung cancer cases are diagnosed at an early stage, according to the report, and 44% of cases are not caught until a late stage when the survival rate is only 8%.
Jump says lung cancer does not have to be the same dire diagnosis it once was, thanks to recent new treatments that are proving very effective, especially when used at an early stage. "Suddenly you started getting these targeted immunotherapies, and it was a paradigm shift," he says.
Jump says he hopes screening rates will improve, pushing survival rates higher.
It's rare to see such dramatic improvements in cancer care and survival rates over such a short time, especially in ways that benefit disadvantaged communities.
"So often, cancer care in general and lung cancer especially moves at a pretty slow pace," Jump says. "So being able to see significant progress over a couple of years has been very exciting and definitely a cause for optimism."
Copyright 2024 NPR. To see more, visit https://www.npr.org.
|
After years of drought, ranchers face a historic hay shortage
Hay inventory in the United States is at a 50-year low, according to a federal agriculture statistics agency. Drought and a spike in fertilizer prices are largely to blame.
Vanessa Corriher-Olson, a professor and extension forage specialist in the Department of Soil and Crop Sciences at Texas A&M University, spoke to the Texas Standard about the shortage’s impact on farmers and ranchers in Texas.
This transcript has been edited lightly for clarity:
Texas Standard: Tell us why there’s so little hay out on the landscape right now. What’s driving this?
Vanessa Corriher-Olson: Unfortunately, our year, our hay season for 2022 was very challenging – along with drought conditions in the majority of our state of Texas, we also had high fertilizer prices.
This is going to have ripple effects down the line. How have ranchers been affected by this shortage of hay?
So at this time of year, it’s difficult to plant anything. We’re basically in the middle of winter. So for folks that didn’t plant any winter pasture last fall, they’re kind of in a bind or may possibly be in a bind because of hay availability, or just what’s available might be low in nutritive value or may not meet the needs of their livestock. So a lot of folks are kind of scrambling, possibly trying to find hay or trying to look for an alternative to meet the needs of their livestock at this point in the season.
Well, if you can find the hay, how much more are you paying for it? And does it make economic sense to go there?
Well, hay prices do kind of vary depending on where you are in the state. Most hay is likely going to be over $100 a round bale, if not more, in some parts of our state. A lot of round bales of hay can cost a producer to produce, you know, over $160 a bale. So most hay producers, even during a normal season – good growing conditions and moderate fertilizer prices – are probably underselling themselves, selling hay for cheaper price than what it likely cost them to produce.
What has to happen now for hay stocks to get back to normal?
A lot of things. One, our hay meadows have to recover from the drought of 2022. And for some producers, their recovery will take longer than others. It will depend on how those hay meadows were managed during 2022, during the drought, and whether or not they were fertilized at all during 2022 with higher fertilizer prices.
So one of my recommendations at this point as we move forward for those hay producers is to soil test to determine what their nutrient status is and what is needed to promote that desired warm, seasonal perennial and what is going to be needed to boost production if we do have good rainfall this next season in 2023. So rainfall will also be important, but nutrients – nitrogen, phosphorus and potassium – are going to be critical for recovery and production of those hay meadows in 2023.
I’m trying to get a 30,000-foot view of this. What does this mean if hay stocks and prices don’t return to normal? Are we talking about ranches that could go out of business here?
I don’t think it’s necessarily that dire. It is possible it could force some folks to make decisions about their future and ranching. Hopefully, it will make folks reevaluate their management of their forage systems, maybe look at being more thorough or at completing soil test using that very inexpensive process to determine what nutrients are needed for their forage production. And, you know, visiting with county extension agents or, you know, specialists like myself to get recommendations on how to move forward despite fertilizer prices, if they remain fairly high, and making good best management practices and decisions to promote that desired forage and to continue to be productive.
We do live in Texas, so we have to be prepared, and we know that the drought is always a possibility during our summers. And so I’m hoping that moreso our previous situation is going to open up people’s eyes in regards to following best management practices to be more effective with their forage systems, whether we’re in a drought or have good growing conditions; whether we have high fertilizer prices or fertilizer prices come down. So hopefully folks will reevaluate their production systems before they completely get out of the business and see if there’s a way they can make some improvements to be more efficient.
If you found the reporting above valuable, please consider making a donation to support it here. Your gift helps pay for everything you find on texasstandard.org and KUT.org. Thanks for donating today.
|
<urn:uuid:45e0fd5f-9d36-4636-9f04-016631916179>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943746.73/warc/CC-MAIN-20230321193811-20230321223811-00404.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9566308856010437,
"pii_count": 0,
"score": 2.65625,
"token_count": 1007,
"url": "https://www.kut.org/texasstandard/2023-02-08/texas-ranchers-face-historic-hay-shortage-after-years-drought"
}
|
After years of drought, ranchers face a historic hay shortage
Hay inventory in the United States is at a 50-year low, according to a federal agriculture statistics agency. Drought and a spike in fertilizer prices are largely to blame.
Vanessa Corriher-Olson, a professor and extension forage specialist in the Department of Soil and Crop Sciences at Texas A&M University, spoke to the Texas Standard about the shortage’s impact on farmers and ranchers in Texas.
This transcript has been edited lightly for clarity:
Texas Standard: Tell us why there’s so little hay out on the landscape right now. What’s driving this?
Vanessa Corriher-Olson: Unfortunately, our year, our hay season for 2022 was very challenging – along with drought conditions in the majority of our state of Texas, we also had high fertilizer prices.
This is going to have ripple effects down the line. How have ranchers been affected by this shortage of hay?
So at this time of year, it’s difficult to plant anything. We’re basically in the middle of winter. So for folks that didn’t plant any winter pasture last fall, they’re kind of in a bind or may possibly be in a bind because of hay availability, or just what’s available might be low in nutritive value or may not meet the needs of their livestock. So a lot of folks are kind of scrambling, possibly trying to find hay or trying to look for an alternative to meet the needs of their livestock at this point in the season.
Well, if you can find the hay, how much more are you paying for it? And does it make economic sense to go there?
Well, hay prices do kind of vary depending on where you are in the state. Most hay is likely going to be over $100 a round bale, if not more, in some parts of our state. A lot of round bales of hay can cost a producer to produce, you know, over $160 a bale. So most hay producers, even during a normal season – good growing conditions and moderate fertilizer prices – are probably underselling themselves, selling hay for cheaper price than what it likely cost them to produce.
What has to happen now for hay stocks to get back to normal?
A lot of things. One, our hay meadows have to recover from the drought of 2022. And for some producers, their recovery will take longer than others. It will depend on how those hay meadows were
|
managed during 2022, during the drought, and whether or not they were fertilized at all during 2022 with higher fertilizer prices.
So one of my recommendations at this point as we move forward for those hay producers is to soil test to determine what their nutrient status is and what is needed to promote that desired warm, seasonal perennial and what is going to be needed to boost production if we do have good rainfall this next season in 2023. So rainfall will also be important, but nutrients – nitrogen, phosphorus and potassium – are going to be critical for recovery and production of those hay meadows in 2023.
I’m trying to get a 30,000-foot view of this. What does this mean if hay stocks and prices don’t return to normal? Are we talking about ranches that could go out of business here?
I don’t think it’s necessarily that dire. It is possible it could force some folks to make decisions about their future and ranching. Hopefully, it will make folks reevaluate their management of their forage systems, maybe look at being more thorough or at completing soil test using that very inexpensive process to determine what nutrients are needed for their forage production. And, you know, visiting with county extension agents or, you know, specialists like myself to get recommendations on how to move forward despite fertilizer prices, if they remain fairly high, and making good best management practices and decisions to promote that desired forage and to continue to be productive.
We do live in Texas, so we have to be prepared, and we know that the drought is always a possibility during our summers. And so I’m hoping that moreso our previous situation is going to open up people’s eyes in regards to following best management practices to be more effective with their forage systems, whether we’re in a drought or have good growing conditions; whether we have high fertilizer prices or fertilizer prices come down. So hopefully folks will reevaluate their production systems before they completely get out of the business and see if there’s a way they can make some improvements to be more efficient.
If you found the reporting above valuable, please consider making a donation to support it here. Your gift helps pay for everything you find on texasstandard.org and KUT.org. Thanks for donating today.
|
Pumpkin or sweet potato pie? There's a rich history behind both
For many Americans, the Thanksgiving feast just isn't complete until they dig into a slice of pumpkin or sweet potato pie — and for some, there's simply no room for both.
The two desserts have similarities — both are orange, sweet and have a bit of spice — but they have distinct histories that have led to dedicated followings.
At Ol' Henry Restaurant in suburban St. Louis, opinions are strong.
"Sweet potato pie. Definitely," said owner Ada Joyce Taylor.
Her granddaughter, operational manager Audrianna Black, agrees. "I'm looking for that sweetness in a pie," she added.
Taylor started the soul food business almost 10 years ago, naming it after her late husband, Henry Edward Taylor. Today, the walls are lined with signed photos and memorabilia, and it's a well-known destination for those craving sweet potato pie.
"On a normal week, I would say that we go through 15 to 20 sweet potato pies," said chef Tracy Stevenson. "During the holiday, I would say 40 or 50."
Both sweet potato and pumpkin pies are American staples with a starchy custardlike filling. Pumpkin pie tends to be heavier on spice, while sweet potato pie is typically sweeter and lighter.
But the reasons why people choose one over the other often trace back to where they were raised and their race.
"Always sweet potato pie"
The origins of these two desserts stretch back to the 15th and 16th centuries, when European settlers brought sweet potatoes and pumpkins back to Europe from the Americas.
With sweet potato pie, settlers may have been trying to replicate the European dish carrot pie, a dessert with layered slices of carrots, according to soul food historian and James Beard Award-winning author Adrian Miller.
"You'll see recipes in the historical record that have sweet potato pie in layered slices," Miller said. "They know that they're sweet and saying, 'Hey, we've got this natural sweet vegetable that's orange, I could use this as a substitute for carrot pie.'"
Miller said carrot pie recipes of the time had similar methods and spicing as those used for sweet potato pie.
As colonization expanded across the Americas, enslaved Africans learned how to cook sweet potatoes. Miller said the potatoes may have resonated among enslaved people because they're somewhat similar to yams, a root vegetable grown in West Africa.
However, he said, the idea of putting sweet potato into pie form came from European culture.
"It's really the expertise of these enslaved cooks, making something unfamiliar to them to please their slaveholders, that eventually gets embraced and adopted," Miller said. "It's the same with things like fried chicken, barbecue and all these other things that were not necessarily part of the West African food heritage."
Since then, sweet potato pie has become a hallmark of Black cuisine and common on dinner tables across the South.
For Miller, sweet potato pie means family and culture.
"We've never had a pumpkin pie grace our table, unless somebody who doesn't know us brought one over, if we invited someone," Miller said with a smile. "And that's on us, right? Because we should have given them a warning not to bring that stuff over. So it's always sweet potato pie."
"The perfect Thanksgiving dessert"
But it's pumpkin pie that plays a central role in the portrait of a quintessential Thanksgiving dinner.
Like sweet potato pie, pumpkin pie's roots go back to a time when European settlers were experimenting with foods grown locally. They learned how to boil or steam pumpkins that were first domesticated by Native Americans. Early pumpkin pie recipes can be found in both early Colonial and French cookbooks.
"The pumpkin pie itself has this whole world history within it, not only the Columbian Exchange, but the spice exchange," said culinary historian and Roosevelt University Professor Emeritus Bruce Kraig.
But an 1827 novel is largely responsible for the dessert's connection to the holiday. In Northwood: A Tale of New England, author and activist Sarah Josepha Hale dedicated a chapter to describing what is the now-classic Thanksgiving dinner.
"She virtually invented it — the turkeys, the cranberry sauce," Kraig said. "She said pumpkin pie is the American pie."
In 1863, Hale convinced President Abraham Lincoln to declare a national day of Thanksgiving.
Pumpkin pie remains closely linked to the holiday.
At The Blue Owl Bakery in House Springs, Mo., owner Kim Byerly said demand is through the roof for the restaurant's pumpkin pies. They expect to sell several hundred and are expanding by shipping the pies across the country this year.
Byerly is a fan of pumpkin pie, even if it took her years to actually try it.
"I just like the flavor and the consistency of it," Byerly said. "You mix it with that whipped cream and it's just like the perfect Thanksgiving dessert."
The Blue Owl's bakers also make several variations of the traditional pumpkin pie, including pumpkin gooey butter cheesecake, but so far, no sweet potato pie.
Cindi Wittu, a baker at The Blue Owl, said that while she's had both pies, she prefers pumpkin.
"I'm partial to the spices that are in it," she said. "It just reminds me of home, of family."
Copyright 2023 Harvest Public Media
|
<urn:uuid:ec0e92f4-dcb9-40f8-bdf5-87f340c0a943>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475203.41/warc/CC-MAIN-20240301062009-20240301092009-00819.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9621683955192566,
"pii_count": 0,
"score": 2.671875,
"token_count": 1123,
"url": "https://www.ctpublic.org/2023-11-22/pumpkin-or-sweet-potato-pie-theres-a-rich-history-behind-both"
}
|
Pumpkin or sweet potato pie? There's a rich history behind both
For many Americans, the Thanksgiving feast just isn't complete until they dig into a slice of pumpkin or sweet potato pie — and for some, there's simply no room for both.
The two desserts have similarities — both are orange, sweet and have a bit of spice — but they have distinct histories that have led to dedicated followings.
At Ol' Henry Restaurant in suburban St. Louis, opinions are strong.
"Sweet potato pie. Definitely," said owner Ada Joyce Taylor.
Her granddaughter, operational manager Audrianna Black, agrees. "I'm looking for that sweetness in a pie," she added.
Taylor started the soul food business almost 10 years ago, naming it after her late husband, Henry Edward Taylor. Today, the walls are lined with signed photos and memorabilia, and it's a well-known destination for those craving sweet potato pie.
"On a normal week, I would say that we go through 15 to 20 sweet potato pies," said chef Tracy Stevenson. "During the holiday, I would say 40 or 50."
Both sweet potato and pumpkin pies are American staples with a starchy custardlike filling. Pumpkin pie tends to be heavier on spice, while sweet potato pie is typically sweeter and lighter.
But the reasons why people choose one over the other often trace back to where they were raised and their race.
"Always sweet potato pie"
The origins of these two desserts stretch back to the 15th and 16th centuries, when European settlers brought sweet potatoes and pumpkins back to Europe from the Americas.
With sweet potato pie, settlers may have been trying to replicate the European dish carrot pie, a dessert with layered slices of carrots, according to soul food historian and James Beard Award-winning author Adrian Miller.
"You'll see recipes in the historical record that have sweet potato pie in layered slices," Miller said. "They know that they're sweet and saying, 'Hey, we've got this natural sweet vegetable that's orange, I could use this as a substitute for carrot pie.'"
Miller said carrot pie recipes of the time had similar methods and spicing as those used for sweet potato pie.
As colonization expanded across the Americas, enslaved Africans learned how to cook sweet potatoes. Miller said the potatoes may have resonated among enslaved people because they're somewhat similar to yams, a
|
root vegetable grown in West Africa.
However, he said, the idea of putting sweet potato into pie form came from European culture.
"It's really the expertise of these enslaved cooks, making something unfamiliar to them to please their slaveholders, that eventually gets embraced and adopted," Miller said. "It's the same with things like fried chicken, barbecue and all these other things that were not necessarily part of the West African food heritage."
Since then, sweet potato pie has become a hallmark of Black cuisine and common on dinner tables across the South.
For Miller, sweet potato pie means family and culture.
"We've never had a pumpkin pie grace our table, unless somebody who doesn't know us brought one over, if we invited someone," Miller said with a smile. "And that's on us, right? Because we should have given them a warning not to bring that stuff over. So it's always sweet potato pie."
"The perfect Thanksgiving dessert"
But it's pumpkin pie that plays a central role in the portrait of a quintessential Thanksgiving dinner.
Like sweet potato pie, pumpkin pie's roots go back to a time when European settlers were experimenting with foods grown locally. They learned how to boil or steam pumpkins that were first domesticated by Native Americans. Early pumpkin pie recipes can be found in both early Colonial and French cookbooks.
"The pumpkin pie itself has this whole world history within it, not only the Columbian Exchange, but the spice exchange," said culinary historian and Roosevelt University Professor Emeritus Bruce Kraig.
But an 1827 novel is largely responsible for the dessert's connection to the holiday. In Northwood: A Tale of New England, author and activist Sarah Josepha Hale dedicated a chapter to describing what is the now-classic Thanksgiving dinner.
"She virtually invented it — the turkeys, the cranberry sauce," Kraig said. "She said pumpkin pie is the American pie."
In 1863, Hale convinced President Abraham Lincoln to declare a national day of Thanksgiving.
Pumpkin pie remains closely linked to the holiday.
At The Blue Owl Bakery in House Springs, Mo., owner Kim Byerly said demand is through the roof for the restaurant's pumpkin pies. They expect to sell several hundred and are expanding by shipping the pies across the country this year.
Byerly is a fan of pumpkin pie, even if it took her years to actually try it.
"I just like the flavor and the consistency of it," Byerly said. "You mix it with that whipped cream and it's just like the perfect Thanksgiving dessert."
The Blue Owl's bakers also make several variations of the traditional pumpkin pie, including pumpkin gooey butter cheesecake, but so far, no sweet potato pie.
Cindi Wittu, a baker at The Blue Owl, said that while she's had both pies, she prefers pumpkin.
"I'm partial to the spices that are in it," she said. "It just reminds me of home, of family."
Copyright 2023 Harvest Public Media
|
This story is part of a series on the current progression in Regenerative Medicine. This piece discusses advances in the use of artificial intelligence in medical robotics.
In 1999, I defined regenerative medicine as the collection of interventions that restore to normal function tissues and organs that have been damaged by disease, injured by trauma, or worn by time. I include a full spectrum of chemical, gene, and protein-based medicines, cell-based therapies, and biomechanical interventions that achieve that goal.
Autonomous medical robots may be conducting full medical operations sooner rather than later. In a previous article for the regenerative medicine series, I discussed the oncoming proliferation of artificial intelligence-assisted medical robots in our healthcare system. One such example of these robots in use is during surgical procedures. I discussed the semi-autonomous Da Vinci robot system, which aids physicians during complicated procedures, but more recently, a new fully autonomous robotic surgery has emerged.
In an article for Science, Dr. Alan Kuntz and colleagues from the University of Utah and elsewhere describe a new autonomous medical robot that can steer a needle through live tissue. More precisely, the system was shown to accurately navigate a pig lung without causing damage, steering the needle to a specified target location. This could be the first of many such robots in the near future. Here, I discuss Kuntz and colleagues’ needle-steering robot and the implications it may have on the future of regenerative medicine.
As our understanding of medicine and surgical procedures advances, so too does the difficulty and intricacy of surgery itself. What was once a practice commonly conducted in barbershops is now one of the most difficult professions in the workforce. Some surgical procedures require immense precision, and the slightest misstep could result in the death of a patient.
Herein lies the use case for robotics in surgery settings. If the involuntary hand movements of even the most seasoned surgeon could be removed, so could the risk of injury to the patient.
While semi-autonomous robots are typical, if not common, in today’s healthcare system, Kuntz and colleagues designed their medical robot to fill the void of fully autonomous robot surgeons. Specifically, needle-guiding systems driven by fully autonomous robots had not yet been achieved.
Their robotic system makes use of a laser-patterned, highly flexible steerable needle. Whereas most needles are rigid and inflexible, this needle is ideal for traversing curved paths typical in surgical settings to reduce the risk of damage to nearby tissue.
The robot uses three methods to remain fully autonomous and avoid causing damage to nearby tissue: replanning, control, and safe insertion time windows.
Replanning is the real-time adjustment of the preinstalled surgical plan based on the patient’s anatomy or other unexpected events during surgery. Control is the ability of the robot to control its movements without the intervention of a surgeon, allowing the robot to adjust course based on the previous replanning step. Safe insertion time windows are the phases of the patient’s breathing cycle in which needles can be safely inserted and moved, which the robot monitors continuously.
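To make the interplay of these three safeguards concrete, here is a minimal, hypothetical Python sketch of such a control loop. The object interfaces (sensors, planner, needle), the breathing-phase labels, and the thresholds are illustrative assumptions, not the actual system described by Kuntz and colleagues.

```python
import time

SAFE_PHASES = {"end_exhale"}   # assumed breathing phases considered safe for motion
REPLAN_ERROR_MM = 2.0          # assumed deviation that triggers replanning
TARGET_TOL_MM = 1.0            # assumed acceptable distance from the target

def autonomous_insertion(target, sensors, planner, needle, max_steps=500):
    """Illustrative loop combining replanning, closed-loop control,
    and safe insertion time windows (all interfaces are assumptions)."""
    path = planner.plan_path(sensors.current_anatomy(), target)   # initial plan
    for _ in range(max_steps):
        phase = sensors.estimate_breathing_phase()
        if phase not in SAFE_PHASES:
            time.sleep(0.05)   # wait: we are outside the safe insertion window
            continue

        tip = sensors.needle_tip_pose()
        if planner.deviation_from(path, tip) > REPLAN_ERROR_MM:
            # replanning: rebuild the path around the patient's current anatomy
            path = planner.plan_path(sensors.current_anatomy(), target, start=tip)

        # control: issue the next small motion command without surgeon input
        needle.step_toward(path.next_waypoint(tip))

        if planner.distance_to_target(tip, target) < TARGET_TOL_MM:
            return True        # target reached autonomously
    return False               # gave up after max_steps iterations
```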
In a lung biopsy test on the organ of a pig, the autonomous system achieved targeting errors less than the radius of clinically relevant lung nodules, meaning in a live human patient, no notable damage would have been caused to the lung tissues. Additionally, Kuntz and colleagues found that their robotic system was more accurate than standard manual bronchoscopy techniques, suggesting that autonomous robots may prove more effective at such intricate procedures than humans.
The benefits of autonomous medical robots for intricate procedures are obvious. Increased accuracy will enable far less user error in surgical settings, and robots will likely complete the procedure far quicker, saving the hospital, the doctor, and the patient valuable time.
There are, however, drawbacks as well to the introduction of these procedures to our healthcare system. First and foremost is the risk of robotic malfunction. No machine ever built has been without flaw, whether a coding mistake or regular wear and tear over time. At some point, one of these machines will malfunction during an operation, likely causing significant complications for the patient.
Another concern of note is cost. These robots will not be cheap; in a hospital setting, it is safe to assume some of that cost will be passed on to the patient. While those who can afford it could benefit from the implementation of these systems, for those without quality health insurance, these procedures will be out of reach financially for many years.
Ultimately, however, I see great promise in the introduction of autonomous medical robots in surgical settings. This is the future we are headed towards, so we should ensure that these systems are of the highest quality when they arrive in our hospitals.
To read more of this series, please visit www.williamhaseltine.com
|
<urn:uuid:ee27df86-ff80-4a9b-8a91-c9a844f1ad6b>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474670.19/warc/CC-MAIN-20240227021813-20240227051813-00499.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9443814754486084,
"pii_count": 0,
"score": 3.03125,
"token_count": 982,
"url": "https://www.forbes.com/sites/williamhaseltine/2023/10/06/the-needles-edge-autonomous-robotic-surgery-and-its-implications-for-medicine/?sh=531842962f7b"
}
|
This story is part of a series on the current progression in Regenerative Medicine. This piece discusses advances in the use of artificial intelligence in medical robotics.
In 1999, I defined regenerative medicine as the collection of interventions that restore to normal function tissues and organs that have been damaged by disease, injured by trauma, or worn by time. I include a full spectrum of chemical, gene, and protein-based medicines, cell-based therapies, and biomechanical interventions that achieve that goal.
Autonomous medical robots may be conducting full medical operations sooner rather than later. In a previous article for the regenerative medicine series, I discussed the oncoming proliferation of artificial intelligence-assisted medical robots in our healthcare system. One such example of these robots in use is during surgical procedures. I discussed the semi-autonomous Da Vinci robot system, which aids physicians during complicated procedures, but more recently, a new fully autonomous robotic surgery has emerged.
In an article for Science, Dr. Alan Kuntz and colleagues from the University of Utah and elsewhere describe a new autonomous medical robot that can steer a needle through live tissue. More precisely, the system was shown to accurately navigate a pig lung without causing damage, steering the needle to a specified target location. This could be the first of many such robots in the near future. Here, I discuss Kuntz and colleagues’ needle-steering robot and the implications it may have on the future of regenerative medicine.
As our understanding of medicine and surgical procedures advances, so too does the difficulty and intricacy of surgery itself. What was once a practice commonly conducted in barbershops is now one of the most difficult professions in the workforce. Some surgical procedures require immense precision, and the slightest misstep could result in the death of a patient.
Herein lies the use case for robotics in surgery settings. If the involuntary hand movements of even the most seasoned surgeon could be removed, so could the risk of injury to the patient.
While semi-autonomous robots are typical, if not common, in today’s healthcare system, Kuntz and colleagues designed their medical robot to fill the void of fully autonomous robot surgeons. Specifically, needle-guiding systems driven by fully autonomous robots had not yet been achieved.
Their robotic system makes use of a laser-patterned, highly flexible steerable needle. Whereas most needles are rigid and inflexible, this needle is ideal for traversing curved paths typical in surgical settings to reduce the risk of damage to nearby tissue.
The robot uses three methods to remain fully autonomous and avoid causing
|
damage to nearby tissue: replanning, control, and safe insertion time windows.
Replanning is the real-time adjustment of the preinstalled surgical plan based on the patient’s anatomy or other unexpected events during surgery. Control is the ability of the robot to control its movements without the intervention of a surgeon, allowing the robot to adjust course based on the previous replanning step. Safe insertion time windows are the phases of the patient’s breathing cycle in which needles can be safely inserted and moved, which the robot monitors continuously.
In a lung biopsy test on the organ of a pig, the autonomous system achieved targeting errors less than the radius of clinically relevant lung nodules, meaning in a live human patient, no notable damage would have been caused to the lung tissues. Additionally, Kuntz and colleagues found that their robotic system was more accurate than standard manual bronchoscopy techniques, suggesting that autonomous robots may prove more effective at such intricate procedures than humans.
The benefits of autonomous medical robots for intricate procedures are obvious. Increased accuracy will enable far less user error in surgical settings, and robots will likely complete the procedure far quicker, saving the hospital, the doctor, and the patient valuable time.
There are, however, drawbacks as well to the introduction of these procedures to our healthcare system. First and foremost is the risk of robotic malfunction. No machine ever built has been without flaw, whether a coding mistake or regular wear and tear over time. At some point, one of these machines will malfunction during an operation, likely causing significant complications for the patient.
Another concern of note is cost. These robots will not be cheap; in a hospital setting, it is safe to assume some of that cost will be passed on to the patient. While those who can afford it could benefit from the implementation of these systems, for those without quality health insurance, these procedures will be out of reach financially for many years.
Ultimately, however, I see great promise in the introduction of autonomous medical robots in surgical settings. This is the future we are headed towards, so we should ensure that these systems are of the highest quality when they arrive in our hospitals.
To read more of this series, please visit www.williamhaseltine.com
|
Self-driving trench diggers. Drones that digitally re-create buildings. Roomba-like devices that outline where every beam on a floor should go.
Meanwhile, start-ups are pouring even more resources into the field, including building robots that survey and monitor, as well as heavy machinery that can operate on its own. They are hoping to automate a complicated field involving moving parts and vastly different sites and buildings.
“Construction robots are a great example of how robotic technology is going to touch people’s lives,” said Matthew Johnson-Roberson, the director of the robotics institute at Carnegie Mellon University. “Many [construction] jobs … that exist today are now going to be alongside robots.”
Fueling the wave of innovation is a construction crisis. Building costs are rising, and hundreds of thousands of jobs remain unfilled. Those who are in the industry are getting older and working under sometimes deadly conditions. Amid that, federal money is pouring in to build better infrastructure.
The confluence of factors has created a situation where more construction firms are turning to robots to automate work on job sites. Still, the flurry of activity has left several workforce experts concerned that it could lead to job losses, or to a situation where people who work alongside these robots end up working more quickly in less safe work environments.
Robots are no strangers in the industry. Amazon uses a slew of robots in its operations, from the Roomba-like Kiva that moves packages, to Sparrow, which grasps things with humanlike finesse. (Amazon founder Jeff Bezos owns The Washington Post.)
Elon Musk has notoriously promised he would automate Tesla’s manufacturing and recently unveiled the prototype humanoid robot Optimus, aiming to reshape physical work. Google recently showed robots that are fueled by artificial intelligence to help humans with everyday tasks. Some robots are even learning how to cook fries.
But the dynamics in construction have been different, Johnson-Roberson said.
Big-scale projects, such as dams, bridges and roads, have seen quicker adoption of robotic technology, because the tasks are usually more defined and require less finesse. The home-building industry has been slower, because many of the tasks require fine motor control, which robots have had a harder time doing, he added.
Still, it’s unlikely that construction sites will see humanoid robots walking around job sites hammering nails into pieces of wood, Johnson-Roberson said. Rather, advances in laser range-finding technology, artificial intelligence software, robotic hardware and sensors will probably be put to use in automating big tasks, such as digging, surveying, pouring concrete and moving large items around.
“This is not artificial general intelligence,” he said. “This is not like we can do everything or it acts like a human. It really is — new pieces of heavy equipment that now have much better sensors and can do things that humans would have had to do” using multiple cumbersome steps.
For Noah Ready-Campbell, the chief executive of Built Robotics, the path to building a construction robot was natural. He grew up in a construction family, often working summers on job sites and hating it. After growing up, he did a stint at Google and sold a clothing technology start-up to eBay.
In 2016, he started his company, which created a machine called an exosystem that fits onto the back of trench diggers, such as big Caterpillar machines. The exosystem is outfitted with cameras, sensors and machine-learning software, and plugs into the machine’s onboard computer.
It becomes the eyes, ears and brain of the excavator, and it can run without anyone operating it. Most of the companies using Built Robotics’ system these days are renewable energy companies, such as solar farms, Ready-Campbell said. Often, these companies have land far from large cities, making it hard to recruit talent, and the digging that needs to be done is repetitive — ideal for automation, he said.
The company has raised $112 million to date, with $64 million of that coming in April. Ready-Campbell said his company could not have existed a decade ago, because it needed the cheaper sensors, spurred in part by smartphone and self-driving technology, better cameras and gains in artificial intelligence algorithms that have only cropped up in recent years.
“The tech would not have worked” a decade ago, he said. “There’s a whole bunch of different parts of our tech stack that weren’t available.”
Other construction start-ups include Australia’s Fastbrick Robotics, which builds a mobile robot called the Hadrian X that can lay up to 1,000 bricks an hour. Pennsylvania’s Advanced Construction Robotics builds robots that can tie up to 1,100 rebar intersections an hour. Canvas, founded in San Francisco, has a mobile robot that finishes drywall.
Tessa Lau, the chief executive of Dusty Robotics, built her construction robotics firm after remodeling her own home. During that process, she realized how many steps were involved and that there was room for error nearly everywhere, creating the potential for delays.
With a background in artificial intelligence and robotics, she studied the construction industry and realized that one of the areas that could be automated was the physical sketching out of a home or office floor to tell builders where each beam needs to be nailed or each plank needs to go.
To do that, her company created a Roomba-like device that prints an outline on the actual floor of the building project. Instead of people using chalk lines, they upload digital designs into software that directs the Roomba where to go within 1/16th of an inch of precision.
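As a rough illustration of the workflow described above, the hypothetical Python sketch below samples a straight layout line from a digital design into waypoints and checks whether a printed point falls within the stated 1/16-inch tolerance. It is not Dusty Robotics' actual software; the coordinates, spacing, and function names are assumptions for illustration only.

```python
import math

TOLERANCE_IN = 1.0 / 16.0   # the stated layout precision, in inches

def waypoints(start, end, spacing=12.0):
    """Sample a straight layout line into evenly spaced waypoints (inches)."""
    (x0, y0), (x1, y1) = start, end
    length = math.hypot(x1 - x0, y1 - y0)
    n = max(1, int(length // spacing))
    return [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n) for i in range(n + 1)]

def within_tolerance(printed, intended, tol=TOLERANCE_IN):
    """Check whether a printed point lies within the allowed deviation."""
    return math.hypot(printed[0] - intended[0], printed[1] - intended[1]) <= tol

# Example: a 10-foot wall line from a digital design (coordinates in inches)
line = waypoints((0.0, 0.0), (120.0, 0.0), spacing=12.0)
print(within_tolerance((60.02, 0.03), line[5]))   # True: within 1/16 of an inch
```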
The company rents it out for a subscription that runs roughly $240,000 per year, Lau said. As of last year, the company was valued at roughly $250 million and has customers such as Turner Construction.
Lau said she is aware of the concern that robots might take jobs on the construction site but noted that more construction workers are retiring as fewer join the trade. That’s creating a crisis for construction companies, she said, who need a way to do the work during the talent shortage.
“Our robots are taking jobs,” she said. “The only way we’re going to meet the world’s demand for housing and infrastructure is if we build robots to take away those jobs, because people are not filling those jobs.”
Nik Theodore, the director of the Center for Urban Economic Development at the University of Illinois in Chicago, said the robots could hold promise if they can automate the rote, often injury-plaguing and deadly tasks construction workers have “in a very dangerous industry with large numbers of fatalities.”
But the concern, he said, is “the temptation” for automation to speed up job tasks and increase worker fatigue, leading to a situation where burnout and injuries occur more, not less.
It could “take a solution that could make the worksite less dangerous,” he said, “and in fact, make it more dangerous.”
|
<urn:uuid:42280a97-04ac-4163-858d-fdd088ccc323>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00425.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9638926386833191,
"pii_count": 0,
"score": 2.96875,
"token_count": 1489,
"url": "https://www.washingtonpost.com/technology/2023/01/30/construction-companies-robotic-technology/?tid=pm_business_pop"
}
|
Self-driving trench diggers. Drones that digitally re-create buildings. Roomba-like devices that outline where every beam on a floor should go.
Meanwhile, start-ups are pouring even more resources into the field, including building robots that survey and monitor, as well as heavy machinery that can operate on its own. They are hoping to automate a complicated field involving moving parts and vastly different sites and buildings.
“Construction robots are a great example of how robotic technology is going to touch people’s lives,” said Matthew Johnson-Roberson, the director of the robotics institute at Carnegie Mellon University. “Many [construction] jobs … that exist today are now going to be alongside robots.”
Fueling the wave of innovation is a construction crisis. Building costs are rising, and hundreds of thousands of jobs remain unfilled. Those who are in the industry are getting older and working under sometimes deadly conditions. Amid that, federal money is pouring in to build better infrastructure.
The confluence of factors has created a situation where more construction firms are turning to robots to automate work on job sites. Still, the flurry of activity has left several workforce experts concerned that it could lead to job losses, or to a situation where people who work alongside these robots end up working more quickly in less safe work environments.
Robots are no strangers in the industry. Amazon uses a slew of robots in its operations, from the Roomba-like Kiva that moves packages, to Sparrow, which grasps things with humanlike finesse. (Amazon founder Jeff Bezos owns The Washington Post.)
Elon Musk has notoriously promised he would automate Tesla’s manufacturing and recently unveiled the prototype humanoid robot Optimus, aiming to reshape physical work. Google recently showed robots that are fueled by artificial intelligence to help humans with everyday tasks. Some robots are even learning how to cook fries.
But the dynamics in construction have been different, Johnson-Roberson said.
Big-scale projects, such as dams, bridges and roads, have seen quicker adoption of robotic technology, because the tasks are usually more defined and require less finesse. The home-building industry has been slower, because many of the tasks require fine motor control, which robots have had a harder time doing, he added.
Still, it’s unlikely that construction sites will see humanoid robots walking around job sites hammering nails into pieces of wood, Johnson-Roberson said. Rather, advances in laser range-finding technology, artificial intelligence software, robotic hardware and sensors will probably be put to use in autom
|
ating big tasks, such as digging, surveying, pouring concrete and moving large items around.
“This is not artificial general intelligence,” he said. “This is not like we can do everything or it acts like a human. It really is — new pieces of heavy equipment that now have much better sensors and can do things that humans would have had to do” using multiple cumbersome steps.
For Noah Ready-Campbell, the chief executive of Built Robotics, the path to building a construction robot was natural. He grew up in a construction family, often working summers on job sites and hating it. After growing up, he did a stint at Google and sold a clothing technology start-up to eBay.
In 2016, he started his company, which created a machine called an exosystem that fits onto the back of trench diggers, such as big Caterpillar machines. The exosystem is outfitted with cameras, sensors and machine-learning software, and plugs into the machine’s onboard computer.
It becomes the eyes, ears and brain of the excavator, and it can run without anyone operating it. Most of the companies using Built Robotics’ system these days are renewable energy companies, such as solar farms, Ready-Campbell said. Often, these companies have land far from large cities, making it hard to recruit talent, and the digging that needs to be done is repetitive — ideal for automation, he said.
The company has raised $112 million to date, with $64 million of that coming in April. Ready-Campbell said his company could not have existed a decade ago, because it needed the cheaper sensors, spurred in part by smartphone and self-driving technology, better cameras and gains in artificial intelligence algorithms that have only cropped up in recent years.
“The tech would not have worked” a decade ago, he said. “There’s a whole bunch of different parts of our tech stack that weren’t available.”
Other construction start-ups include Australia’s Fastbrick Robotics, which builds a mobile robot called the Hadrian X that can lay up to 1,000 bricks an hour. Pennsylvania’s Advanced Construction Robotics builds robots that can tie up to 1,100 rebar intersections an hour. Canvas, founded in San Francisco, has a mobile robot that finishes drywall.
Tessa Lau, the chief executive of Dusty Robotics, built her construction robotics firm after remodeling her own home. During that process, she realized how many steps were involved and that there was room for error nearly everywhere, creating the potential for delays.
With a background in artificial intelligence and robotics, she studied the construction industry and realized that one of the areas that could be automated was the physical sketching out of a home or office floor to tell builders where each beam needs to be nailed or each plank needs to go.
To do that, her company created a Roomba-like device that prints an outline on the actual floor of the building project. Instead of snapping chalk lines by hand, workers upload digital designs into software that directs the robot where to go, to within 1/16th of an inch of precision.
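Dusty Robotics has not released its software, so the following Python sketch is only a schematic of the workflow described above: a wall line from a digital design is broken into points for the robot to print, and each printed point can be checked against the stated 1/16th-inch precision. The function names, the one-foot spacing and the tolerance check are all assumptions made for illustration.

```python
# Illustrative only: turning a wall line from a digital floor plan into
# points to print on the slab, and checking them against a 1/16-inch
# tolerance. This is not Dusty Robotics' code.
import math

TOLERANCE_IN = 1.0 / 16.0   # the precision figure cited in the article


def interpolate(segment, step_in=12.0):
    """Break a wall line (x1, y1, x2, y2), in inches, into print points."""
    x1, y1, x2, y2 = segment
    length = math.hypot(x2 - x1, y2 - y1)
    n = max(1, int(length // step_in))
    return [(x1 + (x2 - x1) * i / n, y1 + (y2 - y1) * i / n)
            for i in range(n + 1)]


def within_tolerance(intended, printed, tol=TOLERANCE_IN):
    """True if a printed point lies within `tol` inches of the intended one."""
    return math.dist(intended, printed) <= tol


if __name__ == "__main__":
    wall = (0.0, 0.0, 120.0, 0.0)       # a 10-foot wall line, in inches
    points = interpolate(wall)
    print(len(points), "points to print")                  # 11 points to print
    print(within_tolerance(points[0], (0.03, 0.02)))       # True
    print(within_tolerance(points[0], (0.10, 0.00)))       # False
```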
The company rents it out for a subscription that runs roughly $240,000 per year, Lau said. As of last year, the company was valued at roughly $250 million and has customers such as Turner Construction.
Lau said she is aware of the concern that robots might take jobs on the construction site but noted that more construction workers are retiring as fewer join the trade. That’s creating a crisis for construction companies, she said, who need a way to do the work during the talent shortage.
“Our robots are taking jobs,” she said. “The only way we’re going to meet the world’s demand for housing and infrastructure is if we build robots to take away those jobs, because people are not filling those jobs.”
Nik Theodore, the director of the Center for Urban Economic Development at the University of Illinois in Chicago, said the robots could hold promise if they can automate the rote, often injury-prone and sometimes deadly tasks construction workers face “in a very dangerous industry with large numbers of fatalities.”
But the concern, he said, is “the temptation” for automation to speed up job tasks and increase worker fatigue, creating a situation where burnout and injuries occur more often, not less.
It could “take a solution that could make the worksite less dangerous,” he said, “and in fact, make it more dangerous.”
|
Their history wasn't just forgotten, it was buried. Today we tell their stories.
For decades, the roles of many of Pensacola's citizens have only been told through the families whose ancestors helped shape who we are today.
Schools, museums and roads bear their names — acts that came about generations after they had passed — a late but still honorable effort to recognize the impact these once marginalized men and women had on the community. Their history is not easy to find, an all too common tragedy that played out across the United States in newspapers, archives and public records.
Their obituaries, traditionally a goldmine of information for researchers studying the past, are nonexistent or sparse. Their stories have been hidden.
Today we begin to tell their stories - the way they should have been told 100 years ago.
"Regardless of your ethnicity or your background, it's all valuable, it's all necessary, it's all needed because these stories are important," said Robin Reshard, one of the community members leading an effort to write the obituaries of men and woman of Pensacola's past, publishing them near the anniversaries of their death.
"It's important to share the stories and (not just) the fact that they were born and they died, but that their spirit continues in so many different ways and in different people."
Righting the past
The idea for the project came from Councilwoman Teniadé Broughton who was inspired by the Overlooked series in the New York Times — a series of obituaries about remarkable people whose deaths, beginning in 1851, went unreported.
Broughton wanted to bring more breadth and thought to the telling of Pensacola’s own past, particularly the experiences of the people who walked, lived and made an impact in this community.
Righting the past:War veteran and prominent citizen, John Sunday is dead at 86
To understand Pensacola and where we are today, it is vital to move away from the "old, dead, white men" philosophy, a popular term for how academic study has been dominated by the perspectives of deceased European men. To do so, there needs to be an intentional push to include people of color and women, opening up different perspectives, histories, meanings and stories, she said.
“We were always very diverse, so we should make sure to present that in our storytelling,” Broughton said. “And I think now we've gotten better at that, more willing to look at different groups because we stand on this foundation of telling one side of the story. I think it also lends to revisiting how our history has been told, to make way for how we plan to tell it in the future.”
The obituaries, which will be published throughout 2023 in the Pensacola News Journal, will be written by community members with expertise and sometimes personal knowledge on the person in question.
The difficulty in getting information on their background illustrates the need for the project, as even photographs of many of the people who will be featured don't exist today.
Whose stories will be told?
The obituaries will include prominent figures such as John Sunday and Spencer Bibbs, as well as lesser known people such as Lillie Ann James, the mother of Daniel “Chappie” James Jr. − the first Black four-star general in any U.S. military service branch.
It will include the unburied story of Viola Edwards, who ran the first Black-owned hospital in Pensacola from 1922 to 1927. The Viola Edwards Hospital was burned down after she was accused of killing a pregnant white woman.
The project will also include a number of people who lived everyday lives such as Ezra Gerry.
Gerry, a Milton steward and cook, was one of the nearly 150 people killed in the Larchmont Disaster, which is considered the Titanic of Rhode Island. We only know of his death because of a small mention in a newspaper article that focused on a pastor defending the captain of the boat.
Gerry's family was not able to transport his body back to the area and he is buried in Providence, Rhode Island. His family's grief is just a footnote in the article.
Jamin Wells, associate professor and director of the Public History Master's Program at the University of West Florida, believes that telling these stories today can highlight the richness of the Pensacola community, honor the legacies of these families and their communities, and humanize the people of the past and how they connect with us to the present.
“This has always been a really complicated place, and (we want) to try to learn more about how the city in this community came to be and came to look the way it does today,” Wells said. “We're going to try to include avenues for folks to want to learn more about these people. And there's a number of historical societies and cultural groups who hopefully will be contributing biographies and are great resources for learning more. Hopefully the past isn't just the past − that it really shapes our world today to see those connections.”
Local historian Marion Williams plans to write about his ancestor Spencer Bibbs, who has an elementary school named after him. Williams has dug deep for more information about Bibbs, drawing on his family and his own research.
Bibbs was the first African American Supervisor of Colored Schools in Escambia County. He requested and was granted a school on the east side of Pensacola − P.S. #44 − which later was renamed P.S. #102. After his death in 1922, his daughter, who worked at the school, petitioned for the school to be named after him.
Bibbs was not only a supervisor but an entrepreneur who co-owned a restaurant with his wife. He also worked as what we think of now as a taxi driver, using his horse and buggy to "drive" people across the city.
Williams wants to help answer the question of why certain people have been honored and how others who went unnoticed helped shape our community. A project like this will help the community understand these outstanding figures and how they affect the people who call Pensacola their home today.
“It's a good way to kind of update our information and to have information available for people who are coming behind you, and people around you to know who these people were, and you know, how they impacted our lives,” Williams said.
|
<urn:uuid:36fbcd26-101f-4c65-abf0-870287b1a7fd>
|
{
"dump": "CC-MAIN-2023-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499541.63/warc/CC-MAIN-20230128090359-20230128120359-00599.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9828429222106934,
"pii_count": 0,
"score": 3,
"token_count": 1353,
"url": "https://www.pnj.com/story/news/local/2023/01/16/obituary-project-will-recognize-pensacola-prominent-figures-from-past/69785742007/"
}
|
Mount St. Helens sees spike in seismic activity
SKAMANIA COUNTY, Wash. (KPTV) - More than 400 earthquakes have been recorded under Mount St. Helens since mid-July, according to the U.S. Geological Survey (USGS).
It is the largest short-term increase in earthquake rates since the volcano’s last eruption ended in 2008.
Luckily, a new eruption doesn’t appear to be happening soon.
“Mount St. Helens has had a slight uptick in earthquakes, have you noticed? Most are small (less than M1.0) & not felt at the surface. No cause for concern right now - no significant changes in ground deformation or gases. Volcano remains at normal (green) background levels,” the USGS said in an update posted to its website.
“Since mid-July 2023, over 400 EQs have been located by @PNSN1. Most recently, there have been about 30 located EQs per week. To compare, since 2008, on average about 11 earthquakes have been located per month at Mount St. Helens.”
According to the USGS, there are no indications of an “imminent eruption.”
Despite not erupting since 2008, Mount St. Helens is regarded as one of the most dangerous volcanoes in the United States.
On May 18, 1980, the volcano had its most famous eruption, which destroyed 210 square miles of surrounding forest and claimed the lives of about 60 people.
The volcano saw a spike in activity before this eruption. Several earthquakes shook the area on March 20 of that year. The earthquakes increased until March 27, when there was a small eruption. The amount of volcanic activity only kept rising until there was a massive magma buildup under the volcano.
The surrounding area was rocked by an earthquake of magnitude 5, which set off the main eruption on May 18.
As of right now, the volcano is still active, and geologists expect it will erupt again one day.
However, the USGS stated that a brief spike in seismic activity at Mount St. Helens is fairly typical.
“The current seismicity represents the largest short-term increase in earthquake rates since the last eruption ended in 2008,” the USGS said. “However, longer duration sequences with more events occurred in 1988-1992, 1995-1996 and 1997-1999. None of the sequences in the 1980s and 90s directly led to eruptions.”
Copyright 2023 KPTV-KPDX. All rights reserved.
|
<urn:uuid:a2aa487c-1f1d-44e7-99be-47a65d71aaf6>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100779.51/warc/CC-MAIN-20231208212357-20231209002357-00616.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9710506200790405,
"pii_count": 0,
"score": 2.828125,
"token_count": 528,
"url": "https://www.kptv.com/2023/11/08/mount-st-helens-sees-large-spike-seismic-activity/"
}
|
Get inspired by a weekly roundup on living well, made simple. Sign up for CNN’s Life, But Better newsletter for information and tools designed to improve your well-being.
Handing your baby a phone or tablet to play with may seem like a harmless solution when you’re busy, but it could quickly affect their development, a new study has found.
Having anywhere from one to four hours of screen time per day at age 1 is linked with higher risks of developmental delays in communication, fine motor, problem-solving and personal and social skills by age 2, according to a study of 7,097 children published Monday in the journal JAMA Pediatrics.
“It’s a really important study because it has a very large sample size of children who’ve been followed for several years,” said Dr. Jason Nagata, associate professor of pediatrics at the University of California, San Francisco, who wasn’t involved in the study.
“The study fills an important gap because it identifies specific developmental delays (in skills) such as communication and problem-solving associated with screen time,” said Nagata, noting that few prior studies have examined this issue with several years of follow-up data.
The children and their mothers were part of the Japan-based Tohoku Medical Megabank Project Birth and Three-Generation Cohort Study and were recruited from 50 obstetric clinics and hospitals in the Miyagi and Iwate prefectures between July 2013 and March 2017.
The study measured how many hours children used screens per day at age 1 and how they performed in several developmental domains — communication skills, fine motor skills, personal and social skills, and problem-solving skills — at ages 2 and 4. Both measures were according to the mothers’ self-reports.
By age 2, those who had had up to four hours of screen time per day were up to three times more likely to experience developmental delays in communication and problem-solving skills.
Those who had spent four or more hours with screens were 4.78 times more likely to have underdeveloped communication skills, 1.74 times more likely to have subpar fine motor skills and two times more likely to have underdeveloped personal and social skills by age 2. By age 4, risk remained only in the communication and problem-solving categories.
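Figures like "4.78 times more likely" in this kind of study are typically reported as odds ratios. As a purely illustrative example, with invented counts rather than the study's data, an unadjusted odds ratio can be computed from a simple two-by-two table:

```python
# Illustrative only: an unadjusted odds ratio from a 2x2 table.
# The counts below are invented and are not from the JAMA Pediatrics study.

def odds_ratio(exposed_delay, exposed_ok, unexposed_delay, unexposed_ok):
    """Odds of delay among exposed children divided by odds among unexposed."""
    return (exposed_delay / exposed_ok) / (unexposed_delay / unexposed_ok)

# Hypothetical: 40 of 200 high-screen-time children show a delay,
# versus 30 of 500 children with little screen time.
print(round(odds_ratio(40, 160, 30, 470), 2))   # 3.92
```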
“One of the areas that’s relatively understudied in the whole screen time literature is looking at impacts of screen exposure on very young kids, especially when screens are introduced to babies,” said Dr. John Hutton, associate professor of general and community pediatrics at Cincinnati Children’s Hospital Medical Center, who wasn’t involved in the study. “It’s definitely a global concern, and I think the findings (here) should apply to, really, other countries as well.”
How screen time could affect development
The potential harms of screen time on communication skills may have to do with children being robbed of drivers for language development, Hutton said.
“Kids learn how to talk if they’re encouraged to talk, and very often, if they’re just watching a screen, they’re not having an opportunity to practice talking,” he said. “They may hear a lot of words, but they’re not practicing saying a lot of words or having a lot of that back-and-forth interaction.”
Technology use can take time away from interpersonal relationships that nurture social skills since real people are more multidimensional than characters on a screen, Hutton added. Looking at people’s faces is when our brains turn on to figure out how to interact with them.
“Also, (with) passive screen viewing that doesn’t have an interactive or physical component, children are more likely to be sedentary and then aren’t able to practice motor skills,” Nagata said.
If children don’t have enough time to play, or are handed a tablet to pacify negative emotions, they may miss out on an important developmental milestone: learning to navigate discomfort.
“Longer term, one of the real goals is for kids just to be able to sit quietly in their own thoughts,” Hutton said. “When they’re allowed to be a little bit bored for a second, they get a little uncomfortable, but then they’re like, ‘OK, I want to make myself more comfortable.’ And that’s how creativity happens.”
There are other factors that can affect a child’s development, such as genetics, adverse experiences such as neglect or abuse, and socioeconomic factors, Nagata said.
In the latest research, mothers of children with high levels of screen time were more likely to be younger, have never given birth before, have a lower household income, have a lower education level and have postpartum depression.
The study does have limitations. Due to social desirability bias — wanting to say the “right” or socially acceptable thing — parents may underreport their child’s screen time and overreport how their child is doing developmentally, experts said.
Additionally, the authors didn’t have details on what children’s screen time involved, and not all forms are equal in their capacity to harm or benefit, experts said.
“The other question that’s always really important is, is the parent watching with the child?” Hutton said. “When a parent is watching with a child, that tends to mitigate a lot of the negatives.”
Healthier ways to occupy your child
If you need to keep your toddler busy so you can get things done or have some solitude, try giving them a book, coloring materials or toys, experts said. They can even sometimes enjoy these activities while secured in a highchair.
If you need to rely on screens sometimes, opt for educational content or video chats with a loved one so they can still get some social interaction, Nagata said.
One issue with some online children’s content is that parents will think it’s educational because it’s marketed as such and has lots of information about the alphabet, colors, numbers or animals their children can see and hear, Hutton said. But what jumpstarts learning is content that helps children apply their knowledge beyond just rote memorization — so they can “navigate the real world, where things are more unpredictable and require more creativity and resilience,” he said.
Hutton and Nagata recommended choosing longer videos since watching lots of short videos could affect children’s attention span and ability to understand what they’re watching.
Be choosy about when you rely on screen time, and turn devices off when they’re not in use, Nagata said. “Aimless viewing can also distract kids from then focusing on an activity that’s at hand or in-person communication.”
Additionally, live by example by not having an excessive amount of screen time yourself, since kids tend to mimic what they see, experts said.
The American Academy of Pediatrics recommends considering the quality of screen time rather than just quantity, but the organization does have resources for determining guidelines and limits for your family — such as its family media plan you can tailor to your own family’s needs and advice for helping your kids build healthy habits.
“We need to just slow down and … be as careful and mindful as we can about keeping kids anchored in the real world, which is really how we evolved as humans,” Hutton said. “There’s going to be plenty of time for screen time later once we get a better sense of who the kids are and what they need.”
|
<urn:uuid:8db4eeb3-722a-4700-a170-09cb335fb16b>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100593.71/warc/CC-MAIN-20231206095331-20231206125331-00028.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9652767777442932,
"pii_count": 0,
"score": 2.875,
"token_count": 1631,
"url": "https://edition.cnn.com/2023/08/21/health/screen-time-child-development-delays-risks-wellness/index.html"
}
|
For berries that provide essential nutrients for migrating birds, plant these two types of bushes
The fruits from certain berry bushes pack a bigger nutrient punch for migrating and over-wintering birds.
September is a great time to plant berry bushes, as Vermont temperatures will still be warm enough, providing a few months before the ground freezes. And in that time, new berry bushes will have a chance to get their root systems established.
When considering which berry bushes to plant in your yard, take into consideration which ones provide the best fruits for birds, too.
Look first at the native shrub varieties in these two groups: dogwoods and viburnums.
Try viburnums like nannyberry (Viburnum lentago), blackhaw (Viburnum prunifolium) and American cranberrybush (Viburnum trilobum).
Think about placement of these, too, as they will grow into very big shrubs. The blossoms and stems will provide beautiful color in all seasons (think red stems against freshly fallen snow!) and also grow tons of berries.
For instance, the nannyberry can grow from 10 to 18 feet tall. Its flowers are white and the fruits grow as blue-black berries on a reddish stem.
Dogwoods are also great for providing birds with a handy food source. The red osier dogwood (Cornus sericea) is fast-growing and will reach about eight feet tall. The berry bush blooms with white petals on red stems, and the fruits are white, as well. The gray dogwood and silky dogwood are two other types to try, too.
There are exotic berry bushes that provide a great food source for birds who migrate or overwinter, too, like Japanese honeysuckle and buckthorn. But research conducted over the last two decades at Colby College in Maine and Rochester Institute of Technology in New York measured these berries' caloric as well as nutrient content. What they found showed a real difference in the nutritional quality of the berries.
The berries from both natives and exotics had about the same amount of caloric value. It was the fat content where they really differed.
The fat content in berries of native plants can be nearly 48%, depending on the berry — whereas for the exotics, the fat content is only 2%.
For migrating birds trying to fatten themselves up for the winter and fuel their flight to warmer climes, the extra fat in these native berries makes a big difference. Even birds that stay local have to make it through the winter here, and having that extra fat really helps.
A question about caterpillars on hardy hibiscus
Q: Charlie, you sweet-talked me into buying one of those perennial hibiscus bushes. Now you have to teach me how to save it! In this year of garden death and destruction, drought and floods ... no cukes, few zukes and first tomatoes in September, I can't bear another garden tragedy! The hibiscus has small caterpillars eating all its foliage. - Miriam in Hinesburg
A: The pest ruining your hardy hibiscus is the larval stage of the sawfly. Because sawfly larvae are not true caterpillars, natural bacteria treatments using things like B.t., or Bacillus thuringiensis, will not get rid of them.
These sawflies cling to the underside of leaves and can devastate a plant pretty quickly.
The best thing to do now is clean up the area near and underneath your hibiscus. Try to remove any sawflies that might be trying to overwinter on your plant.
Next year, around mid-summer, begin checking the undersides of the hardy hibiscus. If you start seeing some damage and seeing those little caterpillars, try a spray on the underside of the leaves, called spinosad.
This will help your hibiscus and get rid of the sawflies. Just note that you should treat the plant with this organic product very carefully, as it is harmful to bees.
Spray it on the hardy hibiscus leaves in the evening, once the bees have gone home for the night. And once the spray has dried, it is not as toxic.
All Things Gardening is powered by you, our audience! Send us your toughest conundrums and join the fun. Submit your written question via email, or better yet, leave a voicemail with your gardening question so we can use your voice on the air! Call Vermont Public at 1-800-639-2192.
Listen to All Things Gardening Sunday mornings at 9:35 a.m., and subscribe to the podcast to listen any time.
|
<urn:uuid:e2f81e4f-ee08-48fc-a60d-206a7e942b57>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510130.53/warc/CC-MAIN-20230926011608-20230926041608-00813.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9494255781173706,
"pii_count": 0,
"score": 3.078125,
"token_count": 982,
"url": "https://www.vermontpublic.org/show/all-things-gardening/2023-09-17/for-berries-that-provide-essential-nutrients-for-migrating-birds-plant-these-two-types-of-bushes"
}
|
Winning individual primaries and caucuses is just one step in the long path to winning a party’s presidential nomination.
Both parties hold conventions in the summer where delegates technically select the nominee. The process and rules are different for each party, but the primaries are about winning enough delegates to secure the nomination.
The race is not over in this regard. There are different kinds of nominating contests and different kinds of delegates in a calendar that stretches from January to June, so keeping track of the delegate math can get complicated. Former South Carolina Gov. Nikki Haley will ultimately need to find a way to start winning contests if she’s going to eat into Trump’s growing delegate lead.
Performing well in primaries and caucuses equals delegates, and the larger goal is amassing the magic number of delegates to secure a nomination before delegate voting at the party convention.
Winning the GOP nomination requires at least 1,215 out of 2,429 delegates awarded as part of the primary process. Shortly after CNN projected that Trump would win New Hampshire, Trump had 32 delegates compared with Haley’s 17.
In years without an incumbent, like Republicans are experiencing in 2024, the winner frequently does not hit the magic number until May or even June. In 2016, in his first of three White House runs, Trump hit the magic number on May 26.
During most of the early primaries and caucuses, states award delegates proportionally. That means that each candidate gets a number of delegates roughly equivalent to the percentage of the vote he or she has won. Delegates can be awarded based on results either statewide or in individual congressional districts.
For example, in 2016, Sen. Ted Cruz won the Iowa caucuses, but with less than 30% of the vote, he only got eight delegates. Trump and Sen. Marco Rubio both got seven delegates in Iowa that year.
Sometimes there’s a minimum share of the vote a candidate must win to qualify for delegates, and many of these states have special rules that allow a candidate who wins the lion’s share of the vote (often 50%) to take all of the state’s delegates.
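State rules vary widely, so the Python sketch below is not any state's exact formula; it simply illustrates how a qualifying threshold and a majority winner-take-all trigger interact with proportional allocation, using invented vote shares and an illustrative 20% threshold.

```python
# Simplified illustration of GOP-style delegate allocation: proportional
# among candidates above a qualifying threshold, winner-take-all on a
# majority. Threshold, rounding and trigger rules differ by state.

def allocate(vote_shares, total_delegates, threshold=0.20):
    # A candidate with an outright majority may take every delegate.
    for candidate, share in vote_shares.items():
        if share > 0.50:
            return {candidate: total_delegates}

    # Otherwise, split delegates proportionally among qualifiers.
    qualifiers = {c: s for c, s in vote_shares.items() if s >= threshold}
    qualifying_total = sum(qualifiers.values())
    return {c: round(total_delegates * s / qualifying_total)
            for c, s in qualifiers.items()}

# Invented example: 40 delegates at stake, nobody above 50%.
print(allocate({"A": 0.45, "B": 0.30, "C": 0.15, "D": 0.10}, 40))
# {'A': 24, 'B': 16}  (C and D miss the 20% threshold)
```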
Things change after March 15. That’s when states have the option to award all of their delegates to whoever gets the most votes in the state’s contest.
The introduction of winner-take-all rules makes it harder for any remaining candidates to accumulate delegates against the race leader.
While you’re bound to hear a lot about Iowa and New Hampshire, contests that can be critical for giving candidates early momentum, those two states represent a small number of delegates.
It’s not until Super Tuesday
|
<urn:uuid:09840ff6-2304-4bed-9945-d3c63c5d6ed6>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474715.58/warc/CC-MAIN-20240228112121-20240228142121-00694.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9541407823562622,
"pii_count": 0,
"score": 2.515625,
"token_count": 544,
"url": "https://edition.cnn.com/2024/01/14/politics/republican-presidential-nomination-delegates-dg/index.html"
}
|
Earlier this month, on a day off from working as a doctor at the children’s hospital, I took my baby to a rally at the Utah Capitol. The crowd thrummed with energy. The Great Salt Lake shimmered on the horizon. My baby reached towards brightly painted signs: “Save Our Lake.” “Save It, Don’t Spray It.” “Defend Our Future.”
We went because saving the lake is crucial for the health of my son, my pediatric patients and all the children in the Wasatch Front.
And because this is an emergency.
New research suggests that without dramatic policy changes, the Great Salt Lake could vanish within five years — and legislative decisions made in the next two months could make or break that path.
As a pediatrician and mother, a big brown rally sign summed up my biggest concern: “No to Toxic Dust Bowl.”
Our city’s namesake is a terminal body of water, with water flowing in but not out, which means it’s full of pollutants. Arsenic, mercury, lead and other toxins sit beneath the lake, held down by water and salt. As the lake dries, the toxins rise into the air as dust. This dust then blows across the Wasatch Front, which houses three-fourths of Utah’s population. It travels further still: dust from the Great Salt Lake has been found from Southern Utah to Wyoming.
When toxic dust reaches us, we breathe it in. So do our kids.
There’s no doubt that air pollution is bad for children’s health. If you know a child with asthma, you may have seen how it can be harder to breathe on bad air days. Air pollution is also linked to childhood cancers, birth defects, problems with brain development and other health concerns. With our smog, wildfire smoke, high ozone levels and worsening dust, children here often breathe unsafe air.
But the collapse of the Great Salt Lake brings an extremely urgent new threat. The dust will not simply bring more run-of-the-mill bad air days. The resulting toxic dust storms could be catastrophic for children’s health. Kids are vulnerable to even tiny amounts of pollution and toxins. Take lead, for example, one of the heavy metals found in the lakebed: even the tiniest amount of lead poisoning can harm a child’s brain.
We do not know what, exactly, would happen to children’s health – or to our health, as adults – if we were to breathe storm after storm of toxic dust. We can’t risk finding out.
Due to diversions and drought, the lake has lost more than two-thirds of its water. Salt crusts that hold down the toxic dust are starting to erode. When Owens Lake, a saltwater lake in southern California, dried up in 1926, it became the single largest source of dust in the United States — and held that title for close to a century. The Great Salt Lake, the largest saltwater lake in the Western Hemisphere, is around 12 times larger than Owens Lake. It is difficult to imagine the amount of dust that we will face if the Great Salt Lake continues to dry.
I am heartened by Governor Cox and the Utah Legislature’s declarations of this issue as a priority, as well as last year’s policy changes and funding commitments. This legislative season, we urgently need more: big, bold changes to improve water conservation.
The morning after the rally, my baby woke up and crawled for the first time. He’d been trying for weeks, pumping his arms and legs while stuck on an invisible treadmill. Suddenly, he was rocketing around the room, grinning and babbling like it was the easiest thing in the world.
As a mom and as a pediatrician, I am amazed at how children change every day. This rapid development of their brains and bodies leaves them vulnerable to toxins. But it also lets them greet each day with resolve and delight – even, or especially, when they are learning something hard.
Let us be inspired by children: We, too, can greet challenges with resolve and perhaps even delight. We, too, can make huge changes — even when it’s hard. When it comes to saving the Great Salt Lake, it’s our only option.
Hanna Saltzman, M.D., is a pediatric resident physician and mother in Salt Lake City.
|
<urn:uuid:cc19ca1b-d074-44cd-a24a-fd892bc5b1ae>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00799.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9542655348777771,
"pii_count": 0,
"score": 3.1875,
"token_count": 929,
"url": "https://www.sltrib.com/opinion/commentary/2023/01/26/hanna-saltzman-health-lake-is/"
}
|
Earlier this month, on a day off from working as a doctor at the children’s hospital, I took my baby to a rally at the Utah Capitol. The crowd thrummed with energy. The Great Salt Lake shimmered on the horizon. My baby reached towards brightly painted signs: “Save Our Lake.” “Save It, Don’t Spray It.” “Defend Our Future.”
We went because saving the lake is crucial for the health of my son, my pediatric patients and all the children in the Wasatch Front.
And because this is an emergency.
New research suggests that without dramatic policy changes, the Great Salt Lake could vanish within five years — and legislative decisions made in the next two months could make or break that path.
As a pediatrician and mother, a big brown rally sign summed up my biggest concern: “No to Toxic Dust Bowl.”
Our city’s namesake is a terminal body of water, with water flowing in but not out, which means it’s full of pollutants. Arsenic, mercury, lead and other toxins sit beneath the lake, held down by water and salt. As the lake dries, the toxins rise into the air as dust. This dust then blows across the Wasatch Front, which houses three-fourths of Utah’s population. It travels further still: dust from the Great Salt Lake has been found from Southern Utah to Wyoming.
When toxic dust reaches us, we breathe it in. So do our kids.
There’s no doubt that air pollution is bad for children’s health. If you know a child with asthma, you may have seen how it can be harder to breathe on bad air days. Air pollution is also linked to childhood cancers, birth defects, problems with brain development and other health concerns. With our smog, wildfire smoke, high ozone levels and worsening dust, children here often breathe unsafe air.
But the collapse of the Great Salt Lake brings an extremely urgent new threat. The dust will not simply bring more run-of-the-mill bad air days. The resulting toxic dust storms could be catastrophic for children’s health. Kids are vulnerable to even tiny amounts of pollution and toxins. Take lead, for example, one of the heavy metals found in the lakebed: even the tiniest amount of lead poisoning can harm a child’s brain.
We do not know what, exactly, would happen to children’s health – or to our health, as adults – if we were to breathe storm after storm of toxic dust. We can’t risk finding out.
Due to divers
|
ions and drought, the lake has lost more than two-thirds of its water. Salt crusts that hold down the toxic dust are starting to erode. When Owens Lake, a saltwater lake in southern California, dried up in 1926, it became the single largest source of dust in the United States — and held that title for close to a century. The Great Salt Lake, the largest saltwater lake in the Western Hemisphere, is around 12 times larger than Owens Lake. It is difficult to imagine the amount of dust that we will face if the Great Salt Lake continues to dry.
I am heartened by Governor Cox and the Utah Legislature’s declarations of this issue as a priority, as well as last year’s policy changes and funding commitments. This legislative season, we urgently need more: big, bold changes to improve water conservation.
The morning after the rally, my baby woke up and crawled for the first time. He’d been trying for weeks, pumping his arms and legs while stuck on an invisible treadmill. Suddenly, he was rocketing around the room, grinning and babbling like it was the easiest thing in the world.
As a mom and as a pediatrician, I am amazed at how children change every day. This rapid development of their brains and bodies leaves them vulnerable to toxins. But it also lets them greet each day with resolve and delight – even, or especially, when they are learning something hard.
Let us be inspired by children: We, too, can greet challenges with resolve and perhaps even delight. We, too, can make huge changes — even when it’s hard. When it comes to saving the Great Salt Lake, it’s our only option.
Hanna Saltzman, M.D., is a pediatric resident physician and mother in Salt Lake City.
Millions of homes have lead paint, harming kids of color most. Will federal grants help?
Though lead-based paint was banned in the late 1970s, lead still remains in the paint of roughly 37 million structures in the United States — and it's extremely harmful to people's health, especially growing children.
On Friday, the federal government announced two grants to help remove the paint and make homes in low-income neighborhoods healthier.
Together, the grants through the U.S. Department of Housing and Urban Development total $568 million and focus on mitigating other harmful substances too, such as carbon monoxide, mold, radon and asbestos, as well as improving fire safety, the agency told USA TODAY in an exclusive.
“Lead is still ubiquitous in our environment,” said Tiffany Sanchez, an environmental epidemiologist at the Columbia Mailman School of Public Health.
Low-income, urban communities of color have long suffered disproportionate lead exposures.
“This results in environmental injustice,” Sanchez said. “It’s unacceptable that so many are still exposed.”
LINGERING LEAD: Biden plan to eliminate old pipes highlights longstanding contamination in communities of color
Experts say they’re watching to see how initiatives will play out and what funds will come next to sustain efforts. Here's what to know.
What are the dangers of lead?
Lead is particularly harmful to children. Due to their rapid development, children who are exposed to lead may suffer major developmental delays and long-term, significant harm to the brain and nervous system.
Even at low levels of exposure, lead can slow growth and cause learning, behavior and speech problems in children, research shows. Studies tie elevated lead levels to lower IQ, decreased focus, and even violent crime and delinquency.
“Substandard housing results in a range of environmental exposures, including lead-based paint hazards amongst many others,” said Mount Sinai pediatrician Dr. Maida Galvez, founding director of the New York State Children’s Environmental Health Center.
Who is most at risk?
Urban children living in poverty and low-income communities of color are at the greatest danger, experts say.
Black children are three times more likely to have high lead levels in their blood than white children, according to the National Academy for State Health Policy.
Your pediatrician should screen your child's blood for lead, and you can check with your local health department's environmental health division for mitigation information.
More than $500M in funds: What to know about the HUD grants
The announcement comes a year after the Biden administration issued the Lead Pipe and Paint Action Plan and follows federal dollars from the administration’s Infrastructure Law that includes lead service line removal funding. In 2021, experts and advocates criticized the $15 billion allocation – down from a proposed $45 billion – saying the funds weren’t enough for the costly and complicated removal process of lead service lines.
Here's how these grants will be broken down:
►Roughly $165 million are open to public housing agencies, the largest investment in health and safety grants for public housing to date, the agency says, combining federal dollars from the Housing-related Hazards Capital Fund and the Lead-Based Paint Capital Fund programs.
►Under HUD’s Lead Hazard Reduction Grant Program, another $403 million are open to state and local government applicants for homeowners to improve safety in homes built before 1978. This grant is intended for homes owned by low-income families and owners of rentals that house low-income families, said Warren Friedman, senior adviser at the Office of Lead Hazard Control and Healthy Homes.
The funding amounts are based on congressional appropriations passed in prior years that have now become available, an agency spokeswoman said.
When and how can agencies apply?
Friedman told USA TODAY applicants will need to describe the developments where they plan to implement the funds and how they will follow through with the changes.
State and local governments have until March 14 to apply online at grants.gov. The deadline for public housing agencies is April 13, the agency said.
Why is lead removal such a big problem?
Lead paint remediation is a complex process that can be costly for low-income homeowners, said Galvez.
“The families that we serve oftentimes do not have the resources to fix those problems,” she said. “And they worsen.”
Older buildings are harder to maintain, and more frequent breakdowns lead to more exposures.
“What makes lead-based paint dangerous is when it comes off the wall, and that happens when there’s maintenance... struggles to maintain that housing in good quality," said Mount Sinai pediatrician Dr. Perry Sheffield, who co-directs the New York center with Galvez. “There’s a leak, or a flood, or ongoing ventilation issue that causes a moisture problem, or just general breakdown.
“That’s the cycle that’s really hard to break.”
- Infrastructure act aimed to fight climate change. Is it living up to Biden's pledge?
- 8 things you can do to protect yourself against lead poisoning from water
- Americans born before 1996 may have a lower IQ from exposure to leaded gas
Reach Nada Hassanein at <email-pii> or on Twitter @nhassanein_.
- Robots milk cows without human intervention using 3D cameras or laser beams.
- Production, labor savings and cow comfort are three critical benefits of using newer milking systems, farmers say.
- Not all dairy farmers benefit from using robot-based technology due to the high upfront cost or how it changes the management of the operation.
Chad Kieffer, a third-generation farmer from Utica, Minnesota, has five milkers for his herd of 350 cows.
The milkers are squat, patient, persistent workers. They hum around the mooing cows. They are robots.
In an increasingly automated world, the dairy industry is keeping up. According to Michigan State University, robotic milkers were first introduced in the United States in 2000. Now, according to Hoard’s Dairyman magazine, over 35,000 robotic milking units can be found around the world with thousands in the U.S.
“It’s always changing. It’s like your iPhone getting changed every six months. There’s lots of technology that’s getting researched every day,” said Dana Allen, a fourth-generation dairy farmer from Eyota, Minnesota.
Dairy technology has transformed the industry.
Decades ago, dairy producers milked by hand. Then came buckets, pipelines, parlors, and then parlors with automatic unit removal, rotary parlors, and robots, according to Douglas Reinemann, Ph.D., a researcher at the University of Wisconsin-Madison.
Automation started in Europe, where the company DeLaval began using an automated robot system at a farm in Sweden in 1997. Another maker, Lely, is based in the Netherlands.
By 2000, automation came to the U.S. The results are significant, farmers and researchers say. Cows feel less stress, farmers are able to be more efficient and they gain time. They also save money on labor.
Now there are 500 to 1,000 U.S. operations using the milking robots, said Reinemann.
The automation is not perfect. Startup costs challenge smaller producers. As there is with most machinery, there also is the need for maintenance.
Farmers gain time to work
Automated milking systems (AMS) give farmers time to focus on other work, and typically more time on skilled duties. This saves labor, Reinemann said. He also noted that AMS helps farmers ergonomically as they avoid repetitive physical tasks.
Farmers who use AMS appreciate the labor savings.
“The farmers have more time to clean up the farm, work on the crops, and other chores. It may even allow a farmer to attend their kid’s sporting events and in turn, improve their mental health,” said Mariah Busta, executive director of the Iowa State Dairy Association.
But the rate of change is up to the farmer.
“The degree to which a farmer adapts to technology and the speed at which they do it is open for their comfort level,” Allen said. “There is a tremendous amount of technology in agriculture, and I think that’s something the general public doesn’t understand.”
Robots leave cows calmer
Robots, like Kieffer’s, milk cows without the physical presence of a human being, using a robotic arm with the aid of either 3D cameras or laser beams to locate a cow’s teats.
According to the Animal Agriculture Alliance, a nonprofit focused on sustainable ag, the total time needed to milk one cow takes an average of seven minutes with a robot.
Since installing robots, Kieffer has reduced his number of full-time dairy employees from six to three.
“The robot is doing the milking for you, so the cost of the robots has to be offset by labor savings,” he said. The robots can cost more than $200,000 each.
The farmers also said the cows’ comfort is a critical benefit. Reinemann said cows can associate humans with negative interactions, such as a veterinarian checkup or being moved to and from different buildings.
“You’re allowing a cow to do what she wants when she wants. You’re not forcing her somewhere, and she can go get milked,” Kieffer said.
Cows voluntarily can go to the area indoors to be milked, farmers said. The robot system sorts cows that need to be milked based on how much time has elapsed in between milkings. Ear tags assist with sorting.
In Deer Park, Wisconsin, Kristin Quist of Minglewood Dairy keeps about 500 of her herd of 1,200 cows in a robot milking facility. She uses a system that guides the animals through the barn.
“The cow walks through a gate determining if it’s time for them to be milked after reading a tag in her ear,” she said. “If it’s time to be milked, she goes into the robot. If it isn’t, she’s sent to be fed in a different direction.”
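The gate logic Quist describes boils down to a simple rule applied each time a tagged cow walks through. The sketch below is a hypothetical illustration in Python; the eight-hour milking interval and the record layout are assumptions for the example, not details of any particular vendor's system.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the sort-gate rule described above.
# The eight-hour interval and the tag/record layout are assumptions,
# not the specification of any particular milking system.
MIN_INTERVAL = timedelta(hours=8)

last_milked = {}  # ear-tag ID -> datetime of that cow's last milking


def route_cow(tag_id: str, now: datetime) -> str:
    """Send the cow to the robot if enough time has passed since her
    last milking; otherwise route her toward the feed area."""
    previous = last_milked.get(tag_id)
    if previous is None or now - previous >= MIN_INTERVAL:
        return "robot"
    return "feed"


# Example: a cow milked six hours ago is sent to feed, not the robot.
last_milked["EAR-1047"] = datetime(2023, 4, 27, 6, 0)
print(route_cow("EAR-1047", datetime(2023, 4, 27, 12, 0)))  # -> feed
```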
RELATED VIDEO: Watch the robot in action
Quist compares her newer robot facility to her milking parlor.
“The cows in the robot facility are a lot more laid back,” she said. “In the parlor facility, they are more likely to get up expecting to get milked.”
Regardless of the production method, farmers are producing more milk, primarily because of improved genetics and improved nutrition, researchers said.
Reinemann said research has shown using robots allows cows to remain in the herd longer, which can mean they produce milk longer.
“We won’t be getting any additional land or resources in the coming future, so we need to be efficient with what we have,” Busta said. “Technology is absolutely crucial in helping us continue to be efficient about producing milk in a sustainable way.”
Other farmers opt for rotating platform systems
Farmers seeking an upgrade have other options beyond milking robots.
Rotary milking parlors allow cows to step onto a circulating platform before farmers attach milking units. The system constantly moves cows on and off the platform.
Allen milks her herd of about 1,750 cows on her farm, Gar-Lin Dairy, using a platform. Gar-Lin’s rotary allows 50 cows to step onto the platform at a time.
“Before, we were milking 750 to 800 cows in about seven hours. Now we can milk 1,750 in the same amount of time,” she said.
Allen said that she initially feared the cows would be difficult to get onto the carousel-like platform, but she quickly learned the greater issue would be getting them off.
“There’s not a lot of commotion,” she said. “They actually like riding around on the rotary.”
In deciding what type of system a farmer should pursue, a farm of 1,500 to 3,000 cows likely warrants a rotary system, while a herd of 200-300 cows may be better suited for robots, said Marcia Endres, Ph.D., a professor of animal science at the University of Minnesota. With larger herds, a rotary system is more efficient because up to 50 cows can be loaded at once, though it does still require human intervention to get the cows on the platform.
One benefit of technology is more milk. In 1925, the average Iowa cow was giving 4,000 pounds of milk each year, while today’s cow gives 28,000 pounds annually, according to the state dairy association.
Some farmers are weighing the pros and cons of the technology.
Among them is Nick Seitzer, a recent University of Minnesota grad and dairy farmer from St. Peter, Minnesota. He’s thinking about adding a robot to milk his 65 cows. He forecasts about a 10% increase in production if he installs a robot in a free-stall barn.
“The robot would be huge,” Seitzer said. “Fewer people want to do what we’re doing, so it would be nice to have robots that are always there doing the job.”
But the initial cost is high. Seitzer estimates one robot would cost about $250,000, not including the physical infrastructure (such as a potential barn expansion, milk house or other needs) to use it.
On his Utica, Minnesota, farm, Kieffer saw robots as a smart investment. Kieffer typically spends one hour a day doing maintenance, he said.
“Your debt per cow or debt per stall becomes rather large up front, but what I tell people is you’re basically prepaying your labor for seven or eight years,” he said. “It ends up being less than a traditional herd of cows being milked in a parlor.”
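Kieffer's "prepaying your labor" framing is ordinary payback arithmetic. The sketch below runs the numbers using figures from this story (five robots at roughly $200,000 each, full-time staff cut from six to three) plus an assumed annual cost per employee; it is an illustration, not his actual accounts.

```python
# Rough payback sketch using figures mentioned in this story plus one
# assumption: the $55,000 annual cost per full-time employee is a
# placeholder for illustration, not Kieffer's actual payroll.
robots = 5
cost_per_robot = 200_000            # "more than $200,000 each"
employees_saved = 6 - 3             # full-time staff cut from six to three
annual_cost_per_employee = 55_000   # assumed figure

upfront = robots * cost_per_robot
annual_savings = employees_saved * annual_cost_per_employee
payback_years = upfront / annual_savings

print(f"Upfront cost: ${upfront:,}")
print(f"Annual labor savings: ${annual_savings:,}")
print(f"Payback: about {payback_years:.1f} years")
# With these assumptions the payback lands near six years, in the same
# ballpark as the "seven or eight years" Kieffer describes.
```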
With dairy technology rapidly changing, Seitzer has been feeling pressure for his operation to adapt.
“If we don’t, we may not be doing it much longer,” he said.
Dr. Lindsey Borst, a veterinarian who milks 230 cows alongside her family in Rochester, Minnesota, feels differently. Borst has been looking at robots for five to six years.
“I wouldn’t say we’re falling behind. Robots are still fairly new, and they’re not super common yet,” she said. “Sometimes it’s also good to wait because technology is changing so quickly, too.”
Robots are not the right choice for all farmers, Kieffer, the farmer, and Endres, the University of Minnesota professor, said.
“Don’t put robots in because you don’t like cows. You also have to be mechanically inclined to do preventative maintenance on the robots,” Kieffer said.
Reinemann, the UW professor since 1990 and director of the UW Milking Research and Instruction lab, grew up in the dairyland of Wisconsin.
“For me as a young person, milking cows was so ordinary; there was absolutely nothing interesting about it,” he said. “I started my career in milking technology, and it’s been amazing. The shifts in the technology and how dairy farms are managed, it’s been a lot of change.”
RELATED STORY: Technology goes beyond milking
Ethan Humble is a 2023 graduate of Simpson College, where he majored in multimedia journalism.
Suzanne Behnke contributed to this report.
The ozone layer is on track to recover in the coming decades, the United Nations says
Updated January 10, 2023 at 2:16 PM ET
The Earth's ozone layer is on its way to recovering, thanks to decades of work to get rid of ozone-damaging chemicals, a panel of international experts backed by the United Nations has found.
The ozone layer serves an important function for living things on Earth. This shield in the stratosphere protects humans and the environment from harmful levels of the sun's ultraviolet radiation.
The international community was alarmed after experts discovered a hole in the ozone layer in May 1985. Scientists had previously discovered that chemicals such as chlorofluorocarbons, used in manufacturing aerosol sprays and used as solvents and refrigerants, could destroy ozone.
Two years after the discovery of the dire state of the ozone layer, international bodies adopted a global agreement called the Montreal Protocol. This established the phaseout of almost 100 synthetic chemicals that were tied to the destruction of the all-important ozone.
In the latest report on the progress of the Montreal Protocol, the U.N.-backed panel confirmed that nearly 99% of banned ozone-depleting substances have been phased out.
If current policies stay in place, the ozone layer is expected to recover to 1980 values by 2040, the U.N. announced.
In some places, it may take longer. Experts said that 1980-level recovery over Antarctica is expected by around 2066 and by 2045 over the Arctic.
"The impact the Montreal Protocol has had on climate change mitigation cannot be overstressed," said Meg Seki, executive secretary of the U.N. Environment Programme's Ozone Secretariat, in a statement. "Over the last 35 years, the Protocol has become a true champion for the environment. The assessments and reviews undertaken by the Scientific Assessment Panel remain a vital component of the work of the Protocol that helps inform policy and decision-makers."
The depletion of the ozone layer is not a major cause of climate change. But research is showing that these efforts to save the ozone layer are proving beneficial in the fight against climate change.
In 2016, an amendment to the Montreal Protocol required the phaseout of the production and consumption of some hydrofluorocarbons. These HFCs don't directly deplete the ozone layer, but they are powerful greenhouse gases — which contribute to accelerated climate change and global warming, the U.N. says.
The Kigali Amendment will "avoid 0.3–0.5 °C of warming by 2100," the report estimates.
"Ozone action sets a precedent for climate action," said World Meteorological Organization Secretary-General Petteri Taalas. "Our success in phasing out ozone-eating chemicals shows us what can and must be done – as a matter of urgency – to transition away from fossil fuels, reduce greenhouse gases and so limit temperature increase."
Copyright 2023 NPR. To see more, visit https://www.npr.org.
A panel of independent experts that advises the US Food and Drug Administration on its vaccine decisions voted unanimously Thursday to update all Covid-19 vaccines so they contain the same ingredients as the two-strain shots that are now used as booster doses.
The vote means young children and others who haven’t been vaccinated may soon be eligible to receive two-strain vaccines that more closely match the circulating viruses as their primary series.
The FDA must sign off on the committee’s recommendation, which it is likely to do, before it goes into effect.
Currently, the US offers two types of Covid-19 vaccines. The first shots people get – also called the primary series – contain a single set of instructions that teach the immune system to fight off the original version of the virus, which emerged in 2019.
This index strain is no longer circulating. It was overrun months ago by an ever-evolving parade of new variants.
Last year, in consultation with its advisers, the FDA decided that it was time to update the vaccines. These two-strain, or bivalent, shots contain two sets of instructions; one set reminds the immune system about the original version of the coronavirus, and the second set teaches the immune system to recognize and fight off Omicron’s BA.4 and BA.5 subvariants, which emerged in the US last year.
People who have had their primary series – nearly 70% of all Americans – were advised to get the new two-strain booster late last year in an effort to upgrade their protection against the latest variants.
The advisory committee heard testimony and data suggesting that the complexity of having two types of Covid-19 vaccines and schedules for different age groups may be one of the reasons for low vaccine uptake in the US.
Currently, only about two-thirds of Americans have had the full primary series of shots. Only 15% of the population has gotten an updated bivalent booster.
Data presented to the committee shows that Covid-19 hospitalizations have been rising for children under the age of 2 over the past year, as Omicron and its many subvariants have circulated. Only 5% of this age group, which is eligible for Covid-19 vaccination at 6 months of age, has been fully vaccinated. Ninety percent of children under the age of 4 are still unvaccinated.
“The most concerning data point that I saw this whole day was that extremely low vaccination coverage in 6 months to 2 years of age and also 2 years to 4 years of age,” said Dr. Amanda Cohn, director of the US Centers for Disease Control and Prevention’s Division of Birth Defects and Infant Disorders. “We have to do much, much better.”
The Wicker Man is now associated with Beltane, but how much do we actually know about it? And was animal and human sacrifice a major part of the Celtic religion?
Interestingly, only two ancient sources describe the Wicker Man: the Roman general Julius Caesar and the Greek geographer Strabo both mention it as one of many ways the druids of Gaul performed sacrifices.
In the mid-1st century BC, Caesar wrote in his Commentary on the Gallic War that a large wickerwork figure with limbs was filled with living men and set on fire. He says that criminals were the preferred victims, but that innocent people might also be burned if there were no criminals.
Writing slightly later, Strabo says in his Geographica that men and animals were burned in a large figure of wood and straw, although he does not make clear whether the victims were burned alive. He adds that the ashes were believed to help the crops grow.
Also in the 1st century BC, Greek historian Diodorus Siculus wrote in Bibliotheca historica that the Celts sacrificed human and animal captives by burning them on huge pyres along with the first fruits. It is probable that both Diodorus and Strabo got their information from the earlier Greek historian Posidonius, whose work has not survived.
In the 1st century AD, Roman writer Lucan mentioned human sacrifices to the Gaulish gods Esus, Toutatis and Taranis. In a commentary on Lucan—the Commenta Bernensia dating from the 4th century and later—an unnamed author added that sacrifices to Taranis were burned in a wooden container.
Archaeological evidence of human sacrifice among Celtic peoples is rare. Many modern historians and archaeologists state that the ancient Greco-Roman accounts should be viewed with caution. Both Greeks and Romans “had good reason to dislike a long-term enemy” and it may have benefited them to “transmit any bizarre and negative information” about the Celts. Their desire to depict Celtic peoples as “barbarians” may have “led to exaggeration or even fabrications”. (Mary Voight, The Violent Ways of Galatian Gordion)
The Wicker Man, although associated with ancient Britain, was only mentioned in respect of the Gauls, the name given to continental Celts.
In modern times, large wickerwork figures were burnt in France during the 18th and 19th centuries. Wilhelm Mannhardt recorded that a wickerwork giant was burnt each Midsummer Eve (not Beltane) in Brie. Until 1743, a large wickerwork figure of a soldier or warrior was burnt every 3 July on the Rue aux Ours in Paris, as the crowd sang “Salve Regina”.
At Luchon in the Pyrenees, snakes were burnt alive in a tall wickerwork column decked with leaves and flowers on Midsummer Eve. Far from being a pagan festival, young male Christians with torches danced around the burning column, whilst the townsfolk and clergy sang hymns. Snakes represented Satan, and this sadistic, barbaric ritual was about the Christian conquest of evil.
In recent times in Britain, neopagan movements have recreated the burning of the Wicker Man at various festivals, in particular Beltane. However, the modern practice is more inspired by the 1970s cult movie than any actual historical evidence.
Hundreds of thousands of working families in Australia, many earning more than the minimum wage, don't have enough money to live a basic, healthy lifestyle – and are at risk of slipping into poverty.
That's according to detailed research commissioned by the Fair Work Commission ahead of the Minimum Wage Case – it's an Annual Wage Review to determine how much both minimum and award wages need to increase to help those working Australians make ends meet.
The study by the UNSW Social Policy Research Centre was designed to answer "one of the most difficult questions in social policy and welfare economics" – how much income is 'enough' to allow people to live a minimal, healthy lifestyle that would allow individuals to participate in society?
Working Australians don't earn enough to make ends meet
During the September quarter last year, Associate Professor Bruce Bradbury and other academics from UNSW's Social Policy Research Centre looked at a huge amount of information about the kinds of things that Australians spend their money on.
This included: what activities they undertake, what items they buy, how often they use health, childcare and public transport services, as well as how often (or whether) they eat out, have friends over for a meal, or take a family holiday.
Researchers then priced the lowest cost items available to calculate "minimal monetary weekly amounts required to achieve this standard."
They added in the cost of rental housing and "extremely austere" discretionary spending, allowing for alcohol at a healthy level, average tobacco and gambling expenditure, a small travel allowance and a tiny allowance for eating out.
The ABC's The Drum updated those household budgets using CPI figures for the December and March quarter.
Associate Professor Ben Phillips from the ANU Centre for Social Research and Methods then revised the income figures to take account of the increase in the minimum wage from July, and any changes to the Family Tax Benefits and other low-income payments and concessions received.
The conclusion — many working people and working families simply won't have enough to make ends meet each week.
A single, low-income, full-time worker living in Sydney was deemed to receive $852 in total income, after tax.
That's about 20 per cent higher than the minimum wage.
But that individual needed $945 for food, clothing, personal and household items, health, housing, transport, and an "austere" level of discretionary spending.
What happens when we adjust for kids?
A single parent working full-time with two kids will bring in $1,202 a week, and spend $1,119, leaving them with $83 left over.
However, a dual income-earning couple — one working full-time and the other part-time — will have $1,537 coming in a week, and $1,679 going out, which means the family is underwater by $142 a week.
And they will get further in the red, as the months go by and inflation continues to drive up the cost of everything from food, to rent, to transport.
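Put side by side, the weekly figures above work out as follows. The snippet simply restates the reported numbers; it is not new modelling.

```python
# Weekly income versus minimal healthy-lifestyle budget, restating the
# figures quoted above for the three example households.
households = {
    "Single full-time worker (Sydney)": (852, 945),
    "Single parent, full-time, two kids": (1202, 1119),
    "Dual-income couple (one FT, one PT)": (1537, 1679),
}

for name, (income, budget) in households.items():
    balance = income - budget
    label = "ahead" if balance >= 0 else "short"
    print(f"{name}: ${abs(balance)} {label} each week")
# -> $93 short, $83 ahead, $142 short
```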
"[The dual income family's] income is well below the income that we would say is a minimum standard for a healthy lifestyle," Dr Bradbury from UNSW says.
The inability for a household to earn enough money to live a healthy lifestyle, the research notes, extends to middle income earners in some cases.
In extensive focus group interviews, the UNSW researchers spoke to middle-income households to provide a comparison with the budget of low-income households.
They found that with spiking cost of living in Australia, there was "a lack of substantive differences in the budgetary choices, constraints and decisions between those on middle-incomes versus low-incomes."
How are families stretching their dollars?
All these working people and families were using the same strategies to make ends meet.
These include: eating fast food rather than more expensive, healthier and homemade meals; parents skipping meals, haircuts, and buying new shoes in order to provide for their children; and not attending to their medical and dental needs.
Academic economists warn the financial pain associated with substandard living extends across many income groups and is growing in scale.
"Low-income renter households have always struggled but increasingly the burden is also being felt by low- and middle-income households with a mortgage," Mr Phillips told The Drum.
Yet these households are nowhere near what advocates describe as the poverty line.
Where is the poverty line?
There's no official definition of poverty in Australia, and no way of monitoring who is living in poverty or at risk of slipping into poverty.
The Australian Council of Social Service (ACOSS) has its own measure, which says a single adult earning about $489 a week is living in poverty.
The minimum wage from July is $882 per week.
Couples with children earning less than $1,027 a week, are considered below the line.
By that measure, more than 3 million Australians – or 13.4 per cent of us – live in poverty.
Should any of those low-income families have a breadwinner lose a job over the coming months, they will be surviving on welfare payments, far below that ACOSS poverty line.
From September, a single JobSeeker recipient will earn $369.95 a week, those on Youth Allowance will get $299.95 a week, and Age Pensioners will get $532.
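For comparison, the snippet below restates those payments against the ACOSS single-adult poverty line of $489 a week; the percentages are simple arithmetic on the figures reported above, not an official measure.

```python
# The payments quoted above, compared with the ACOSS single-adult
# poverty line of $489 a week (simple arithmetic on reported figures).
POVERTY_LINE_SINGLE = 489

payments = {
    "Minimum wage (from July)": 882.00,
    "JobSeeker (single)": 369.95,
    "Youth Allowance": 299.95,
    "Age Pension": 532.00,
}

for name, weekly in payments.items():
    share = weekly / POVERTY_LINE_SINGLE * 100
    print(f"{name}: ${weekly:.2f} a week, about {share:.0f}% of the line")
# JobSeeker comes out near 76% of the line and Youth Allowance near 61%,
# which is what "far below the ACOSS poverty line" means in practice.
```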
'I'm a Type 2 diabetic and I live on bread'
Fifty-two-year-old single mother Paulene Hutton from Morayfield, north of Brisbane, is living on JobSeeker.
She has a 12-year-old daughter that she home schools who has high anxiety, and a 30-year-old son on disability support.
Ms Hutton also works casually at a clothing shop.
The family recently moved house after being unable to pay the rent.
"I'm a Type 2 diabetic and I live on bread – and a couple of days before pay day I'm literally living on peanut butter sandwiches," Ms Hutton told The Drum.
"It is making sure the kids eat properly, and they eat way better than I do, and that's the way it should be – I would never sacrifice their meals for mine."
She would love to sit down and have meals with her kids, "but if I sit down with a peanut butter sandwich, they will literally put food on my plate to make sure that I'm eating with them, so we don't eat as a family anymore," she explains.
"I don't have the chance to sit down and have a good night sleep and have a good meal, because I'm too busy worrying about everything," Ms Hutton told The Drum.
"The thought of what I'm going to do next month, the thought of a holiday – it just doesn't exist."
Professor of Public Policy at the Australian National University Sharon Bessell works with children aged between six and 17 years, and their families.
Ms Hutton's story is all too familiar to her.
"We hear from children about the fact that they go hungry regularly," Professor Bessell told the ABC's The Drum.
"Many of them say they don't want to tell their parents because they know their parents are hungry too and they don't want to put more pressure on them.
"They don't want their mums to stop eating."
The average Australian living below ACOSS's poverty measure is getting by on just $304 per week, after deducting the cost of housing.
"Do we want to be the sort of country that says we're happy for people to work 50 hours a week – often in essential work like caring for others – but we're going to let them live in poverty?" Professor Bessell asks.
"Or are we going to say we want a fair and just society and we're going to ensure that people get paid an adequate salary in order to be able to support themselves, not to just survive?"
More financial stress for low- and middle-income earners can be expected
While the cost of living remains elevated, interest rates and rents are expected to keep rising.
Many economists are concerned this will place low- and middle-income households under more financial stress.
However, the Council of Financial Regulators (which includes the banking regulator APRA and the Reserve Bank) said in a statement on Wednesday that aggressive interest rate rises have not triggered widespread financial distress among households.
The council said: "Most households are well-placed to manage the impact on budgets due to strong labour market conditions and sizeable saving buffers".
|
<urn:uuid:5d3b19f5-b1a2-4313-9f45-2600e43db5c1>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476397.24/warc/CC-MAIN-20240303174631-20240303204631-00723.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9730442762374878,
"pii_count": 0,
"score": 2.90625,
"token_count": 1790,
"url": "https://www.abc.net.au/news/2023-06-15/exclusive-disturbing-number-of-australians-living-in-substandard/102484354"
}
|
At Apni Shala, Social-Emotional Learning (SEL) is the process of nurturing cognitive, emotional, and behavioural wellbeing for self and society. This involves creating opportunities to learn and practice related knowledge, skills, and attitudes and creating compassionate and inclusive ecosystems for learners (Source: Apni Shala Definition).
The question in front of us is – how do we support facilitators*, who are a critical part of our students’ ecosystem, to create compassionate and inclusive ecosystems (school policies, classroom environment, inclusive facilitation/teaching practices, etc)?
Annalisa Morganti, one of the Chairs of the European Network for Social and Emotional Competencies (ENSEC), a leading network on SEL in Europe, says, “Drawing from my personal experience I think it is crucial that all higher education courses aimed at teacher training should make SEL a mandatory subject. This would also allow those who are still “green” to teach the foundation and to be already equipped with the basic competencies needed to be a socially and emotionally competent adult role model.” Samiksha, Apni Shala Fellow, adds, “when it is in a teapot, only then can it be poured into the cup.”
What does the SEL Curriculum entail at Apni Shala?
The Curriculum typically includes content (also referred to as a syllabus), assessment (how do we know SEL is being learnt) and pedagogy. The content typically refers to “what our students are learning”, which may include the purpose/intentions, learning outcomes, a plan of activities, instructions and learning experiences that are designed to support individuals in developing their social-emotional competencies. Apni Shala’s SEL curriculum, based on CASEL, focuses on five competencies, namely Self-Awareness, Self-Management, Social Awareness, Relationship Skills, and Responsible Decision-Making.
Pedagogy refers to the ways in which facilitators create diverse and differentiated learning experiences and support students in social-emotional growth through a variety of learning modules. This is further guided by educational/developmental theories from Erik Erikson, Jean Piaget, Albert Bandura and others. To assess impact, the Curriculum also includes a variety of assessment tools (formative and summative).
The SEL curriculum can include different elements such as self-reflection exercises, mindfulness practices, group discussions, and role-playing activities. It is typically structured to build on previous learning experiences and gradually increase in complexity. The goal of an SEL curriculum is to provide a thoughtful pathway for facilitators to support their students’ social-emotional growth.
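As a purely hypothetical sketch, not Apni Shala's actual materials, the structure described above (competencies, activities and assessment) could be pictured as a small data model; every activity and assessment name below is invented for illustration.

```python
# Hypothetical illustration of how an SEL curriculum module might be organised
# around the five CASEL competencies named above. All activity and assessment
# names are invented; this is not Apni Shala's curriculum.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Module:
    competency: str
    activities: List[str] = field(default_factory=list)  # e.g. role-play, journaling
    assessment: str = "facilitator observation"          # formative by default

curriculum = [
    Module("Self-Awareness", ["emotion check-in circle", "journaling prompt"]),
    Module("Self-Management", ["mindful breathing practice"]),
    Module("Social Awareness", ["perspective-taking role-play"]),
    Module("Relationship Skills", ["pair-share on a conflict story"]),
    Module("Responsible Decision-Making", ["scenario discussion"], "reflection rubric"),
]

for m in curriculum:
    print(f"{m.competency}: {len(m.activities)} activity(ies), assessed via {m.assessment}")
```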
How are the support and development designed for facilitators at Apni Shala?
To support Apni Shala’s SEL facilitators, in addition to a variety of support options such as lesson planning, supervision, and observation & debriefs, weekly workshops are facilitated, which we call “Curriculum Gatherings”. They are by nature similar to the content training or professional development offered in many schools and organizations on themes, topics and subject areas. However, in the way they are facilitated at Apni Shala, they offer facilitators a space to come together and explore the themes presented in the curriculum, such as managing emotions, conflict, empathy and perspective-taking, body image, responsible decision-making, addiction and much more. The gatherings are facilitated by some of the facilitators themselves who have been facilitating the curriculum with students, in partnership with the curriculum team. The structure of the gathering is such that we make our own meanings of the theme and move to a common understanding; we then move on to thinking about our own lived experiences, which will guide us in our classroom sessions.
Shared below are the intentions and the aims of Curriculum Gatherings at Apni Shala:
Intention 1: Define the core concept covered in the theme of the SEL Curriculum across grades and correlate the concept with the modular objectives of various grade levels and age groups
As a new facilitator in the organization, facilitating SEL for the first time, I (Puja) was nervous and scared about my ways of facilitating. I used to wonder what sessions other than the academic syllabus would look like. The Curriculum Gatherings changed my perspective on facilitating in schools. They have been very enriching and have shaped my facilitation style.
Curriculum Gatherings at Apni Shala look different and have many activities involved in them. Usually, when I hear the word curriculum I think that it would be boring or would include a lot of theory, but here the gatherings include an ice breaker, warm up, creation, and debrief/closure, just as we do with our students! Curriculum Gatherings start with questioning ourselves on how we will introduce Social Emotional Learning (SEL) to grades 4-10 in a variety of ways. The space also helps us brainstorm age-appropriate ways to introduce the same topic in different grades. Something, for example, will be easier to explain to 8th graders than to 4th graders, and vice versa. This gave me a clear understanding and prepared me for how I will break down the term for each grade.
I (Diksha) got an opportunity to facilitate a Curriculum Gathering on Bullying. From being a participant in these gatherings, I became a facilitator. During our planning, my colleague and I had a powerful conversation about our own experiences of bullying, a topic we were co-facilitating. When we were planning the session we came across an image as a resource. It is important, before we facilitate an SEL theme, that we have unpacked it for ourselves and have clarity around it. When we were discussing this resource, we spoke about each of the smaller themes, went back to our memories of school and college, reflected on and listed those instances, spoke about times we had enabled bullying without being aware of it, and spoke about the emotions it brought up in us and processed them. Such a process provides our facilitators with deeper clarity on the SEL themes from across the curriculum.
During my session in 8th grade I (Puja) asked my students, “are there any difficulties you face while taking perspective from others?”
A student replied, “Didi, musibate toh uske upar depend karta hai kyuki sabke dekhne ka nazariya alag hota hai. Agar didi shivam class mai bole ki apni class teacher bahut achi hai sabse baat karti hai, toh udhar shayad hi shivanshu ye baat se agree kare aur mai bolu aisa kuch nhi hai tab udhar hamari perspective change hoti hai, aur hamare beech jhagda ho sakta hai.” (Roughly: “Didi, the difficulty depends on the person, because everyone’s way of seeing things is different. If Shivam says in class that our class teacher is very good and talks to everyone, Shivanshu may well not agree, and if I say that’s not true, then our perspectives differ and a fight could break out between us.”)
This conversation made me think about working on myself. Though the topic was perspective-taking, some words or actions might trigger students at different points. Being mindful of this and planning our sessions thoughtfully is something I find very critical.
Intention 2: Discover their own experiences with the topic and co-create a space to recognize any potential triggers and possibly find community with the facilitators in beginning the processing (so that they can work to process it before they facilitate the topic and create support for themselves)
“During one of the Curriculum Gatherings – our year-long professional development on SEL curriculum themes, we were unpacking a particular theme. The team of facilitators was sharing how different emotions visit while facilitating topics such as responsibility, bullying, gender or religion-based discrimination, and conflict. Our own lived experiences bring up so many memories and past emotions. And what’s the impact of that in that moment on our facilitation and learning space of students,” writes Shahbaan Shah, R&D Associate, in this article.
As “our students are not responsible for our healing (Rohit Kumar)”, Curriculum Gatherings provide participants with the space to process or identify various emotional triggers, and areas/events where they have felt triggered. Following this, we are asked how we’ll process these triggers for ourselves and how we’ll respond to them when such triggers arise in classrooms. It is important to identify our triggers before going into classrooms and to start managing them so that effective examples can be shared during the session.
Diksha writes, “Curriculum Gatherings gave us a space to talk about and process our triggers while unpacking the theme to be facilitated in class, ‘decision making’. The prompt was to list down the important decisions we made for ourselves in the last 3 months. This triggered some overwhelming emotions as I realized I couldn’t remember the last decision I made for/about myself. I realized I hadn’t found many spaces or much agency to make a decision on my own for a long time. When I started making decisions I realized that I was feeling very anxious. During the gathering, I was able to process these emotions and, during the pair-share process, I was also able to identify a list of small decisions I have been able to make for myself. This process helped in taking some of my own experiences, without feeling triggered, into the classroom.”
Puja adds, “Curriculum Gatherings helped me recognise that the themes in various modules like Celebrating Uniqueness, Bullying, Communities and Co-existence, Body Image, etc. had happened to me at some point or the other, but I hadn’t been able to see that. For example, in the module on Responsibility, the facilitator asked us to journal on the following prompts one at a time – What emotions visit us when we hear the term Responsibility? What are some reasons for those emotions to visit? What responsibility would I like to take?
Through these three prompts, I realized that, knowingly or unknowingly, I have many responsibilities. I also realized from this gathering that some responsibilities are ones I have accepted or taken up, like being in class and seeing all students participating in the activities, while others come with the choices we make, like keeping materials ready for the session, being on time, etc.”
Puja continues, “Nishant spends the night taking care of his younger brother because he stays with his relatives. He does not get time to complete the home-task given to all students after every session. When he came to the session he was scared and was not willing to participate. If I hadn’t attended the Curriculum Gathering, I may have been triggered by him not doing the task when everyone else was doing it. Unconsciously I would have made him feel like a failure for lagging behind. But during another gathering, on Relationships, we had a discussion on relationships and how family relationships play a role, which supported me in taking a more well-rounded perspective on students’ lives.”
Intention 3: Build student-friendly language (in Hindi, English and Marathi) to take these topics into the classrooms
‘Research shows that education in the mother tongue is a key factor for inclusion and quality learning, and it also improves learning outcomes and academic performance.’ (UNESCO, 2022). In the context of our work, our students primarily speak Hindi or Marathi, with some English/Hinglish words/sentences depending upon their exposure to the language. Many of them also continue to speak other languages at home, such as Bhojpuri, Tamil, Telugu and Gujarati. For students to be able to freely express themselves and make meaning of their learning, it’s critical to be able to facilitate learning spaces in the languages accessible to them. We have built the curriculum and facilitation in a way that gives students space to express themselves and learn in their preferred language.
At Apni Shala, facilitators are assigned groups and can support students in reflecting on activities and the curriculum in their preferred language, along with diverse audio-visual media. This is reflected in our discussions in the Curriculum Gatherings: when we think of the theme to be introduced, we discuss how the term can be broken down into student-friendly language that connects to students’ contexts and lived experiences.
While discussing the theme ‘Conflict’ in one of our Curriculum Gatherings, the prompt was, “if we had to introduce the theme in class, how would we go about it?”
The Hindi translation for conflict/difference of opinion is ‘matbhed’. A middle schooler would be able to understand this, while a 4th grader might struggle to make meaning of it. Hence, through discussion, we came up with a phrase, ‘tu tu mai mai’, that students would be able to connect to and understand. When I introduced the theme to our class using ‘tu tu mai mai’ and posed the question about the meaning of the phrase, a student immediately said, “didi, when my friends say ‘tu tu mai mai’ it means that one of them is angry with the other person”.
Puja shares, “In the module on Bullying, I noticed that students might get confused between the terms bullying and argument. But when I went through the Curricular lesson plan, I found an example which stated that ‘Neha and Aarya are friends. Today they are having an argument. Neha called Aarya by a mean name and Aarya also called Neha by a mean name.’ Is this bullying? No, both are acting mean and will probably be friends again, not one person hurting the other. Such examples allowed me to build a clear understanding of the topic of bullying for the students and be able to take relevant examples into classrooms.”
The Curriculum Gatherings have offered incredibly valuable experiences to us, both as participants and facilitators of the space. We have gained important skills and knowledge that have helped us not only be better at our SEL facilitation skills but also become better people, as we continue to use these skills to support and understand ourselves and others.
Curriculum Gatherings have supported us in developing a more nuanced understanding of different SEL topics which helped us facilitate in the classroom and gave ideas on how the particular classroom will look. No two students are alike. They bring their own life experience to the classroom and share amongst the groups. Curriculum Gatherings have also helped to create a safe space amongst the students so that they can share their thoughts, feelings and opinions.
* Facilitator refers to anyone (teachers, educators, mental health workers, counsellors, others) who uses facilitative practices to facilitate teaching-learning and development opportunities for their participants/students.
About the Authors:
Diksha Pandey is a Programme Coordinator with Apni Shala. She has been facilitating sessions with adults and children of different age groups for the last 4 years. She completed her graduation with a B.Sc. in Mathematics. When she is not in the classroom you can find her reading mythology books or dancing.
Puja Surve: She is a fellow at Apni Shala. She facilitates SEL sessions in four MGCM partner schools. She has done her Masters in Human Development from SNDT Women’s University, Juhu. She loves travelling and being around young people.
|
<urn:uuid:9f6cdb67-8d1a-482e-9230-161cbe1e7679>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00135.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9624260067939758,
"pii_count": 0,
"score": 3.46875,
"token_count": 3284,
"url": "https://apnishala.wordpress.com/2023/02/09/curriculum-as-practice-for-sel-facilitators/"
}
|
A Missouri study on bald eagles and turbines aims to find how birds and wind energy can co-exist
The Missouri Department of Conservation has started a new project to see how bald eagles in northwestern Missouri interact with wind turbines. Conservationists and wind energy advocates are both hoping that the results will advance both bird conservation and renewable energy goals.
Bald eagles are an American conservation success story. Conservation efforts over more than four decades allowed officials to take the bird off the endangered species list in 2007.
Now the Missouri Department of Conservation is hoping to help keep the iconic American bird protected by starting a multi-year research project to understand how wind turbines affect them.
“We want to know how these eagles move across the landscape during stages of their lives and is there a negative interaction with these wind facilities?” said Janet Haslerig, an avian ecologist with the Department of Conservation.
The department will tag four adult eagles and four immature eagles with GPS transmitters to get a better understanding of the movements, territories and habits of the growing Midwest bald eagle population.
It will also tell them what kind of danger the wind turbines pose.
Wind energy and bird collisions
Renew Missouri hopes that the results of this study, in addition to learning more about bald eagles, will show accurately what the impacts of turbines are on the birds.
“Sometimes the case for how much damage wind turbines can do for migration, for causing injury to birds, can sometimes be overstated by opponents of renewable energy,” said James Owen, the executive director of Renew Missouri. “So I think being able to really get some precise numbers on that will be helpful.”
Last month, former President Donald Trump claimed that thousands of bald eagles are killed by wind turbines, a claim with little evidence. According to the U.S. Fish and Wildlife Service, about 140,000 to 328,000 birds in general are estimated to be killed per year by turbines, a number that is expected to grow due to more wind farms. However, deaths caused by birds running into buildings are about triple those killed by turbines each year.
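Taking the article's "about triple" building-collision comparison at face value, a quick back-of-the-envelope sketch of the implied annual figures looks like the following; the multiplier is approximate and the resulting numbers are illustrative, not Fish and Wildlife Service estimates.

```python
# Back-of-the-envelope sketch of the comparison quoted above. The "about triple"
# multiplier is approximate and the outputs are illustrative only, not USFWS figures.

turbine_deaths = (140_000, 328_000)   # estimated birds killed per year by turbines
building_multiplier = 3               # article: building collisions ~triple turbine deaths

building_deaths = tuple(n * building_multiplier for n in turbine_deaths)

print(f"Turbines:  {turbine_deaths[0]:,} to {turbine_deaths[1]:,} birds/year")
print(f"Buildings: {building_deaths[0]:,} to {building_deaths[1]:,} birds/year (approx.)")
```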
Yet there is documentation of bald eagles colliding with wind turbines. Last year, ESI Energy, a company under NextEra, was fined for violating the Migratory Bird Treaty Act. At least 150 bald and golden eagles were killed at its wind farms.
Haslerig, with the Missouri Department of Conservation, said wind companies can protect the birds by being careful about the placement of turbines.
“We hope this will inform the wind facilities as well as our agency on where we should put these wind facilities. What's a good location? What's a bad location?” she said.
Missouri already has a set of guidelines for energy infrastructure that takes conservation into account. New technology is also being developed to protect birds, including sounds that keep them away and paint on the blades.
Right now, Owen said, the state “falls in the middle” when it comes to renewable energy infrastructure.
“We have a lot more opportunity for wind than I think we’re using,” he said. “Some of that is limited by certain counties having limitations on placing wind there.”
Aligning conservation and energy goals
Conservationists often agree that renewable energy is needed, even as they seek to protect birds.
Mary Nemecek from the Burroughs Audubon Society of Greater Kansas City said she sometimes gets calls from people who want to oppose a wind energy project in their county. In one such case, she referred them to the state’s guidelines and checked back in.
“I said, ‘Did you read the guidelines? Did you have any questions?’ And they said, ‘Well, we were really hoping to stop this instead of finding a way to work with them about it.’” she said. “And I think that's really unfortunate.”
She said the study could also provide useful information on whether the birds' habitats might be affected, even if the birds are not being injured or killed by the structures. There have been projects proposed in the past in the state that would interfere with bird migration and habitats.
“There are places that wind energy should not go, but that doesn't mean that there shouldn't be renewable energy,” she said.
She also acknowledged the survival of birds and humans is going to be dependent on finding solutions to the climate crisis. According to a study from the National Audubon Society, 389 species of birds are threatened by climate change.
“It’s just unfortunate that it's gotten to the point where there's a chasm between renewable energy and the goals that it's trying to accomplish, and people see that at odds with conservation,” she said, “because I really think that the outcome should really be aligned to the two.”
This story was produced in partnership with Harvest Public Media, a collaboration of public media newsrooms in the Midwest. It reports on food systems, agriculture and rural issues. Follow Harvest on Twitter: @HarvestPM
|
<urn:uuid:c00b2dda-e403-4edc-8dcc-273aa085e3d4>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473518.6/warc/CC-MAIN-20240221134259-20240221164259-00617.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9678595662117004,
"pii_count": 0,
"score": 3.265625,
"token_count": 1065,
"url": "https://www.kcur.org/news/2023-02-21/a-missouri-study-on-bald-eagles-and-turbines-aims-to-find-how-birds-and-wind-energy-can-co-exist"
}
|
Mississippi civil rights sites may join National Park System
Why it matters: For decades, advocates have sought to preserve sites linked to two crucial moments of the Civil Rights Movement while seeking to remove monuments to Confederates who owned enslaved people.
Driving the news: The park system said it identified the nine sites after evaluating more than 220 across Mississippi and consulting with historians and activists.
- The Mississippi Civil Rights Sites Special Resource Study, sent to Congress late last month, concluded the sites met the criteria for potential inclusion.
- Congress can pass legislation authorizing the National Park Service to make the nine Mississippi sites part of the National Park System. President Biden also can designate the sites under the Antiquities Act.
Zoom in: Among the sites are the remains of Bryant's Grocery in Money, Mississippi. That's the store where 14-year-old Emmett Till, a Black teen, was falsely accused of grabbing and threatening Carolyn Bryant, a white woman.
- The site of Mt. Zion Methodist Church was another. It was one of 20 Black churches firebombed across Mississippi during the 1964 Freedom Summer.
Flashback: In 1955, Roy Bryant and J.W. Milam lynched Till. An all-white jury cleared the two white men, though they later admitted to killing him.
- The 1964 Freedom Summer project brought northern college students down to the American South to help register Black Mississippians to vote. Three civil rights workers were killed during the summer.
The intrigue: Some sites have sat abandoned and forgotten for years while others have been privately preserved or kept up by local officials.
- Inclusion into the National Park System would make the sites easier to find and more accessible — and it could open the door for federal funding to preserve them.
Between the lines: Civil rights advocates, historians and social justice travelers have been mapping out sites in recent years as a way to remember and confront episodes connected with trauma.
- Some researchers say the preservation of the sites is part of a movement called "memory work," where scholars engage with the past to revise accounts of history.
- The Equal Justice Initiative, a Montgomery-based legal advocacy group, has mapped thousands of sites linked to lynchings of Black people.
- The nonprofit Refusing to Forget also has located sites connected to the lynchings and massacres of Mexican Americans in Texas.
Yes, but: Advocates often face resistance from conservative local historic county commissions that fight efforts to erect historic markers at civil rights sites.
- The sites also are regularly vandalized if markers are placed.
|
<urn:uuid:3ccb0749-5bce-4068-ad43-d0b7dcc00593>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474853.43/warc/CC-MAIN-20240229202522-20240229232522-00012.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9360695481300354,
"pii_count": 0,
"score": 3.625,
"token_count": 525,
"url": "https://www.axios.com/2023/01/07/mississippi-civil-rights-sites-national-park-service?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axiospm&stream=top"
}
|
Mississippi civil rights sites may join National Park System
Why it matters: For decades, advocates have sought to preserve sites linked to two crucial moments of the Civil Rights Movement while seeking to remove monuments to Confederates who owned enslaved people.
Driving the news: The park system said it identified the nine sites after evaluating more than 220 across Mississippi and consulting with historians and activists.
- The Mississippi Civil Rights Sites Special Resource Study, sent to Congress late last month, concluded the sites met the criteria for potential inclusion.
- Congress can pass legislation authorizing the National Park Service to make the nine Mississippi sites part of the National Park System. President Biden also can designate the sites under the Antiquities Act.
Zoom in: Among the sites are the remains of Bryant's Grocery in Money, Mississippi. That's the store where 14-year-old Emmett Till, a Black teen, was falsely accused of grabbing and threatening Carolyn Bryant, a white woman.
- The site of Mt. Zion Methodist Church was another. It was one of 20 black churches to be firebombed across Mississippi during that Freedom Summer.
Flashback: In 1955, Roy Bryant and J.W. Milam lynched Till. An all-white jury cleared the two white men, though they later admitted to killing him.
- The 1964 Freedom Summer project brought northern college students down to the American South to help register Black Mississippians to vote. Three civil rights workers were killed during the summer.
The intrigue: Some sites have sat abandoned and forgotten for years while others have been privately preserved or kept up by local officials.
- Inclusion into the National Park System would make the sites easier to find and more accessible — and it could open the door for federal funding to preserve them.
Between the lines: Civil rights advocates, historians and social justice travelers have been mapping out sites in recent years as a way to remember and confront episodes connected with trauma.
- Some researchers say the preservation of the sites is part of a movement called "memory work," where scholars engage with the past to revise accounts of history.
- The Equal Justice Initiative, a Montgomery-based legal advocacy group, has mapped thousands of sites linked to lynchings of Black people.
- The nonprofit Refusing to Forget also has located sites connected to the lynchings and massacres of Mexican Americans in Texas.
Yes, but: Advocates often face resistance from conservative local historic county commissions that fight efforts to erect historic markers at civil rights sites
|
.
- The sites also are regularly vandalized if markers are placed.
|
At an underwater mountain in the middle of the Atlantic Ocean, scientists have drilled nearly a mile beneath the ocean floor and pulled up an unprecedented scientific bounty - pieces of Earth’s rocky mantle.
The record-breaking achievement has electrified geoscientists, who for decades have dreamed of punching through miles of Earth’s crust to sample the mysterious realm that makes up most of the planet. The heat-driven churn of the mantle is what fuels plate tectonics in the crust, giving rise to mountains, volcanoes and earthquakes.
The new expedition, by an ocean drilling vessel called the JOIDES Resolution, did not technically drill into the mantle, and the hole isn’t the deepest ever drilled beneath the ocean floor. Instead, researchers cruised to a special “tectonic window” in the North Atlantic where drills don’t have to tunnel as far to strike pay dirt. Here, the rocks of the mantle have been pushed close to the surface as the ocean floor slowly pulls apart at the nearby Mid-Atlantic Ridge.
On May 1, they began drilling the hole, known as U1601C. Andrew McCaig, the expedition’s co-chief scientist, expected to make a shallow “pinprick” because the record for drilling in mantle rock, set in the 1990s, was a mere tenth of a mile. The researchers hoped to recover enough samples to help elucidate how chemical reactions between mantle rocks and water could have given rise to life on our planet. But ocean drilling can be an uncertain enterprise - drills get stuck, or the long cores of rock being recovered may be only partial samples.
This time, though, the drill yielded tube after tube of dark rock, many of them surprisingly complete.
“It just kept going deeper, deeper and deeper. Then everyone in the science party said, ‘Hey, this is what we wanted all along. Since 1960, we wanted to get a hole this deep in mantle rock,’” McCaig said, speaking from the JOIDES Resolution minutes before another long section of dark rock was pulled on board. When the team stopped drilling on June 2, it had taken rock samples from as deep as 4,157 feet below the seafloor.
“We’ve achieved an ambition that’s been feeding the science community for many decades,” McCaig said.
Scientists on land have been eagerly keeping tabs on the expedition, anticipating a jackpot of data that will open a new window into the deep Earth and fuel years of research.
“We are just to the moon with excitement about what they’ve got - an amazing section of rocks,” said Andrew Fisher, a hydrogeologist at the University of California at Santa Cruz, who advises a graduate student who is aboard the ship and has been monitoring their progress remotely.
Making it to the Moho
In 1909, a Croatian seismologist named Andrija Mohorovičić discovered a boundary within Earth.
Mohorovičić monitored how seismic waves generated by an earthquake traveled through the ground, similar to using X-rays to probe inside the human body. Closer to the surface, seismic waves traveled at one speed, but past a certain zone all around the globe, they traveled faster, suggesting the waves were moving through two distinct layers of rock.
This discontinuity, called the Moho, is now recognized as the line between Earth’s crust and its mantle. Its depth varies, but the mantle generally begins about five miles beneath the ocean floor and roughly 20 miles beneath the continents.
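To make the idea concrete, here is a minimal two-layer travel-time sketch in Python. The numbers are assumed, typical textbook values (roughly 6 km/s for crustal P-waves, 8 km/s for the uppermost mantle, a 30-kilometer-thick crust), not measurements from the article or the expedition; the point is only to show the kink in arrival times that a boundary like the Moho produces.
```python
import math

# Assumed, typical values for illustration only (not from the article).
V_CRUST = 6.0    # km/s, P-wave speed in the crust
V_MANTLE = 8.0   # km/s, P-wave speed in the uppermost mantle
H_CRUST = 30.0   # km, crustal thickness (close to the ~20 miles cited for continents)

def direct_time(x_km):
    """Arrival time of the wave that travels straight through the crust."""
    return x_km / V_CRUST

def refracted_time(x_km):
    """Arrival time of the head wave that dives to the Moho, runs along the
    faster mantle, and returns to the surface (standard two-layer refraction)."""
    delay = 2 * H_CRUST * math.sqrt(V_MANTLE**2 - V_CRUST**2) / (V_CRUST * V_MANTLE)
    return x_km / V_MANTLE + delay

# Beyond this distance the mantle-refracted wave arrives first -- the change in
# apparent speed that pointed to a second, faster layer beneath the crust.
crossover_km = 2 * H_CRUST * math.sqrt((V_MANTLE + V_CRUST) / (V_MANTLE - V_CRUST))

for x in (50, 100, 200, 400):
    print(f"{x:>3} km: direct {direct_time(x):5.1f} s, refracted {refracted_time(x):5.1f} s")
print(f"crossover distance ~ {crossover_km:.0f} km")
```
Closer than the crossover distance (about 160 km with these assumed values) the direct wave arrives first; farther out the refracted wave wins, which is the two-speed pattern Mohorovičić read out of earthquake records.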
“Think of the crust in the way that you have a beautifully iced cake, but what you want is the cake, not the icing,” said Jessica Warren, a professor of Earth sciences at the University of Delaware who has also been monitoring the project’s progress remotely. “If we want to understand the Earth as a whole, there’s a huge, huge amount of rock below that.”
The mantle isn’t a complete unknown. Occasionally, volcanic eruptions spew out bits of it — chunks of greenish peridotite, the type of rock that dominates the upper mantle, embedded in basalt rock. But these samples, called mantle xenoliths, have their limits, because they are often chewed up and weathered from their trip to the surface. There are also ophiolites, sheets of oceanic crust tinged with some of the upper mantle that were uplifted and plastered onto the land. But they too have been altered by the trip.
What scientists have long craved was a drilled sample of mantle rock. Project Mohole, a famous ocean expedition, set out to drill through the thinner crust on the ocean floor to reach the mantle in 1961 but failed.
Portions of the ocean floor where the mantle is closer to the surface seemed like an opportunity to take a sample without the technical difficulties of drilling through miles of crust. That’s where the scientists aboard the JOIDES Resolution set their sights for one of the vessel’s last missions before its scheduled retirement in fiscal year 2024.
The team departed Ponta Delgada in Portugal’s Azores Islands in April and headed to the Atlantis Massif, an underwater mountain about the size of Mount Rainier. Its primary mission wasn’t to drill the deepest hole yet in mantle rock, but to sample those rocks for clues about how, in the absence of life on infant Earth, small organic molecules might have formed as rocks reacted with water.
“This could be a way that you go from just having basically water and rock,” said Susan Lang, the co-chief scientist of the expedition and a scientist at the Woods Hole Oceanographic Institution. “That produces hydrogen, [and] that hydrogen is a really big fuel to things like the formation of smaller organic molecules, and that can then combine with other organic molecules and lead to early life.”
Going deeper and getting fresher
The rock cores extracted from hole U1601C are dominated by peridotite, the most common type of rock found in the upper mantle. The samples have been altered by their exposure to seawater, and scientists are already beginning to debate how to interpret the findings.
Most of the mantle is buried beneath the crust, not exposed to the ocean the way it is at this site. That raises the fundamental question: How closely do the latest samples mimic the rest of the mantle? Do the rocks truly represent mantle, or are they lower crust?
And for that matter, is the boundary between mantle and crust a sharp boundary, or more of a gradual transition? The samples aren’t pure peridotite, and that could be a key piece of evidence.
“It’s a bit of a hash, but that’s maybe what the lower crust is,” Fisher said, listing off various types of rock that have been reported in daily science logs. “This is really unusual - more than a kilometer of highly altered, lower crustal and/or upper mantle rock. I’d say it’s a mix.”
The scientists have been so busy processing the enormous volume of rock they’ve recovered that they’ve had little opportunity to study the samples in detail, or even reflect on the magnitude of the achievement. The drill bits need to be switched out every 50 hours. The team aboard works in 12-hour shifts, not wasting a minute of time.
On a recent morning, Lang became distracted and excused herself from an interview when she saw seawater spray through a window.
“I saw this seawater stage, which is always a very dramatic point where they detach this one thing and a bunch of seawater sprays everywhere,” Lang said. “Usually, that’s my warning that a core is coming on deck in about the next five minutes.”
What excites all of them is the hope that the deepest samples will yield even “fresher” rock, less altered by other processes and closer to what the mantle is really made of.
“The deeper we get in there, the closer we’re getting to what those rocks look like, closer to what the mantle looks like,” Warren said.
|
<urn:uuid:b7ffc55c-576f-4dd7-889b-97cfdfb5f25e>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510481.79/warc/CC-MAIN-20230929022639-20230929052639-00085.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9534809589385986,
"pii_count": 0,
"score": 3.484375,
"token_count": 1737,
"url": "https://www.adn.com/nation-world/2023/06/06/in-a-geologic-triumph-scientists-drill-a-window-into-earths-mantle/"
}
|
A war of self-determination is a war of clear distinctions guided by reason, not by groomed perceptions.
Firstly, Amba claims to be fighting for people called Anglophones. By definition, Anglophones are any group of people in Cameroon who speak English. In reality, there is no such group in Cameroon. History teaches us that at the negotiating table, those who spoke English (perceived to be those under the British) were identified as Anglophones, and those under the French as Francophones because they spoke French. However, we do not speak English, so why accept something that is untrue about yourselves? Ignorance.
Secondly, Amba claims to be fighting against marginalisation, yet marginalisation is, by definition, what those with arms are doing to the masses.
Amba claims to be fighting against corruption, but corruption is defined by Amba's own actions with MTN money and GoFundMe collections.
Amba claims to be fighting against dictatorship, but dictatorship is what Amba bush fallers abroad are imposing on those on the ground in the name of leadership.
You can list everything Amba claims to be fighting against and you will find it in Amba's own DNA. So what is the problem that Amba independence will address, and how?
Fighting for Southern Cameroons is sheer stupidity, because you cannot fight for Southern Cameroons without fighting for British Cameroons, and you cannot fight for British Cameroons without fighting for Greater Kamerun, which reduces Amba's knowledge to zero. It is simply a rational proposition that demonstrates your understanding of the history.
To summarise, this simply means we have no distinct case that warrants independence. The shift from federalism to a unitary state has no effect on any Cameroonians. When you question Amba's Anglophone perception, they always direct their frustration at tribes (Ewondo, Beti, Bulu) in the name of enmity against someone called Francophones. When you speak to a Beti, you hear the same complaints of marginalisation against the Bulu or the Ewondo. Worst of all, none of them identifies as Francophone.
This tells us that what you claim as an Anglophone problem is a general perception in Cameroonians' DNA against each other. More than that, it also tells us we have a national identity crisis.
I know some blind radicals in the name of Amba will rush to insults and criticism, but here is your case: if Amba were to become a nation, would the national identity be Anglophone or Southern Cameroonian? Ambazonia is the fingerprint of frustrated Anglophones abroad (the SCNC) who have recognised this issue and are trying to mask it with an alternative name.
This brings us to one of the issues fuelling the crisis: the notion of the Anglophone, the willingness of non-English speakers to accept being called Anglophones, and the stereotype of the 'Anglo fool'. Is the stereotype not a true reflection of our misconceptions? Acknowledging contested facts shows our willingness to accept reality and learn. The Anglophone stereotype is no different from the Bamileke stereotype, accepted as an identity until it is examined under scrutiny. Nobody comes from 'Anglophone', just as nobody comes from 'Bamileke', yet we hear of Anglophone communities, just as we hear of Bamileke communities, around Yaoundé (Yde), but never of an Anglophone community in Bamenda (Bda). When you claim an Anglophone culture in theory, the reality is totally the opposite: you see different cultures and tribes coming forward. The same applies to the Bamileke, which proves I am absolutely correct.
We should always recognise that Yaoundé is not just the capital or a town but someone's village, just as we call some people 'came no go' in the South West (SW).
I am from Bali. In Bali we are not Anglophones, although I may be an Anglophone. We have a Bali culture, not an Anglophone culture.
If I am wrong, consider my grandfather, who was born in 1918, and those of our parents who speak neither French nor English: are they not Cameroonians?
|
<urn:uuid:667bf601-96ff-4e07-a07c-1e33f2e9337f>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224656737.96/warc/CC-MAIN-20230609132648-20230609162648-00245.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9502912163734436,
"pii_count": 0,
"score": 2.6875,
"token_count": 821,
"url": "https://welovecameroon.wordpress.com/2023/04/18/amba-war-of-self-determination-independence/"
}
|
This year’s Atlantic hurricane season is expected to be near average, officials with the National Oceanic and Atmospheric Administration said Thursday.
Forecasters at the agency are predicting 12 to 17 named tropical storms, five to nine of which could become hurricanes. They expect as many as four of those could strengthen into major hurricanes – category 3 or stronger.
The last time there were fewer than the average of 14 named storms was in 2015. Hurricane season technically begins June 1, though storms have developed before that date in the past.
“As we saw with Hurricane Ian, it only takes one hurricane to cause widespread devastation and upend lives. So regardless of the number of storms predicted this season, it is critical that everyone understand their risk and heed the warnings of state and local officials,” FEMA Administrator Deanne Criswell said at a news conference. “Whether you live on the coast or further inland, hurricanes can cause serious impacts to everybody in their path.”
The El Niño-Southern Oscillation is a naturally occurring climate pattern impacting the ocean and atmosphere in the tropical Pacific, and consists of opposite phases known as El Niño (the warm phase) and La Niña (the cool phase). It has the potential to significantly impact global weather patterns and its phase (El Niño, La Niña or neutral) is one of the primary “knobs” that controls hurricane activity in the Atlantic and Pacific basins.
El Niño is characterized by warmer-than-average sea surface temperatures in the tropical Pacific Ocean, and tends to increase upper-level winds over the Atlantic, which disrupt and suppress hurricane formation.
El Niño’s influence on this season is still somewhat uncertain because it is only just beginning to develop. That it will eventually form this year is a “foregone conclusion,” said Phil Klotzbach, a research scientist in the Department of Atmospheric Science at Colorado State University, who was not involved in the NOAA outlook.
But there’s another factor that could negate or even outweigh El Niño’s influence this year: Sea surface temperatures in the Atlantic Ocean are already at or near record-high levels, and in a way that “matches up quite well with what we associate with active Atlantic hurricane seasons,” Klotzbach told CNN.
“If these warm anomalies in the North Atlantic persist through the hurricane season, it has the potential to cause less of an El Nino (wind) shear impact than we normally see,” he said, and that possibility is even showing up in “several climate model forecasts” for the summer and fall.
Hurricanes are natural phenomena shaped by complex atmospheric and oceanic dynamics. But they are now increasingly influenced by human-caused climate change.
As our planet continues to warm due to fossil fuel pollution, the impacts are manifesting in the intensification and altered behavior of these destructive storms. Through a combination of warmer waters, increased atmospheric moisture and rising sea levels, the climate crisis has set the stage for hurricanes to pose unprecedented risks to coastal communities.
If recent history is any indication, the US will face the threat of a high-end landfalling hurricane this season. There have been six category 4 or 5 hurricanes to hit the mainland since 2017, the most ever during a six-year period. Climate change, especially the buildup of heat increasing the ocean’s temperature, is leading to a larger percentage of hurricanes reaching the highest categories on the scale – a trend that is likely to continue as global temperatures climb.
The key difference between tropical depressions, tropical storms and hurricanes lies in their wind speeds and the level of organization within the system.
While a tropical depression represents the earliest stage of cyclone development, named tropical storms exhibit more structure and stronger winds. Hurricanes — the most powerful and dangerous of the three — possess the strongest winds and a well-defined eye, making them capable of causing extensive damage over large areas.
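For readers who want the actual cutoffs behind those labels, the National Hurricane Center classifies tropical cyclones by maximum sustained wind: systems become named tropical storms at 39 mph, hurricanes at 74 mph, and major hurricanes (category 3 and up) at 111 mph. The short sketch below simply restates that scale; it is an illustration, not code used by forecasters.
```python
def classify_storm(sustained_wind_mph: float) -> str:
    """Label a tropical cyclone by its maximum sustained wind speed in mph,
    using the standard National Hurricane Center thresholds."""
    if sustained_wind_mph < 39:
        return "tropical depression"
    if sustained_wind_mph < 74:
        return "tropical storm (named)"
    if sustained_wind_mph < 111:
        return "hurricane (category 1-2)"
    return "major hurricane (category 3 or stronger)"

for wind in (30, 45, 80, 130):
    print(f"{wind} mph -> {classify_storm(wind)}")
```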
|
<urn:uuid:f9c1816b-97d3-49f4-9e29-a9054f8a1c7f>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510368.33/warc/CC-MAIN-20230928063033-20230928093033-00107.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9506182074546814,
"pii_count": 0,
"score": 3.046875,
"token_count": 810,
"url": "https://edition.cnn.com/2023/05/25/weather/hurricane-season-outlook-noaa-climate/index.html"
}
|
Maui wildfires spark new fears about outdated U.S. power grid
The U.S. electric grid is outdated and overtaxed, and it's only growing more vulnerable under the pressures of soaring demand, extreme weather and climate change.
Driving the news: The conflagration that decimated Maui — in the process putting Hawai'i's utility provider in massive operational and financial jeopardy — is a stark reminder of just how bad this problem is.
Why it matters: Despite years of talk about modernization, the world's largest economy is struggling with a vulnerable grid that does not appear prepared to deal with soaring heat, a warming climate and other disruptive events that could literally turn off the lights for millions of people.
- The already daunting price tag for fixing the grid only gets steeper and steeper as these challenges pile up. As University of Texas at Austin grid expert Joshua Rhodes told Axios Closer's Nathan Bomey this week, the U.S. is "under invested by probably a few trillions of dollars."
- He's not exaggerating: a 2021 report by the National Conference of State Legislatures cited estimates that said at least $1.5 trillion would be needed through 2030 "to modernize the grid just to maintain reliability."
By the numbers: To get a sense of how large the scale of the problem is, consider just how sprawling the U.S. grid is. According to the Department of Energy, there are over 9,200 electric generating units that crank out more than 1 million megawatts of capacity, with 600,000+ miles of transmission lines.
- But natural disasters like wildfires, hurricanes and other calamities are heaping pressure on an electric infrastructure that's "aging, and it is being pushed to do more than it was originally designed to do," the Energy Department says.
What we're watching: With some of these challenges in mind, the Biden administration's signature Inflation Reduction Act dedicated billions to utilities and clean energy.
- "In broad economic terms…the IRA is working as planned. As much as $270 billion has been announced for utility-scale power developments and $50 billion for electric vehicle (EV) supply chains." Eurasia Group analyst Milo McBride wrote in a recent analysis.
Yes, but: That barely scratches the surface of what's needed. The NCSL points out that "while needs vary from state to state, the latest report from the American Society for Civil Engineers found that current grid investment trends will lead to funding gaps of $42 billion for transmission and $94 billion for distribution by 2025."
The bottom line: "Regardless of the exact numbers, investment will be needed to incorporate a more diverse energy supply, increase resiliency and upgrade infrastructure," the NCSL adds.
|
<urn:uuid:ee7b690d-ae07-4a58-8977-785eb4372b7f>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474660.32/warc/CC-MAIN-20240226130305-20240226160305-00322.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9478951096534729,
"pii_count": 0,
"score": 2.53125,
"token_count": 566,
"url": "https://www.axios.com/2023/08/18/maui-fire-electric-grid-outdated-dangerous?utm_campaign=Hot%20News&utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz--hhfNyhFYxPvjt1TI2K2iy0fA00gmFGjqhPszMTZMOiZgQzX-T4e6bk6MnlohzWlaJpxeV"
}
|
Gerson: History of Israel is tragedy to triumph
With Fourth of July on the horizon, a certain reflection comes to my mind.
It is how various nations celebrate their Independence Day in different ways.
Here in the United States, July Fourth is mainly a day of light-hearted, rather raucous festivity. Fireworks, barbeques in yards, sales in large stores, and captivating sporting events fill the air. It is fun, and most of us look forward to a nice day off.
In Israel, it is entirely different. Actually two holidays are bound together, one day after another. It happened there a few days ago.
In the spring, on the Fourth of Iyar according to the Jewish calendar, with great profundity and emotion, Israel observes its Memorial Day, Yom Hazikaron. It is very solemn and sad. On that day, Israel remembers all the wars of defense it has had, and the losses of soldiers, a far greater percentage of its population than in most countries.
There are observances at cemeteries throughout the nation.
Then, the very next day, Iyar 5 (actually at sundown the night before), the mood explosively changes to happy exaltation as Independence Day, Yom Ha'atzmaut, is celebrated: the day when the modern state of Israel was founded in 1948. There are parades, parties, dancing in the streets. A day of complete joy.
The symbolism here is remarkable. What is being marked is a passage from tragedy to triumph. And, indeed, that is what Jewish history, in its entirety, is all about. So many times this has happened.
Recently, we Jews observed Passover. We recalled how for 400 years, our people languished in Egypt as slaves. But around 1200 B.C., with God's help, the triumphant Exodus and coming to freedom in Israel took place.
Years later, in 586 B.C., another tragedy for the Jewish People occurred. The beautiful Temple in Jerusalem was obliterated by the Babylonians in battle, and Jews were exiled to Babylonia. But, 50 years later, with the help of the victorious Persian king Cyrus in the region, Jews courageously returned to Israel and rebuilt their lives there.
And still another example, in the late 19th century: Jews suffered tremendously in Europe. And out of this, the Zionist movement was born. Jews began returning to Israel, building a life there, and eventually, in 1948, the modern state of Israel was founded.
Yes, from tragedy to triumph. But there is a very important corollary in all this. Each time, someone, feeling God, had to step forward and lead the way to that triumph. In Egypt, it was Moses who led his people out of that place, across the sea, through the desert, and on to Israel. While in Babylonian exile, the eloquent prophet Isaiah spurred on his people with his words: "Comfort, comfort ye, my People sayeth God ... your time of servitude is done.... God has opened the way for you." (Isaiah 40). And, in 19th century Europe, it was Theodor Herzl, who founded the Zionist movement, leading Jews to Israel.
This is how history, under God, works. Leaders, inspired by God, direct the way from tragedy to triumph. In our own country, Rev. Martin Luther King Jr. did this. So, let us always thank God that He has done this for the betterment of humankind.
|
<urn:uuid:d14c1d01-09e8-489b-ae06-9c22b59b6458>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654871.97/warc/CC-MAIN-20230608103815-20230608133815-00788.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9646261930465698,
"pii_count": 0,
"score": 2.859375,
"token_count": 736,
"url": "https://www.onlineathens.com/story/opinion/columns/2023/05/05/history-of-jews-is-one-of-tragedy-to-triumph/70180260007/"
}
|
Prominent UFO sightings in Michigan's history
Sunday's unidentified flying object incident was not an isolated one in Michigan.
Multiple UFO sightings throughout the state have garnered headlines, remaining unsolved to this day:
Lake Huron — 2023
U.S. Air Force and National Guard pilots shot down an unidentified object flying 20,000 feet over Michigan's Upper Peninsula on Sunday.
This marked the third UFO shot down by the United States this week. On Friday, an unidentified object was shot down in Alaska, then again in Canada on Saturday, and now in Michigan.
Pentagon Press Secretary Brig. Gen. Pat Ryder said the object was not a military threat, but it was "a safety flight hazard and a threat due to its potential surveillance capabilities. Our team will now work to recover the object in an effort to learn more.”
Limited information has been given to the public about these objects.
In a statement, U.S. Rep. Debbie Dingell said, “the increasing incidents of unidentified objects, the latest over Lake Huron in Michigan airspace, are disturbing. We need the facts about where they are originating from, what their purpose is, and why their frequency is increasing."
More:U.S. downs unidentified object over Lake Huron after flight restrictions
Lake Michigan — 1994
Unfamiliar lights filled up the skies — nearly 200 miles along Lake Michigan's shoreline from Ludington south to the Indiana border, on March 8, 1994. This incident is known as one of the largest UFO sightings in Michigan history.
UFO reports were made by hundreds, not only to 911 dispatchers but also to the Mutual UFO Network's (MUFON) Michigan chapter. Sightings were reported by citizens, police officers, a meteorologist and others.
Cindy Pravda of Grand Haven told the Free Press in 2019 that four lights in the sky looked like "full moons" over the line of trees behind her horse pasture. She still believes the lights were UFOs.
A conversation between a National Weather Service meteorologist and a Holland police officer was reported by the Detroit Free Press in 1995.
“There were three and sometimes four blips, and they weren’t planes,” the NWS radar operator said. “Planes show as pinpoints on the scope, these were the size of half a thumbnail. They were from 5 to 12,000 feet at times, moving all over the place. Three were moving toward Chicago. I never saw anything like it before, not even when I’m doing severe weather.”
Wurtsmith Air Force Base — 1975
In October 1975, a bright white disc was spotted hovering over Wurtsmith Air Force Base. A plane was sent to pursue it, but the UFO reportedly shot into space before the plane could reach it.
“There were nuclear weapons at that base,” Bill Konkolesky, Michigan director of the Mutual UFO Network, told the Free Press in 2021. “It was seen on the ground by the soldiers on the ground. It was seen from the air traffic tower, it was caught on radar, so multiple ways that this was being observed. And then the other thing, too, is that within a two-week period, at least four other bases altogether in the United States that have nuclear weapons were visited by a very similar UFO.”
More: 'Unsolved Mysteries' investigates 1994 alleged UFO sighting that still haunts witnesses
Swamp gas — 1966
Speculation began on March 14, 1966, when Washtenaw County officers and Selfridge Air Force Base observers said they saw lights in the sky, moving at high speeds. Then on March 20, 1966, the sheriff's office received reports of a UFO landing in a swamp in Dexter Township.
“It was sort of shaped like a pyramid, with a blue-green light on the right-hand side and on the left, a white light. I didn’t see no antenna or porthole. The body was like a yellowish coral rock and looked like it had holes in it — sort of like if you took a piece of cardboard box and split it open,” truck driver Frank Mannor told WDIV-TV (Channel 4) at the time. Mannor went into the swamp and "got about 500 yards of the thing" with his son Ronald.
Hundreds of sightings continued throughout the county all week.
Dr. J. Allen Hynek from Project Blue Book, a part of the Air Force that investigated UFOs, said the incident was just swamp gas. This drew criticism and accusations of a government cover-up.
Kinross — 1953
An Air Force jet disappeared over Lake Superior on Nov. 23, 1953.
A blip appeared on the radar in a restricted air space near Soo Locks, an important commercial gateway, and the U.S. Air Force at the Kinross base sent two experienced pilots in an F-89 Scorpion jet to investigate.
The jet chased the object for about 30 minutes, and then the two radar blips, the jet and the unidentified object, seemingly intersected over Lake Superior. Radio contact was lost, and the Air Force pilots were never heard from again.
Reporter Emma Stein contributed to this report.
|
<urn:uuid:8ac609c8-8125-4afc-94c5-5ec7b00de042>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00053.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9716466069221497,
"pii_count": 1,
"score": 2.578125,
"token_count": 1103,
"url": "https://www.freep.com/story/news/local/michigan/2023/02/13/michigan-ufo-sightings/69898235007/"
}
|
At a Glance
- Encounters between humans and polar bears are becoming more common as Arctic ice shrinks.
- One recent incident killed two people, but fatal attacks are extremely rare.
- Climate change is listed as the primary threat to polar bear survival.
Sign up for the Morning Brief email newsletter to get weekday updates from The Weather Channel and our meteorologists.
Encounters between people and polar bears are likely to rise as Arctic ice continues to shrink due to global warming, and that has communities in Alaska taking a new look at ways to keep their residents safe.
The renewed interest in protections, including polar bear patrols, comes after a fatal attack in remote Wales, Alaska. A 24-year-old woman and her 1-year-old son were killed.
Experts say such attacks are extremely rare. The one in Wales was the first in Alaska in 30 years, according to The Associated Press. But scientists say polar bears are being forced to rely more on land than ice for their habitat and while much of the areas where they live are remote and sparsely populated, that puts them in closer proximity to people.
Polar bears are listed as a threatened species, with climate change cited as the biggest factor for the designation. They are also protected under the Marine Mammal Protection Act. Both designations prohibit actions that would disturb, injure or kill them, unless necessary for human safety.
Wales used to have regular polar bear patrols, but those stopped in recent years due to the COVID-19 pandemic, a lack of polar bears and other reasons, according to the AP.
“There’s absolutely discussion now in Wales, saying, ‘Hey, maybe things have changed to the point that we need this, and how do we do that?’” said Susan Nedza, the chief administrator for the Bering Strait School District, which includes Wales.
About 150 people live in Wales. The community sits just 50 miles across the Bering Strait from Russia, at the westernmost point of mainland North America.
The polar bear attack on Jan. 17 killed Wales resident Summer Myomick and her son Clyde Ongtowasruk just as they left a school building. Witnesses said the polar bear charged amid low visibility from a snow squall. The school was put on lockdown, and window blinds were drawn so that students couldn't see what was happening.
“Just horrific. ... Something you never think you would ever experience,” Nedza said.
The U.S. Fish and Wildlife Service and other agencies say they are investigating the attack and what measures could be taken in the future to prevent fatal encounters.
Polar bears are the largest carnivorous land mammals on Earth. They can weigh in excess of 1,700 pounds and be up to 8 feet long.
|
In mid-November 2021, a great storm begins brewing in the central Pacific Ocean north of Hawai‘i. Especially warm water, heated by the sun, steams off the sea surface and funnels into the sky.
This article is from Hakai Magazine, an online publication about science and society in coastal ecosystems.
A tendril of this floating moisture sweeps eastward across the ocean. It rides the winds for a day until it reaches the coasts of British Columbia and Washington State. There, the storm hits air turbulence, which pushes it into position—straight over British Columbia’s Fraser River valley.
Clouds gather and darken. Below, a patchwork of farms and subdivisions sprawls along the Fraser River from its mouth, south of Vancouver, to the eastward mountain slopes, and southeast across the US border. At the center of the valley lies Abbotsford, a city of around 150,000 people nestled in a fingerprint-like depression between two mountains. As the stream of humid air rises toward the peaks, it cools, condenses, and bursts.
To Murray Ned, it sounds like a creek is overflowing outside his home in Kilgard, on a hillside within Abbotsford that’s part of the Semá:th (Sumas) First Nation reserve. Lying in bed, Ned listens to water overflow his rain gutters and splash two stories to the ground. Rain is common in Abbotsford in November, but it’s usually quiet. And it usually lets up.
Over the next two days, nearly a month’s worth of rain dumps here and in other parts of the province. The resulting floods and landslides kill at least six people, rip apart buildings, and buckle roads. In Abbotsford, more than 1,000 homes are swamped and 640,000 farm animals perish as rivers reclaim agricultural land in the floodplain.
But amid the losses, Ned sees something else. The Tuesday evening of the flood—after he has vacuumed water from his mother’s basement and moved the family’s horses to high ground—the deluge stops. Ned settles into a folding chair in his backyard, pulls out a Kokanee lager, and takes in the view. The flood laps knee-high against his horse barn. Semá:th Xó:tsa, Sumas Lake, has returned to the territory.
Once a 6,475-hectare body of water, Sumas Lake brimmed with sturgeon, trout, and five species of salmon—sustaining the Semá:th people and larger Stó:lō Nation for millennia. The lake swelled with fall rains and spring snowmelt, and shrank during summer, leaving fertile ground between the high and low water marks where wild potatoes, berries, and blue camas flowers with edible bulbs thrived. By 1924, though, settlers had converted the lakebed into permanent farmland with a system of dikes, canals, and pumps. But after the 2021 storm, everything in the lowlands is submerged again, from cornfields to the Trans-Canada Highway to a castle-themed fun park.
Most of the floodwaters have come from the Nooksack River. High runoff shifted the Nooksack’s course from its usual east–west flow in the United States and sent it rushing northward into Canada. Stó:lō elders know it can do this. For most of the postglacial period, before natural sedimentation deflected its course, the Nooksack fed the Sumas and Fraser Rivers as well as Sumas Lake. Big floods today can still send the river back north, borders be damned. Water was made to change states. Not so long ago, Sumas Lake had been there to catch it.
As Ned surveys the moonlit water, glistening around horse barns, poultry sheds, and power lines, sturgeon and coho salmon swim old migration routes beneath the surface. “To see Mother Nature threaten [the region] but also see the lake in all its glory again was pretty amazing,” Ned tells me later. Alongside the lake, he sees the possibility of a different future: one that restores space and flexibility for water and that keeps communities safer from the extremes of climate change.
The storm that hit Abbotsford is known as an atmospheric river. These systems are common along the west coast of North America and midlatitudes around the world. They account for one-third to one-half of the annual precipitation in some areas and represent a major source of fresh water for many countries. But studies suggest that atmospheric rivers are becoming more volatile and are delivering water in bigger bursts. Paradoxically, recent storms, including the one in British Columbia, have occurred between some of the hottest and driest summers on record. When they deliver needed rain, it’s too much for parched soils and concrete channels to contain. This pendulum swing between deluge and drought—what meteorologists have started calling “weather whiplash”—will only grow more pronounced as the planet warms.
Ned and other members of Semá:th First Nation have begun advocating to revive at least part of Sumas Lake for the ecosystem and Stó:lō culture, and also for flood control and natural water storage that will make the region more resilient against future disasters. So far, they haven’t gained much traction in Abbotsford, but efforts elsewhere suggest they’re onto something. Perhaps nowhere provides more examples than California, which has long ridden the seesaw between dangerous downpour and punishing drought. Whether through foresight or surrender, communities there are giving up new ground for water and restoring some natural systems, to work with rain when it comes.
Sometimes when heavy rain arrives in California, it originated near Hawai‘i. Warm seawater evaporates and fills the air with water vapor, which gets blown across the Pacific until it collides with coastal mountains and falls as rain or snow. By the 1990s, meteorologists believed that a windy layer in the lower atmosphere called the low-level jet likely carried this tropical moisture. But just how much moisture, how warm, how windy, and where exactly it flowed remained mysterious.
In 1998, the National Oceanic and Atmospheric Administration set out to answer these questions with a program called CALJET that deployed sensor-studded planes to fly into West Coast storms. By releasing instruments called dropsondes, which look like mail tubes attached to small parachutes, the research team measured wind speed, temperature, and moisture content at different altitudes. A young scientist named Marty Ralph directed the crew from the flight deck.
Ralph—who founded and now directs the Center for Western Weather and Water Extremes at Scripps Institution of Oceanography at the University of California San Diego—first got interested in storms as a kid living in Arizona. He’d marvel from his bedroom window at the monsoon rains that made the desert bloom. Later, as a graduate student at the University of California, Los Angeles, Ralph mounted a rain gauge outside his bungalow during one of the state’s worst droughts on record. He was shocked to find 100 millimeters of water in the gauge one winter night; it ended up being half of the local rainfall for the entire year. “I got an early dose of how important individual storms can be in California,” Ralph says.
On his CALJET flights, Ralph got to know these storms more intimately, from their bumpy interiors and thick foggy cloaks to their distinct aroma that wafted through the plane’s air filters. “It smelled tropical,” Ralph recalls, “just sticky and warm.” The scent was coming from far away. In fact, satellite images later revealed that the bands of water vapor stretched a couple of thousand kilometers from the tropics to the coast, and they were as wide as the distance between Vancouver, British Columbia, and Portland, Oregon. But the most impressive part was the moisture, or “juice,” that the dropsondes measured: “The equivalent of 25 Mississippi Rivers of water, but as vapor instead of liquid,” Ralph says.
Once the team crunched the numbers, they learned that their findings lined up with a few landmark studies of the day. One, by researchers Yong Zhu and Reginald Newell, helped coin an evocative term—atmospheric river. “That’s when the light bulb went on,” Ralph says. “We were studying a river in the sky.”
In the quarter-century since the storms got their name, researchers have learned that on average a half-dozen of these systems are moving moisture around the planet at any given time. Local monikers point to their origins: the Rum Runner sends juice from the Caribbean to western Europe; the Pineapple Express is the famous rainmaker that whisks wet air from Hawai‘i to the West Coast. There, atmospheric rivers act like a traveling sprinkler system, spraying up at Alaska in late summer and swiveling down to California by winter or spring.
These sprinklers are usually beneficial, but as in British Columbia, they can become hazardous cascades. During California’s Great Flood over the winter of 1861–1862, a series of atmospheric rivers made it rain for 43 days. The floodwaters formed an inland sea that stretched from the under-construction capitol building in Sacramento to the bottom of the ancient Tulare Lake basin in the Central Valley, and beyond. Thousands of people and one-quarter of the state’s cattle died.
Paleo records from sediments show that atmospheric river–induced floods of at least that magnitude have occurred in California roughly every 200 years over the past two millennia. Today, even storms lesser than these mega rains cause 90 percent of flood damage along parts of the West Coast.
Atmospheric rivers also pummel the coasts of western Europe, Africa, South America, and New Zealand. They were responsible for the horrific floods in Pakistan in August 2022 that killed nearly 1,500 people and displaced 33 million more. Their warm moisture is so good at liquefying snow and ice that they’re thawing parts of Greenland and the Arctic. And because a warmer atmosphere can hold more water vapor, climate change is supercharging these storms. “You have more fuel basically,” Ralph says.
But at the same time that rainfall is intensifying, the droughts that occur are deepening. California—which already faces the most variable precipitation in the United States—will likely see a future with fewer storms overall and longer dry periods between them. Combined with the fact that the storms that do come are loaded with more water, this means the wet times will get wetter and the dry times will get drier. More of California’s water is going to pour down in floods.
“Flooding and drought really are connected,” says Michael Dettinger, a hydroclimatologist who’s been studying atmospheric rivers alongside Ralph since the mid-2000s. “One is just the flip side of the other.”
Sacramento in mid-August 2022 is searing. It’s not just the 40 °C temperature that’s setting records; California is about to close out its driest three-year period since 1895. In the summer, the Sacramento Valley, part of the larger Central Valley that grows one-quarter of the United States’ food, is usually a checkerboard of gold and green—the gold of spring wheat mixed with glowing green stalks of sushi rice that sway in the warm breeze. But in 2022, Californian farmers planted less than half the amount of rice they projected. That’s because there wasn’t enough water. Jacob Katz squints through his sunglasses at the vast dust bowl that’s resulted. “No one has ever seen this before,” he says, as we drive down a dirt lane surveying the barren fields.
The senior scientist at California Trout, Katz has been working for more than a decade to reconnect the Sacramento River—the state’s largest source of fresh water—to its adjacent floodplains. Like the leaders of Semá:th First Nation to the north, Katz knows that making more room for water could give the landscape and its inhabitants a buffer against weather whiplash.
At the end of the road we’re traveling is a case in point: dead grass gives way to a watery oasis, glimmering like a mirage in the sun. Dozens of white-faced ibises dip and shimmy their wings while a sandpiper sips a cool drink. Within minutes, an egret glides down from the hazy sky and sinks its black legs into the mud. All of the life in the valley, it seems, is taking refuge in this shallow 55-hectare lake.
The “lake” is actually a research plot at Davis Ranches, a heritage farm near the Sacramento River that’s testing new approaches to water management. One silver lining of the drought is that fallowed farmland can become wildlife habitat, says the farm’s manager, John Brennan, who’s checking on the site with Sandi Matsumoto, water program director for the Nature Conservancy, a partner on this project. “We need to build all the habitat we can to get ready for the dry years,” Brennan says. That’s because habitat can absorb water when it’s available—ideally during floods—and store it in the soil and underground for humans, plants, and animals to tap into during dry spells.
This artificial wetland was designed to assist shorebirds that visit Sacramento during their annual migration from the Arctic to South America on a path known as the Pacific Flyway. Shorebirds have declined by about 40 percent on the West Coast, and they’re particularly vulnerable to drought. But even seasonal habitat like this can make a big difference for birds, Matsumoto says. And it requires far less water than rice farming.
The project is a summer addition to what Davis Ranches and other farms already do for shorebirds and waterfowl in winter. After the rice harvest wraps in the fall, farmers intentionally flood their fields at the peak of the birds’ winter migration, when river water is more abundant. If done at scale, this could reduce the flooding of some infrastructure and communities while allowing more water to soak into the ground, which in turn can help support wildlife and humans at the mercy of drought. “So we have these two things: water insecurity and flood,” Katz says. “But they actually have a common solution—puddles.”
Historically, when rain and snowmelt surged down from the Sierra Nevada—California’s craggy backbone that, in wet years, is ground zero for atmospheric rivers—the Sacramento would sometimes swell to 70 times its average flow and spill across a marshy mosaic that connected to a wetland ecosystem larger than the state of Connecticut. Tens of millions of birds and one of the world’s greatest chinook salmon runs relied upon these floodplains. But after the California gold rush, water engineers constricted the river between 1,600 kilometers of steep levees designed to blast floodwater straight to the ocean. Over the coming decades, settlers replaced the wetlands with farms and other developments.
Katz sums up that history in one word: drainage. “We’re like anti-beavers,” he says. “Everywhere we go, we get water off the landscape as quickly as possible. It’s in our language: ‘drain the swamp.’ What does that mean? It means progress.”
According to a growing body of research, it also means more intense flooding and drought. Instead of allowing water to spread and seep into the earth, squeezing rivers between channels creates a superhighway for flood flows. And sending water away from the landscape means, for the most part, the valley remains parched; below-ground aquifers wait desperately for drips. Add in the fact that agriculture pumps groundwater significantly faster than those aquifers can recharge, and you’ve got a very thirsty state. But mimicking the natural flows of water across the terrain could help address both problems at the same time.
“We need to return to the natural cycles of our system, which include atmospheric rivers, which include drought,” Matsumoto says. “We need to restore our natural systems that are able to deal with those extremes.”
She looks across the pop-up wetland where dragonflies circle over neon-green algae. “Fish food!” Katz declares. When sunlight hits the shallow, nutrient-rich water, it blooms with phytoplankton and invertebrates, which then feed fish and birds. Efforts by groups like the Nature Conservancy and California Trout to flood the valley in winter have been supporting dozens of species of birds, which, before dropping off over the past half-century, were so numerous they’d black out the sun.
Native fish, like endangered winter-run chinook salmon, also benefit when they have access to food and shelter on the floodplain. Research shows that juvenile chinook released into wetlands can grow five times faster than those confined to a channel between levees. They’re more likely to survive their odyssey to the ocean and back as well. “It’s night and day,” Katz says. Straitjacketed rivers with managed flows favor fish like bass that are invasive in California. But natural swings and flood events support the native salmon they’ve shaped, just like they’ve shaped this fertile valley. “Flood doesn’t have to be catastrophic,” Katz says. “It can also be a driver of the productivity and abundance we so value.”
Freeing rivers has been catching on as a flood mitigation strategy around North America and the world. And the places welcoming water are seeing added benefits.
In the Netherlands, a program called Room for the River has removed dikes and created side channels for water and fish to meander through 34 riverside communities. Many of these places now offer greater biodiversity and recreation in addition to enhanced flood safety. The new outlets for water act like pressure-release valves when flows are high. During a freak summer flood on the Meuse river in 2021, water levels were the highest ever recorded in some places. But downstream, thanks in part to these Room for the River projects, there was much less flood damage than in previous years, even with bigger flows.
A similar program in Washington State has restored 114 kilometers of fish habitat and created more than 3,700 jobs, ranging from outreach workers to engineers. In the state’s southwest corner, along the Columbia River, locals are also knocking down levees to give lamprey and salmon year-round access to 370 riparian hectares while also protecting infrastructure from flooding.
While California’s plumbing is more complex—it’s one of the most engineered water systems in the world—the state is also restoring some riverways. On the Sacramento, for example, workers are digging 15,000 dump trucks’ worth of earth from a levee so that salmon and sturgeon have a better chance of reaching the floodplain. And now, thanks to growing awareness of atmospheric rivers and precipitation swings, water managers are starting to carry out floodplain projects specifically to boost groundwater: a near-miraculous co-benefit.
The last field we visit at Davis Ranches crunches with dry grass and weeds, another casualty of the drought. But the previous December, an atmospheric river unloaded a good amount of rain. This 24-hectare tract was waiting for the deluge. The ranch piped floodwater from the river into this plot, and its porous soils sucked it up like a sponge. Over a few weeks, the sponge trickled its contents through sand, silt, and gravel into layered pools in the earth: groundwater aquifers. At the same time that puddles were feeding stilts and sandpipers above ground, the water table below was rising.
This process of sending surface water underground is not a new idea, but the drought has given it new urgency along with a new name: managed aquifer recharge, also known as MAR, Ag-MAR, or Flood-MAR. With most river water already rationed among human uses, as well as minimum flows for fish, excess water from atmospheric rivers can be the shock absorber between flooding and drought if allowed to pool on the land instead of charging out to sea. “The only water not spoken for is these flood flows,” Brennan says. “Even in the driest years, we have extra water.”
Since starting the recharge project in 2019, Davis Ranches has captured about 370,000 cubic meters of water each winter. That’s around what 150 US households use in a year, and it’s just from one field, covering only one percent of this property. The farm is saving up that groundwater to share with neighbors at a later date, with the goal of leaving more water on the surface for wildlife.
In the most drought-prone reaches of the state, communities are similarly turning to groundwater recharge, often putting pipes and canals set up for summer irrigation to new use during winter to divert flood flows into aquifers. By 2022, local governments in areas where human use has depleted aquifers were required to submit sustainability plans in accordance with a 2014 state law aimed at restoring groundwater. Many of these plans include recharge projects like the one at Davis Ranches but larger.
The California Department of Water Resources (DWR) has already invested US $68 million into 42 of these projects, moving one-quarter of the way toward governor Gavin Newsom’s goal of expanding recharge capacity in the state by more than 616 million cubic meters. That’s the equivalent of adding another large reservoir but underground, safe from heat and evaporation. By some estimates, the sapped aquifers below the Central Valley have space for three times more water than all of California’s reservoirs combined. DWR is now vetting dozens of additional projects aimed at banking water below ground.
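As a quick arithmetic aside (not from the article), the scale of that recharge goal can be sanity-checked in a few lines of Python. The only inputs are the 616 million cubic meter figure quoted above and the standard conversion of one acre-foot to about 1,233.5 cubic meters; the result, roughly half a million acre-feet, is indeed on the order of a large surface reservoir.

```python
# Back-of-envelope check of the groundwater recharge goal described above.
# Assumption: 1 acre-foot = 1,233.48 cubic meters (standard conversion).

RECHARGE_GOAL_M3 = 616_000_000   # expansion goal quoted in the article, in cubic meters
M3_PER_ACRE_FOOT = 1_233.48      # cubic meters in one acre-foot

acre_feet = RECHARGE_GOAL_M3 / M3_PER_ACRE_FOOT
print(f"Recharge goal: {acre_feet:,.0f} acre-feet")  # roughly 499,000 acre-feet
```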
The summer I visit the Sacramento Valley is the first summer after the devastating storm in Abbotsford. Despite the city rebuilding a destroyed section of dike, water from the Sumas River still seeps through the reinforcements into a cattle farm. The flooded area has shrunk to the size of a few soccer fields, but its water still teems with ducks and geese. To the Semá:th people, it still carries the spirit of Sumas Lake.
Historically, Sumas Lake would have been this small only during a major drought. But even without the natural lake as an indicator, signs of dry times are cracking through the landscape. In 2022, the town of Chilliwack, just up the Fraser River from Abbotsford, has recorded the hottest August and September in more than 140 years. And while Abbotsford—which relies on some groundwater for drinking—has several stable aquifers, the region is also home to the only two aquifers known to be declining near the southern coast of British Columbia. (Aquifers are also crucial for replenishing local streams and rivers from below ground during summer.) And while drought may be more subtle here than it is in California, scientists project that parts of the province are due for hotter and longer dry spells between downpours, too.
The fact that his home feels distinctly “droughty” so soon after the costliest flood in BC history concerns Murray Ned. “For me and my generation, we may not have to endure it too much,” he says. “But for my grandchildren, my kids … Yeah, it’s very alarming.”
As executive director of the Lower Fraser Fisheries Alliance and an adviser to Semá:th First Nation, where he served as a councilor for 25 years, Ned is especially concerned about culturally important sturgeon and salmon. They’ve been cut off from around 85 percent of their floodplain habitat, which historically included Sumas Lake. Coho, chum, and chinook salmon still migrate to the area to spawn, along with sturgeon, but all these species are in decline. Warming water and drought are adding new pressures. According to scientists and Semá:th leaders, reviving Sumas Lake could help.
The City of Abbotsford has a different vision, aimed at protecting property and agriculture, and keeping the water at bay. A few months before the heatwaves, the city released a flood mitigation plan focused on new dikes and pump stations as well as on raising a section of highway. Like so many planning processes in Canada and around the world, this one didn’t initially have Indigenous leaders at the table. Semá:th chief Lemxyaltexw Dalton Silver sees that as a missed opportunity. Stó:lō leaders, after all, have access to deep generational knowledge of local watersheds, which once defined their clans and dialects.
“The knowledge that our people had has never been respected. The knowledge that some of our people still carry is not respected,” Silver says. “I would really like to see us included in the planning.”
The heart of the matter is reconciliation for past injustices. The Canadian government never consulted or compensated the Semá:th people for the original destruction of the lake that was their cultural and economic lifeblood. Ned’s great-grandfather, Chief Selesmlton, was the leader of Semá:th First Nation in the early 1920s when the government was planning the lake’s removal. Selesmlton—whose traditional name, Kwilosintun, was passed down to Ned—told a royal commission that draining the lake would starve and impoverish his people. In 1923, the colonists did it anyway. When the water finally subsided the following year, sturgeon were found fighting for their lives, poking their pointy heads from the mud. The Stó:lō call this defiant-looking act kw’ekw’e’liqw—“sticking up.”
Despite Semá:th First Nation pursuing claims and compensation for Sumas Lake and the provincial and federal governments both committing to reconciliation with First Nations, flood management remains the responsibility of municipalities. The City of Abbotsford came up with plans and shared them with Semá:th leaders later.
According to city officials, they incorporated some of the nation’s feedback into their proposal, which includes setting back the Sumas River dike to create an overflow channel for water to feed into the lake bottom during big floods. Decisions to revive Sumas Lake more permanently would need to be hashed out with senior levels of government as well as with neighboring First Nations and communities during ongoing negotiations, says Melissa Godbout, a communications officer with the city. Then, actually restoring the lake would require some level of managed retreat: the strategic movement of people and infrastructure out of harm’s way by buying out flood-impacted properties.
Ned and Silver are not naive to the challenges of managed retreat. Silver stresses the need to gain buy-in from the wider community and not inflict displacement on others like the colonial government inflicted on his people when the lake was drained. “I also weigh the human factor,” Silver says. Yet, given the lake’s importance to Stó:lō culture and biodiversity as well as flood protection and water security, the option needs to be on the table, Silver says. “We’ve got to realize our connection to everything around us.”
A few months before the flood, Ned had a dream that felt like reality. He stood ankle-deep in water with his wife, near a beachy bank of the Fraser River. It was a sunny day, and two men fished nearby. But soon, his wife, the fishermen, and the beach all disappeared. Ned found himself waist-deep in rising water. Downriver, a six-meter-long sturgeon—the massive size his great-grandfather would have known—swam straight for him, its spiked tail slicing through the river.
Stó:lō oral history says that the sturgeon was once a girl who spent all her time in the water. The girl’s father, a community leader, gathered his people and decided his daughter should remain in the water forever. He transformed her into a sturgeon who wouldn’t die after spawning. She would survive and keep living to help the people of the future.
The sturgeon in the dream didn’t slam into Ned; it just slithered softly by. Ned ran his hand across its prehistoric body. The next thing he knew, Ned was many meters from shore, surrounded by deep water. What he thought was a river was actually Semá:th Xó:tsa—the lake of his ancestors.
As calendars change to 2023, the pendulum swings in California yet again, this time from severe drought to flood. Between late December and mid-January, nine back-to-back atmospheric rivers dump 121 billion cubic meters of water across the state. That’s enough to fill 48 million Olympic swimming pools or flood 120 Sumas Lakes.
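As a quick check of those equivalences (a sketch using only the figures quoted in this article, plus the conventional 2,500-cubic-meter volume of an Olympic swimming pool): 121 billion cubic meters does work out to about 48 million pools, and spreading it across 120 lakes implies roughly one billion cubic meters per "Sumas Lake."

```python
# Sanity check of the storm-volume comparisons quoted above.
# Assumption: an Olympic swimming pool holds about 2,500 cubic meters.

TOTAL_M3 = 121e9         # water delivered by the nine atmospheric rivers, per the article
POOL_M3 = 2_500          # assumed Olympic-pool volume, in cubic meters
LAKE_EQUIVALENTS = 120   # the article's "120 Sumas Lakes" figure

pools = TOTAL_M3 / POOL_M3
per_lake_m3 = TOTAL_M3 / LAKE_EQUIVALENTS

print(f"{pools / 1e6:.0f} million Olympic pools")                 # about 48 million
print(f"{per_lake_m3 / 1e9:.2f} billion cubic meters per lake")   # about 1.0 billion
```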
The storms are a wicked brew of torrential rain and roaring winds that swallow streets, uproot redwoods, knock out power for more than two million households, unleash 700 landslides, and kill at least 20 people. Cities like San Francisco receive almost their annual rainfall in just three weeks. Most of it pours down pavement into the Pacific.
These atmospheric rivers also penetrate hundreds of kilometers inland. In Tulare County—one of California’s most drought-plagued regions, where farmers have pumped aquifers so hollow that even some levees are sinking—the rains at first feel like a gift.
“Water is a new emotion in our area,” Aaron Fukuda, general manager of the Tulare Irrigation District, tells me later. People and nature are happy. The birds come back. The frogs are croaking. “It’s really miraculous what water does bring. When it’s not there, you’re suffering.”
After the first few storms, when the San Joaquin River starts surging, Fukuda and his colleagues open the floodgates to 11 different recharge basins, welcoming water to pool across more than 730 soccer fields’ worth of land. Local farmers call to ask if they can send water onto their properties, too. “On the first day, I think we had 70 or 80 orders,” Fukuda recalls. Through this collective effort, the county secures enough groundwater in January alone to run about 18,000 homes for a year.
But by spring, the emotion has changed; the community is experiencing weather whiplash. So many storms have bashed the state that Governor Newsom passes an executive order allowing people to temporarily divert floodwater onto just about any land that will take it. As the drama unfolds, Tulare County also witnesses a surprise transformation: an ancient lakebed—which historically held the largest body of fresh water west of the Mississippi—is rapidly refilling. The lake once supported the Yokut and other Indigenous tribes, and long before them, wooly mammoths and other now-extinct megafauna. At the end of May, it ripples across 470 square kilometers—nearly the size of Lake Tahoe to the north.
Many fear that Tulare Lake will swell more with the “big melt.” All those atmospheric rivers have stacked enough snowpack on the Sierra Nevada to fill the lakebed four times over. “It’s like your bathtub’s full, your rugs are saturated, and somebody turns the nozzle back on,” Fukuda says.
In this case, the bathtub contains staple crops like tomatoes, cotton, and safflower as well as industrial infrastructure from chemical plants to rail lines. Farmers get to work building berms in a last-ditch effort to save crops and cows, many of which have already been lost.
How much more they and others lose depends on what communities do next. When water rises and reclaims some of its land, will people keep fighting with dirt, dikes, and pumps? Or will more of us pull back and make room for water to transform?
Similar questions are emerging along the former shores of Sumas Lake. “There’s going to be a lake returned at some point, no matter how big you build the infrastructure,” says Ned. “That’s Mother Nature, that’s climate change. It’s pretty hard to control.”
Like the Tulare Lake basin, the dry bed of Sumas Lake is the product of more than a century of colonization. The area has been divvied up into 1,375 properties—many of them potato and dairy farms. It’s also bisected by power lines, a national highway, and the controversial Trans Mountain oil pipeline.
Yet, restoring the lake may still be cheaper than the status quo. According to a forthcoming study that Semá:th First Nation worked on in partnership with the University of British Columbia, buying out all the lakebed properties at current land values would cost around CAN $2-billion, compared with Abbotsford’s $3-billion-plus plan to reinforce flood barriers. These savings line up with other Canadian examples of managed retreat. In Grand Forks, British Columbia—which was devastated by flooding in 2018—the community agreed to remove or relocate 70 buildings after learning that the long-term cost of flooding could be triple the cost of retreat.
Whatcom County, Washington, is working on a similar program for the most threatening bends of the Nooksack River—that wild and shifting waterway that defied the US border in November 2021 when it reverted back to its ancient northward course and barreled, downhill, into Sumas Lake. There, the county is in the process of acquiring nearly US $23-million in state and federal grants to buy out 23 properties and elevate 29 structures, several in the town of Everson, where the Nooksack roared through Main Street.
The Whatcom County plan is an example of partial retreat that prioritizes the most at-risk homes and businesses in the floodplain. “We’re not looking at buying out the whole impacted area,” says Paula Harris, the county’s river and flood manager. “That’s whole towns. They’re all we’ve got.” Still, it’s an acknowledgment of the power of water, which will rise again and flow downhill. “We’re fighting gravity.”
Fourteen months after the Abbotsford flood, workers suck up the last remnants of Sumas Lake, almost exactly 100 years after settlers began efforts to tame it. As eagles and herons look on, representatives from the City of Abbotsford, Semá:th First Nation, and the Lower Fraser Fisheries Alliance drag a 60-meter seine across the water. They search for any fish that may have gotten trapped on the wrong side of a barrier humans built.
Four people on foot plus two in a small boat struggle through the cold water to close the net. When they finally pull it to the surface, one wild coho salmon stands out in blazing spawning colors, months after it should have released its sperm and died. And amid a sea of invasive carp, three large sturgeon wriggle their iridescent bodies. They twitch. They fight. They stick their heads up, defiantly, from the receding water.
This article first appeared in Hakai Magazine and is republished here with permission. Read more stories like this at hakaimagazine.com.
|
More than one-third of the Amazon forest is degraded, study says
Two new analyses detail how land clearing and degradation are pushing the Amazon rainforest toward a tipping point of no longer being a forest that supports an abundance of life and buffers Earth from climate change.
Why it matters: The findings offer insights for policy paths and priorities aimed at trying to save the climate-crucial ecosystem.
Driving the news: Brazilian President Luiz Inácio Lula da Silva and Colombian President Gustavo Petro formed a "grand pact" earlier this month to try to save the Amazon "for humanity."
- The first anti-deforestation raids on Lula's watch took place last week to stop illegal clearing of the forest.
- Earlier this month, Lula signed a series of executive orders to address illegal deforestation in Brazil, which is home to 60% of the Amazon forest, and reactivated the Amazon Fund that invests in efforts to stop deforestation.
- "Somewhere in the next 10% or 20%, there will be a phase shift," says James Albert, a biologist at the University of Louisiana at Lafayette. A recent analysis suggests about 31% of the eastern Amazon has been deforested — far above the estimated threshold of 20% to 25%.
- "Forests exist within a range of factors that can keep them as forest," he says. But they can be pushed to a point where they turn from a forest to a savanna or degraded landscape, get too dry, and "simply burn."
What's new: In a paper published today in the journal Science, Albert and an international team of scientists report humans are causing changes to the Amazonian ecosystem in a matter of decades or centuries, as opposed to millions to tens of millions of years for natural processes.
- "Organisms can't adapt in the period of decades or centuries," Albert says.
Another analysis published today looked at the lesser-known problem of land degradation in the Amazon due to logging, fires, extreme droughts and changes at the edges of the forest caused by the habitat being fragmented.
- Deforestation changes the land cover and can be spotted by satellites. Degradation stems from changes in how the land is used and can be hidden by the forest canopy — a forest continues to be a forest but is degraded and weakened.
- Using data from earlier studies and new satellite images, the authors estimate about 2.5 million square kilometers of the Amazon — about 38% of the remaining forest — is considered degraded by one or more of the disturbances. That's in addition to the deforestation. (A quick arithmetic check of these figures follows this list.)
- They also found the carbon lost from the forest due to degradation is on par with that due to deforestation — and degradation can lead to as much loss of the forest's biodiversity as deforestation.
Their projections suggest "degradation will continue to be a major source of emissions in the region, regardless of what happens with deforestation," says study co-author David Lapola, a research scientist at the University of Campinas (Unicamp) in Brazil.
- "We need specific policies to handle degradation. It’s not using the same policies and actions for deforestation," he says.
Between the lines: Deforestation reduces the forest's ability to generate rainclouds, which, combined with climate change, significantly raises the odds of drought. And the more fragmented the forest is, the harder it is to bounce back after a drought ends.
- Controlling fire, timber extraction and deforestation is "something Brazil and the other Amazonian countries can tackle and it's their responsibility," Lapola says.
- But the extreme droughts are caused by "global climate change, which is not only for us to solve," he says.
The big picture: Passing the forest's tipping point would be devastating for the forest's biodiversity and for the Indigenous people that live there.
- But the forests of the Amazon aren't just a reservoir for more than 3 million species of plants and animals: They store large amounts of carbon dioxide that, if the forests die, would be released back into the atmosphere.
- Releasing that carbon dioxide would be "throwing away the goal" of the Paris Agreement to limit global warming to 1.5 degrees Celsius, Colombia's Minister of Environment and Sustainable Development Maria Susana Muhamad said last week at the World Economic Forum in Davos.
"But at the same time we know that could be the future prosperity for our countries," she added, referring to the economic benefits stemming from the Amazon's resources that are largely reaped by people in cities and other countries.
- "We're not going to stop development and nor should we. Humans have the right to use their land and resources," Albert says.
- "But the problem is how it's being done," he adds. He and his co-authors outline a "new Amazonian bioeconomy" based on sustainable use of the forest's resources that "extends beyond extractive and export-based economic activities."
|
<urn:uuid:45e24abe-2e65-4085-b91b-51750666d125>
|
{
"dump": "CC-MAIN-2023-50",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100057.69/warc/CC-MAIN-20231129073519-20231129103519-00729.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9529352784156799,
"pii_count": 0,
"score": 3.296875,
"token_count": 1020,
"url": "https://www.axios.com/2023/01/26/amazon-rainforest-degraded"
}
|
Imagine the following scenarios at a high school.
A student can’t find her dad, who is experiencing homelessness, so she can't get an appointment to go to the doctor. He has all her IDs.
A student experiencing homelessness can’t concentrate in class because of daily pain from a toothache.
A student's dad works in the oilfields, so she can’t get a ride to a doctor’s appointment.
A student who would rather die than disappoint or embarrass his parents by asking for mental health help.
These obstacles to medical, dental and mental health care are a daily occurrence in schools without their own health clinic, according to Lori Plantiko, a school counselor at Grand Junction High School.
“These are just a small sample of what obstacles affect grades, attendance, graduation rates and quality of life here at Grand Junction High School,” she told the Mesa Valley School District 51 school board at a meeting last week. She was speaking in support of a plan to build a school-based health center inside the new Grand Junction High School, expected to open next year.
But whether high school students should have easy access to health care has become a bitterly contested issue. Some city residents are pushing back. There’s an age divide. Nearly everyone who testified against the project at last week’s board meeting was older or retired, while high school students unanimously argued for it.
What are school-based health centers?
They are health care clinics located inside a school or on school grounds operated by a healthcare provider. Colorado has 70, eight of them on the Western Slope, according to Youth Healthcare Alliance, formerly the Colorado Association for School-Based Health Care. They offer students wellness checks, sports exams, strep tests, care for chronic conditions like diabetes, mental and sexual health care – and some offer dental screenings. They often serve underserved children and youth who have limited access to health care. More than 40 percent of Grand Junction High’s students are eligible for free and reduced-price lunch and a third are students of color.
Multiple national studies show higher grades and graduation rates for students who have access to school-based clinics, while absenteeism goes down. Other research has shown that school-based health clinics can reduce student hospitalizations, emergency department visits and overall health care costs.
“Parents can take less time off work, and students don’t have to take as much time from the class to be able to access health care because it’s right there at the school,” said Aubrey Hill, executive director of the Youth Healthcare Alliance. “The care is high quality, comprehensive, and it’s available regardless of insurance or ability to pay.”
The Colorado Health Institute published a report in 2021 that identified 50 schools where students struggled with health issues. Three were in D51 schools. The report found schools with high numbers of students of color have significant health needs. It found schools in rural Colorado, particularly the San Luis Valley and southeast Colorado, had relatively limited access to school-based health centers. It found stark disparities in urban Colorado, with high need for clinics in Pueblo and Adams counties.
Central High School student Kenya Contreras recalls when she sank into a bad depression last fall.
“I was unmotivated and at times all I did was cry in my bed for hours,” she told the school board at a recent meeting.
The champion wrestler said at her first match of the season she spent all her time sobbing. She decided to get help. Central High, another Grand Junction high school, has a clinic. She saw the free therapist at her school. He gave her strategies to help get through the season and through everyday life.
“From then on I’ve forever been grateful for the wellness center at Central,” she said. “It is simply a safe place for students to be treated as people for any need they have and to exclude other students from that experience, I believe, is simply unfair."
A parade of Central High students went to bat for their Grand Junction High peers at last week’s board meeting.
Karami Lyle told the board she knows they come from a different generation but said mental illness is widespread and severe in her generation. Lyle has anxiety-induced seizures. She’s gotten mental health care inside and outside of school. She said she’s lucky she has a supportive mom.
“But other kids have parents who disagree strongly with therapy, and those kids without clinics are unable to go because of needing a ride or payment method,” she said.
She and other youth noted that Central High has not had a suicide since the clinic came to the school in 2020, which she attributes to the help clinic staff Rosa Gardner and Steven Martinez have given youth.
“As someone who has lost multiple friends and my father by suicide and getting close myself, we need this wellness center. We need Steven. We need Rosa. And they need a Rosa. They need a Steven.”
The impetus for Central’s clinic began in 2017 when the community was rocked by opioid overdoses and several teen suicides. Since the clinic opened, the top five medical diagnoses have been sports physicals, acute cough, sore throat, headache and COVID-19 screenings. On the behavioral health side, it’s been depression, anxiety, stress from home, social anxiety and relationship problems, according to MarillacHealth.
‘There are citizens who feel betrayed by what is happening.’
On the other side of the debate were many older residents and some parents who oppose the project. Resident Jay Hosberg, who said he has not had children or grandchildren in the district, said when voters in 2021 overwhelmingly approved a bond to build a new high school it didn’t specify that a school-based health clinic would be part of it.
“There are citizens who feel betrayed by what is happening,” he said. “Bottom line, I now regret having voted for this project.”
Resident Anna Elliot said the ballot measure approved by voters should have specifically delineated a health care clinic.
“Do not ignore your taxpayers or invite legal challenges,” she said. “Future funding requests will be difficult to secure if you do.”
Some parents worry about being kept in the loop about their children’s health care, and some worried that students can access contraceptives in the clinic.
Retired nurse Connie McDowell called the proposed clinic an injustice to the doctor-patient relationship.
“We should not diminish or bypass the pediatrician or the family doctor visits when seeking care for physical, emotional or mental health concerns,” she said. “It's vital to preserve the child, patient, doctor relationships here in our community.”
Cindy Ficklan has a son at Grand Junction High.
“A school district itself cannot be all things to all people, and the district does not need to insert itself in between parents and students,” she said.
But many residents weren’t aware of rules surrounding school-based health clinics and Colorado law.
Parental consent is needed for most medical services at a clinic, including vaccinations. There are exceptions. Colorado law allows minors access to reproductive services like STI testing and treatment and substance use counseling without parental consent at any clinic or pediatrician's office in the state. Colorado law allows minors 12 and over to access mental health services on their own.
For the period of August to December 2022, the vast majority of medical visits to Central High’s clinic involved parental consent. Mental health visits were evenly split between minor and parental consent.
Just a tiny fraction of thousands of clinic visits over the past two and a half years at Central were for contraceptive services.
Rosa Gardner, a physician assistant who provides care to students at the Central High clinic, said the staff encourages students to involve parents as much as possible.
“We are not in the business of creating divides between parents and children,” she said.
Most of her work consists of sprained ankles, sports physicals, concussion evaluations and strep tests. But other visits are more urgent. The clinic helped a student who was missing school because of chronic stomach pain get a formal diagnosis. A surgery to correct the problem was scheduled. In the meantime, the student is back in class because they learned techniques to manage triggers for their pain.
“Imagine how much easier it is to learn when you're not in constant pain,” Gardner said.
A student stopped taking their medication because their psychiatrist moved away. Their hallucinations came back at school. The clinic connected the student to a new outside psychiatrist and the student resumed taking their medicine.
In all cases, Gardner said parents were involved and grateful for the help their child received. That’s true especially for low-income youth.
“They come to us at the clinic because they can’t go anywhere else,” she said.
The fact of the matter is, even for parents who have close relationships with their teen children, youth don’t tell their parents everything.
That’s what Dr. Laura Campbell, a family practice physician who will have a freshman at Grand Junction High School next year, told the school board.
“Having another trusted adult in those children’s lives is critical,” said Campbell. “I would want to make sure that there is another trusted adult in the life of my kids if I can’t provide all they need.”
That’s all the more important when teen mental health is at crisis levels. A recent CDC report finds unprecedented levels of hopelessness and suicidal thoughts among U.S. high school students. Nearly three in five teen girls said they felt “persistently sad or hopeless."
“We need this medical clinic. We need help,” said Grand Junction High teacher Justin Whitehead.
Board chair Andrea Haitz said some community members asked whether the clinic could be located next to the school, but MarillacHealth said that plan wouldn't work. The health care provider would fund the clinic, operating in the school rent-free, with the district paying a one-time fee of $247,000.
“It’s unfortunate because we’re trying to figure out a way to have a ‘happy medium’ on both sets of concerns on this issue,” Haitz said.
Board member Kari Sholtes said another factor is student safety. Modern school buildings limit student access to and from the building.
“It’s not just about what Marillac can accomplish in a school-based health center but also: how do we keep our kids safe? That also does play a large role in where that facility can be located.”
The board is expected to vote on whether there will be a clinic at Grand Junction High School next Tuesday.
|
<urn:uuid:17ee6cec-a4e5-4fea-a39f-b5e1b1bec244>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510179.22/warc/CC-MAIN-20230926075508-20230926105508-00636.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9718413352966309,
"pii_count": 0,
"score": 2.90625,
"token_count": 2345,
"url": "https://www.cpr.org/2023/03/02/plans-for-grand-junction-high-school-health-clinic-creates-rift-among-students-board-members-and-voters/"
}
|
St. Croix: The Birthplace of Emancipation in the United States of America
The United States Virgin Islands—the cluster of central-Caribbean islands consisting principally of St. Croix, St. John, and St. Thomas—is the cradle of Emancipation in the United States of America: Slavery was abolished in the Virgin Islands (then, the Danish West Indies) on July 3, 1848, seventeen (17) years before the 1865 Emancipation on mainland USA and the 1873 abolition of slavery in Puerto Rico. And the 1848 Emancipation in the Virgin Islands was precipitated by rebellion rather than proclamation. As such, the Virgin Islands has served as a Beacon of Freedom across three centuries—from the middle of the 19th century, throughout the 20th century, to the present day.
July 3, 2023, marks the 175th Anniversary of Emancipation in the U.S. Virgin Islands, and celebratory events will extend until July 3, 2024.
Like the Haitian Revolution (1791-1804) led by black liberator Toussaint Louverture, the July 1848 rebellion on St. Croix was led by enslaved John “General Budhoe” Gottlieb (also spelled Gutliff). Undetected by the plantocracy and the Danish militia, approximately 8,000 slaves—about 40 percent of St. Croix’s total enslaved population of approximately 20,000—marched to Fort Frederik in the town of Frederiksted and demanded their freedom. In order to avoid widespread violence and loss of life, Governor-General Peter von Scholten declared the enslaved immediately free.
“Except for Haiti and St. Croix, all other emancipations in the history of Trans-Atlantic Slavery, beginning with the British in 1834 and ending with Brazil in 1888, were accomplished by proclamation,” said Wayne James, former senator of the United States Virgin Islands and president of the Homeward Bound Foundation (HBF), the organization that, on July 3, 1999, in recognition of the closing of the foundation’s year-long celebrations to mark the 150th Anniversary of Emancipation in the Virgin Islands, lowered the 12-foot tall, 17-foot wide Middle Passage Monument onto the floor of the Atlantic Ocean’s infamous Middle Passage, thereby placing a gravestone onto what has been described as the World’s Largest Graveyard. “This year’s 175th Anniversary of Emancipation in the U.S. Virgin Islands allows the entire nation to participate in the discussion,” James added. “All Americans should know that the U.S. Virgin Islands cleared the path to freedom, not only serving as a southern stop on the Underground Railroad, but also inspiring discourse on liberation.”
Consistent with its mission of 25 years ago, the Homeward Bound Foundation is gearing up to invite Denmark; the United States, with special attention being paid to collaborations with the Congressional Black Caucus (CBC) and the National Caucus of Black State Legislators (NCBSL); and the international community to a series of socio-historical events and educational programming aimed at heightening awareness of the post-Emancipation contributions of Africans to the cultural evolution of the world. https://millenniumarch.wordpress.com
“Anniversaries afford an opportune time to look back,” James said. “But they also allow us to look forward with clearer, more informed vision. This 175th Anniversary celebratory year of Emancipation in what is today the United States Virgin Islands will be an excellent platform for the world to look at race relations, artistic collaborations, scholarship, and cultural exchanges. And the Homeward Bound Foundation looks forward to again playing a key role in those discussions,” James concluded.
|
<urn:uuid:6f7132ff-acf3-48ed-8695-2f73484b4b5c>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649518.12/warc/CC-MAIN-20230604061300-20230604091300-00316.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9322202205657959,
"pii_count": 0,
"score": 4,
"token_count": 855,
"url": "https://manlymanners.wordpress.com/2023/05/07/st-croix-birthplace-of-emancipation-in-the-united-states-of-america/"
}
|
Here are some tips for effectively pruning the shrubs in your landscaping
Pruning the shrubs in your landscape is important. We prune for several reasons, including to control size, to shape the plant, and to prevent disease.
We prune for several reasons, including to control size, to shape the plant, and to prevent disease.
It gets quite confusing sometimes when to prune your plants or how to prune your plant. Here are a few guidelines to help.
Every species of shrub has its own timeline of growth and flowering. Many species put their new flower buds on several months before they actually flower. Deciding when to prune a plant should take into consideration when the plant flowers.
Most shrubs, however, do not like to be pruned in late summer or early fall. Pruning at that time promotes new growth that will likely not survive the winter.
A good general rule to follow is if the plant flowers before June 1, prune it shortly after it flowers.
If the plant flowers after June 1, prune it in late winter or early spring, before the flower buds are visible. A good example is the forsythia. It blooms in early spring (before June 1). The forsythia puts on its new flower buds in late summer or early fall, so it should be pruned in late spring, soon after it finishes blooming.
The June 1 rule generally applies to most plants, but don’t be too worried about missing the timeline. Most plants will tolerate some pruning at the wrong time of year and still live to see another day. It may not flower, however. Keep in mind that pruning is damaging the plant and you should not prune when the plant is under stress such as drought or extreme temperatures.
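For readers who like the timing guidance spelled out step by step, here is a minimal sketch of the June 1 rule of thumb described above. The function name and the simple two-way split are illustrative assumptions; species-specific advice from your local extension office still comes first.

```python
def pruning_window(flowers_before_june_1: bool) -> str:
    """Apply the June 1 rule of thumb from the column above (illustrative only)."""
    if flowers_before_june_1:
        # Early bloomers set next season's buds within months of flowering,
        # so prune right after bloom to avoid cutting those buds off.
        return "Prune shortly after the plant finishes flowering."
    # Later bloomers can be pruned before their buds are visible.
    return "Prune in late winter or early spring, before flower buds show."


# Example: forsythia blooms in early spring, i.e. before June 1.
print(pruning_window(flowers_before_june_1=True))
```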
Knowing where to cut is also important. Always prune back to, or just above, a growing point such as a bud or another branch. Never leave a short stem or a branch stub. Completely remove dead branches and branches that will girdle other branches. Cut out branches that are growing toward the center of the plant as well as suckers or water sprouts.
Sometimes, a plant gets too dense and we need to thin out the center to help with air flow and prevent diseases. Never remove more than 30 percent of a plant in one season, however.
These are just guidelines, as every species is a little different, and good gardeners will learn about their plant and its specific requirements for best results.
P. Andrew Rideout is the University of Kentucky, Extension Agent for Horticulture. He can be reached at <email-pii>.
|
<urn:uuid:ee7b66f3-becd-4292-b448-f66e962d9422>
|
{
"dump": "CC-MAIN-2023-06",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499826.71/warc/CC-MAIN-20230130165437-20230130195437-00184.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9561021327972412,
"pii_count": 1,
"score": 2.890625,
"token_count": 544,
"url": "https://www.thegleaner.com/story/life/columnists/2023/01/05/some-tips-for-effectively-pruning-the-shrubs-in-your-landscapingsome-tips-for-effectively-pruning-th/69773192007/"
}
|
This story is part of The Dallas Morning News monthlong series on how fentanyl has affected our community.
Fentanyl is a potent and lethal synthetic opioid, causing the majority of opioid overdoses in the U.S., according to the Centers for Disease Control and Prevention.
Despite being the subject of many news stories and political talking points, the drug is still the source of a lot of confusion. PolitiFact recently dispelled common myths about fentanyl, including that unintentional exposure to the drug in its pill or powder form can cause an overdose.
We asked you to send us your questions about fentanyl. Here are answers to the most commonly asked questions.
Reader question: What’s the difference between “street fentanyl and medical fentanyl in hospitals?”
There are two types of fentanyl: pharmaceutical (what our reader called “medical fentanyl”), and illicit (which the reader called “street fentanyl”). Medical fentanyl also is a synthetic opioid, according to the CDC.
“Fentanyl is an old, well-understood drug that is used safely” every day in medicine, said Dr. Ryan Marino, a Case Western Reserve University toxicologist and emergency room physician who studies addiction. “It is one of the most valuable and most” commonly used “medical therapeutics.”
Illicit fentanyl, on the other hand, is made in clandestine labs and is the version of the drug most closely associated with overdose deaths. Illicit fentanyl, according to the Drug Enforcement Administration, can be sold as a powder, pill or nasal spray.
Reader question: If you can’t absorb fentanyl through the skin, why are there fentanyl patches?
Despite viral stories of police officers or emergency responders supposedly overdosing or getting sick through unintentional contact with fentanyl, experts say it’s not scientifically possible. Fentanyl isn’t well absorbed by the skin, especially in its illicit forms.
However, medical fentanyl is sometimes administered through skin patches. But experts say this isn’t evidence that you can get sick by touching fentanyl in powdered or pill form.
For example, even though there are nicotine patches, your skin can’t absorb nicotine by touching tobacco, “and the same principle applies to fentanyl,” according to a report from Brandon Del Pozo, a Brown University public health expert.
However, the Food and Drug Administration warns about the possibility of children accidentally overdosing after exposure to fentanyl patches. According to the Mayo Clinic, if you touch the sticky side of a fentanyl patch you should contact a medical professional and rinse the area with water.
Reader question: Does naloxone work on fentanyl overdoses in the same way it does for other opioids?
Naloxone, a medicine commonly sold under the brand name Narcan, reverses opioid overdoses, according to the National Institute on Drug Abuse. Naloxone is administered via nasal spray or injection. Sometimes multiple doses are needed, depending on the strength of the opioid.
The National Institute on Drug Abuse says friends, family members and bystanders can give a person naloxone in case of an opioid overdose. However, the person should still seek emergency medical assistance. Naloxone reverses an opioid overdose for 30 to 90 minutes, so it is possible overdose symptoms can return once the treatment wears off.
Reader question: How much fentanyl is deadly?
The DEA says that as little as 2 mg of fentanyl can be deadly for an adult. However, a dose’s lethality can vary based on height, weight and tolerance from past exposure.
Fentanyl’s potency is what makes it so lethal.
Reader question: Does all illicit fentanyl come from the southern border?
Most illicit fentanyl in the U.S. comes from Mexico, mainly smuggled by U.S. citizens. Illicit fentanyl seizures are higher at official ports of entry than in between ports of entry, U.S. Customs and Border Protection data shows.
However, illicit fentanyl also arrives in the U.S. by mail, passenger boats, cargo ships, commercial planes and drones, according to a report from Rand Corp., a global policy think tank. Most of the illicit fentanyl that comes from Mexico is made in labs using chemicals from China, the Rand Corp. report said.
Reader question: Is it true that enough fentanyl has been seized at the southern border to kill every American?
Politicians, news reports and government agencies often cite federal fentanyl seizure statistics to claim that enough is seized to kill a high number of people. For example, in January, the DEA said that in 2022 it seized enough illicit fentanyl nationally to kill every American.
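To make the arithmetic behind that kind of claim concrete, here is a minimal back-of-the-envelope sketch. The 2 mg figure is the DEA lethal-dose estimate quoted earlier; the seizure mass and population used here are round, hypothetical assumptions rather than reported totals, and the caveats discussed below are ignored.

```python
# Back-of-the-envelope arithmetic for "enough fentanyl to kill every American."
# All inputs are illustrative assumptions, not reported seizure totals.
LETHAL_DOSE_MG = 2                 # DEA's "as little as 2 mg" estimate
US_POPULATION = 330_000_000        # rough figure, not a census count

seized_kg_pure = 700               # hypothetical mass of pure fentanyl seized
seized_mg = seized_kg_pure * 1_000_000
potential_lethal_doses = seized_mg / LETHAL_DOSE_MG

print(f"{potential_lethal_doses:,.0f} potential doses vs. {US_POPULATION:,} people")
# With these assumptions: 350,000,000 doses, i.e. more than one 2 mg dose per
# person, which is all the "kill every American" framing actually asserts.
```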
However, there are some caveats. As we mentioned before, certain characteristics, such as a person’s weight, can affect lethality. Also, it’s unclear how pure all the seized fentanyl is, which can also affect lethality, Timothy J. Pifer, the director of the New Hampshire State Police Forensic Laboratory and a specialist on fentanyl’s lethality, told PolitiFact in 2019.
Just because enough fentanyl has been seized to kill every American does not mean every American has the same chance of dying of a fentanyl overdose, said Dr. Andrew Stolbach, a toxicologist and emergency doctor at The Johns Hopkins Hospital.
“That would assume that all that drug was somehow going to get into everybody,” Stolbach said.
Stolbach said there’s likely enough water to drown everybody in the world. But that doesn’t mean everyone is going to drown.
By Maria Ramirez Uribe, PolitiFact staff writer
|
<urn:uuid:1c1dd3db-c46f-4054-9200-7f9ca869257a>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506676.95/warc/CC-MAIN-20230925015430-20230925045430-00832.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9395517110824585,
"pii_count": 0,
"score": 2.875,
"token_count": 1156,
"url": "https://www.dallasnews.com/news/politifact/2023/09/10/what-are-fentanyl-patches-does-narcan-work-for-fentanyl-overdoses-heres-what-to-know/"
}
|
This story is part of The Dallas Morning News monthlong series on how fentanyl has affected our community.
Fentanyl is a potent and lethal synthetic opioid, causing the majority of opioid overdoses in the U.S., according to the Centers for Disease Control and Prevention.
Despite being the subject of many news stories and political talking points, the drug is still the source of a lot of confusion. PolitiFact recently dispelled common myths about fentanyl, including that unintentional exposure to the drug in its pill or powder form can cause an overdose.
We asked you to send us your questions about fentanyl. Here are answers to the most commonly asked questions.
Reader question: What’s the difference between “street fentanyl and medical fentanyl in hospitals?”
There are two types of fentanyl: pharmaceutical (what our reader called “medical fentanyl”), and illicit (which the reader called “street fentanyl”). Medical fentanyl also is a synthetic opioid, according to the CDC.
“Fentanyl is an old, well-understood drug that is used safely” every day in medicine, said Dr. Ryan Marino, a Case Western Reserve University toxicologist and emergency room physician who studies addiction. “It is one of the most valuable and most” commonly used “medical therapeutics.”
Illicit fentanyl, on the other hand, is made in clandestine labs and is the version of the drug most closely associated with overdose deaths. Illicit fentanyl, according to the Drug Enforcement Administration, can be sold as a powder, pill or nasal spray.
Reader question: If you can’t absorb fentanyl through the skin, why are there fentanyl patches?
Despite viral stories of police officers or emergency responders supposedly overdosing or getting sick through unintentional contact with fentanyl, experts say it’s not scientifically possible. Fentanyl isn’t well absorbed by the skin, especially in its illicit forms.
However, medical fentanyl is sometimes administered through skin patches. But experts say this isn’t evidence that you can get sick by touching fentanyl in powdered or pill form.
For example, even though there are nicotine patches, your skin can’t absorb nicotine by touching tobacco, “and the same principle applies to fentanyl,” according to a report from Brandon Del Pozo, a Brown University public health expert.
However, the Food and Drug Administration warns about the possibility of children accidentally overdosing after exposure to fentanyl patches. According to the Mayo Clinic, if you touch the sticky side of a fentanyl patch you should
|
contact a medical professional and rinse the area with water.
Reader question: Does naloxone work on fentanyl overdoses in the same way it does for other opioids?
Naloxone, a medicine commonly sold under the brand name Narcan, reverses opioid overdoses, according to the National Institute on Drug Abuse. Naloxone is administered via nasal spray or injection. Sometimes multiple doses are needed, depending on the strength of the opioid.
The National Institute on Drug Abuse says friends, family members and bystanders can give a person naloxone in case of an opioid overdose. However, the person should still seek emergency medical assistance. Naloxone reverses an opioid overdose for 30 to 90 minutes, so it is possible overdose symptoms can return once the treatment wears off.
Reader question: How much fentanyl is deadly?
The DEA says that as little as 2 mg of fentanyl can be deadly for an adult. However, a dose’s lethality can vary based on height, weight and tolerance from past exposure.
Fentanyl’s potency is what makes it so lethal.
Reader question: Does all illicit fentanyl come from the southern border?
Most illicit fentanyl in the U.S. comes from Mexico, mainly smuggled by U.S. citizens. Illicit fentanyl seizures are higher at official ports of entry than in between ports of entry, U.S. Customs and Border Protection data shows.
However, illicit fentanyl also arrives in the U.S. by mail, passenger boats, cargo ships, commercial planes and drones, according to a report from Rand Corp., a global policy think tank. Most of the illicit fentanyl that comes from Mexico is made in labs using chemicals from China, the Rand Corp. report said.
Reader question: Is it true that enough fentanyl has been seized at the southern border to kill every American?
Politicians, news reports and government agencies often cite federal fentanyl seizure statistics to claim that enough is seized to kill a high number of people. For example, in January, the DEA said that in 2022 it seized enough illicit fentanyl nationally to kill every American.
However, there are some caveats. As we mentioned before, certain characteristics, such as a person’s weight, can affect lethality. Also, it’s unclear how pure all the seized fentanyl is, which can also affect lethality, Timothy J. Pifer, the director of the New Hampshire State Police Forensic Laboratory and a specialist on fentanyl’s lethality, told PolitiFact in 2019.
Just because enough fentanyl has been seized to kill every American does not mean every American has the same chance of dying of a fentanyl overdose, said Dr. Andrew Stolbach, a toxicologist and emergency doctor at The Johns Hopkins Hospital.
“That would assume that all that drug was somehow going to get into everybody,” Stolbach said.
Stolbach said there’s likely enough water to drown everybody in the world. But that doesn’t mean everyone is going to drown.
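To see why that caveat matters, here is a minimal back-of-the-envelope sketch (Python) of the arithmetic behind "enough to kill every American" style claims. The 2 mg nominal lethal dose is the DEA figure cited above; the seized mass and the population figure are illustrative placeholders rather than numbers from this article, and the math deliberately ignores purity, body weight and tolerance.

```python
# Back-of-the-envelope sketch: nominal "lethal doses" in a seized quantity of fentanyl.
# Assumptions (not from this article): a hypothetical seizure mass and a rough
# U.S. population figure. Purity, body weight and tolerance are ignored on purpose.

NOMINAL_LETHAL_DOSE_MG = 2.0          # DEA figure cited above
US_POPULATION = 332_000_000           # rough estimate, for illustration only

def nominal_lethal_doses(seized_kg: float) -> float:
    """Convert a seized mass in kilograms into nominal 2 mg doses."""
    return seized_kg * 1_000_000 / NOMINAL_LETHAL_DOSE_MG

seized_kg = 1_000.0                   # hypothetical seizure, purely illustrative
doses = nominal_lethal_doses(seized_kg)
print(f"{seized_kg:,.0f} kg ~ {doses:,.0f} nominal doses "
      f"({doses / US_POPULATION:.1f} per U.S. resident)")
# As the toxicologists quoted above note, this says nothing about anyone's
# actual chance of overdosing: the drug would never reach "everybody".
```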
By Maria Ramirez Uribe, PolitiFact staff writer
|
Banking know-how is one of the most important steps on the road to financial stability, and it’s an area that often doesn’t get enough attention in schools. Some districts are taking steps to change that, steps that can help improve student outcomes in the long run.
Here’s a twist that’s new to us: Richardson ISD is opening a student-staffed bank in partnership with Credit Union of Texas at Berkner High School and STEM Academy, and it will be open to the general public, according to an RISD official.
Credit Union of Texas calls these high school banks “SMART Branches,” and while Richardson’s is the first in Dallas County, this isn’t a new program; there are two others in Allen and Little Elm.
In 2021, 5.6% of Texas households didn’t have a bank account, down from 7.7% in 2019 and 9.5% in 2017, according to the Federal Deposit Insurance Corporation. That number can get even lower with education partnerships like these.
The parts of Dallas County south of Interstate 30 aren’t the only places that need better financial education. Roughly 56% of RISD students are considered economically disadvantaged, according to the Texas Education Agency.
It doesn’t help that students will graduate from high school into a world of high costs and difficult financial decisions. The Consumer Price Index for All Urban Consumers rose 0.6% last month after increasing 0.2% in July, according to the U.S. Bureau of Labor Statistics. And Texas has an affordability crisis of its own to worry about.
Navigating all that without a bank account often drives people to manage their money in less secure ways like cashing checks with payday lenders or pawnshops. It leaves some in our communities more vulnerable to exploitation and financial failure.
Learning about finance through a bank in school can help prevent that outcome, and it will teach students in a hands-on, practical way that may even help unbanked parents and other family members. The students working in the branch will be trained by a bank manager to provide key services like opening accounts and cashing checks, an RISD official said. Pairing a physical location with on-the-job education and classroom learning can improve engagement and retention.
The SMART branch comes with a “Pay for Grades” program, which offers monetary rewards to some students who are credit union members for every A and B on their report card, according to a presentation to the school board.
That might sound like a strange way to incentivize opening a bank account and getting good grades, but it’s a way of encouraging students to take charge of their finances early on.
|
21 species have been declared extinct, the U.S. Fish and Wildlife Service says
Twenty-one species, including birds, a bat and several mussels, have been labeled extinct, the U.S. Fish and Wildlife Service said Monday.
The species were previously on the national list of threatened and endangered species.
The extinct species include:
- Eight Hawaiian honeycreeper birds
- Bridled white-eye bird of Guam
- Mariana fruit bat of Guam
- San Marcos gambusia, a one-inch long fish from Texas
- Scioto madtom, a small catfish found exclusively in the Big Darby Creek in Ohio
- Bachman's warbler, a black and yellow songbird found in several Southern states and Cuba
- Eight freshwater mussels: the flat pigtoe, green-blossom pearly mussel, southern acornshell, stirrupshell, tubercled-blossom pearly mussel, turgid-blossom pearly mussel, upland combshell and yellow-blossom pearly mussel
"Our determinations of whether the best available information indicates that a species is extinct included an analysis of the following criteria: detectability of the species, adequacy of survey efforts, and time since last detection," the U.S. Fish and Wildlife Service said.
The U.S. Fish and Wildlife Service first proposed removing these species from the endangered and threatened list in 2021; their last confirmed sightings range from 1899 to 2004.
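The quoted criteria lend themselves to a simple illustration. The sketch below (Python) is not the agency's actual methodology; it just expresses the three factors it names (detectability, adequacy of surveys, and time since last detection) as a hypothetical rule-of-thumb check, with invented threshold values.

```python
# Hypothetical illustration of the three criteria quoted above.
# The thresholds are invented; the Fish and Wildlife Service's real analysis
# is a species-by-species expert review, not a fixed formula.
from dataclasses import dataclass

@dataclass
class SpeciesRecord:
    name: str
    easily_detectable: bool      # criterion 1: detectability of the species
    surveys_adequate: bool       # criterion 2: adequacy of survey efforts
    years_since_last_seen: int   # criterion 3: time since last detection

def likely_extinct(rec: SpeciesRecord, min_years: int = 50) -> bool:
    """Rough rule of thumb: well-surveyed, readily detectable, and long unseen."""
    return (rec.easily_detectable
            and rec.surveys_adequate
            and rec.years_since_last_seen >= min_years)

# Placeholder values, not data about any real listed species.
print(likely_extinct(SpeciesRecord("example species", True, True, 60)))  # True
```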
There are now 650 species that have gone extinct in the U.S., according to the Center for Biological Diversity, which says factors such as climate change, pollution and invasive species contribute to species loss.
Between 2004 and 2022, climate change effects contributed to 39% of amphibian species moving closer to extinction. About 3 billion birds have been lost in North America since 1970, Fish and Wildlife said.
Still, 99% of the animals on the endangered and threatened list have not reached extinction. Fifty-four have been taken off the list due to recovery efforts, while 56 have been downgraded from endangered to threatened, Fish and Wildlife said.
"Federal protection came too late to reverse these species' decline, and it's a wake-up call on the importance of conserving imperiled species before it's too late," Fish and Wildlife Director Martha Williams said. "As we commemorate 50 years of the Endangered Species Act this year, we are reminded of the Act's purpose to be a safety net that stops the journey toward extinction. The ultimate goal is to recover these species, so they no longer need the Act's protection."
The Hawaiian honeycreepers are now extinct due to their forest habitat being cut down for development and agriculture. Mosquitoes, which are not native to Hawaii, also spread avian pox and avian malaria.
Other Hawaiian birds, such as the 'akikiki, are also on the brink of extinction, with as few as five known pairs remaining in the wild, the Center for Biological Diversity said.
According to the Center for Biological Diversity, the Bachman's warbler was also lost to habitat destruction, and the bridled white-eye and Mariana fruit bat were lost to the invasive brown tree snake.
The Mariana fruit bat was also harmed by agriculture and by being overhunted for food. The San Marcos gambusia suffered from water overuse that affected groundwater supply and spring flow. The Scioto madtom was lost to runoff and silt buildup from dams.
Copyright 2023 NPR. To see more, visit https://www.npr.org.
|
The P-51 Mustang was one of WWII’s greatest fighters and one of the best era-adjusted fighter planes of all time. Within the American consciousness it is almost synonymous with WWII.
Decades after WWII and after the P-51 had left service as a fighter, the Mustang briefly “came back from the grave” to serve not in the US Air Force, but in the US Army.
(P-51 Mustang during WWII.)
(F-51D Mustang chase plane follows the Sikorsky YUH-60A prototype during the US Army’s UTTAS competition of 1976, seeking a replacement for the UH-1 Iroquois of Vietnam War fame. Sikorsky’s design defeated Boeing’s YUH-61 to win UTTAS and was developed into the UH-60 Blackhawk of today.) (official US Army photo)
(US Army F-51 Mustang during 1970s experiments with airborne recoilless rifles.)
the Mustang after WWII and the division of American airpower
The P-51 Mustang was a standard fighter in its class at the end of WWII in September 1945 and remained in service.
(P-51 Mustangs at Conn Barracks in the post-WWII American occupation zone of Germany. During WWII this was Flugplatz Schweinfurt of the Luftwaffe, home to a Ju-87 Stuka unit and later a fighter tactics school.) (photo by Peter Randall)
On 18 September 1947, the US Army Air Forces of WWII was split off into an independent fifth branch of the USA’s military, the US Air Force. One of the first changes was altering the aircraft nomenclature series, for example the pursuit (P-) category was rolled into a new overall fighter (F-) category and hence the P-51 Mustang became the F-51 Mustang.
For three days in March 1948, the heads of the armed forces met in Key West, FL to iron out what roles the air wings of the five armed forces would fill. The goal was to prevent inter-service squabbling or duplication of missions. It was agreed that the US Air Force would handle all strategic missions, land-based fighters, reconnaissance missions, ground attack duties, air logistics, and many other duties. It would also assist the US Navy with land-based overwater missions. The US Navy and by extension US Marine Corps would continue carrier-based aviation, shipboard helicopters, and land-based anti-submarine missions. The US Coast Guard’s tiny air wing was so specialized that it had already found its niche.
Finally this left the US Army. It would have a rump air wing, oriented towards “hyper-tactical” roles like the transport helicopters of air cavalry units, scout helicopters, little artillery spotting lightplanes, medevac helicopters, and a small number of utility planes.
This delineation of tasks was called “the Key West Agreement” and was made official Pentagon policy on 1 July 1948. It has remained so ever since.
(An example of US Army fixed-wing aviation after the Key West Agreement is this L-20 Beaver utility plane, here taking off from the WWII-veteran aircraft carrier USS Corregidor (CVU-58) during 1958.) (official US Navy photo)
As for the Mustangs in the US Air Force, by the start of the 1950s they were mostly in Air National Guard units, with the Korean War resulting in some being federalized and again returning to frontline combat service. After the Korean War surviving aircraft were transferred back to ANG units where they were soon replaced by jet fighters.
(“Wham Bam”, a F-51D of the West Virginia Air National Guard, the very last Mustang in the US Air Force.)
The last Mustang user overall – at least until the chase planes described below – was the West Virginia ANG. On 27 January 1957, it retired its last F-51 Mustang. It was the last Mustang in the US Air Force and the last remaining propeller fighter of any type still in use.
There is not much to elaborate about chase planes, as the concept is not complex. A chase plane follows another aircraft, usually a prototype or experimental design, on test flights as an observer. Chase planes are still used today in the 21st century, but during aviation’s golden age they were even more important. Not until the 1950s did test flights have regular access to ground tracking radars, and inflight data recording and telemetry links came even later. If there was a crash, observations of the chase pilot were often the first step in determining what went wrong. They were also useful in real time. For example, if the test pilot reported severe turbulence but the chase pilot didn’t, it might indicate an impending problem on the aircraft rather than weather issues.
There are no rigid criteria for a chase plane. No air force is going to spend money developing a type just for this role, so chase planes were invariably just some other existing design.
That said, there are a few loose needs. The chase plane has to have performance slightly superior to whatever it is chasing; obviously a chase plane slower and less maneuverable than the test aircraft would be of little use. At the same time it can’t be “too superior”; for example nobody would want a supersonic jet fighter as the chase plane for a prototype piston-engined trainer.
Other than that, there is little else needed for the job. During and after WWII, warplanes of the WWII generation were used as chase aircraft during test flights.
(P-40 Warhawk serving as the chase plane for the Douglas XB-19 over Los Angeles, CA during 1941.)
The P-40 Warhawk fighter served various Allied air forces during the first part of WWII. Only one XB-19 strategic bomber was built, as it was never ordered into production. The gigantic lone prototype served as a utility and test plane throughout WWII and then a year afterwards. The XB-19 was bigger than any production WWII bomber and the United States would not try a bomber this large again until the B-36 Peacemaker of the early Cold War era.
(P-51 Mustang serving as the chase plane for the second XP-82 prototype.)
Contrary to popular lore, the F-82 Twin Mustang was not two stock P-51s joined at the factory by a center wing section. It had a different electrical system, a different tail structure, and other alterations. Nonetheless, it was clearly derived from its single-seat cousin. The F-82 Twin Mustang did not see combat during WWII but fought in the Korean War.
(P-61 Black Widow serving as the chase plane for the XB-35 prototype during 1946.)
The P-61 Black Widow was a successful night fighter of WWII, serving on until 1950. Northrop’s XB-35 project started early in WWII however development was protracted and not completed until WWII’s end. By the time the prototype was ready for test flights during 1946, the peacetime military had limited interest in a new piston-engined bomber and no order was placed. The design was reconfigured for jet engines as the YB-49. That type also never entered service.
The United States had a lot of high-quality types in inventory after the end of WWII in 1945 so it might seem logical that they would appear as chase planes for a long time afterwards, but that was not the case. Aeronautics was moving at an incredible pace after WWII and the problem was simple: even the best WWII types were soon just too slow. For example the XP-86 prototype, which would become the F-86 Sabre of Korean War fame, first flew only 25 months after Japan surrendered and the prototype YB-47 Stratojet bomber only ten weeks behind it.
(The third prototype Vought X-F6U jet fighter at Naval Air Test Center Patuxent, MD preparing for a flight in 1948. Its chase plane, a WWII F6F Hellcat, is already airborne and waiting.)
Design of the F6U Pirate carrier-based jet fighter started in December 1944 but the US Navy did not realistically expect it to enter fleet service before WWII ended. Only 33 Pirates were built and they only served 3½ years. Technology was advancing very fast.
One WWII fighter could marginally keep up as a chase plane, but it was one which did not see any active combat during WWII. The P-80 Shooting Star was the first American jet fighter to enter mass production. During WWII a few were deployed to Europe but none saw active combat. Redesignated F-80 after 1947, these saw combat in Korea and also use as chase planes during the first decade or so after WWII.
(F-80 Shooting Star chase plane with the prototype Lockheed XF-90 during 1949.)
A F-80 Shooting Star served as the chase plane for the swept-wing XF-90 during the summer of 1949. The XF-90 was 52% faster than a F-51 Mustang, which was still in use as a fighter at the time. The XF-90 was not selected for service. Off-topic, the XF-90 prototype was used as a target for a 1950s nuclear weapons test in Nevada after the project’s cancellation. During 2003, the smashed-up plane was rediscovered in the desert. With a half-century having decayed the radioactivity, the wreckage was taken to the USAF Museum in Dayton, OH for display as an artifact of 1950s nuclear testing.
the Cheyenne and the Mustang
The story of how the WWII legend P-51 Mustang briefly came “back from the grave” as chase planes began in 1964 with the US Army’s Advanced Aerial Fire Support System competition for a dedicated attack helicopter. This in turn would later intertwine with the post-WWII Key West Agreement which had been struck when the Mustang was still in service as a fighter.
Previously after WWII the US Army had armed some transport helicopters with door guns, then starting in 1962 began to assign “gunship” transport helicopters with no troops aboard to escort air cavalry helicopters. The next step was a dedicated attack type dispensing with a passenger cabin altogether. In what was supposed to be an interim step only, the AH-1 Cobra was quickly designed and put into production, pending a proper winner of the AAFSS competition.
Lockheed’s submission was the Cheyenne. Somewhat out of the scope of this writing, the two-seat AH-56 was a remarkable aircraft, whether judged now in the 2020s or against 1967, when the first prototype flew. It was a compound helicopter, with a vertically-oriented pusher propeller on the extreme rear receiving up to 75% of the engine’s power in forward flight. Much of the lift was provided by 26’7″-span airplane-style wings.
The gunner, who sat in the forward position, had a seat which rotated in unison with either the nose 40mm grenade launcher or belly 30mm autocannon, so he was physically looking the same direction as the weapon’s muzzle. Six hardpoints allowed use of unguided rockets or BGM-71 TOW missiles; for the latter the Cheyenne had night vision and laser rangefinder, with the gunner using a helmet sight. The pilot had a primitive digital “waypoint” feature by which he could lock in a particular location, and then let the navigation system guide him there for an attack.
(A BGM-71 missile fired by the Cheyenne prototype against a target hulk WWII M4 Sherman tank. This is probably painful for military museum curators of the 21st century to see, but the military still had a lot of relic WWII hardware in the late 1960s / early 1970s.) (photo via Lockheed-Martin)
The Cheyenne flew 190 kts in normal flight with a top speed of 212 kts. For 1960s comparison, the top speed of a AH-1 Cobra was 149 kts while the top speed of the then-most common helicopter, the UH-1 Iroquois, was 109 kts.
How the WWII Mustang entered the equation was the need for something to serve as the chase aircraft. Previously when the US Army tested a new helicopter, it simply used another helicopter. Now this would be impossible; the Cheyenne was twice as fast as a Huey and would leave it in the dust.
At the same time, as mentioned earlier, a chase plane is best not “too” much faster, as (especially with helicopters) a large portion of the test flight program is not full-bore speed runs but rather handling crosswinds, transitions to and from hover, etc.
For these reasons the US Army took the unusual step of resurrecting a WWII fighter for chase plane duties more than two decades after WWII had ended. The US Army procured three previously demilitarized Mustangs: a (basically) “stock” F-51D and two Cavalier Mustangs.
the three aircraft
The first Mustang obtained by the US Army was originally a P-51D, serial # 44-72990, ordered under the 1944 budget and built in 1945. The D model was the most common Mustang of WWII, with 8,102 built or roughly half of all versions combined. After WWII this particular plane went to the Royal Canadian Air Force, which retired it in 1959. Sold as surplus to an American buyer, it was refurbished for recreational flying with guns removed and a second “rumble seat” in the cockpit. This plane was acquired commercially by the US Army in 1967 specifically as a chase plane for the AH-56.
(The basically “stock” F-51D after being acquired by the US Army in 1967. Behind it is a U-8 Seminole utility plane, and behind that a CH-47 Chinook helicopter.) (photo via mustangsmustangs.com website)
This F-51D was considered a success in the chase plane role, and two more Mustangs were acquired in 1967 – 1968. These were Cavalier Mustangs, which are described below.
(The two Cavalier Mustangs acquired as Cheyenne chase planes.)
The Mustang was not the only type suitable for being a chase plane to the Cheyenne. For example the T-37 Tweet, a jet trainer of the era, had a flight envelope roughly the same as the WWII propeller fighter. It cruised at 310 kts and topped out at 369 kts and was a decently-maneuverable plane already in military service.
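Those published speeds make the screening logic easy to see. The sketch below (Python) applies a rough rule of thumb, namely that a chase aircraft should comfortably exceed the test aircraft's top speed without dwarfing it, to the figures quoted in this article. The acceptable-margin band is an invented illustration, not an official US Army criterion.

```python
# Rough chase-plane screening using top speeds quoted in this article (knots).
# The "acceptable margin" band is invented for illustration; the US Army never
# published such a formula.
CHEYENNE_TOP_KTS = 212

candidates = {
    "UH-1 Iroquois": 109,    # then the most common US Army helicopter
    "AH-1 Cobra":    149,
    "T-37 Tweet":    369,    # jet trainer; envelope roughly matches the Mustang's
}

def verdict(chase_top: float, test_top: float,
            lo: float = 1.05, hi: float = 2.0) -> str:
    """Faster than the test aircraft, but not absurdly so."""
    ratio = chase_top / test_top
    if ratio < lo:
        return "too slow to keep station"
    if ratio > hi:
        return "workable, but a lot of surplus performance"
    return "comfortable margin"

for name, top in candidates.items():
    print(f"{name:>14}: {top / CHEYENNE_TOP_KTS:.2f}x -> {verdict(top, CHEYENNE_TOP_KTS)}")
```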
Why exactly the US Army took the unorthodox route of using a WWII fighter plane in 1967 has been lost to time, and it is possible there really wasn’t one single reason. The Mustangs, either the modified F-51D or the two Cavaliers, had a second seat for a photographer, were as fast and as maneuverable as the Cheyenne, had no safety issues, and were not a big-dollar procurement. All things being equal with more contemporary types, something had to be selected and the Mustang ended up being it.
Perhaps another factor, for the second and third Mustangs, was that this was during a high point of interest in the Cavalier aircraft company within the United States government.
Cavalier was originally Trans-Florida Aviation Inc., a company founded in 1957 as the last F-51s left Air National Guard service. CEO David Lindsay’s vision was that the glut of Mustang airframes being disposed of by the Pentagon could be converted for private use; specifically, that business executives might buy them for both recreation and business travel. Trans-Florida took surplus F-51s and gutted them, zeroing out the airframe fatigue life and rebuilding the engine. The WWII gunsight and other remaining combat features were removed. Plush leather seating was fitted, and a passenger seat was added behind the pilot. A luggage compartment was added. To keep pace with FAA regulations, a new radio was installed, as was a Regency civilian flight transponder. Two removable fuel tanks were mounted on the wingtips. The tail was replaced by a new design 1’2″ taller.
(A F-51 Mustang being “cavaliered” at Trans-Florida’s Sarasota, FL factory during October 1961.) (photo via swissmustangs.ch website)
Mr. Lindsay’s vision for the “Executive Mustang” never caught on and sales were very poor. The concept was rebranded as the “Cavalier Mustang”, oriented towards sports flying, with only a handful more being sold during the early 1960s.
Things looked as if they might be turning around in the late 1960s when the Department of Defense became interested in Cavalier Mustangs. The US Air Force had no interest in them for itself; however, it considered them a good fit for supporting small, third-world allied air arms. A lot of these small air forces were still flying WWII-surplus warplanes at the end of the 1950s. For the 1960s counter-insurgency (COIN) strike role, the USAF reasoned that rather than wean these air arms onto low-performance jets, it might be preferable to simply give them another, updated, WWII piston-engined type. The company had proven itself competent with a contract to overhaul the Dominican Republic air force’s F-51 Mustangs.
Trans-Florida renamed itself Cavalier Aircraft Corporation. Three versions were made available: the Cavalier F-51 which was a rebuilt P-51D airframe, the Cavalier Mustang II which was the same but with weapons restored, and the exotic Cavalier Mustang III which used a Rolls-Royce Dart turboprop engine.
The two planes selected by the US Army as chase aircraft were Cavalier F-51s, the simplest of the three types. The Department of Defense considered the Cavalier rebuild so extensive that the aircraft were deemed “new” airplanes reusing some old components, and as such, the two received new serial numbers of the fiscal year 1968 series, 21 years after actual P-51 production had ended.
the Cheyenne test program
The AH-56 evaluation had two phases. The first ran from the first flight in September 1967 until April 1969. During this phase, the Cheyenne clearly demonstrated itself as a remarkable helicopter but was also found deficient in several key areas, mainly the flight controls, empty weight, and pilot workload. The US Army issued a “cure notice” which cancelled Lockheed’s planned production contract but did not altogether cancel the AH-56, leaving it an open developmental contract.
Lockheed was still confident in the design and worked out the problems. Test flights resumed later in 1969.
(One of the Cavalier chase planes shortly after delivery in November 1968.)
The Mustangs, the F-51D and two Cavaliers, worked exceptionally well in their role as chase planes for the Cheyenne. During 1968, the US Army issued a new pilot’s manual for the F-51 tailored to the chase plane role. It will almost certainly remain the last American military manual for operating the Mustang.
Both the modified F-51D and the two Cavalier rebuilds performed equally well. If there was a favorite between them it was, surprisingly, the F-51D, which ground mechanics regarded as less demanding to maintain.
(The modified “stock” F-51D. Warplane serials of the time used the last two digits of the contract year as their first two numerals, with the first digit truncated off. During the early post-WWII years and Korean War era this was no problem as aviation was advancing so rapidly, it was rare for a plane to stay in service more than a decade. As some WWII-generation planes lingered past the mid-1950s longer than expected, this presented an issue as a serial number might be repeated every 10 years. A prefix “0-” was added, the humor being that the 0 was really an O meaning “Old”.)
(One of the two Cavaliers with a UH-1 Iroquois. Like the Mustang had been during WWII, the Huey is often associated as the “classical” Vietnam War aircraft.)
(One of the two Cavaliers with a CH-54 Tarhe behind it.)
The demise of the Cheyenne is outside the scope of this writing and even today remains a debated topic. The final round of test flights was during 1972. Only two deficiencies were noted: an issue with the rigid rotors stalling and mediocre performance in bad weather.
The Cheyenne’s biggest hurdles were not mechanical but rather political. Within the US Army, the AH-56’s abilities were certainly wanted, but there was hesitation over the Cheyenne’s complexity: US Army field mechanics would be servicing something basically on the same technology tier as high-performance US Air Force aircraft. At the same time, the supposedly interim AH-1 Cobra performed very well in Vietnam.
An unrelated, and probably bigger, problem was a simmering dislike of the project within the US Air Force. Since the post-WWII Key West Agreement, there had previously been little inter-service squabbling between the US Air Force and the US Army.
During 1966 US Air Force Gen. J.P. McConnell and US Army Gen. Harold Johnson held an informal meeting, sort of a “20 years on…” follow-up to the Key West Agreement, to look at how airpower had evolved. It was agreed that if the US Army abandoned interest in regaining fixed-wing strike planes, the US Air Force would be fine with it flying armed helicopters.
However now the Cheyenne was such an unexpectedly advanced thing that the USAF began to have second thoughts. The Cheyenne seemed to intrude upon the “A-X” attack plane concept the USAF was seeking funding for. The US Air Force lobbied Congress that the US Army was encroaching on its ground attack mission and that the Cheyenne should be cancelled.
In 1972 the US Army started the Advanced Attack Helicopter competition for an eventual replacement for the AH-1 Cobra. Many of the AAFSS competition’s criteria overlapped those of the new AAH competition, and that may have been intentional. At the same time the Pentagon added a two-engine requirement to AAH, eliminating the single-engine Cheyenne from “re-submission” into the new competition. Perhaps the final nail in the coffin was an estimate that production Cheyennes would cost $500,000 more than originally envisioned, giving the Department of Defense a politically friendly reason to end the project. The Cheyenne was permanently cancelled that year.
The Mustangs had proven themselves as good chase aircraft and remained in service after the Cheyenne’s cancellation.
(The US Army Aviation Engineering Flight Activity was located on the grounds of Edwards AFB, CA. This 1977 photo shows an interesting lineup, with the WWII Mustang sharing apron space with aircraft of the Vietnam War and early post-Vietnam eras.) (photo via ARC online forum)
(One of the later helicopters the Mustangs served as chase planes for was the Hughes YAH-64A prototype which eventually became the AH-64 Apache.)
By the late 1970s the US Army had obtained great service from this surprising choice of a chase plane, but was ready to move on. A pair of T-28 Trojans, a propeller trainer of the mid-Cold War era, replaced them. With one of the Cavaliers already detailed to the recoilless rifle trials described later below, the last to go was the modified “stock” F-51D.
(photo by Larry Kline)
This F-51D made its last US Army flight on 7 February 1978. It was sent for museum display to Ft. Rucker, AL. With that the P-51’s career with the US Army ended, 38 years after it had started and three decades after the US Army gave up fixed-wing combat planes.
(North American Aviation’s NA-73X private-design prototype which eventually became the P-51 Mustang.)
the recoilless rifle trials
One of the “chase Mustangs” had another quite interesting mission after the Cheyenne’s cancellation.
During the summer of 1974, the Department of the Navy (the US Navy and US Marine Corps) tested a concept for a low-cost air-to-ground weapon to arm FAC/TacRec (forward air controller / tactical reconnaissance) types, mainly the USMC’s OV-10 Bronco.
Warplanes like the Bronco normally “target-find” for artillery or airstrike assets, perform battlefield reconnaissance, etc. They are not really intended as strike planes in their own right, but based upon experiences during the Vietnam War the US Marine Corps considered that giving them a decent air-to-ground ability might be beneficial. “Targets of opportunity” like an enemy tank unit vulnerable in road transit could be immediately attacked while an airstrike or howitzer fire was made ready. The plane might also pick off isolated targets like a scouting armored car not worth its own strike package, or give emergency help to friendly infantry on the ground at risk of being overrun.
The problem was suitable ordnance. Free-fall bombs had significant effect but in the tiny numbers one Bronco might carry, were unlikely to score first-pass hits. Once dropped, they were gone. Unguided rocket pods had a better area effect, but were unlikely to disable or destroy the latest Soviet tanks. And again, once the pod was fired there would be no second chance. First generation smart bombs like the Bullpup and Maverick were in service but defeated the low-cost objective, or were too big, or both.
It was theorized that a recoilless rifle mounted on a plane might be best: these had the punch to knock out a tank, were decently accurate, and could be repeatedly fired. The USMC had previously considered using them to deliver “willy petes” (white phosphorus target-marking rounds) but now wanted to use them as proper weapons in their own right.
(The “flying artillery” concept was by no means new in 1974. During WWII, Beech designed the XA-34 Grizzly, centered on a T15E1 75mm autocannon in the nose. Both the T15E1 and the XA-34 were very successful; however, the Grizzly shared components with the B-29 Superfortress, which was a higher priority. When production bottlenecks finally ended, Japan was near defeat anyway and there was no interest in fielding yet another new warplane type. The XA-34 was cancelled.) (photo via Old Machine Press)
Leonardo da Vinci is often credited with the original idea for a recoilless rifle but the concept wasn’t fully perfected and mass-produced until WWII.
(US Army paratroopers engage German forces with a M18 during 1945. One of the earliest and smallest recoilless rifles, the 57mm M18 was designed for tripod use but was light enough to be fired like a traditional rifle as seen here.)
A recoilless rifle is not a bazooka-type rocket launcher, although it often fills the same tactical niche. A recoilless rifle is a true firearm with a chamber and rifled barrel, firing a gun-configuration cartridge (casing, propellant, bullet). The difference is that recoilless rifle rounds have a perforated casing, which expels combustion laterally into a bellmouth at the rear end of the gun, countering the force exerted on the departing shell going the other way and negating its felt recoil.
By this means, a relatively lightweight, man-portable weapon can fire ammunition of calibers equal to those of towed field artillery. The disadvantages are that the projectile’s range is significantly shorter, and that firing produces a back-blast danger zone along with significant noise and smoke.
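The “recoilless” part is simply a momentum balance: the gas vented out the rear carries away roughly as much momentum as the projectile takes forward. A minimal sketch, with invented round and gas figures purely for illustration (real M40/M344 ballistics are not given in this article):

```python
# Minimal momentum-balance sketch of why a recoilless rifle has (nearly) no recoil.
# All numbers are invented placeholders, not real M40 / M344 ballistics.

def recoil_impulse(m_proj_kg, v_proj_ms, m_gas_kg, v_gas_ms):
    """Net rearward impulse on the gun: projectile momentum minus vented-gas momentum."""
    return m_proj_kg * v_proj_ms - m_gas_kg * v_gas_ms

# Closed-breech gun: nothing vents rearward, so the gun soaks up all the momentum.
print("closed breech :", recoil_impulse(8.0, 500.0, 0.0, 0.0), "kg*m/s")

# Recoilless layout: propellant gas blasted out the rear venturi cancels most of it.
print("recoilless    :", recoil_impulse(8.0, 500.0, 2.5, 1600.0), "kg*m/s")
```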
(The M344 round used by the Mustang during the 1974 experiments. It is actually 105mm but labeled “106mm” to avoid confusion with incompatible 105mm ammunition.)
After WWII recoilless rifles had been repeatedly considered, and rejected, as ordnance for warplanes.
(Harvey Aluminum Company designed a lightweight recoilless rifle which could be semi-automatically reloaded. It was not accepted for production.)
The biggest stumbling block was that recoilless rifles are hand-loaded, with the breech at the extreme rear of the gun. Unlike a machine gun or autocannon, there was no practical way to do this remotely and automatically from outside an aircraft. The solution developed was an internal magazine in the center of the plane’s fuselage, feeding downwards. A hydraulic “shuttle” would strip a round out of the magazine, carry it above the gun on an arm trailing behind the breech, lower it and chamber it, then catch the bellmouth section and swing it upwards, putting the rifle into battery.
There were lesser problems with the concept, and these are what the F-51 Mustang trials aimed to explore. A recoilless rifle produces a significant backblast, and it was unknown how this would affect airflow over the plane’s tail. A recoilless rifle’s round also flies a relatively flat trajectory; as a weapon designed to be static-fired on the ground, it was unclear how it would perform when fired from a platform already moving through three dimensions.
As the US Army had no further use for all three chase Mustangs by 1974, it loaned one of the Cavaliers to Naval Air Weapons Station China Lake, CA for US Marine Corps use. Watervliet Arsenal, NY provided two M40 105mm recoilless rifles and ammunition. The Mustang was flown by LtCol John Pipa of the US Marine Corps but remained painted in US Army colors.
For certain, there was zero interest in actually fielding this system on Mustangs. The Cavalier Mustang was just an available airframe. The M40s were mounted on the Mustang’s wingtips, replacing the Cavalier fuel tanks. The elaborate auto-loading system was not used, and the guns were hand-loaded on the ground for single-shot use.
During the spring of 1974, the idea was tested against two target hulk trucks in the desert. Firing took place with the WWII fighter in a 22º dive at 250 kts. The slant range to the trucks was 5,072 yds, roughly 2.9 miles line-of-sight for the pilot.
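For a sense of the geometry, here is a short sketch using only the figures above; it assumes the 22º dive angle equals the depression of the sightline to the target, which the source does not actually state.

```python
# Geometry of the firing pass, using the figures quoted above.
# Simplifying assumption: the 22-degree dive angle equals the depression of the
# line of sight to the target, which is not stated in the source.
import math

slant_yd = 5_072
dive_deg = 22

slant_mi = slant_yd * 3 / 5_280                        # yards -> statute miles
horiz_yd = slant_yd * math.cos(math.radians(dive_deg))
drop_ft  = slant_yd * 3 * math.sin(math.radians(dive_deg))

print(f"slant range : {slant_mi:.2f} statute miles")   # ~2.88 mi
print(f"ground range: {horiz_yd:,.0f} yards")          # ~4,700 yd
print(f"height above target at trigger pull: {drop_ft:,.0f} ft")  # ~5,700 ft
```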
Mid-June, a final determining test series was run.
Out of eight shots from the Mustang, one direct hit on a truck was scored. The other seven rounds landed in a tight grouping. The US Marine Corps concluded that the marginal inaccuracy was likely due to flexing of the WWII wings on the Mustang, which would not be present on the rigid under-fuselage mounting envisioned for production use.
The pilot reported that the backblast of the M40s did not disrupt airflow around the tail and in fact, the only noticeable effect was the extremely loud noise produced by all recoilless rifles.
The concept was also investigated by mounting non-firing weapons on an OV-10 Bronco and a A-4 Skyhawk, plus a full firing set-up on a bizarre arrangement with a Bronco fuselage / cockpit section suspended from a gantry. That was as far as it went. Like a number of other early post-Vietnam War projects, Congress had no interest in funding it and the idea died.
The US Army did not want the test Cavalier Mustang back in 1974 and it was sold to a civilian owner.
In both the helicopter test flights and the gunnery test, the reborn Mustangs were an operational success and a budgetary one as well. The US Navy’s budget for the whole recoilless rifle project was $95,000 ($579,251 in 2023 money) which would have easily been exceeded by the flight-hour costs on a modern jet alone.
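As a quick sanity check on that inflation adjustment (the exact index the author used is not stated), the implied multiplier is about 6.1x:

```python
# Implied inflation multiplier behind the figures quoted above.
budget_1974 = 95_000
budget_2023 = 579_251
print(f"implied 1974 -> 2023 multiplier: {budget_2023 / budget_1974:.2f}x")  # ~6.10x
```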
Somewhat off-topic, the head-butting between the US Army and US Air Force over the Cheyenne was the last of its type. When the AH-64 Apache helicopter and A-10 Thunderbolt II jet respectively entered use, both the US Army and US Air Force stated that overlaps between them were unavoidable and beneficial. In December 2022, the US Army picked Bell’s V-280 Valor winged tiltrotor as the replacement for the UH-60 and other helicopters, and again the US Air Force did not object. The Key West Agreement, struck 3 years after WWII, still seems to be good policy in the 21st century.
(photo by Larry Kline)
When the F-51D chase plane was retired in February 1978, perhaps there was a bit of silent satisfaction that the P-51 Mustang had ridden off into the sunset wearing US Army green, as it had when it began during WWII.
8 thoughts on “the last Mustangs in the US Army”
The idea of arming a Bronco with a M40 recoilless rifle baffled me… At this pace, we could have ended seeing a B-52 with a Long Tom under the wingtips.
the OV-10D they used during Desert Storm did pretty well, it could carry a variety of more sensible weapons and could also carry AIM-9 Sidewinders
all of this is very interesting……did not know that the P-(F) 51 had such a long useful career……I have just seen a lot of them in private hands, and as racers….best..
Reminds me of the story of spitfires being used in place of possible Indonesian P51s against Lightnings – https://theaviationgeekclub.com/lightning-vs-spitfire-why-the-iconic-mach-2-interceptor-struggled-to-win-mock-dogfights-against-the-legendary-wwii-plane-during-the-trial-flights-conducted-by-the-raf-between-the-two-types/amp/
Very good article.
I was at Ft Wolters, MWL, when one of the Cavalier Mustangs landed for a couple of days. The Army colonel pilot had a son going through helicopter flight school there and he was able to visit with him. A/C was in route to Ft Rucker for the AH 56 Cheyenne program. The second seat had a spot for his crew chief, an Army Master Sergeant that was a P 51 mechanic at one time. I got to climb up and get a good look inside. Great day for a young pilot. (About 1968-69 ish)
I had an uncle that flew the A 36 Apache ( P 51A) in WW 2.
Great article, in particular the recoilless rocket part. Reminds me of the 6 bazookas that were installed on a WW2 artillery observation plane.
As a side note, before WWII the Soviets developed recoilless rifle (by Leonid Kurchevsky) and tested those on the Tupolev I-12 and in service on the Grigorovich I-Z.
|
<urn:uuid:16af7413-2af5-4c09-a97f-a9715aaecb3c>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948620.60/warc/CC-MAIN-20230327092225-20230327122225-00266.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9759898781776428,
"pii_count": 0,
"score": 2.6875,
"token_count": 7527,
"url": "https://wwiiafterwwii.wordpress.com/2023/01/21/the-last-mustangs-in-the-us-army/"
}
|
The P-51 Mustang was one of WWII’s greatest fighters and one of the best era-adjusted fighter planes of all time. Within the American consciousness it is almost synonymous with WWII.
Decades after WWII and after the P-51 had left service as a fighter, the Mustang briefly “came back from the grave” to serve not in the US Air Force, but in the US Army.
(P-51 Mustang during WWII.)
(F-51D Mustang chase plane follows the Sikorsky YUH-60A prototype during the US Army’s UTTAS competition of 1976, seeking a replacement for the UH-1 Iroquois of Vietnam War fame. Sikorsky’s design defeated Boeing’s YUH-61 to win UTTAS and was developed into the UH-60 Blackhawk of today.) (official US Army photo)
(US Army F-51 Mustang during 1970s experiments with airborne recoilless rifles.)
the Mustang after WWII and the division of American airpower
The P-51 Mustang was a standard fighter in its class at the end of WWII in September 1945 and remained in service.
(P-51 Mustangs at Conn Barracks in the post-WWII American occupation zone of Germany. During WWII this was Flugplatz Schweinfurt of the Luftwaffe, home to a Ju-87 Stuka unit and later a fighter tactics school.) (photo by Peter Randall)
On 18 September 1947, the US Army Air Forces of WWII was split off into an independent fifth branch of the USA’s military, the US Air Force. One of the first changes was altering the aircraft nomenclature series, for example the pursuit (P-) category was rolled into a new overall fighter (F-) category and hence the P-51 Mustang became the F-51 Mustang.
For three days in March 1948, the heads of the armed forces met in Key West, FL to iron out what roles the air wings of the five armed forces would fill. The goal was to prevent inter-service squabbling or duplication of missions. It was agreed that the US Air Force would handle all strategic missions, land-based fighters, reconnaissance missions, ground attack duties, air logistics, and many other duties. It would also assist the US Navy with land-based overwater missions. The US Navy and by extension US Marine Corps would continue carrier
|
-based aviation, shipboard helicopters, and land-based anti-submarine missions. The US Coast Guard’s tiny air wing was so specialized that it had already found its niche.
Finally this left the US Army. It would have a rump air wing, oriented towards “hyper-tactical” roles like the transport helicopters of air cavalry units, scout helicopters, little artillery spotting lightplanes, medevac helicopters, and a small number of utility planes.
This delineation of tasks was called “the Key West Agreement” and was made official Pentagon policy on 1 July 1948. It has remained so ever since.
(An example of US Army fixed-wing aviation after the Key West Agreement is this L-20 Beaver utility plane, here taking off from the WWII-veteran aircraft carrier USS Corregidor (CVU-58) during 1958.) (official US Navy photo)
As for the Mustangs in the US Air Force, by the start of the 1950s they were mostly in Air National Guard units, with the Korean War resulting in some being federalized and again returning to frontline combat service. After the Korean War surviving aircraft were transferred back to ANG units where they were soon replaced by jet fighters.
(“Wham Bam”, a F-51D of the West Virginia Air National Guard, the very last Mustang in the US Air Force.)
The last Mustang user overall – at least until the chase planes described below – was the West Virginia ANG. On 27 January 1957, it retired its last F-51 Mustang. It was the last Mustang in the US Air Force and the last remaining propeller fighter of any type still in use.
There in not much to elaborate about chase planes, as the concept is not complex. A chase plane follows another aircraft, usually a prototype or experimental design, on test flights as an observer. Chase planes are still used today in the 21st century but during aviation’s golden age, were even more important. Not until the 1950s did test flights have regular access to ground tracking radars, and inflight data recording and telemetry links came even later. If there was a crash, observations of the chase pilot were often the first step in determining what went wrong. They were also useful in real-time. For example if the test pilot reported severe turbulence but the chase pilot didn’t, it might indicate an impending problem on the aircraft rather than weather issues.
There is no rigid criteria for a chase plane. No air force is going to spend money developing a type just for this role, so they were invariably just some other existing design.
That said, there are a few loose needs. The chase plane has to have performance slightly superior to whatever it is chasing; obviously a chase plane slower and less maneuverable than the test aircraft would be of little use. At the same time it can’t be “too superior”; for example nobody would want a supersonic jet fighter as the chase plane for a prototype piston-engined trainer.
Other than that, there is little else needed for the job. During and after WWII, warplanes of the WWII generation were used as chase aircraft during test flights.
(P-40 Warhawk serving as the chase plane for the Douglas XB-19 over Los Angeles, CA during 1941.)
The P-40 Warhawk fighter served various Allied air forces during the first part of WWII. Only one XB-19 strategic bomber was built, as it was never ordered into production. The gigantic lone prototype served as a utility and test plane throughout WWII and then a year afterwards. The XB-19 was bigger than any production WWII bomber and the United States would not try a bomber this large again until the B-36 Peacemaker of the early Cold War era.
(P-51 Mustang serving as the chase plane for the second XP-82 prototype.)
Contrary to popular lore, the F-82 Twin Mustang was not two stock P-51s joined at the factory by a center wing section. It had a different electrical system, different tail structure, and other alterations. None the less it was clearly derived from its single-seat cousin. The F-82 Twin Mustang did not see combat during WWII but fought in the Korean War.
(P-61 Black Widow serving as the chase plane for the XB-35 prototype during 1946.)
The P-61 Black Widow was a successful night fighter of WWII, serving on until 1950. Northrop’s XB-35 project started early in WWII however development was protracted and not completed until WWII’s end. By the time the prototype was ready for test flights during 1946, the peacetime military had limited interest in a new piston-engined bomber and no order was placed. The design was reconfigured for jet engines as the YB-49. That type also never entered service.
The United States had a lot of high-quality types in inventory after the end of WWII in 1945 so it might seem logical that they would appear as chase planes for a long time afterwards, but that was not the case. Aeronautics was moving at an incredible pace after WWII and the problem was simple: even the best WWII types were soon just too slow. For example the XP-86 prototype, which would become the F-86 Sabre of Korean War fame, first flew only 25 months after Japan surrendered and the prototype YB-47 Stratojet bomber only ten weeks behind it.
(The third prototype Vought X-F6U jet fighter at Naval Air Test Center Patuxent, MD preparing for a flight in 1948. Its chase plane, a WWII F6F Hellcat, is already airborne and waiting.)
Design of the F6U Pirate carrier-based jet fighter started in December 1944 but the US Navy did not realistically expect it to enter fleet service before WWII ended. Only 33 Pirates were built and they only served 3½ years. Technology was advancing very fast.
One WWII fighter could marginally keep up as a chase plane, but it was one which did not see any active combat during WWII. The P-80 Shooting Star was the first American jet fighter to enter mass production. During WWII a few were deployed to Europe but none saw active combat. Redesignated F-80 after 1947, these saw combat in Korea and also use as chase planes during the first decade or so after WWII.
(F-80 Shooting Star chase plane with the prototype Lockheed XF-90 during 1949.)
A F-80 Shooting Star served as the chase plane for the swept-wing XF-90 during the summer of 1949. The XF-90 was 52% faster than a F-51 Mustang, which was still in use as a fighter at the time. The XF-90 was not selected for service. Off-topic, the XF-90 prototype was used as a target for a 1950s nuclear weapons test in Nevada after the project’s cancellation. During 2003, the smashed-up plane was rediscovered in the desert. With a half-century having decayed the radioactivity, the wreckage was taken to the USAF Museum in Dayton, OH for display as an artifact of 1950s nuclear testing.
the Cheyenne and the Mustang
The story of how the WWII legend P-51 Mustang briefly came “back from the grave” as chase planes began in 1964 with the US Army’s Advanced Aerial Fire Support System competition for a dedicated attack helicopter. This in turn would later intertwine with the post-WWII Key West Agreement which had been struck when the Mustang was still in service as a fighter.
Previously after WWII the US Army had armed some transport helicopters with door guns, then starting in 1962 began to assign “gunship” transport helicopters with no troops aboard to escort air cavalry helicopters. The next step was a dedicated attack type dispensing with a passenger cabin altogether. In what was supposed to be an interim step only, the AH-1 Cobra was quickly designed and put into production, pending a proper winner of the AAFSS competition.
Lockheed’s submission was the Cheyenne. Somewhat outside the scope of this writing, the two-seat AH-56 was a remarkable aircraft, even by the standards of the 2020s and certainly for 1967, when the first prototype flew. It was a compound helicopter, with a vertically-oriented pusher propeller on the extreme rear receiving up to 75% of the engine’s power in forward flight. Much of the lift was provided by 26’7″-span airplane-style wings.
The gunner, who sat in the forward position, had a seat which rotated in unison with either the nose 40mm grenade launcher or the belly 30mm autocannon, so he was physically looking in the same direction as the weapon’s muzzle. Six hardpoints allowed use of unguided rockets or BGM-71 TOW missiles; for the latter the Cheyenne had a night vision system and a laser rangefinder, with the gunner using a helmet sight. The pilot had a primitive digital “waypoint” feature by which he could lock in a particular location and then let the navigation system guide him there for an attack.
(A BGM-71 missile fired by the Cheyenne prototype against a target hulk WWII M4 Sherman tank. This is probably painful for military museum curators of the 21st century to see, but the military still had a lot of relic WWII hardware in the late 1960s / early 1970s.) (photo via Lockheed-Martin)
The Cheyenne flew 190 kts in normal flight with a top speed of 212 kts. For 1960s comparison, the top speed of an AH-1 Cobra was 149 kts while the top speed of the then-most common helicopter, the UH-1 Iroquois, was 109 kts.
How the WWII Mustang entered the equation was the need for something to serve as the chase aircraft. Previously when the US Army tested a new helicopter, it simply used another helicopter. Now this would be impossible; the Cheyenne was twice as fast as a Huey and would leave it in the dust.
At the same time, as mentioned earlier, a chase plane is best not “too” much faster, as (especially with helicopters) a large portion of the test flight program is not full-bore speed runs but rather handling crosswinds, transitions to and from hover, etc.
For these reasons the US Army took the unusual step of resurrecting a WWII fighter for chase plane duties more than two decades after WWII had ended. The US Army procured three previously demilitarized Mustangs: a (basically) “stock” F-51D and two Cavalier Mustangs.
the three aircraft
The first Mustang obtained by the US Army was originally a P-51D, serial # 44-72990, ordered under the 1944 budget and built in 1945. The D model was the most common Mustang of WWII, with 8,102 built or roughly half of all versions combined. After WWII this particular plane went to the Royal Canadian Air Force, which retired it in 1959. Sold as surplus to an American buyer, it was refurbished for recreational flying with guns removed and a second “rumble seat” in the cockpit. This plane was acquired commercially by the US Army in 1967 specifically as a chase plane for the AH-56.
(The basically “stock” F-51D after being acquired by the US Army in 1967. Behind it is a U-8 Seminole utility plane, and behind that a CH-47 Chinook helicopter.) (photo via mustangsmustangs.com website)
After the US Army received this F-51D, it proved a success in the chase plane role, and two more Mustangs were acquired in 1967 – 1968. These were Cavalier Mustangs, which are described below.
(The two Cavalier Mustangs acquired as Cheyenne chase planes.)
The Mustang was not the only type suitable for being a chase plane to the Cheyenne. For example, the T-37 Tweet, a jet trainer of the era, had a flight envelope roughly the same as the WWII propeller fighter: it cruised at 310 kts, topped out at 369 kts, and was a decently maneuverable plane already in military service.
Why exactly the US Army took the unorthodox route of using a WWII fighter plane in 1967 has been lost to time, and it is possible there really wasn’t one single reason. The Mustangs, either the modified F-51D or the two Cavaliers, had a second seat for a photographer, were as fast and as maneuverable as the Cheyenne, had no safety issues, and were not a big-dollar procurement. All things being equal with more contemporary types, something had to be selected and the Mustang ended up being it.
Perhaps another factor, for the second and third Mustangs, was that this was during a high point of interest in the Cavalier aircraft company within the United States government.
Cavalier was originally Trans-Florida Aviation Inc., a company founded in 1957 as the last F-51s left Air National Guard service. CEO David Lindsay’s vision was that the glut of Mustang airframes being disposed of by the Pentagon could be converted for private use; specifically, that business executives might buy them and use them both for recreation and business travel. Trans-Florida took surplus F-51s and gutted them, zeroing out the airframe fatigue life and rebuilding the engine. The WWII gunsight and other remaining combat features were removed. Plush leather seating was installed, along with a passenger seat behind the pilot. A luggage compartment was added. To keep pace with FAA regulations, a new radio was installed, as was a Regency civilian transponder. Two removable fuel tanks were mounted on the wingtips. The tail was replaced by a new design 1’2″ taller.
(A F-51 Mustang being “cavaliered” at Trans-Florida’s Sarasota, FL factory during October 1961.) (photo via swissmustangs.ch website)
Mr. Lindsay’s vision for the “Executive Mustang” never caught on and sales were very poor. The concept was rebranded as the “Cavalier Mustang”, oriented towards sports flying, with only a handful more being sold during the early 1960s.
Things looked to be turning around in the late 1960s when the Department of Defense became interested in Cavalier Mustangs. The US Air Force had no interest in them for itself; however, they were considered a good fit for supporting small, third-world allied air arms. Many of these small air forces were still flying WWII-surplus warplanes at the end of the 1950s. For the 1960s counter-insurgency (COIN) strike role, the USAF reasoned that rather than wean these air arms onto low-performance jets, it would be preferable to simply give them another (updated) WWII piston-engined type. The company had already proven itself competent with a contract to overhaul the Dominican Republic air force’s F-51 Mustangs.
Trans-Florida renamed itself Cavalier Aircraft Corporation. Three versions were made available: the Cavalier F-51 which was a rebuilt P-51D airframe, the Cavalier Mustang II which was the same but with weapons restored, and the exotic Cavalier Mustang III which used a Rolls-Royce Dart turboprop engine.
The two planes selected by the US Army as chase aircraft were Cavalier F-51s, the simplest of the three types. The Department of Defense considered the Cavalier rebuild so extensive that the aircraft were deemed “new” airplanes reusing some old components; as such, the two received new serial numbers in the fiscal year 1968 series, 21 years after actual P-51 production had ended.
the Cheyenne test program
The AH-56 evaluation had two phases. The first ran from the first flight in September 1967 until April 1969. During this phase, the Cheyenne clearly demonstrated itself as a remarkable helicopter but was also found deficient in several key areas, mainly the flight controls, empty weight, and pilot workload. The US Army issued a “cure notice” which cancelled Lockheed’s planned production contract but did not altogether cancel the AH-56, leaving it an open developmental contract.
|
Bullying has become a major concern, especially in schools and other places of work and learning. In this article, you will learn how to curb bullying in schools.
But before we dive in, let’s understand some basic concepts;
What is Bullying?
Bullying is a pattern of unsafe behavior that is usually intentional. It also involves an imbalance of power or strength. It typically includes actions or words that are meant to harm, intimidate, or control another person. Bullying can manifest in various forms, such as physical, verbal, social, or cyberbullying. It often occurs repeatedly over time, creating a hostile and distressing environment for the victim. Common examples of bullying include name-calling, teasing, spreading rumors, physical aggression, exclusion, and online harassment. It’s important to address bullying to protect the well-being and mental health of individuals.
Bullying is also when people intentionally use words or actions against someone or a group of people to cause distress. These actions usually come from people who have more influence or power over someone else, or who want to make the victim feel less powerful or helpless.
Bullying is not the same as conflict between people or disliking someone. People might even bully each other because of conflict or dislike.
The sort of repeated behaviour that can be a form of bullying includes:
- Keeping someone out of a group (online or offline)
- Acting in an unpleasant way near or towards someone
- Giving nasty looks, making rude gestures, calling names, being rude and impolite, and constant negative teasing
- Spreading rumours or lies, or misrepresenting someone (e.g. using their Facebook account to post messages as if they had written them)
- Mucking around or joking that can sometimes go too far
- Harassing someone based on their race, sex, religion, gender or a disability
- Intentionally and repeatedly hurting someone physically
- Intentionally stalking someone
- Taking advantage of any power over someone else like a Prefect or a Student Representative.
Bullying can happen anywhere. It can be in schools, at home, at work, in online social spaces, via text messaging or via email. It can be physical, verbal, emotional, and it also includes messages, public statements and behaviour online intended to cause distress or harm (also known as cyberbullying). But no matter what form bullying takes, the results can be the same: severe distress and pain for the person.
Who is a Bully?
A bully is an individual who harasses, abuses, or intimidates other people, especially those who have less power or are weaker or more vulnerable in some way. The word often implies that such behavior is habitual.
Bully can also be a verb meaning to treat people in this way (to act as a bully toward them), as in “The man who was a bully to his classmates in school is now teaching children how not to bully others.” Someone who is treated in this way is said to be bullied. The act of treating people in this way is called bullying.
How to Curb Bullying in Schools?
Separation is a very effective way of curbing bullying in schools. It is important to keep the bully away from the person they are bullying. If you’re helping a peer who is experiencing bullying, go with them to a safe space away from the bully. If you’re an adult trying to stop a bullying situation, do not force the two parties to be in the same room together or to shake hands and make up. Put them in separate rooms and talk with each one individually. Only afterwards, in a safe setting, should you try to iron things out amicably.
Support plays a critical role in curbing bullying by addressing the needs and concerns of both victims and potential bullies. An important way to get support is to ask for help. Students who are being bullied must learn to speak up to keep the situation from getting worse. The help and support that students get can drastically reduce the effects of bullying on them, and it will also help to reduce and curb bullying in schools.
Open communication among students and teachers helps to avoid misunderstanding. You can prevent bullying through open communication and honest conversations with students. Another effective approach is to spread the word that bullying must stop. Communicate this with your peers and the people around you, especially if you are in a school or an environment where bullying is very prevalent.
Are you in need of a web software solution that not only connects students and teachers but also connects parents and other stakeholders? SchoolTry makes it possible for teachers and students to connect without any hassle.
SchoolTry is a web platform that helps to automate, digitize and transform your school’s work. If you are a school owner who needs a web solution to make your school run better and more efficiently, you can simply request a free demo to understand how it works or click here to sign up for free. No credit card required.
|
<urn:uuid:fb98aa73-6869-4ec8-9389-3e68901f2394>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474688.78/warc/CC-MAIN-20240227220707-20240228010707-00380.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.9696887135505676,
"pii_count": 0,
"score": 4.34375,
"token_count": 1042,
"url": "http://ec2-52-214-81-77.eu-west-1.compute.amazonaws.com/2023/10/26/how-to-curb-bullying-in-schools/"
}
|
The Inflation Reduction Act is a landmark piece of legislation that directs more than $1 trillion in subsidies and incentives toward clean energy production. It includes tax credits for buyers of new clean vehicles, production tax credits for clean energy like wind and solar, and more production tax credits for advanced energy technologies like batteries. The U.S. government has been waving around big financial incentives to lure manufacturing to the country, and many producers have taken notice.
While some battery makers were already well on their way to setting up new U.S. plants, others are now rethinking their location decisions. For instance, Tesla recently announced it will pause its plans to build a battery cell plant in Germany and forgo $1.3 billion of state aid there, and instead build one in Texas. Recent statements from electric vehicle and battery makers that they are expecting big government payouts thanks to the IRA raise the question: Just how much will the battery production credits cost American taxpayers?
Under Section 13502, “Advanced Manufacturing Production Credit,” the IRA includes production credits for battery cells and battery modules produced in the United States. The Congressional Budget Office’s cost estimate of the provision over the 2022-2031 fiscal years was $30.6 billion.
The credit can be monetized so that a producer is eligible for a direct payment from Treasury. (Section 13502 includes other production credits, but here I focus on the production credits for battery cell capacity and battery module capacity.)
The amount of the credit depends on the amount of energy the battery produces, in terms of kilowatt hours. As a Congressional Research Service report notes, battery cells can qualify for a credit of $35 per kilowatt hour of capacity, and battery modules for a credit of $10 per kilowatt hour of capacity, or $45 in the case of a battery module that does not use battery cells.
Estimating the potential cost of these production credits is speculative because of large uncertainties: How many eligible EV batteries will be produced in the United States? How many EVs will be sold in the U.S.? An Argonne National Labs report includes estimates of announced battery plant capacity in North America for plug-in EVs and the estimated U.S. share of that capacity. Taking those figures, and assuming the battery plants maintain at least 75% capacity utilization, we can come up with annual estimates for the production credits.
Applying the full $45 production credit across the board, the total value of the production credits over calendar years 2023 to 2032 is approximately $196.5 billion. Applying the $10 and $35 production credits, the value drops to $43.7 billion and $152.8 billion, respectively. Currently, most cell production is outside of North America and U.S. battery module assembly plants would only qualify for the $10 production credit until the full cell production is moved to the country. (My figures take into account the IRA’s phase out of the production credits: 100% from 2023-2029, 75% in 2030, 50% in 2031 and 25% in 2032.)
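To make the arithmetic concrete, here is a minimal sketch of how such an estimate can be assembled. The annual capacity figures in the sketch are illustrative placeholders, not the Argonne National Lab projections used above; the credit rates, the 75% utilization assumption and the phase-out schedule follow the description in the preceding paragraphs.

```python
# Sketch of the production-credit arithmetic described above.
# The capacity figures are hypothetical placeholders, NOT the ANL estimates.

CELL_CREDIT = 35.0    # $ per kWh of battery cell capacity
MODULE_CREDIT = 10.0  # $ per kWh of battery module capacity
UTILIZATION = 0.75    # assumed minimum plant capacity utilization

# IRA phase-out: 100% for 2023-2029, then 75%, 50%, 25%
PHASE_OUT = {year: 1.00 for year in range(2023, 2030)}
PHASE_OUT.update({2030: 0.75, 2031: 0.50, 2032: 0.25})

# Placeholder announced U.S. capacity, in GWh per calendar year
ANNOUNCED_CAPACITY_GWH = {year: 400 for year in range(2023, 2033)}

def total_credits(rate_per_kwh: float) -> float:
    """Total credits in dollars over 2023-2032 at a given $/kWh rate."""
    total = 0.0
    for year, gwh in ANNOUNCED_CAPACITY_GWH.items():
        kwh = gwh * 1_000_000 * UTILIZATION   # GWh -> kWh, derated
        total += kwh * rate_per_kwh * PHASE_OUT[year]
    return total

for label, rate in [("module only ($10)", MODULE_CREDIT),
                    ("cell only ($35)", CELL_CREDIT),
                    ("cell + module ($45)", CELL_CREDIT + MODULE_CREDIT)]:
    print(f"{label:>20}: ${total_credits(rate) / 1e9:,.1f} billion")
```

With the ANL year-by-year capacity projections substituted for the placeholders, this is essentially the calculation described above for the $43.7 billion, $152.8 billion and $196.5 billion figures.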
Despite the current numbers, the recent announcements of a surge of new battery cell plants across the country suggest more and more battery producers will be eligible for the higher credits in the coming years, perhaps more than the CBO anticipated when it conducted its cost estimate.
My figures also assume battery makers can get the minerals they need to make the batteries: If they cannot, then production levels and production credits would be less. On the other hand, if battery capacity were to increase beyond the ANL projections, then the production credits would be greater.
Finally, these production credits appear to be actual payments and not just tax write-offs. This would suggest that even a company that does not pay taxes could still receive these monetized production credits.
CBO’s task of scoring massive pieces of legislation like the IRA is often next to impossible when so little information is available. But the difference between CBO’s estimates of $30.6 billion and estimates based on more recent information of up to $196.5 billion is large enough to warrant a deeper dive by policymakers. Treasury will be writing important guidelines in the coming months that will define eligibility. My hope is that these new figures spur discussion around the costs of the “Advanced Manufacturing Production Credit” in the IRA.
|
<urn:uuid:91ee1539-442e-4bad-9cb3-01c01ac63555>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510671.0/warc/CC-MAIN-20230930082033-20230930112033-00037.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9494778513908386,
"pii_count": 0,
"score": 2.53125,
"token_count": 892,
"url": "https://www.forbes.com/sites/christinemcdaniel/2023/02/01/the-cost-of-battery-production-tax-credits-provided-in-the-ira/?sh=114a72ca79ef"
}
|
Nestled between the snowy ranges of Mount Rainier and Glacier Peak, a significant glacier in Washington state has disappeared after existing full of ice and snowpack for millennia, according to a researcher who has tracked the glacier for years.
In this swath of mountain range in the Washington Cascades east of Seattle, the climate crisis dealt the final blow to the Hinman Glacier, the largest in the region, according to Mauri Pelto, a glaciologist with Nichols College. It’s not just the Northern Cascades that’s losing ice. Researchers recently found that up to half of the planet’s glaciers could be lost by the end of the century, even if the world’s ambitious global climate targets, including phasing out fossil fuels, are met.
To paint a picture of Hinman’s retreat, experts say an unofficially named “Hinman Lake” has replaced the former glacier, which contains traces of relict ice masses. As the lake filled from glacier melt, it became harder for hikers to traverse this part of the mountain range.
Pelto told CNN he has been visiting and observing Mount Hinman for 40 years. And in the summer of 2022, as temperatures soared and an unrelenting dry spell gripped the Northwest, Pelto led a team up the mountain only to see Hinman’s demise.
“It’s completely disappeared. This was the biggest glacier in this part of the mountain range — it was exceptional,” Pelto told CNN. The glacier could reform, he said, “but as we continue to warm into the future that will be even less hospitable.”
Roughly 50 miles east of Seattle, deep in the Alpine Lakes Wilderness, Hinman and its neighboring glaciers have been critical to the Pacific Northwest’s salmon population as well as other aquatic species. During the warmest and driest periods, Hinman fed the Skykomish River with a considerable amount of cool glacier runoff. But as the climate crisis advanced, the flow dropped significantly.
This decline in summer streamflow from the glaciers, together with warming freshwater, reduces not only the salmon’s quality of life but also that of people who rely on the river, experts say. Since the 1950s, the primary glaciers that feed the Skykomish River basin have lost around 55% of their surface area. Last year, Pelto and his team measured that the Columbia Glacier declined in area by 25%, Foss by 70%, Lynch by 40% and Hinman by 95%.
David Shean, professor of civil and environmental engineering who focuses on glaciers at the University of Washington, said he and collaborators with the US Geological Survey have been working to quantify the changes, including direct measurements of ice volume and mass change over time to tell the full story.
As glaciers thin and retreat, he said, they can form stagnant ice patches in alcoves that are less susceptible to extreme temperatures. But these ice patches are often too thin to flow downhill, which is an important criterion for an ice mass to be classified as a “glacier.”
Shean noted that not all of the lingering ice in those alcoves has yet vanished. But he also said “it may no longer technically qualify as a ‘glacier’ because it’s not flowing, and the residual ice will likely disappear completely in the coming decade or more.”
Many glaciers were formed during the last Ice Age. And while glaciologists including Pelto aren’t sure how far back the Hinman Glacier was created, he found strong evidence that Hinman was older than the Mount Mazama eruption, which created Oregon’s Crater Lake about 7,000 years ago.
There’s still some hope, Pelto said. For a glacier to form and persist, it needs snowfall accumulation that, on average, outpaces melting. Hinman would need to see a bounty of above-average snowfall in the coming years for it to reform. But with the rate at which planet-warming pollution is accelerating, these icy landscapes as we know them may no longer be the same.
“The rate of loss in the past few decades is higher than earlier during the 20th century,” Shean said, noting that smaller glaciers are particularly hard hit. “We’ve seen more ice loss in the past 50 to 70 years for the smaller, lower elevation glaciers in the Washington North Cascades, compared to the larger, higher elevation glaciers, like those on Mt. Rainier, for example.”
When Pelto started noticing the Hinman Glacier’s decline in 2005, he decided he wanted to document it beyond the science and bring in artists, including his daughter Jill, who can capture the changing landscape through painting.
“I really feel like the loss of [glaciers] from the landscape does tap into people’s emotions, and art does that better than science data,” Pelto said. “And so I’ve tried to bring artists out every summer.”
|
<urn:uuid:5544063f-43be-4477-9a86-d8704f505a66>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510671.0/warc/CC-MAIN-20230930082033-20230930112033-00476.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9653353095054626,
"pii_count": 0,
"score": 3.265625,
"token_count": 1049,
"url": "https://www.cnn.com/2023/02/01/us/washington-hinman-glacier-disappear-climate/index.html"
}
|
One of the potential difficulties with the concept of sustainability is its current context. It has become a value-laden term, employed by different, usually benevolent, interest groups. One benevolent interest is to safeguard the planet for future generations, with an underlying assumption that current human behaviour is making our planet unsustainable. “Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” (The Institute for Sustainable Development). This often-quoted definition is part of the modern origin story for a word that has become increasingly present in news headlines around the world alongside its downbeat sibling, “unsustainability”. From “the planet is dying” to “there is no sustainability problem”, that line is crowded with different views. In recent years the balance has tipped towards the sustainability of our earth (and our lives on it) becoming the issue of our times.
From Sustainability to Restrainability
So, things can be ‘sustainable’ or ‘unsustainable’ or somewhere along a line between those two ideas. Evidence is then brought forward, depending on different interests, to claim a position along that binary line.
In recent years the evidence for the current global model of economics (and commerce) being basically unsustainable is presented as largely irrefutable. Governments and international organisations such as the United Nations have accepted the evidence for a currently unsustainable world emanating from tens of thousands of scientific studies and reports pulled together into authoritative digests.
Groups with an interest in disputing unsustainability have usually included the oil industry and large globally focused businesses; groups with an interest in highlighting unsustainability are diverse, including small island states (which are already being flooded due to sea level rise), countries already facing a lack of rainfall and rising famine, as well as the green lobby and the alternative energy sector. They have recently been joined (with more or less commitment) by various governments around the globe.
Sustainability as a concept is presented in the mainstream media as a binary line of sustainable or unsustainable. International organisations such as the United Nations have developed their own models of sustainability, such as the UN’s SDGs (sustainable development goals). Many of these have moved the definition outside of its traditional roots of being largely about Earth’s resources, pollution and climate change, to take in gender equality and education, among others. The UN’s 17 sustainable development goals are certainly not just about rising temperatures and sea levels.
The history of the term sustainable development is often rooted back to the 1960s. “The remit of the Brundtland Report was to investigate the numerous concerns that had been raised in previous decades, namely, that human activity was having severe and negative impacts on the planet, and that patterns of growth and development would be unsustainable if they continued unchecked. Key works that highlighted this thinking included Rachel Carson’s Silent Spring (1962), Garret Hardin’s Tragedy of the Commons (1968), the Blueprint for Survival by the Ecologist magazine (1972) and the Club of Rome’s Limits to Growth report (1972).”
A far less explored concept lies at the heart of the sustainability crises we now find ourselves in, with the figure of 1.5 degrees the most quoted ceiling for avoiding at least the worst impacts of climate change now showing themselves on the planetary stage. The world is heating up, melting ice caps, raising sea levels, causing drought, flooding, famine and catastrophic changes to weather systems. This concept, when understood and explored, points to how the sustainability crises could have been avoided or limited, and must now be approached as an urgent crisis, a “wicked” problem. That word is “restrainability”. The ability to hold back from full (and even over-) commitment to a process can ensure that the risk of harm is minimised and possibly avoided altogether. If it is too late and a worsening crisis emerges, restrainability can limit further harm and even enable recovery and healing. Often a better outcome can be achieved, with less resource commitment, when restraint is shown. And, of course, showing restraint can also lead a process to fall short, fail and even cause harm.
Words, words words …
In common usage, the word ‘restrain’ is used in many different contexts. The police may restrain a violent person from causing harm to themselves or others. We sometimes say “I just couldn’t restrain my urge to tell her what I thought of her” or to “reach for that extra slice of gateau”. The ability to restrain involves a level of real-time discernment, the ability to hold back, to go for 50% instead of 100, to delay action or curb the urge to act entirely. The purpose of restraint is to limit or prevent a negative outcome, either for oneself or those around us. In terms of the current widely shared view of sustainability, restrainability is a fundamental ability we all have to a greater or lesser degree. It is a lack of restraint on the part of many that has contributed to our climate emergency, itself at least partly the result of compulsive growth. Lack of restraint can become toxic.
An inability to show restraint can arise through an addictive nature, through grasping and greed and the wish to maximise gain. It can also arise through ignorance, where a lack of information, knowledge, context or awareness of the consequences of one’s actions can be influential factors. Being unwilling or unable to act with restraint can also lead to a tipping point, where the lack of restraint causes a more or less dramatic or impactful outcome. We drain the lake dry, fish the river to extinction, talk the room into silence, overwork until we collapse.
When we lack restraint
Our lack of restraint in using oil, coal and natural gas has led to severe climate impacts. In winner take all business practices, a lack of restraint can create poverty, social breakdown and many other social problems and outcomes; the rich get richer, the poor get poorer and, perhaps ironically, both end up getting depressed as a result.
Without any restraint, social breakdown can occur as the “losers” (financially) eventually turn on the so-called winners, whether that be through the rising up of the masses in angry revolution or workers simply minimising or disengaging from their work.
For the most part, business owners and shareholders have understood the consequences and risks of “winner takes all” and exercised necessary restraint. However, in terms of our planet’s current overall health, we have clearly demonstrated poor collective restrainability as a human race. We have been poor stewards despite decades of emerging knowledge and calls to exercise collective restraint on planet Earth. We didn’t show restraint when we could have been in preventive mode, and we are struggling to agree internationally how to show restraint when the problems are right there before us.
When the first signs of a problem emerge, exercising restraint can, in some cases, be the best course of action. We turn down the volume on our music in order to avoid a bang on the wall from the neighbours. We change our diet and eat more healthily before we get a cholesterol problem (or worse). We slow the car down as the road becomes more hazardous with sudden rainfall. Restraint in advance, being preventive can diminish, even eliminate a growing problem. We can act instinctively (often our survival instinct comes into play, if we are in touch with it) or we can act on prior and emerging knowledge, information and advice from other people. Restraint kicks in to prevent a problem getting worse, or even developing in the first place. Restraint acts in creative opposition to going “too far” and over the cliff. Restraint can stop the inflation turning into an explosion, the riskiness becoming a crash or a fall.
We can go back even further in time. Before a specific problem has even shown itself we can act with restraint. This can also be instinctive, or it can come through the value of history and hindsight. Stories from the past and from our peers can inspire us to consider risks and possibilities in advance of any overt showing of a problem. We can prevent another conflict by engaging in pre-emptive talks and signing cooperative agreements. We can change the controls on a product to make it even safer.
Showing restraint can be a personal act, almost in each moment of our lives. We slow down as we walk along a garden path in order to avoid treading on small insects and other tiny creatures. We eat more slowly to avoid indigestion. We make something last longer in order to savour it for longer. We create a simpler design for a product.
We hold back on being too direct with a friend in order to avoid hurting them. We save some of our money for a more difficult month to come, or we hold back from buying a new product straight away in order to wait for the upgrade, or the winter sales. Restraint is often goal-based and can be used to ensure better outcomes and avoid poor ones. It’s a natural behaviour, and some of us are better at it than others, based on our parenting, education, peer pressure, and the goals and pressures of the organisations and businesses we work in.
Restrainability needs more attention right now
Without restraint we are greedy, grasping, fearfully short-term, and often closed to challenge and new information which might signal the need for restraint. With restraint we are often more discerning, future-aware, open to new knowledge and information, and more in control of ourselves and our behaviour. When restraint becomes compulsive we can be equally dangerous, avoiding opportunities and actions that can authentically benefit ourselves and our communities. Restraint itself can be applied in an unrestrained way, to our cost!
In our current global crisis, restrainability needs more attention. Giving the concept more prominence focuses us on our own behaviour and the impact a lack of skills in acting with restraint is having on our planet.
Restraint in advance
Without the exercise of restraint in advance of a problem appearing or worsening in the present, and without being open to learning from the past, unsustainability gets worse. The problem-causing behaviours continue and assumptions lie unchallenged and unquestioned. And this is where the world sits today in terms of our sustainability crisis. Restrainability remains largely low, and happens too little and too late.
What the world currently needs is restrainable development, not only our knowing when, why and how to hold back when making commitments of the Earth’s resources to human activity, but also through developing systems and processes that have built-in restrainability.
Tipping over the edge
We should design nothing that tips us irreversibly over the edge towards damage and disaster. A new definition of sustainable product and process innovation will be one that describes systems with inherent, in-built restrainability. Some already have it, like volume controls and emission chimneys, but we have many legacy systems and processes that do not; we are still designing and marketing too many products that are largely unrecyclable, and we are coding some products right now, such as those based on artificial intelligence, that may well get out of control. What if we design a robot that is inherently both unrestrainable (by us) and unable to restrain itself? We are still acting as if we are in a world in which restrainability is a low-priority skill and behaviour set.
Restrainability became unfashionable during the era of growth. Unlimited growth and restrainability are not generally compatible with each other. As marketing and advertising promoted growth as a virtue in what became the ‘whopper’-sized and ‘all you can eat’ culture of the ’80s and beyond, with annual upgrades to new smartphones and other gadgets alongside long-distance holidays and ‘buy two get one free’ products, exercising restraint was often presented as boring, even party-pooping, just as abstention was (deliberately) confused in advertising and marketing with austerity and misery. Exercising restraint was being a ‘meanie’.
Growth without limit was fashionable, obvious to all, even courageous in TV food shows that pit “man versus food”. The new Mile High Club was who could eat five Big Macs without vomiting. Double-digit growth in business became (and often still is) not merely best practice but the expected norm. Restraint was portrayed as weak-minded and over-cautious, lacking ambition and holding the business (and the planet) back.
We might restrain dangerous criminals, we might even restrain hyperactive dogs, but restraining business practice and the quest for limitless growth was irresponsibly antisocial and anti-business. Google the word “restraint” and that’s pretty much all you’ll find about a word that lies at the core of what we need to do to save our planet.
Restraining was something queens, kings and emperors did back in history when they stayed the hand of execution out of mercy. In the days of growth, restraint was hardly mentioned, almost taboo and certainly confined to the fringes of the spoilers and the scaredy-cats of a fun-loving society.
Renewing restrainability for modern times
So for many (and certainly many of the emerging generation on Earth), restraint, rarely used in recent decades in mainstream conversation, now feels like an archaic, official, even medical or military word, with a reek of parental language, telling us what (and what not) to do at a time when we (certainly in the West) feel free to do pretty much whatever we want. It’s a bit culturally clumsy to use the word, which is similar to how ‘sustainability’ sounded to many when it came into wider use as far back as the ’60s and ’70s and really made a more prominent appearance after 2000.
As a result, restrainability feels negative, concerned with the bit of the glass that is half empty (or overflowingly too full). It’s a bit of a ‘downer’ word, intent on stopping the fun and spoiling the party that was never supposed to end.
Similarly, when many people hear the word ‘sustainability’ these days, they associate it with disaster scenarios, cutting down on treats and indulgences, and even stopping the stuff that makes us feel comfortable and abundant.
Developing a positive view
Yet sustainability, in the spirit of its original appearance (quoted above), was a positively focused thing, all about legacy and leaving the world in a better state for our kids and grandkids to inherit. It was (and is) about enjoying the process of sustainable living, of nurturing and living with our planet rather than treating it as an enemy. Acting with restraint can be a joyful thing: it opens space for possibility, creates latent resource, allows processes to settle, engenders healing and recovery, can lower risk, can be more satisfyingly efficient, and can mean that other people, animals and all aspects of nature are less negatively impacted. It can even foster longer-term and, get this, more sustainable growth.
When we show restraint we hold back from what we may have been about to say. Or we say it differently, perhaps with more subtlety and less pressure or force. Restraint can open up space for different kinds of conversation, a more considered response, space to pause, breathe, reflect and even be silent and just ponder. Restraint can often reveal that a pause lets the land recover and that yields of crops in our fields are better in the longer run. Restraint is a founding stone of sustainable farming. Restraint can make us more patient, able to see a longer view and to consider other paths to the same or a different outcome. It can apply the brakes, reduce the acceleration and let us take in more of the landscape of possibility.
Showing restraint in setting hard and often harsh, inflexible laws for people to follow can open space for people to learn and become more self-responsible. Freely willed following of guidance can often lead to better, more effective and efficient outcomes than harshly enforced rules and laws. Restrainability has always been a latent skill in all of us. To restrain is usually a choice. The word is used in medical and legal settings and, though a fundamental part of such fields of practice, the ability to self-restrain is a different use of the word. It is time for this key aspect of restrainability to be revived for, without the free decision to practise restraint, our planet is truly in trouble. We must decouple ourselves from the distorted view that restraint is always an imposed, forced thing. Of course it isn’t.
Restrainability makes use of the tools of dialogue and conversation and only rarely applies an iron grip on holding something back. Why? Because dogmatic, fixed and unchanging restraint is, paradoxically, a form of its opposite. Punishment is aimed at compliance, with varying degrees of success and often less and less restraint in enforcement and sanction. Taken to its extreme, unrestrained law enforcement becomes the tool of life incarceration and execution. If we become too restraining in our restraint, it is no longer a contingent skill. Education and dialogue can often achieve the same compliance, and usually a better, volunteered, conscious participation in social norms and behaviours. Less can be more (note: can). Ultimately, to exercise high-ability restraint requires us to be flexible, agile and restrained in our restraint!
The benefits and urgency of a restrainable world
Restraint is usually less resource-intensive over time, leaner, and grows a sense of personal and social responsibility in those exercising it. It isn’t the same as exiting, becoming indifferent, or detaching for its own sake. Restraint is an ability, developed with experience and practice – the ability to restrain oneself in different situations for a better, more sustainable outcome. Which is why sustainability is less important as a current set of environmentally focused practices than as a desired outcome or ongoing state. Restrainability is one of the keys to sustainability.
Our world is currently and desperately seeking to become re-balanced, to recover and heal. Restrainability is the priority to be taught in our schools. We also need to develop and reward a wider practice of it now, in our daily and working lives.
Interesting. Restrainability? It’s an innovative concept, but why not go for addressing the root problems with Participatory AI-enabled systems regeneration? As the young climate activists say, Systems Change, Not Climate Change…!
|
<urn:uuid:609e718c-441b-47e7-9c7e-281afde3c210>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944606.5/warc/CC-MAIN-20230323003026-20230323033026-00484.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9600737690925598,
"pii_count": 0,
"score": 3.203125,
"token_count": 3872,
"url": "https://rationalmadness.wordpress.com/2023/03/05/introducing-restrainability/"
}
|
One of the potential difficulties with the concept of sustainability is its current context. It has become a value-laded term, employed by different, usually benevolent, interest groups. One benevolent interest is to safeguard the planet for future generations, with an underlying assumption that current human behaviour is making our planet unsustainable. “Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs.” (The Institute for Sustainable Development). This is the definition, often quoted, part of the modern origin story for the word that has become increasingly present in news headlines around the world alongside its downbeat sibling, “unsustainability”. From “the planet is dying” to “there is no sustainability problem” that line is crowded with different views. In recent years the balance has tipped towards the sustainability of our earth (and our lives on it) becoming the issue of our times.
From Sustainability to Restrainability
So, things can be 'sustainable' or 'unsustainable', or somewhere along a line between those two ideas. Evidence is then brought forward, depending on different interests, to claim a position along that line.
In recent years the evidence for the current global model of economics (and commerce) being basically unsustainable is presented as largely irrefutable. Governments and international organisations such as the United Nations have accepted the evidence for a currently unsustainable world emanating from tens of thousands of scientific studies and reports pulled together into authoritative digests.
Groups with an interest in disputing the evidence of unsustainability have usually included the oil industry and large, globally focused businesses; groups with an interest in highlighting it are diverse, including small island states (which are already being flooded due to sea level rise), countries already suffering a lack of rainfall and rising famine, as well as the green lobby and the alternative energy sector. They have recently been joined (with more or less commitment) by various governments around the globe.
Sustainability as a concept is presented in the mainstream media as a binary choice between sustainable and unsustainable. International organisations such as the United Nations have developed their own models of sustainability, most visibly the UN's SDGs (sustainable development goals). Many of these goals have moved the definition beyond its traditional roots (largely Earth's resources, pollution and climate change) to take in gender equality and education, among others. The UN's 17 sustainable development goals are certainly not just about rising temperatures and sea levels.
The history of the term sustainable development is often rooted back to the 1960s. "The remit of the Brundtland Report was to investigate the numerous concerns that had been raised in previous decades, namely, that human activity was having severe and negative impacts on the planet, and that patterns of growth and development would be unsustainable if they continued unchecked. Key works that highlighted this thinking included Rachel Carson's Silent Spring (1962), Garrett Hardin's Tragedy of the Commons (1968), the Blueprint for Survival by the Ecologist magazine (1972) and the Club of Rome's Limits to Growth report (1972)."
A far less explored concept lies at the heart of the sustainability crises we now find ourselves in, crises in which the figure of 1.5 degrees is the most quoted ceiling for avoiding at least the worst impacts of a climate change now showing itself on the planetary stage. The world is heating up, melting ice caps, raising sea levels, causing drought, flooding, famine and catastrophic changes to weather systems. This concept, when understood and explored, points to how the sustainability crises could have been avoided or limited, and how they must now be approached as an urgent, "wicked" problem. That word is "restrainability". The ability to hold back from full (and even over-) commitment to a process can ensure that the risk of harm is minimised and possibly avoided altogether. If it is too late and a worsening crisis emerges, restrainability can limit further harm and even enable recovery and healing. Often a better outcome can be achieved, with less resource commitment, when restraint is shown. And, of course, showing restraint can also lead a process to fall short, fail and even cause harm.
Words, words, words …
In common usage, the word 'restrain' is used in many different contexts. The police may restrain a violent person from causing harm to themselves or others. We sometimes say "I just couldn't restrain my urge to tell her what I thought of her" or to "reach for that extra slice of gateau". The ability to restrain involves a level of real-time discernment: the ability to hold back, to go for 50% instead of 100%, to delay action or curb the urge to act entirely. The purpose of restraint is to limit or prevent a negative outcome, either for oneself or those around us. In terms of the current widely shared view of sustainability, restrainability is a fundamental ability we all have, to a greater or lesser degree. It is a lack of restraint on the part of many that has contributed to our climate emergency, itself at least partly the result of compulsive growth. Lack of restraint can become toxic.
An inability to show restraint can arise through an addictive nature, through grasping and greed and the wish to maximise gain. It can also arise through ignorance, where a lack of information, knowledge, context or awareness of the consequences of one's actions can be influential factors. Being unwilling or unable to act with restraint can lead to a tipping point, where the lack of restraint causes a more or less dramatic outcome: we drain the lake dry, fish the river to extinction, talk the room into silence, overwork until we collapse.
When we lack restraint
Our lack of restraint in using oil, coal and natural gas has led to severe climate impacts. In winner-takes-all business practices, a lack of restraint can create poverty, social breakdown and many other social problems; the rich get richer, the poor get poorer and, perhaps ironically, both end up getting depressed as a result.
Without any restraint, social breakdown can occur as the "losers" (financially) eventually turn on the so-called winners, whether that be through the rising up of the masses in angry revolution or workers simply minimising their effort or disengaging from their work.
For the most part, business owners and shareholders have understood the consequences and risks of “winner takes all” and exercised necessary restraint. However, in terms of our planet’s current overall health, we have clearly demonstrated poor collective restrainability as a human race. We have been poor stewards despite decades of emerging knowledge and calls to exercise collective restraint on planet Earth. We didn’t show restraint when we could have been in preventive mode, and we are struggling to agree internationally how to show restraint when the problems are right there before us.
When the first signs of a problem emerge, exercising restraint can, in some cases, be the best course of action. We turn down the volume on our music in order to avoid a bang on the wall from the neighbours. We change our diet and eat more healthily before we get a cholesterol problem (or worse). We slow the car down as the road becomes more hazardous with sudden rainfall. Restraint in advance, being preventive, can diminish, even eliminate, a growing problem. We can act instinctively (often our survival instinct comes into play, if we are in touch with it) or we can act on prior and emerging knowledge, information and advice from other people. Restraint kicks in to prevent a problem getting worse, or even developing in the first place. Restraint acts in creative opposition to going "too far" and over the cliff. Restraint can stop the inflation turning into an explosion, the riskiness becoming a crash or a fall.
We can go back even further in time. Before a specific problem has even shown itself, we can act with restraint. This can be instinctive, or it can come through the value of history and hindsight. Stories from the past and from our peers can inspire us to consider risks and possibilities in advance of any overt sign of a problem. We can prevent another conflict by engaging in pre-emptive talks and signing cooperative agreements. We can change the controls on a product to make it even safer.
Showing restraint can be a personal act, almost in each moment of our lives. We slow down as we walk along a garden path in order to avoid treading on small insects and other tiny creatures. We eat more slowly to avoid indigestion. We make something last longer in order to savour it for longer. We create a simpler design for a product.
We hold back on being too direct with a friend in order to avoid hurting them. We save some of our money for a more difficult month to come, or we hold back from buying a new product straight away in order to wait for the upgrade, or the winter sales. Restraint is often goal-based and can be used to ensure better outcomes and avoid poor ones. It's a natural behaviour, and some of us are better at it than others, based on our parenting, education, peer pressure, and the goals and pressures of the organisations and businesses we work in.
Restrainability needs more attention right now
Without restraint we are greedy, grasping, fearfully short-term, and often closed to challenge and new information which might signal the need for restraint. With restraint we are often more discerning, future-aware, open to new knowledge and information, and more in control of ourselves and our behaviour. When restraint becomes compulsive we can be equally dangerous, avoiding opportunities and actions that can authentically benefit ourselves and our communities. Restraint itself can be applied in an unrestrained way, to our cost!
In our current global crisis, restrainability needs more attention. Giving the concept more prominence focuses us on our own behaviour and the impact a lack of skills in acting with restraint is having on our planet.
Restraint in advance
Without the exercise of restraint in advance of a problem appearing or worsening, and without being open to learning from the past, unsustainability gets worse. The problem-causing behaviours continue and assumptions lie unchallenged and unquestioned. And this is where the world sits today in terms of our sustainability crisis: restrainability remains largely low, and restraint happens too little and too late.
What the world currently needs is restrainable development: not only knowing when, why and how to hold back when committing the Earth's resources to human activity, but also developing systems and processes that have built-in restrainability.
Tipping over the edge
We should design nothing that tips us irreversibly over the edge towards damage and disaster. A new definition of sustainable product and process innovation will be one that describes systems with inherent, in-built restrainability. Some already have it, like volume controls and emission chimneys, but much of our legacy of systems and processes does not; we are still designing and marketing too many products that are largely unrecyclable, and we are coding products right now, such as those based on artificial intelligence, that may well get out of control. What if we design a robot that is inherently both unrestrainable (by us) and unable to restrain itself? We are still acting as if we are in a world in which restrainability is a low-priority skill and behaviour set. A sketch of what in-built restrainability might look like in software follows below.
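To make "in-built restrainability" concrete in a software context, here is a minimal sketch; it is my illustration rather than anything proposed in the essay, and the names (RestrainedActuator, max_output, max_per_minute) are invented for the example. The idea is simply that the limits live inside the component, so no caller, however enthusiastic, can push it past them.

```python
import time

class RestrainedActuator:
    """A hypothetical component with built-in restraint: a hard output cap
    and a rate limit that callers cannot override."""

    def __init__(self, action, max_output: float, max_per_minute: int):
        self._action = action
        self._max_output = max_output          # never act above this level
        self._max_per_minute = max_per_minute  # never act more often than this
        self._recent = []                      # timestamps of recent actions

    def act(self, requested_output: float):
        now = time.monotonic()
        # Forget actions older than a minute, then check the rate limit.
        self._recent = [t for t in self._recent if now - t < 60]
        if len(self._recent) >= self._max_per_minute:
            return None  # declining to act is itself the restraint
        self._recent.append(now)
        # Clamp the request to the built-in ceiling before acting.
        return self._action(min(requested_output, self._max_output))

# A "volume control" that can never be pushed past 70% of full volume,
# no matter what is asked of it, and changes at most ten times a minute.
volume = RestrainedActuator(lambda level: level, max_output=0.7, max_per_minute=10)
print(volume.act(1.0))  # prints 0.7: the request is moderated, not refused
```

The design choice worth noticing is that the restraint is part of the component itself rather than an external rule imposed afterwards, which is the software equivalent of a volume knob that simply cannot be turned past its ceiling.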
Restrainability became unfashionable during the era of growth. Unlimited growth and restrainability are not generally compatible with each other. As marketing and advertising promoted growth as a virtue, in what became the 'whopper'-sized, 'all you can eat' culture of the '80s and beyond, with annual upgrades to new smartphones and other gadgets alongside long-distance holidays and 'buy two, get one free' offers, exercising restraint was often presented as boring, even party-pooping, just as abstention was (deliberately) confused in advertising and marketing with austerity and misery. Exercising restraint was being a 'meanie'.
Growth without limit was fashionable, obvious to all, even courageous in TV food shows that pit "man versus food". The new Mile High Club was seeing who could eat five Big Macs without vomiting. Double-digit growth in business became (and often still is) not even best practice but the expected norm. Restraint was portrayed as weak-minded and over-cautious, lacking ambition and holding the business (and the planet) back.
We might restrain dangerous criminals, we might even restrain hyperactive dogs, but restraining business practice and the quest for limitless growth was seen as irresponsible, antisocial and anti-business. Google the word "restraint" and that's pretty much all you'll find about a word that lies at the core of what we need to do to save our planet.
Restraining was something queens, kings and emperors did back in history when they stayed the hand of execution out of mercy. In the days of growth, restraint was hardly mentioned, almost taboo and certainly confined to the fringes of the spoilers and the scaredy-cats of a fun-loving society.
Renewing restrainability for modern times
So for many (and certainly many of the emerging generation on Earth), restraint, rarely used in recent decades in mainstream conversation, now feels like an archaic, official, even medical or military word, with a reek of parental language, telling us what (and what not) to do at a time when we (certainly in the West) feel free to do pretty much whatever we want. It feels a bit culturally clumsy to use the word, which is similar to how 'sustainability' sounded to many when it came into wider use as far back as the '60s and '70s and then made a more prominent appearance after 2000.
As a result, restrainability feels negative, concerned with the bit of the glass that is half empty (or overflowingly too full). It's a bit of a 'downer' word, intent on stopping the fun and spoiling the party that was never supposed to end.
Similarly, when many people hear the word 'sustainability' these days, they associate it with disaster scenarios, cutting down on treats and indulgences, and even stopping the stuff that makes us feel comfortable and abundant.
Developing a positive view
Yet sustainability in the spirit of its original appearance (quoted above) was a positively focused thing, all about legacy and leaving the world in a better state for our kids and grandkids to inherit. It was (and is) about enjoying the process of sustainable living, of nurturing and living with our planet rather than treating it as an enemy. Acting with restraint can be a joyful thing: it opens space for possibility, creates latent resource, allows processes to settle, engenders healing and recovery, can lower risk, can be more satisfyingly efficient, and can leave other people, animals and all aspects of nature less negatively impacted. It can even foster longer-term and, get this, more sustainable growth.
When we show restraint we hold back from what we may have been about to say. Or we say it differently, perhaps with more subtlety and less pressure or force. Restraint can open up space for different kinds of conversation, a more considered response, space to pause, breathe, reflect and even be silent and just ponder. Restraint can reveal that a pause lets the land recover and that yields of crops in our fields are better in the longer run. Restraint is a founding stone of sustainable farming. Restraint can make us more patient, able to take a longer view and to consider other paths to the same or a different outcome. It can apply the brakes, reduce the acceleration and let us take in more of the landscape of possibility.
Showing restraint in setting hard, often harsh and inflexible laws for people to follow can open space for people to learn and become more self-responsible. Freely willed following of guidance can often lead to better, more effective and more efficient outcomes than harshly enforced rules and laws. Restrainability has always been a latent skill in all of us. To restrain is usually a choice. The word is used in medical and legal settings and, though restraint is a fundamental part of such fields of practice, the ability to self-restrain is a different use of the word. It is time for this key aspect of restrainability to be revived, for without the free decision to practise restraint, our planet is truly in trouble. We must decouple ourselves from the distorted view that restraint is always an imposed, forced thing. Of course it isn't.
Restrainability makes use of the tools of dialogue and conversation and only rarely applies an iron grip on holding something back. Why? Because dogmatic, fixed and unchanging restraint is, paradoxically, a form of its opposite. Punishment is aimed at compliance, with varying degrees of success and often less and less restraint in enforcement and sanction. Taken to its extreme, unrestrained law enforcement becomes the tool of life incarceration and execution. If we become too restraining in our restraint, it is no longer a contingent skill. Education and dialogue can often achieve the same compliance, and usually a better volunteered, conscious participation in social norms and behaviours. Less can be more (note: can). Ultimately, exercising high-ability restraint requires us to be flexible, agile and restrained in our restraint!
The benefits and urgency of a restrainable world
Restraint is usually less resource-intensive over time and leaner, and it grows a sense of personal and social responsibility in those exercising it. It isn't the same as exiting, becoming indifferent or detaching for its own sake. Restraint is an ability, developed with experience and practice: an ability to restrain oneself in different situations for a better, more sustainable outcome. Which is why sustainability matters less as a current set of environmentally focused practices than as a desired outcome or ongoing state. Restrainability is one of the keys to sustainability.
Our world is currently, and desperately, seeking to become re-balanced, to recover and heal. Restrainability should be a priority taught in our schools. We also need to develop and reward a wider practice of it now, in our daily and working lives.
Interesting. Restrainability? It's an innovative concept, but why not go for addressing the root problems with Participatory AI-enabled systems regeneration? As the young climate activists say, Systems Change, Not Climate Change…!
National Hispanic Heritage Month highlights cultural diversity of Spanish-speaking Americans
Hispanic history and culture take center stage across the U.S. for National Hispanic Heritage Month, which is celebrated annually from Sept. 15 to Oct. 15. The celebration recognizes the contributions of Hispanic Americans, the fastest-growing racial or ethnic minority, according to the Census.
It includes people whose ancestors come from Spain, Mexico, the Caribbean and Central and South America.
With a U.S. population of over 63 million people, there will be a plethora of Hispanic Heritage Month celebrations all over the country starting Friday.
How did it come to be?
Before there was National Hispanic Heritage Month, there was Hispanic Heritage Week, created through legislation sponsored by Mexican American Rep. Edward R. Roybal of Los Angeles and signed into law in 1968 by President Lyndon B. Johnson. The annual celebration was to be held the week that included Sept. 15-16.
The weeklong commemoration was expanded to a month two decades later, with legislation signed into law by President Ronald Reagan.
Sept. 15 was chosen as the starting point, to coincide with the anniversary of the "Cry of Dolores," or "Grito de Dolores," issued in 1810 from a town in central Mexico that launched that country's war for independence from Spain.
The Central American nations of Guatemala, Honduras, El Salvador, Nicaragua and Costa Rica celebrate their independence on Sept. 15, and Mexico marks its national day on Sept. 16, the day after the cry for independence.
Also during National Hispanic Heritage Month, the South American nation of Chile observes its independence day on Sept. 18. Indigenous Peoples' Day, previously known as Columbus Day, is observed in the U.S. on the second Monday of October.
The four-week period is about honoring how Hispanic populations have shaped the U.S. in the past and present.
What is the socioeconomic outlook for Latinos?
Latinos account for the fastest-growing portion of the national Gross Domestic Product, said Luisa Godinez-Puig, an equity scholar with nonprofit research organization Urban Institute. The GDP of Latinos was an estimated $2.8 trillion in 2020, compared with $2.1 trillion in 2015 and $1.7 trillion in 2010.
That growth stands in contrast to the fact that Latinos still face obstacles with personal finances. Surveys show median wealth is five times lower for Latino families than for their white counterparts. Less than half of Hispanic families own their own homes, Godinez-Puig added.
"Their needs are not being met," Godinez-Puig said. "Not having access to banking products, not having access to investment, not having access to credit — all of these things impact the wealth, building opportunities of families, which are passed from generation to generation."
What is the origin of the term Hispanic?
Hispanic was a term created by the federal government for people descended from Spanish-speaking cultures, said Anita Huízar-Hernández, associate director of Arizona State University's Hispanic Research Center. But for some, the label has a connotation of political conservatism and emphasizes a connection to Spain. It sometimes gets mistakenly interchanged with "Latino" or "Latinx."
"I think the most important thing to acknowledge is that people have their own definition of those terms," Huízar-Hernández said. "They don't always match. And that is very dependent on your generation, it's dependent on the part of the country that you live in."
For some, Latino reflects their ties to Latin America. So you may see some celebrations referred to as Latinx or Latin Heritage Month.
Latino Americans are not a monolith. There are a number of identifiers for Latin Americans, depending largely on personal preference. Mexican Americans who grew up during the 1960s Civil Rights era may identify as Chicano. Others may go by their family's nation of origin such as Colombian American or Salvadoran American.
All those various cultures have unique differences when it comes to music, food, art and other cultural touchstones.
"That is what makes this community complex, difficult to describe, difficult to create just one term about and yet, a really dynamic part of the story of lots of places, including the U.S.," Huizar-Hernandez said.
What are some celebrations happening?
From Los Angeles to Tampa, Florida, there is no shortage of festivities. Most are touting traditional Latin foods and entertainment like mariachi bands, folklórico dancing and salsa lessons. The revelry doesn't just showcase Mexican culture but Puerto Rican, Colombian and a host of others.
There are also events spotlighting Hispanic Americans in various industries like Suave Fest, a Latino Craft Beer Festival in Denver on Sept. 30, and the New York Latino Film Festival, which runs Friday through Sept. 24.
The Smithsonian Institution in Washington, D.C., is offering a slate of activities elevating Hispanic heritage. Its National Portrait Gallery will be teaming up with Lil' Libros, a bilingual children's book publisher for the second Fotos & Recuerdos Festival. There will be story times, arts and craft workshops and gallery tours.
The U.S. Postal Service has put its own stamp on the occasion. Earlier this month, the agency released new Forever stamps featuring piñatas. The handmade party favorites are typically associated with parties where blindfolded guests try to crack one open to get to the candy inside. But their history can be traced to the 16th century.
Corporations also partake in the celebrations. Walt Disney World Resort has chosen this week to debut live-action versions at Magic Kingdom Park of Mirabel and Bruno, two popular characters from the animated hit "Encanto." The movie, about a family in Colombia with extraordinary abilities, is credited with showing a large audience the nuances of Colombian culture.
"It's not Mexico and you can see them eating different foods and you can see like the music is different," said Huízar-Hernández, who is Mexican American. "Movies, TV, that's the way a lot of people learn."
A future source of learning will be the highly anticipated Smithsonian National Museum of the American Latino on the National Mall. In 2020, Congress passed bipartisan legislation to establish the museum.
The legislation also called for an American Latino History and Culture grant program. This summer Urban Institute researchers, like Godinez-Puig, have been working with the Institute of Museum and Library Services on how to dole out federal grants for American Latino museums and their programming. This initiative reflects the growing number of places occupied by Latino culture, she said.
"As s a Latina woman, myself, I'm just very excited to see that we ... not just only talk about where we are lagging, but also celebrate the variety of cultures that we have within our community and celebrate the hard-working people that contribute a lot to the American society," Godinez-Puig said. "Because they do."
Associated Press writer Anita Snow contributed to this report.
There are 38 resident bumblebee species in Finland and they have been monitored since 2019. Some of these are of particular interest, being Lapland specialists, and in 2018 a popular book about bumblebees, the first ever to look at the species in Finland, cemented a growing public interest.
This interest in bumblebees is perhaps indicative of a wider interest in pollinators generally in Finland. In 2022 this growing concern resulted in a national pollinator strategy.
Although bumblebee monitoring in Finland is in its infancy, interest in bumblebees there has a much longer history. As early as 1928, Olavi Hulkkonen, who worked as an assistant in Helsinki University's botany department, was responsible for one of the earliest publications on bumblebees. Sadly, he died in his early thirties. His work was to some extent continued by Karl Johannes Valle, an entomologist and keen bumblebee watcher, who studied bumblebees until the late 1950s, mainly around the nation's capital, Helsinki.
With a population of 5.5m, almost exactly the same as Scotland's, Finland is famed for its woodlands and lakes; indeed, lakes cover almost 10% of the country.
Finland’s national pollinator strategy shares largely the same goals as the Scottish version, which was used as one source of inspiration in designing it. The strategy was formed by a steering group including the most essential stakeholder groups (administration, farmers, conservation organizations etc.), led by the Ministry of Environment. The strategy includes several measures to prevent a further decline in pollinator numbers, as well as a commitment to improve the nationwide monitoring of pollinators. Finland has already had an on-going monitoring scheme for both moths and butterflies since the 90s. As a result of the goals in the pollinator strategy, new monitoring has also been started on solitary bees and hover flies. There are strong similarities between this and the UK PoMS work.
Bumblebee monitoring in Finland was started in 2019 as a two-year citizen-science project. Expectations were rather low; around 10-20 monitoring sites was thought to be realistic. Instead, the coordinators were astonished by an unprecedented popular and media interest in bumblebees and their monitoring. The number of sites reached 70 in the first year alone, and continued to increase. As the pilot proved to be a resounding success, it was continued in subsequent years.
The coverage in bumblebee monitoring, rather like in Scotland, tended to be geographically concentrated. Whereas the central belt sees most Scottish monitoring, it was the south of Finland that saw the greatest number of records submitted.
As few volunteers had previous experience in identifying bumblebees, they were allowed to record them on a level which suited their knowledge. Thus for some recorders it was sufficient to note at least a part of their observations simply as bumblebee spp, whereas others gave either species group or indeed individual species information.
Most of the bumblebee transects walked have been around 500 to 1,000m long. Records were submitted online. In the first three years 125 transects were covered, and in total over 55,000 bumblebees were recorded. As the coordinators expected, the proportion of bumblebee individuals identified to species level has been steadily increasing over the first four years of monitoring. This shows that although most of the recorders were amateurs to begin with, they have increased their skills through practice and produce more detailed information each year.
The species with the most individual records was the white-tailed bumblebee (Bombus lucorum), followed closely by the common carder (B. pascuorum). The tree bumblebee (B. hypnorum) provided the third highest number of records identified to species. The number of species per site has usually been around ten, rising to over fifteen at the most diverse locations. The monitoring has already shown that some species that have only recently colonized Finland, e.g. B. schrencki and B. semenoviellus, have already spread quite widely in the country.
We wish our Finnish counterparts good luck with their pollinator strategy, and look forward to carrying further news of their monitoring efforts over the years.
Bumblebee species on red clover in central Finland by PENTTI HÄNNINEN
With sincere thanks to Janne Heliola of the Finnish Environment Institute for all of his help in compiling this article.
More than 11,000 have died in the Türkiye-Syria earthquake. How does it compare to the world's deadliest quakes?
The death toll across Türkiye and war-ravaged northern Syria has exceeded 11,000, making Monday's earthquake the deadliest in more than a decade.
The World Health Organization has warned it expects the number of fatalities to rise significantly, with thousands trapped under buildings and cold weather hampering rescue efforts.
So, how does the event and death toll compare to other earthquakes across the world?
Where have the deadliest earthquakes hit?
It is Türkiye's deadliest earthquake since 1999, when a magnitude-7.6 earthquake hit Izmit, killing around 18,000 people.
One of the deadliest earthquakes in history happened off the coast of Indonesia on Boxing Day in 2004.
The magnitude-9.1 earthquake and resulting tsunami that swept across the Indian Ocean killed about 230,000 people.
The other deadliest earthquake in the past 25 years hit Haiti on January 12, 2010.
About 220,000 people were reportedly killed in the magnitude-7 quake.
It destroyed more than 300,000 buildings in Port-au-Prince and across the country's south-west.
However, the reported death toll varies from 100,000 to the government's estimation of more than 300,000.
Other deadly events include a magnitude-7.6 earthquake in Pakistan in 2005 that killed more than 80,000 people and a magnitude-7.9 quake that struck China in 2008, causing more than 87,500 fatalities.
A magnitude-9.0 quake off the north-east coast of Japan in 2011 triggered a tsunami which killed nearly 20,000 people.
Before Monday's earthquake, the deadliest in recent years was a magnitude-7.8 quake in Nepal in 2015, which killed about 8,800 people.
Highest magnitude earthquakes on record
The most powerful earthquake on record hit southern Chile in 1960 with a magnitude of 9.5, according to the United States Geological Survey.
The earthquake and the resulting tsunami claimed 1,655 lives.
Four years later, a magnitude-9.2 quake hit Southern Alaska.
The Japanese quake and tsunami in 2011 and Indian Ocean earthquake and tsunami in 2004 both reached a magnitude of 9.1.
How is magnitude measured?
To determine the size of an earthquake, the amplitude of the seismic waves and the distance of the seismograph from the earthquake are measured, according to Geoscience Australia.
They are then put in a formula to be converted to magnitude.
The magnitude measures the size of the earthquake by the energy released at the source.
Each whole number in the scale corresponds to the release of about 31 times more energy.
A magnitude-2 quake is typically said to be the smallest commonly felt by humans.
How does Monday's earthquake compare?
On average, there are fewer than 20 quakes over magnitude-7.0 in any year, making the earthquake in Türkiye and Syria severe.
A magnitude-6.2 earthquake that hit central Italy in 2016 killed some 300 people.
In comparison, the Türkiye-Syria earthquake released 250 times as much energy with a magnitude of 7.8.
That's according to Joanna Faure Walker, head of University College London's Institute for Risk and Disaster Reduction.
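As a rough check on those figures (my arithmetic, not something stated in the article): for moment magnitude, the radiated seismic energy grows by a factor of about 10^1.5 per whole unit of magnitude, which is where both the "about 31 times" per whole number and the "250 times" comparison come from.

```python
# Approximate ratio of seismic energy released between two moment magnitudes:
# each whole unit of magnitude corresponds to roughly 10**1.5 (about 31.6x) more energy.
def energy_ratio(m_big: float, m_small: float) -> float:
    return 10 ** (1.5 * (m_big - m_small))

print(round(energy_ratio(3.0, 2.0), 1))  # 31.6 -> "about 31 times more energy" per whole number
print(round(energy_ratio(7.8, 6.2)))     # 251  -> the quoted "250 times as much energy"
```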
Only two of the deadliest earthquakes from 2013 to 2022 were of the same magnitude as Monday's quake.
What are the challenges facing earthquake responses?
Every earthquake response comes with its own set of challenges and issues.
This is evident as rescue teams in Türkiye and Syria battle severe weather conditions causing freezing temperatures and snow.
The conditions will also be affecting those who were left homeless or without shelter.
In 2015, Nepal's high-altitude and mountainous terrain presented response teams with huge issues, especially in getting help to remote areas.
Monsoonal rains and subsequent landslides also created huge problems for the rescue effort.
The response to the Haiti earthquake in 2010 saw an influx of inexperienced first responders into the capital Port-au-Prince, which complicated relief efforts.
It was also later revealed that workers from the charity Oxfam were exploiting earthquake survivors for sex.
And the sheer size of the affected areas in the Boxing Day earthquake and tsunami in 2004 overwhelmed aid agencies.
Subsequent reports said charity rivalries, inappropriate aid and managing the huge amounts of money donated also hampered relief efforts.
ABC with wires
UTSW researcher explores a "cooler" approach to treating brain cancer
Why is glioblastoma difficult to treat?
One of the things with cancers, especially something like glioblastoma, is that it is invasive. When you're working in the brain, you're always going to have some cells left over, because you can only resect what you can see and you can only radiate that region.
And then when you apply chemotherapy, you're going to kill only those cells that are sensitive to chemotherapy. And so any cells that already have some resistance, they'll become the predominant cells. And when they become the predominant cells, the tumor recurs.
Your research has focused on cooling or lowering the temperatures of cancer cells. So what exactly does that accomplish?
Tumor cells are constantly using resources and producing things. And because of this, they're also highly replicative, meaning they create lots of copies of themselves and that's how they grow and spread.
With hypothermia, the idea was something we can do that would reduce the metabolism, reduce the cell division, and reduce everything of the tumor cells that would still be safe. That way you're not just targeting one or two targets or one or two molecules. You're affecting multiple pathways at the same time in the tumor.
And so it's harder for the tumor to, you know, evolve around it or develop resistance around it. And at the same time, as long as, you know, the temperature is a good, safe region, it should be safe for the other cells of the brain as well since there's not as much cell division in the brain.
How exactly do you cool the cells?
We were able to make a device that uses something called a thermoelectric plate. It's similar to a heat pump where you apply power or electricity to this plate and it pulls heat. And so then we had a heat sink and a fan that would blow that into the surrounding air.
And so that's how we did it in rats. We had this probe that would go in the tumor, and then we had a plate that would pull that heat that was powered, and then get thrown out into the air with a heatsink and a fan.
But in patients, of course, you can't do that and you cannot imagine a patient wearing a big heat sink or a fan on their head.
And so what I'm working on now is instead of using a fan to blow the heat out into the air, I'm distributing that heat through the body, through the skin, with a different kind of system so that ultimately this could be a fully implantable system for patients.
The human brain could withstand that?
You know, people have been exploring cooling the brain for a really long time. The first part of even trying to cool tumor cells was one line in a paper that was back in 1959.
In the sixties or so, people started exploring hypothermia. And then in the nineties especially, they started using brain cooling to help with stroke and brain injuries.
There's still been a lot of debate as to how protective it is, but there is some evidence to suggest certain levels of hypothermia might be protective.
As far as you know at this point, could this be done on any case of glioblastoma or has the study run into limitations of any type?
Chances are, whenever we start any trials, we're going to first do standard-of-care therapy: surgery, chemotherapy, and radiation.
Then chances are, from there, when the tumor most likely recurs, at least early on, that's when we would intervene with this device, when there are no other options. Ultimately, of course, if things get better, we could start sooner, and probably early on we would be targeting tumors that are more superficial and more contained.
Eventually, as this device becomes more and more patient-centric, we envision that we'd be able to get to deeper tumors and deeper regions where we might not be able to resect those tumors, but we might be able to put a probe down in that deeper region. But those are far in the future.
Copyright 2023 KERA. To see more, visit KERA.
The rate of people dying from cancer in the United States has continuously declined over the past three decades, according to a new report from the American Cancer Society.
The US cancer death rate has fallen 33% since 1991, which corresponds to an estimated 3.8 million deaths averted, according to the report, published Thursday in CA: A Cancer Journal for Clinicians. The rate of lives lost to cancer continued to shrink in the most recent year for which data is available, between 2019 and 2020, by 1.5%.
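That "deaths averted" figure comes from comparing the deaths that actually occurred with the deaths that would have been expected had the death rate stayed at its 1991 peak. The sketch below illustrates that logic with invented numbers; it is not a reproduction of the American Cancer Society's calculation, which works year by year with age-adjusted rates and census population counts.

```python
# Illustrative only: the general shape of a "deaths averted" estimate.
# Every number below is invented for demonstration; none are ACS data.
peak_rate = 215.0                 # hypothetical peak death rate per 100,000 people
observed = {                      # year: (hypothetical rate per 100,000, hypothetical population)
    2018: (152.0, 327_000_000),
    2019: (148.0, 329_000_000),
    2020: (144.0, 331_000_000),
}

averted = 0.0
for year, (rate, population) in observed.items():
    expected = peak_rate * population / 100_000   # deaths if the peak rate had persisted
    actual = rate * population / 100_000          # deaths at the observed rate
    averted += expected - actual

print(f"Estimated deaths averted over {len(observed)} years: {averted:,.0f}")
```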
The 33% decline in cancer mortality is “truly formidable,” said Karen Knudsen, chief executive officer of the American Cancer Society.
The report attributes this steady progress to improvements in cancer treatment, drops in smoking and increases in early detection.
“New revelations for prevention, for early detection and for treatment have resulted in true, meaningful gains in many of the 200 diseases that we call cancer,” Knudsen said.
In their report, researchers from the American Cancer Society also pointed to HPV vaccinations as connected to reductions in cancer deaths. HPV, or human papillomavirus, infections can cause cervical cancer and other cancer types, and vaccination has been linked with a decrease in new cervical cancer cases.
Among women in their early 20s, there was a 65% drop in cervical cancer rates from 2012 through 2019, “which totally follows the time when HPV vaccines were put into use,” said Dr. William Dahut, the society’s chief scientific officer.
“There are other cancers that are HPV-related – whether that’s head and neck cancers or anal cancers – so there’s optimism this will have importance beyond this,” he said.
The lifetime probability of being diagnosed with any invasive cancer is estimated to be 40.9% for men and 39.1% for women in the US, according to the new report.
The report also includes projections for 2023, estimating that there could be nearly 2 million new cancer cases – the equivalent of about 5,000 cases a day – and more than 600,000 cancer deaths in the United States this year.
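The "about 5,000 cases a day" framing is simply the annual projection spread over the year; a quick check using the rounded 2 million figure, rather than the report's exact projection, looks like this:

```python
projected_new_cases = 2_000_000                 # rounded annual projection, not the exact figure
print(f"{projected_new_cases / 365:,.0f} new cases per day")
# about 5,480 with this rounded input; the article rounds to "about 5,000 cases a day"
```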
During the early days of the Covid-19 pandemic, many people skipped regular medical exams, and some doctors have seen a rise in advanced cancer cases in the wake of pandemic-delayed screenings and treatment.
The American Cancer Society researchers were not able to track “that reduction in screening that we know we all observed across the country during the pandemic,” Knudsen said. “This time next year, I believe our report will give some initial insight into what the impact was in the pandemic of cancer incidence and cancer mortality.”
‘The continuation of good news’
The new report includes data from national programs and registries, including those at the National Cancer Institute, the US Centers for Disease Control and Prevention and the North American Association of Central Cancer Registries.
Data showed that the US cancer death rate rose during most of the 20th century, largely due to an increase in lung cancer deaths related to smoking. Then, as smoking rates fell and improvements in early detection and treatments for some cancers increased, there was a decline in the cancer death rate from its peak in 1991.
Since then, the pace of the decline has slowly accelerated.
The new report found that the five-year relative survival rate for all cancers combined has increased from 49% for diagnoses in the mid-1970s to 68% for diagnoses during 2012-18.
The cancer types that now have the highest survival rates are thyroid at 98%, prostate at 97%, testis at 95% and melanoma at 94%, according to the report.
Current survival rates are lowest for cancers of the pancreas, at 12%.
The finding about a decreasing cancer death rate shows “the continuation of good news,” said Dr. Otis Brawley, an oncology professor at Johns Hopkins University who was not involved in the research.
“The biggest reason for the decline that started in 1991 was the prevalence of smoking in the United States started going down in 1965,” said Brawley, a former chief medical officer of the American Cancer Society.
“That’s the reason why we started having a decline in 1991, and that decline has continued because the prevalence of people smoking in the United States has continued to go down,” he said. “Now, in certain diseases, our ability to treat has improved, and there are some people who are not dying because of treatment.”
Some cancers are rising
Although the death rate for cancer has been on a steady decline, the new report also highlights that new cases of breast, uterine and prostate cancer have been “of concern” and rising in the United States.
Incidence rates of breast cancer in women have been increasing by about 0.5% per year since the mid-2000s, according to the report.
Uterine corpus cancer incidence has gone up about 1% per year since the mid-2000s among women 50 and older and nearly 2% per year since at least the mid-1990s in younger women.
The prostate cancer incidence rate rose 3% per year from 2014 through 2019, after two decades of decline.
Knudsen called prostate cancer “an outlier” since its previous decline in incidence has reversed, appearing to be driven by diagnoses of advanced disease.
On Thursday, the American Cancer Society announced the launch of the Impact initiative, geared toward improving prostate cancer incidence and death rates by funding new research programs and expanding support for patients, among other efforts.
“Unfortunately, prostate cancer remains the number one most frequently diagnosed malignancy amongst men in this country, with almost 290,000 men expected to be diagnosed with prostate cancer this year,” Knudsen said. Cancer diagnosed when it is confined to the prostate has a five-year survival rate of “upwards of 99%,” she said, but for metastatic prostate cancer, there is no durable cure.
“Prostate cancer is the second leading cause of cancer death for men in this country,” she said. “What we’re reporting is not only an increase in the incidence of prostate cancer across all demographics but a 5% year-over-year increase in diagnosis of men with more advanced disease. So we are not catching these cancers early when we have an opportunity to cure men of prostate cancer.”
‘It’s well past time for us to take health inequities seriously’
Breast, uterine and prostate cancers also have a wide racial disparity, in which communities of color have higher death rates and lower survival rates.
In 2020, the risk of overall cancer death was 12% higher in Black people compared with White people, according to the new report.
“Not every individual or every family is affected equally,” Knudsen said.
For instance, “Black men unfortunately have a 70% increase in incidence of prostate cancer compared to White men and a two- to four-fold increase in prostate cancer mortality as related to any other ethnic and racial group in the United States,” she said.
The data in the new report demonstrates “important and consistent” advances against cancer, Dr. Ernest Hawk, vice president of cancer prevention and population sciences at the University of Texas MD Anderson Cancer Center, said in an email.
“Cancer is preventable in many instances and detectable at an early stage with better outcomes in many others. When necessary, treatments are improving in both their efficacy and safety. That’s all great news,” Hawk wrote.
“However, it’s well past time for us to take health inequities seriously and make them a much greater national priority. Inequities in cancer risks, cancer care and cancer outcomes are intolerable, and we should not be complacent with these regular reminders of avoidable inequities,” he said. “With deliberate and devoted effort, I believe we can eliminate these disparities and make even greater progress to end cancer.”
The White House responds
The new report shows “great progress,” White House Cancer Moonshot Coordinator Dr. Danielle Carnival said in a statement released Thursday.
The White House’s “Cancer Moonshot” initiative, which President Biden relaunched last year, commits the nation to work toward reducing the death rate from cancer by at least 50% over the next 25 years.
Get CNN Health's weekly newsletter
Sign up here to get The Results Are In with Dr. Sanjay Gupta every Tuesday from the CNN Health team.
“The report showing the U.S. has cut cancer deaths by one-third over the last 30 years is great progress, which we’ve achieved through driving smoking rates down, improving early detection, and delivering better treatments for many cancers. It means millions of American families have been spared the immeasurable loss of a loved one,” Carnival said in the statement.
“The report also underscores that there’s more work to do to save more lives,” she said. “President Biden’s vision for ending cancer as we know it is building on the progress we’ve made with an all-hands-on-deck effort to develop new ways to prevent, detect, and treat cancer – and ensure that the tools we have and those we develop along the way reach all Americans.”
Financial health refers to the state of your personal monetary affairs and encompasses the peace of mind brought by knowing you are equipped to handle emergencies, can afford leisure without guilt, and make choices that allow you to enjoy life.
Assessing Your Financial Health
Evaluate Your Income And Expenses
The foundation of any financial assessment begins with understanding your cash flow. This requires a comprehensive review of:
- Sources Of Income: These can range from regular paychecks to passive income streams like rental income or dividends. Regularly updating and evaluating these sources can ensure that you are maximizing your earning potential.
- Fixed And Variable Expenses: While fixed expenses, like rent or mortgage, remain constant, variable expenses, such as entertainment or dining out, can fluctuate. Breaking down and reviewing these expenses helps in identifying areas where you might be overspending or could save.
- Surplus Or Deficit: Once you have totaled your monthly income and expenses, you will either have a surplus, which can be channeled into savings or investments, or a deficit, indicating the need for budget adjustments.
Analyze Your Debt And Credit Score
Debt, when mismanaged, can become a roadblock to financial prosperity. You should know the types and amount of debt you have and understand your credit score.
Differentiating between high-interest credit card debt, personal loans, mortgages, and student loans is essential. Each debt type has its implications and requires distinct strategies for management.
Meanwhile, your credit score is a reflection of your creditworthiness. Periodic reviews can help identify any discrepancies or areas for improvement. Familiarizing yourself with the factors affecting your score, such as credit utilization or payment history, is also essential.
Analyzing debt and credit score will indicate if you are on track or need interventions like consolidation or refinancing.
Review Your Savings And Emergency Fund
Savings act as a financial buffer, providing peace of mind and the means to achieve long-term objectives.
It is advised to have three to six months’ worth of expenses in an easily accessible emergency fund. This fund should be re-evaluated, especially after significant life events or changes in monthly expenses.
Apart from the emergency fund, it is essential to differentiate between savings for short-term goals, like a vacation or a car, and long-term objectives, such as retirement or purchasing a home.
Evaluating your current savings against this benchmark can indicate if you are prepared for unexpected financial downturns or opportunities.
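The assessment steps above come down to a small amount of arithmetic. The sketch below, using invented figures rather than any recommended amounts, totals monthly income and expenses, reports the resulting surplus or deficit, and computes the three-to-six-month emergency-fund range discussed in this section.

```python
# Minimal monthly cash-flow check; all amounts are made-up placeholders.
income = {"salary": 4_800, "dividends": 150}                            # income sources
fixed = {"rent": 1_600, "insurance": 220, "loan_payment": 310}          # fixed expenses
variable = {"groceries": 450, "dining_out": 180, "entertainment": 120}  # variable expenses

total_income = sum(income.values())
total_expenses = sum(fixed.values()) + sum(variable.values())
net = total_income - total_expenses

print(f"Monthly income:   {total_income:,}")
print(f"Monthly expenses: {total_expenses:,}")
print(("Surplus" if net >= 0 else "Deficit") + f" of {abs(net):,}")

# Emergency-fund benchmark: three to six months of expenses.
print(f"Emergency fund target: {3 * total_expenses:,} to {6 * total_expenses:,}")
```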
Strategies For Improving Financial Health
Financial well-being is a continuous journey. Implementing effective strategies ensures that you achieve your desired financial goals and maintain a robust financial position in the long term.
You can pave the path toward sustainable financial health by focusing on realistic goal setting, structured planning, and expert guidance.
Set Realistic Financial Goals
Break down your objectives based on the time frame:
- Short-Term Goals (1-3 Years): These can include saving for a vacation, purchasing a car, or building an emergency fund. It is essential to have these goals as they provide immediate motivation and offer short-term financial security.
- Mid-Term Goals (4-10 Years): Examples include buying a home, funding higher education, or starting a business. They require a more prolonged commitment and often considerably more capital.
- Long-Term Goals (10+ Years): These revolve around retirement, legacy planning, or long-term investments. Establishing these goals early can capitalize on compound interest, smoothing the financial journey.
You may also follow the SMART Goal-Setting Method:
- Specific: Clearly define what you want to achieve.
- Measurable: Determine the metrics or amounts necessary to realize the goal.
- Achievable: Ensure the goal is realistic based on your current and expected financial situation.
- Relevant: Align the goal with your broader financial plan and life objectives.
- Time-Bound: Set a definitive time frame for achieving the goal, providing motivation and a sense of urgency.
Create A Comprehensive Financial Plan
A well-constructed financial plan encompasses every aspect of your life, from daily expenses to retirement planning, to balance your present needs and future aspirations.
For example, if homeownership is a primary goal, consider saving for a down payment by setting aside a specific amount each month or exploring investment avenues that offer higher returns over your desired time frame.
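One way to make that down-payment example concrete is the future-value formula for regular contributions. The sketch below is a generic illustration; the 4% return, the 60,000 target, and the five-year horizon are assumptions chosen for the example, not advice.

```python
# Generic savings-goal arithmetic; inputs are illustrative assumptions only.
def future_value(monthly_payment: float, annual_rate: float, years: int) -> float:
    """Value of equal end-of-month contributions compounded at a fixed monthly rate."""
    i, n = annual_rate / 12, years * 12
    return monthly_payment * n if i == 0 else monthly_payment * ((1 + i) ** n - 1) / i

def required_monthly(target: float, annual_rate: float, years: int) -> float:
    """Monthly contribution needed to reach `target` after `years` at `annual_rate`."""
    i, n = annual_rate / 12, years * 12
    return target / n if i == 0 else target * i / ((1 + i) ** n - 1)

# Example: a 60,000 down payment in 5 years at an assumed 4% annual return.
print(f"Required monthly saving: {required_monthly(60_000, 0.04, 5):,.2f}")
print(f"Saving 800/month for 5 years grows to about {future_value(800, 0.04, 5):,.2f}")
```

The same arithmetic underlies the compound-interest advantage mentioned for long-term goals: the longer the horizon, the more of the final balance comes from growth rather than from contributions.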
Long-term goals like retirement require understanding various retirement savings accounts, monitoring market trends, and adjusting your risk appetite as you near retirement age.
However, a financial plan is not a set-it-and-forget-it tool. It requires regular review and potential adjustments to adapt to changing circumstances and significant life events.
Adjustments may involve reallocating assets, revising your investment strategy, or redefining your goals.
Seek Professional Financial Advice
Financial advisors and other professionals have spent years acquiring specialized knowledge and firsthand experience dealing with diverse situations, making them an invaluable resource for improving or maintaining your financial health.
They assess your unique financial position, aspirations, and risk tolerance, enabling the creation of a tailored strategy, aiming to optimize your resources for current and future objectives.
Financial advisors may be fee-based or commission-driven. Each type has its merits and potential conflicts of interest, so it is crucial to understand the nature of the relationship upfront.
Aside from compensation structure, evaluate advisors’ qualifications, experience, and fiduciary responsibility. Ask about their certifications—for example, whether they are a Certified Financial Planner or Chartered Financial Analyst.
These credentials indicate their expertise and commitment to ongoing education in the field. Moreover, inquiring about their fiduciary responsibility will ensure they are obligated to act in your best interests rather than their own or their firm’s.
Lastly, the right financial advisor should be someone you trust and feel comfortable discussing personal financial matters with. Building a rapport and ensuring open communication lines are essential for a successful, long-lasting professional relationship.
It is a collaboration where both parties should feel engaged and invested in achieving the desired financial outcomes.
Financial health is a multifaceted concept that extends beyond mere numbers in a bank account. It encompasses a sense of security, preparedness for the future, and the freedom to enjoy life’s pleasures without financial strain.
Evaluating and improving one’s financial health involves a keen understanding of income, expenses, debt, and savings, coupled with setting clear, realistic goals and executing actionable strategies.
While the journey to financial well-being can appear daunting, professional financial advisors can provide valuable support.
Commitment and proactive management are the pillars of sound financial health, serving as lifelong tools in navigating the evolving terrain of personal finance.
True paradigm shifts are rare, which helps to explain the buzz around ChatGPT, a chatbot driven by so-called generative artificial intelligence that promises to revolutionize the way people interact with computers. It’s become a global sensation since its November launch by giving seemingly sophisticated yet plain-language answers to almost any kind of question. Technology giants such as Microsoft Corp., Google and Baidu Inc. are betting heavily on this new technology, which has the potential to upend the lucrative search market, even as its wider use is turning up potentially serious flaws.
1. What is generative AI?
These systems use neural networks, which are loosely modeled on the structure of the human brain and learn to complete tasks in similar ways, chiefly through trial-and-error. During training, they’re fed vast amounts of information (for example, every New York Times bestseller published in 2022) and given a task to complete using that data, perhaps: “Write the blurb for a new novel.” Over time, they’re told which words and sentences make sense and which don’t, and subsequent attempts improve. It’s like a child learning to pronounce a difficult word under the instruction of a parent. Slowly, they learn and apply that ability to future efforts. What makes them so different to older computer systems is that the results are probabilistic, meaning responses will vary each time but will gradually get smarter, faster and more nuanced.
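To make "probabilistic" concrete, here is a deliberately tiny sketch of text generation: it counts which word follows which in a small made-up corpus, then samples a chain of words from those counts, so the output can differ from run to run. Real systems such as ChatGPT use transformer neural networks trained on vastly more data rather than word counts; this is only an illustration of the sampling idea.

```python
import random
from collections import defaultdict, Counter

# Tiny invented corpus; any plain text would work here.
corpus = ("the model reads text and counts which word follows which word "
          "then the model samples the next word from those counts").split()

# "Training": record next-word frequencies (pattern-finding on raw text, no labels).
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def generate(start: str = "the", length: int = 8) -> str:
    """Sample a chain of words; because sampling is random, runs differ."""
    word, out = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break  # no observed continuation for this word
        choices, weights = zip(*followers.items())
        word = random.choices(choices, weights=weights, k=1)[0]
        out.append(word)
    return " ".join(out)

print(generate())
```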
2. How does ChatGPT work?
ChatGPT is the latest iteration of GPT (Generative Pre-Trained Transformer), a family of text-generating AI programs developed by San Francisco-based laboratory OpenAI. GPTs are trained in a process called unsupervised learning, which involves finding patterns in a dataset without being given labeled examples or explicit instructions on what to look for. The most recent version, GPT-4, builds on its predecessor, GPT-3.5, which ingested text from across the web, including Wikipedia, news sites, books and blogs in an effort to make its answers relevant and well-informed. ChatGPT adds a conversational interface on top of the program. At their heart, systems like ChatGPT are generating convincing chains of words but have no inherent understanding of their significance, or whether they’re biased or misleading. All they know is that they sound like something a person would say.
3. Who is behind OpenAI?
It was co-founded as a nonprofit by programmer and entrepreneur Sam Altman to develop AI technology that “benefits all of humanity.” Early investors included LinkedIn co-founder Reid Hoffman’s charitable foundation, Khosla Ventures and Elon Musk, who ended his involvement in 2018. OpenAI shifted to create a for-profit entity in 2019, when Microsoft invested $1 billion.
4. What’s been the response to ChatGPT?
More than a million people signed up to use it following the launch in late November. Social media has been abuzz with users trying fun, low-stakes uses for the technology. Some have shared its responses to obscure trivia questions. Others marveled at its sophisticated historical arguments, college “essays,” pop song lyrics, poems about cryptocurrency, meal plans that meet specific dietary needs and solutions to programming challenges. The flurry of interest also raised the profile of OpenAI’s other products, including software that can beat humans at video games and a tool known as Dall-E that can generate images – from the photorealistic to the fantastical – based on text descriptions.
5. Who’s going to make money from all this?
Tech giants like Microsoft have spotted generative AI’s potential to upend the way people navigate the web. Instead of scouring dozens of articles on a topic and firing back a line of relevant text from a website, these systems can deliver a bespoke response. Microsoft deepened its relationship with OpenAI in January with a multiyear investment valued at $10 billion that gave it a part-claim on OpenAI’s future profits in exchange for the computing power of Microsoft’s Azure cloud network. In February, Microsoft integrated a cousin of ChatGPT into its search engine Bing. The announcement was a challenge to rival search giant Google, which responded by trailing a launch of its own conversational AI service, Bard. China’s Baidu was also planning to introduce an AI chatbot. However, questions remain about how to monetize search when there aren’t pages of results into which you can insert ads.
6. How’s the competition going?
OpenAI spent the months since unleashing ChatGPT refining the program based on feedback identifying problems with accuracy, bias and safety. GPT-4 is, the lab says, “40% more likely” to produce factual responses and is also more creative and collaborative. In Bloomberg tests, it still struggled to compose a cinquain poem about meerkats and regurgitated gender stereotypes. Google’s Bard got off to a rocky start when it made a mistake during a public demonstration in February, which sparked concerns that the company had lost ground in the race for the future of search. Facebook parent Meta Platforms Inc. was hurrying to put together a generative AI product group from teams that were previously scattered throughout the company.
7. What other industries could benefit?
The economic potential of generative AI systems goes far beyond web search. They could allow companies to take their automated customer service to a new level of sophistication, producing a relevant answer the first time so users aren’t left waiting to speak to a human. They could also draft blog posts and other types of PR content for companies that would otherwise require the help of a copywriter.
8. What are generative AI’s limitations?
The answers it pieces together from second-hand information can sound so authoritative that users may assume it has verified their accuracy. What it’s really doing is spitting out text that reads well and sounds smart but might be incomplete, biased, partly wrong or, occasionally, nonsense. These systems are only as good as the data they are trained with. Stripped from useful context such as the source of the information, and with few of the typos and other imperfections that can often signal unreliable material, ChatGPT’s content could be a minefield for those who aren’t sufficiently well-versed in a subject to notice a flawed response. This issue led StackOverflow, a computer programming website with a forum for coding advice, to ban ChatGPT responses because they were often inaccurate.
9. What about ethical risks?
As machine intelligence becomes more sophisticated, so does its potential for trickery and mischief-making. Microsoft’s AI bot Tay was taken down in 2016 after some users taught it to make racist and sexist remarks. Another developed by Meta encountered similar issues in 2022. OpenAI has tried to train ChatGPT to refuse inappropriate requests, limiting its ability to spout hate speech and misinformation. Altman, OpenAI’s chief executive officer, has encouraged people to “thumbs down” distasteful or offensive responses to improve the system. But some users have found work-arounds. Generative AI systems might not pick up on gender and racial biases that a human would notice in books and other texts. They are also a potential weapon for deceit. College teachers worry about students getting chatbots to do their homework. Lawmakers may be inundated with letters apparently from constituents complaining about proposed legislation and have no idea if they’re genuine or generated by a chatbot used by a lobbying firm.
--With assistance from Alex Webb and Nate Lanxon.
More stories like this are available on bloomberg.com
©2023 Bloomberg L.P.
Lebanese American poet Khalil Gibran is perhaps best known as the author of "The Prophet," a collection of poetic essays that sold more than 10 million copies since its publication a century ago, in 1923.
What's far less known is that Gibran wrote the first draft of his most prominent work at a farm in Buzzards Bay, Massachusetts.
How did a young man from the mountains of Lebanon come to write what the BBC once referred to as "the Bible of the counterculture" at the entrance to the Cape?
Radio Boston's Tiziana Dearing sat down with Jean Gibran, a relative and biographer of the poet, to find out.
Highlights from this interview have been lightly edited for clarity.
On how the 12-year-old Kahlil Gibran arrived in Massachusetts in 1895:
"Gibran came with his mother, his half brother, and his two sisters. They came to a street that was called then Oliver Place, and they settled in the same tenement building as my late husband's grandfather. In other words, there were already Gibrans there. So they stayed and they lived with that crowd of Syrians. They were called Syrians in those days — Lebanon was not a part of the population's name and actually their passports had the Ottoman Empire seals on it because the Ottoman Empire was what ruled that area.
"Gibran immediately was enrolled in the Quincy School, which was at the Tyler Street, and a settlement house called the Denison House. Settlement houses were very important because they involved young people and tried to make it easier for these new immigrants to assimilate."
On how Gibran's talents were discovered after he sketched a notorious sculpture at the Boston Public Library:
"There a sculpture in the atrium — a nude sculpture called the Bacchante. It was very sensational because this was a nude woman carrying a child, obviously drinking alcohol. The censors in Boston became irritated, but here was this adolescent, young Syrian boy who was keeping now a daybook of drawings. He drew the Bacchante. That's very providential because Bostonians got rid of that sculpture within two years of its being installed.
"So one of the fabulous thing is that as a result of all his sketches and his talent, the people at Denison House called in a very, well-known photographer who was also well known as a publisher, Fred Holland Day. And working with Day is what introduced the young Syrian adolescent to literature.
"There is no question that he may have never encountered poetry and literature the way he was so early. He started to run around doing errands, posing for Fred Holland Day, and got immersed in the intellectual bohemianism of the early 20th century."
On how Gibran chose to assimilate in his new home:
"He was terribly aware of either completely joining the intellectual community surrounding him in Boston and in New York or just staying with the immigrants. And it's interesting that when he returned to the United States [from a visit to Paris, France], instead of settling in the Syrian community, he brought his sister, Mariana, into Beacon Hill.
"People don't remember that he did not settle in Little Syria, which meant he was interested in assimilating. [Similarly,] when he went to New York, he settled in Greenwich Village, and became very friendly with so many people who led him to [other literary figures]. You see what I'm saying? Could he have stayed in the Syrian community? Yes, but I don't think he would have become an international figure."
On the fateful invitation Gibran received in 1918 to join writer and poet Marie Garland on a literary retreat in Buzzards Bay:
"I think he had more fun down there than he ever had. He actually was driving in an automobile. He said, 'I can do anything here!' Now there was no driving an automobile in New York or Boston. He said he had all sorts of freedom...
"[Among others, he was with] a writer who was Hindu and his spouse, who was an American. So you can see it was an interesting, diverse community. And they were all enjoying each other and working. I think it was one of the places that Gibran felt free, felt liberated and was completely assimilated.
"There were no ethnic worries or considerations [for Gibran,] it was completely bohemian. And [he started] what was called then the "Councils" — that's what the poet first called ["The Prophet"]. One of the words he also used for it was the "Commonwealth."
"Many people are always saying, 'Oh, this is where "The Prophet" was written,' either in Boston or Cambridge or Greenwich Village. but it was started at Bay End Farm, Buzzards Bay."
On why "The Prophet" remains relevant today:
"He took every part of living and showed us as immigrants, as people who might not be completely assimilated, how to survive in this universe. That's what it is.
"I really mean it when I tell you that it was so important for him to embrace the universe. ['The Prophet' is a reflection of] universal, essential items of what people have to cope with — how to take care of your children, how to take care of education — every single one of these items is something we all have to do, no matter where we are.
"That's why ['The Prophet'] has become a universal Bible without the religious accoutrement. If you are religious, you have to obey certain commandments, but he doesn't. We don't have to do that when reading 'The Prophet.' All you have to do is to become habituated with the major theme, which is obviously love."
This article was originally published on June 05, 2023.
This segment aired on June 5, 2023.
Climate change is increasing the fire risk on the mostly treeless Great Plains
A MARTÍNEZ, HOST:
Climate change is increasing the fire risk on the mostly treeless Great Plains. Montana Public Radio's Aaron Bolton reports on efforts to get prairie dwellers to adapt to the new reality.
AARON BOLTON, BYLINE: Homebuilder Josh Poser lives in the small town of Denton, surrounded by the grasslands of eastern Montana. The last thing he thought he'd be doing is fighting a wildfire in December, but that's exactly what happened a couple of years ago as 70-mile-an-hour winds pushed flames across 10,000 acres.
JOSH POSER: Late that night, you know, we're putting some embers out in the yard and sprinkler on the roof, and they had patrols going all over the place.
BOLTON: Overnight, flames consumed Poser's house and 24 others. Don Pyrah is with Montana's state fire agency. He says firefighters were quickly overwhelmed because, unusually, there was no snow on the ground in December, and it was way too warm.
DON PYRAH: And it was 56 degrees in the middle of the night. That's not normal.
BOLTON: Researchers say the warming climate means more dry Decembers and a lot less snow cover across the Great Plains, meaning a lot more fire risk during a typically windier time of the year. University of Florida researcher Victoria Donovan led a 2017 study that found fire activity on the Great Plains has increased by 3 1/2 times in recent decades. She says that a century of fire suppression has also allowed more trees and woody vegetation to grow, making fires more intense.
VICTORIA DONOVAN: There's a lot more opportunities for these wildfires to occur and also for them to be a lot more destructive.
BOLTON: That kind of research isn't really embraced in conservative eastern Montana. Official growth policy in the county that had the big fire explicitly opposes President Biden's 2021 executive order on climate change. Mike DeVries is chief of the volunteer fire department in Denton, the town of 200 that was burned over a couple of Decembers ago.
MIKE DEVRIES: I mean, I grew up in Montana, and I know we've been through droughts. I don't know that people just attribute it to one thing.
BOLTON: But DeVries acknowledges that fire was well outside the norm.
DEVRIES: That was by far the most active and unbelievable year that we'd ever had.
BOLTON: Anika Peila with Montana's state fire agency is trying to help people better prepare their homes for fires. She says it's a tougher sell out here than in more forested parts of the state.
ANIKA PEILA: You can blame climate change, the drought, whatever you want to blame, but it ultimately starts with people's homes.
BOLTON: Peila will make suggestions for property owners, like shifting to metal roofs and less flammable building materials or cutting back trees and shrubs near their home. But there's been little interest so far.
PEILA: I feel like that's people's beauty. That's people's paradise.
BOLTON: But Josh Poser, who lost his house in the December fire, is still living with his family in a camper. He takes the threat of another wildfire more seriously now. Standing inside the unfinished walls of the new home he hopes to finish this fall, he says they are building in a more fire-resilient way to avoid losing their home again.
POSER: There will be concrete siding, metal roof. Before, we had - everything was wood - wood windows, wood siding, wood everything. So it was a recipe for disaster.
BOLTON: Those are exactly the kind of changes state fire managers would like to see more people in the Great Plains make. They're hoping others will be more likely to do the same if they see their neighbor do it first.
For NPR News, I'm Aaron Bolton in Denton, Mont.
(SOUNDBITE OF MUSIC) Transcript provided by NPR, Copyright NPR.
Building on the global boom in viral surveillance during the pandemic, U.K. scientists on Tuesday unveiled an initiative to expand sequencing of the common seasonal respiratory bugs that have received comparatively little attention.
The Respiratory Virus & Microbiome Initiative, launched and funded by the Wellcome Sanger Institute, will track the evolution not just of SARS-CoV-2, the coronavirus that causes Covid-19, but also other coronaviruses, different flu families, RSV, and other pathogens that typically just cause the sniffles but collectively lead to waves of illness every year. Researchers hope the initiative will enable them to better monitor viruses in the U.K. as they change, alert them to any worrisome mutations, and get tipped to the emergence of new viruses.
“The ability to track and look for these events early is obviously something that’s really important,” said Ewan Harrison, the head of the initiative.
The program, a collaboration with the U.K. Health Security Agency and other scientists, hopes to generate tons of data for academics and public health officials to use in their work, and also aims to “supercharge” research that could ultimately lead to the development of vaccines and therapeutics, Harrison said. It’s also simply about better understanding these viruses. While flu has attracted lots of research over the years, some of the other bugs — like rhinovirus or adenovirus — are not as well-monitored. Scientists don’t even understand their transmission dynamics all that well, he said.
Viral sequencing exploded during the pandemic, with global efforts helping detect variants like Delta and Omicron (and all the Omicron sublineages) and guiding response strategies. A major moment in the pandemic was when, in early January 2020, scientists publicly released the genome of the virus, which allowed responders around the world to start developing diagnostic tests and served as a starter’s pistol for vaccine development. From there, scientists shared millions of sequences on public trackers.
More recently, the number of Covid infections being sequenced has collapsed as much of the world has moved past the emergency phase — a trend public health officials globally have lamented.
It’s not just the detection of major new variants that sequencing can enable. Sequencing viruses can help scientists track routes of transmission, whether in a hospital or from country to country. When combined with lab studies or epidemiological research, it can answer questions about the virus’s basic biology, whether the virus is becoming more transmissible, or whether the impact of an infection is changing — like how the Delta variant seemed to cause more severe disease. It’s also a tool that can help track how well vaccines are holding up against evolving viruses.
Expanding routine sequencing is the type of research that could come in handy with other viruses. In the United States, for example, experts are trying to figure out why an anticipated wave of a rare polio-like condition that can occur after an infection with a common enterovirus never materialized last fall, despite a surge of those enterovirus infections. Researchers are exploring the virus’ genome to see if it changed in some way.
The new initiative is designed to build an infrastructure that becomes part of routine viral surveillance, but also one that can be deployed during the next epidemic or pandemic, Harrison said. During the pandemic, researchers got their first experience with sequencing data helping inform responses to a public health crisis.
“It’s now something we now think is really important to build upon,” Harrison said.
As the team develops the techniques and tools they’ll use in their project, one goal is to keep it as low-cost as possible, with the idea that other research teams around the world could adopt such protocols. All of their methods and computational software will be made freely available.
“Sequencing know-how is incredibly widespread now, so I think the opportunity for this to happen globally is there,” said Judith Breuer, a professor of virology at University College London and one of the researchers involved in the new program.
What's killing the swans in Swansea? Officials finally have an answer.
SWANSEA — Avian flu, also called bird flu, was the cause behind a cluster of deaths among the town’s swans, officials reported on Thursday.
Last month, Swansea’s animal control officer began monitoring a cluster of swans who were found dead in the town. Many of them turned up in the Compton’s Corner area of the Cole River in Ocean Grove. Animal Control Officer Lisa White notified the Massachusetts Division of Fisheries and Wildlife and sent some of the dead swans in for third-party testing.
In total, 24 swans and one goose have now died from the outbreak.
On Thursday, the town announced results from testing done by the U.S. Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS), the National Wildlife Health Center and Tufts that said six dead swans and one goose tested positive for Highly Pathogenic Avian Influenza, or HPAI.
“At this time, there have been no reports of Avian flu detected in humans or domestic livestock in Swansea or Bristol County,” the announcement read in part.
What is Avian flu? Is it dangerous to humans?
Avian flu, also called “bird flu,” is a common strain of influenza that is most commonly carried by waterbirds and is most dangerous for poultry like chickens and turkeys.
Avian flu is highly contagious among birds but is not commonly seen in humans. While rare, it can spread from birds to humans through saliva, nasal secretion and feces, according to the CDC.
"Human infections with bird flu viruses are rare but can occur, usually after close contact with infected birds," the CDC says on its website.
Most songbirds or other birds found in the yard, like cardinals, robins, sparrows, blue jays, crows, or pigeons, do not usually carry bird flu viruses that can impact people or poultry, the CDC says. It is also rare, although technically possible, for the virus to spread to animals such as foxes or cats who eat infected birds.
According to the CDC, Avian flu has been detected in around 6,500 wild birds in the U.S. since January of last year, the first time the virus has been seen in the country since 2016. The virus has mostly impacted poultry, with more than 58 million poultry animals infected since January of 2022.
Since the current outbreak among U.S. birds began, only one case of a person infected with Avian flu has been found, in New Mexico.
The U.S. Department of Agriculture has reported cases of HPAI in wild birds in several other counties in Massachusetts this year, including Barnstable, Plymouth, Norfolk, Essex, Hampden and Worcester counties. The swans in Swansea are the first birds in Bristol County found to be suffering from the virus.
What should Swansea residents do about dead swans?
The town is advising that residents who have domestic flocks take care to not expose their birds to the virus through contaminated shoes, clothing or equipment and to keep wild waterbirds away from their flock.
Residents should also avoid contact with birds as much as possible, the town said.
“The Town of Swansea will continue to work with our state and federal partners to monitor bird activity in the area and, if needed, conduct further testing,” Board of Selectmen Chairman Christopher R. Carreiro said in the town’s statement. “We would like to strongly reiterate that at this time we have had no reports of Avian flu detected in any residents or their domestic livestock, however, we urge community members to exercise extreme caution and not interact with any wild birds.”
What to do if you find a dead bird
If you find a deceased bird in Swansea: call Animal Control Officer Lisa White at 508-679-6446.
If the bird is domestic: call MDAR Animal Health at 617-626-1795, or use the online Poultry Disease Reporting Form.
If the bird is wild: call MDFG Fisheries and Wildlife (MassWildlife) at 508-389-6300.
For more information about Avian flu, visit the Centers for Disease Control and Prevention website.
The rule of law is the political philosophy that all citizens (including leaders and lawmakers) and institutions are accountable to the same laws. It is fragile today because society is reaching peak fragility. The emotional coddling of a generation, a growing hatred of America and its founding principles, and the advent and infiltration of technology are converging to destabilize the rule of law and its ability to govern public life.
This is no accident. The ideologies that overprotect have been at work in multiple domains of life for decades. Starting in the 1970s, teachers were trained that affirming every child’s unique specialness was paramount in the classroom. Instead of focusing on scholastic rigor, Master of Education programs began to prioritize students’ emotional wellbeing, as if the knowledge that one is special sufficiently prepares a child for the real world. The prevailing philosophy holds that the world can and should be rebuilt to serve the needs of those most oppressed and marginalized. This is at odds with the concept of the rule of law, which doesn’t favor anyone.
Emotional affirmation is the job of the family. But the women’s rights movements of the 1960s and ’70s attacked the family unit, in part by encouraging women to pursue their own interests and forestall or even forgo creating and caring for families. Public education seems to believe itself responsible for picking up the emotional slack. It’s not evident, however, that saturating children in the knowledge of their unique specialness produces healthy and responsible adults capable of abiding by the rule of law.
Yet society continued to pursue fragilization. Rather than teaching children how to lose and process their emotions healthily while remaining respectful of—and even happy for—those who win, participation trophies became the norm for an entire generation. Children didn’t have to confront disappointment or learn how to examine their performance for ways to improve. And parents didn’t have to deal with children’s distress after falling short of their goals. The upshot? Arrested development all around.
Bubble-wrapping children out of a sincere but shortsighted desire to protect them from every ill doesn’t produce functional adults; it breeds entitlement and vulnerability and delays the process of cognitive and emotional maturation. Today, as a society, we seem to be experiencing an indefinite extension of adolescence. This is a problem, not least because one of the traits for which teenagers are most famous is rule-breaking. Teenage rebellion won’t topple a society when it’s limited to a phase of life, but our society is training people to engage in rebellion as a way of life and calling it social-justice activism.
The fragilization process continued with the arrival of content notes and trigger warnings. Children who never had to work through difficult feelings of failure become adults who believe that other people are supposed to protect them from ever encountering anything uncomfortable. This corrodes the idea of personal responsibility. When taken to its logical conclusion, the widespread demand for content and trigger warnings makes accountability, even from the legal system, seem incongruent with the lesson society has been teaching children for decades: someone else is responsible for your feelings.
We can trace a direct line from content warnings to “harmful words” lists and microaggression trainings in which people are trained to be afraid to say anything, including what they would intend to be compliments. In this world, impact matters and intention does not. Someone could take offense at anything and demand an apology for the impact they alone defined. Society is training us—socially and in the workplace now—to be emotionally codependent on others rather than considering why we might be so easily offended that we hear kind words as insults.
Language policing leads to the safe spaces of today’s cancel culture. Rules apply based on what you look like and whether your ideas affirm the prevailing narrative, which understands victimhood to be empowering, relativism to be liberating, and the self as the ultimate god. In order to further the idea that narrative is reality, that feelings are facts, and that the merits of an idea are based on identity, the pillars of law itself must now be questioned and found to be deficient.
Abolishing the police is not a new idea, but it resurged mightily in the wake of George Floyd’s death in May 2020. Protestors demanded the defunding of police in order to stop what they perceived to be the wanton murder of black men. The Defund The Police movement has yet to provide data in support of their claim that reducing the number of officers reduces the number of lethal-force incidents. There is evidence to suggest that the opposite is true—as progressive cities slash their police budgets, the absence of law enforcement has seen crime rise and open-air drug scenes proliferate. Sweeping statements denouncing the police as structurally racist have become so common that they are now articles of faith. And because law enforcement is frequently equated with the law itself, millions have now concluded that the law is racist.
That includes the Constitution, the document the majority of Americans once respected as the enumerator of their rights and the ultimate authority and guidance on the rule of law. A dubious revisionist-history initiative like the 1619 Project is being taught in schools, even though its central claims are hotly contested by properly qualified historians. But because feelings have become facts, equal accountability before the law is now seen as oppressive because it doesn’t permit the remaking of reality in the image of emotion. But a culture of victimhood can only conceive of personal responsibility as oppressive—it is the institutions that are racist, the social structures that are responsible for individuals’ pain. This has engendered hatred for America and a rejection of the ideals it strives to uphold. It even encourages a denial that the founding fathers ever wanted to create a country in which everyone was equal in the eyes of the law.
Both the focus on identity and the rejection of personal responsibility in favor of structural accountability undermine the rule of law. On one hand, the self must be allowed to act on whatever feeling it has in the moment and do whatever it wants to do lest it be “oppressed.” Since there are so few scenarios in which someone can do whatever they like at all times, the feeling of being oppressed becomes constant. This leads to constant rage—which is why those who believe that natural limitation or restraint are the same as persecution exhort others to “stay angry.” When this rage is turned upon the legal system as an agent of injustice rather than a structure capable of protecting the rights of every individual, it diminishes respect for the rule of law: if the law itself is oppressive, why should we want to be ruled by it?
On the other hand, the very claim that institutions and structures are the problem encourages the belief that individuals are not accountable before the law. Individuals are therefore helpless against vague but powerful systems, from which they must be liberated to “live their truth.” But an individual is powerless to create the life they desire because there are so many oppressive systems keeping them down. The more we repeat the story about structural oppression to ourselves, the more we start to see oppression everywhere, even in places it’s not. The self demands ever more power to feed its insatiable appetites, rule of law be damned, even as it points to societal structures, especially the legal system, as the reason it cannot obtain all it desires.
In a culture this fragile, this sensitized to personal failure, and this averse to difficult emotions, it is inevitable that the main object of worship will be the self. Those partaking in the worship of the self will not see it as such—they will see it as self-protection, and anything that does not accord with their narratives will be perceived as a threat. The greater the focus on the self, the more threatening everything outside of it becomes. In this paradigm, the self must be kept front and center as a matter of survival. When the self is the ultimate authority, the most egregious sin is suffering. The worst offenders, therefore, are those who either inflict suffering or impede pleasure. In the current swamp of moral relativism, what impedes pleasure can be anything from a lack of affirmation to being told “no” by reality itself. The opponents of pleasure are those who disagree, and they are not just wrong but immoral. They therefore deserve harsher social (and even legal) punishments than those they are allegedly oppressing.
It might not take more than cradle-to-grave emotional coddling to produce a generation of self-worshippers, but our culture has more to contribute to that end. Because human beings will worship something, whether it is a deity, science, or even a lack of belief in anything, the worship of self displaced something. The decline and retreat of religion has stripped Western societies of their ultimate authority. Dethroning that common understanding has left society vulnerable to moral relativism. In a society where “my truth” and “your truth” can apparently coexist even if these truths are mutually exclusive, can there be much hope for a strong rule of law?
In 1882, Friedrich Nietzsche declared that God is dead and that we had killed Him. Now that the self reigns supreme, abusing others is encouraged, even celebrated in the name of “self-care.” Cultural chauvinism blinds us to the flaws of our time and enables the belief that postmodern individuals know better than ancient peoples who didn’t have the understanding produced by the advanced technology we now enjoy. But technology is also part of the problem.
Since the late 1990s, when the Internet became available to the everyday consumer, technology has billed itself as the amoral savior of humanity. Marketers told us that we needed greater convenience in our breakneck modern lives, and that technology was the answer to that problem. But there have been costs to this shift, and it is not obvious that technology even delivered on its promise of convenience in the first place.
In the 20-plus years since technology injected itself into every aspect of the average person’s daily life, we have ceded authority from human to machine. We now believe that technology will provide the solution to every problem plaguing humanity, including the problem of our humanity. Only now, after the entrenchment of the Internet and the proliferation of personal electronic devices, are we starting to ask questions about machine-learning bias and consider the implications of the fact that the origin of all machine output is still human input.
The idea—possibly wishful—that computers are never wrong has further weakened our trust in human-made institutions and philosophies like the rule of law. As a result, our culture is hurtling into a future in which machines will rule, as if a digital dictatorship would be more beneficial than the “messy” human-made systems that rely on centuries of philosophy, experimentation with different ways to run a society, and scholarly reflections on those systems through the ages. Somehow, computers are more trustworthy than all that.
The rule of self is supplanting the rule of law. Individualistic structures, we are told, must be dismantled in favor of socialistic ones, even though being “free” to live one’s individual truth is now conceptualized as the entire point of life. Any restrictions or limitations are oppressive. The rage produced by the inability to act on our feelings at every moment betrays a belief that it is our right to live without limits. A society that rejects limits even as it celebrates victimhood will see the rule of law as both too restrictive and oppressive.
Until the self becomes smaller in importance, it will make our world smaller and more fragile, treating everything—including that which would otherwise equalize—as a threat to its own survival. Ultimately, the rule of law flies in the face of the current cultural narrative that some people are more subject to the law than others. And so, the law, along with anything that challenges the current narrative, must be burned to the ground in favor of god only knows what.
Black History Month: Greenville County libraries help African Americans research genealogy
Marie Mason wants to know more about her family's roots and heritage.
The Greenville County Library System is helping residents, like Mason, get started on their own genealogical research.
On Wednesday, Mason attended an African American Genealogy event at Berea Public Library (Sarah Dobey Jones Branch).
"I am now in the stage of the elder of the family. I'm just trying to get information to our younger generation and let them know where our families come from, who we originated from and let Generation X know that we have legacies and we have families that done things and we need to be proud of our heritage," Mason said.
The African American Genealogy event is part of Greenville County libraries' Black History Month event series. Ashley Bright, librarian at The South Carolina Room, uses the series to give an introduction to getting started on genealogy research. Bright has worked in genealogy research for 15 years. During the event, she explains which records to search, how to search through different records and discusses challenges that are specific to this research.
"Everyone has a story worth telling, even if that story is hard to find," Bright said. "A lot of people struggle but it is worth it."
Here's why it's difficult for African Americans to research genealogy
The biggest struggle is tracing ancestors back beyond 1865 or 1870. Unless your ancestors were free prior to 1865, they wouldn't appear in the census by name until 1870, Bright said.
To find records of ancestors, people have to check unorthodox sources. Dig into church records, which can include enslaved persons' christenings and records of membership; mortgage records, which can list enslaved people as collateral; deeds of gift, which could include enslaved people given as gifts; and newspapers, which list enslaved people for various reasons.
Despite the obstacles, Mason is determined to find out more about her ancestry.
"I think it's very important because you have to know where you come from. If you don't have a place of stability, a foundation, roots, you don't know who you are," Mason said.
At the South Carolina Room, you can Book a Librarian for an hour of individualized research assistance.
The next African American genealogy events will be held Thursday, Feb. 9, at the Travelers Rest branch and Friday, Feb. 10, at the Pelham Road location.
|
<urn:uuid:315e8a39-c5fb-456d-a8fc-c93433d2d15e>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00220.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9690006375312805,
"pii_count": 0,
"score": 2.796875,
"token_count": 497,
"url": "https://www.greenvilleonline.com/story/news/local/greenville-roots/2023/02/09/african-american-genealogy-events-at-greenville-county-libraries/69882068007/"
}
|
Click here to read this article in Spanish.
BOZEMAN — Buried in Montana State University’s Mark and Robyn Jones School of Nursing, right next to the campus duck pond, is a laboratory with no microscopes, no latex and no goggles. The space is called the Moyce Immigrant Health Lab, also known as Proyecto SALUD — “Scientists And Latinos United against Disparities.” In Spanish, it also means “health project.”
The lab was created by assistant professor Dr. Sally Moyce and includes an interdisciplinary group of researchers working to address health disparities in the Latino community in Gallatin County, a population that has increased dramatically in recent years. According to the census, Gallatin County saw a 138.7% increase in Latino or Hispanic residents between 2010 and 2020.
When Moyce moved to Bozeman, having worked with Latino immigrants before in Oregon and California, she saw a need.
“When I moved here, I noticed a real absence of Spanish-language services and resources,” she said in an email. “And I knew the community needed basic preventive health care and access to health services. I thought the Immigrant Health Lab could be a place where students and other researchers interested in the same things could work together to find solutions.”
Through health fairs, surveys and community advisory boards, the MSU researchers are gathering data on health needs in the Latino immigrant community.
The hope is to use the data to work with health care providers Community Health Partners and Bozeman Health, as well as other organizations in the area, to create more health awareness and access for Spanish-speaking immigrants.
“We work with [immigrant] representatives of the community to inform us, ‘What do you want us as researchers to focus on? Where do you want us to put our time, energy and resources?’” said Danika Comey, the lab’s lead researcher. “We don’t want to come in as researchers, especially white researchers, and be like, ‘Hey, we know what you guys need.’”
Carlos Medina is a community representative on the Bozeman Community Advisory Board. He and the other board members meet monthly to discuss, in Spanish, the health needs of the Latino community and report back to Comey and Moyce. Originally from Mexico, Medina and his family moved to Bozeman in 2020.
“I have five years living in the U.S., and I have seen that the Latino community is afraid to approach the hospitals due to a lot of barriers: language, culture, knowledge of rules and high insurance costs,” he said in an email. “This project is important because we help people to receive free medical and dental check-ups, and also, they have the opportunity to participate in health programs that may help them to have a better quality of life no matter what country they are from.”
The lab’s work started in 2019 with quarterly health fairs. Comey said 40 to 50 participants attend each health fair. At the fairs, Proyecto SALUD partners with Smiles Across Montana, a mobile dental clinic, which Comey said is a big draw.
“Every time we have a health fair now people are waiting before we open, and we fill up all of the dental slots within the first hour,” Comey said.
While at the fair, Comey said, the participants can get tested for diabetes and high blood pressure, get vaccines from the Gallatin City-County Health Department, and learn about the local food bank. Attendees are also given a survey — part of the lab’s data intake — which asks about their home situation, employment, comfort with English, education level and, critically, health insurance.
According to the surveys, 83% of health fair attendees do not have health insurance. Genesis Chavez, a lab volunteer who is originally from Nicaragua, said there are a variety of reasons for the lack of coverage.
“Insurance in the country is pretty expensive, and that’s not something everyone can afford. There’s so much involved in it,” Chavez said. “So if there are ways we can support and provide free health care for those that really need them, then let’s do it.”
Comey said there is a unique fear for migrants who don’t have documentation to be in the country legally.
“There’s fear as you’re going into a more bureaucratic system,” Comey said. “Are they going to report you if they find out that you are not legal to work in the United States? Are you going to face discrimination? When you get there, are the front desk staff going to roll their eyes at you when you start speaking Spanish?”
A lack of health care providers who read or speak Spanish poses another large obstacle for migrants whose first language is Spanish. Proyecto SALUD’s surveys showed that 64% of participants self-reported their ability to speak English as “poor.”
“It’s challenging throughout the whole process,” said Comey, who used to work at Bozeman Deaconess Hospital. “From the minute people walk in the door, having language barriers left and right, like even directional signs, even knowing where the office is.”
Comey said health care providers in the area are working to increase their Spanish resources, but, she said, “it’s slow.” A spokesperson from Bozeman Health said the hospital averaged “18 interpretation calls per day in Spanish” in 2022. The hospital last year began using iPads capable of translation, is working on a designated Spanish number for its call center, and awarded grants to Proyecto SALUD for the health fairs, among other efforts.
For Proyecto SALUD’s part, everything is in Spanish. Volunteers even read the surveys out loud for those who aren’t comfortable reading.
Yet another barrier to accessing health care is transportation. More than half of survey respondents at health fairs said they work in construction, and many of those jobs are in Big Sky, an hour's commute away.
“If you are working, your only day off is Sunday, and you are working six days a week for 12 hours a day, you can’t go into traditional health care services,” Comey said. For that reason, health fairs are always on Sundays.
Besides the fairs, Proyecto SALUD has launched several other initiatives, in part to broaden its efforts to include mental health.
“We asked the community advisory board what their priority issue was, and they said mental health,” Moyce wrote. “So we did a root-cause analysis into the causes of poor mental health in the population and found that the lack of Spanish-speaking providers combined with a general stigma around seeking mental health care were the problems.”
To address these obstacles, the researchers are trying two different projects. One is a study involving health workers who use a technique called “motivational interviewing,” counseling that guides more than instructs and helps immigrants make changes in their own lives, mindset and health.
“That intervention, delivered over the phone, did work, so now we’re looking to secure funding to test it in a larger group,” Moyce wrote.
The other project, instituted last fall, is called Mujeres Unidas — “Women United” — and consisted of a cohort of 10 women who met regularly to learn and talk about different elements of health.
Isabel Romero, a graduate student from Peru who is working toward her master's degree in counseling at MSU, led the mental health-oriented sections of the group.
“The part that I developed was how to manage stress, and how to understand what is stress, what happens inside of our body,” Romero said. “It was basically a conversation with them.”
Romero said the informal, collaborative nature of the meetings is intentional.
“There is a lot of healing in just talking,” Romero said. “Just listening to other people's stories and being able to relate to other women created a sense of community. And I have the belief that the community is the first mental health support that you will get.”
Chavez, a volunteer who was recently hired as a resource coordinator for Community Health Partners, also noted a unique challenge with mental health for some immigrants coming from South American countries.
“The reason why [some] people immigrate is because there are no opportunities where we live,” Chavez said. “So we experience so many challenges that we come here, and there’s this mindset: We should not complain because we were living in a worse situation.”
The researchers hope to continue to combat stigma and increase awareness with the Mujeres Unidas project with two more cohorts of women. They are also collecting feedback at the next health fair on what participants like and would like to see more of. That event will be held at the county fairgrounds on Sunday, Feb. 12, and the researchers are always looking for volunteers who speak Spanish.
|
<urn:uuid:2978dd93-4909-4314-9713-378cd3c712b2>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948673.1/warc/CC-MAIN-20230327154814-20230327184814-00255.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9570418000221252,
"pii_count": 0,
"score": 2.59375,
"token_count": 2239,
"url": "https://montanafreepress.org/2023/02/09/msu-lab-works-with-immigrants-to-address-health-disparities/"
}
|
A new breakthrough in treating pancreatitis | Pet Peeves
I am continually amazed at the breakthroughs in medicine occurring so frequently. In veterinary medicine, we will soon have a new treatment for pancreatitis. Pancreatitis, seen most often in older dogs, can be quite a deadly disease.
To fully appreciate the newest medicine being released for its treatment, it helps to understand the mechanism of pancreatitis in the dog. The pancreas is an organ located right next to where the small intestine exits the stomach. It is situated there because the job of the pancreas is to create and release digestive enzymes that help break down the food as it leaves the stomach and travels through the small intestine. The pancreatic enzymes are especially effective at breaking down fats, sugars and proteins.
Usually, the cause of pancreatitis in dogs is dietary indiscretion. If the dog is fed a very fatty or rich meal, the pancreas goes into overdrive and releases too many enzymes, which causes the pancreatic tissue to become inflamed. That inflammation is the first step on the path to pancreatitis.
When the pancreas becomes inflamed, it releases a group of inflammatory signaling proteins called chemokines. The chemokines react to the inflammation by drawing white blood cells called neutrophils to the area. The neutrophils are supposed to help clean up the inflammation, but too many come in, and they release additional inflammatory mediators, causing more inflammation. This triggers the chemokines to draw in even more neutrophils. It becomes a vicious cycle. The pancreatic enzymes are damaging the pancreas. The white blood cells are attacking everything in their path. The area becomes a sterile abscess.
The new treatment for pancreatitis is called Panoquell-CA1. I believe it gets its name by combining “Pano,” to represent the pancreas, with “quell,” which means “to put an end to.” So, Panoquell-CA1 puts an end to the pancreatic inflammation. It does this by inhibiting LFA-1 (lymphocyte function-associated antigen 1), a molecule on white blood cells that is activated by the chemokines and helps draw neutrophils into the pancreas.
By inhibiting LFA-1, the drug turns off the inflammatory response. The pancreas is not overrun with white blood cells, and it is able to quietly begin to heal.
When a pet has pancreatitis, the symptoms are vomiting, pain in the abdomen, decreased appetite, decreased activity levels and dehydration. These symptoms are used as an index to determine whether the new medication is working. The studies show a significant reduction in the clinical signs by day three of treatment. The most exciting part is that pets treated with Panoquell-CA1 are able to stay out of the hospital. They go home during treatment, not requiring hospitalization and intravenous fluid therapy.
Pancreatitis is a very serious disease in dogs. Sometimes we forget how bad it can be. In severe cases, it can lead to diabetes mellitus and exocrine pancreatic insufficiency. Severe cases can cause multiple organ failure and even death.
Panoquell-CA1 has been in use in Japan since 2018. There is a good bit of data for this product, and safety studies in the United States have shown it to be extremely safe. It can be used along with conventional pancreatitis treatments and medications. Panoquell-CA1 will be available in the United States the first week of April, and the veterinary community will certainly benefit greatly, as will any pets unlucky enough to develop a case of pancreatitis.
|
<urn:uuid:128406a6-86e2-490e-95c7-baa2b60ec5bf>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653608.76/warc/CC-MAIN-20230607042751-20230607072751-00599.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9564931988716125,
"pii_count": 0,
"score": 2.546875,
"token_count": 775,
"url": "https://www.nwfdailynews.com/story/lifestyle/pets/2023/04/02/a-new-breakthrough-in-treating-pancreatitis-pet-peeves/70050127007/"
}
|
Weak, declining lawns common in spring | Gardening
Many North Florida lawns are not looking too good this spring. Some lawns are weak, and some have dead areas that used to be alive and green. Unfortunately, this is relatively common.
Not being satisfied with the appearance of their lawn, many homeowners make matters worse by fertilizing too early. Sometimes, they are fertilizing already dead lawn grass.
All of our lawn grasses respond to daylength and temperature in order to resume growth in spring. The correct daylength happens first, slowly followed by the correct soil temperature.
Most lawns in our area begin to resume top growth (new green leaves) sometime in March, triggered by the correct daylength. However, the second trigger (warm soil) lags behind. It takes consistently warm nights to allow the root area (soil) to sufficiently warm to allow optimal root growth and subsequently efficient uptake of fertilizer. Most years, it’s not until mid-April or May before the soil warms up enough to allow the lawn roots to regrow and to efficiently take up the fertilizer. As a result, it’s best to wait until at least mid-April before applying any fertilizer to a North Florida lawn.
Fertilizing too soon, in February or March, can force new growth that is then injured by the predictable last killing frost of mid-March. This weakens the lawn even more. In addition, fertilizing before the root area (soil) is sufficiently warm results in the fertilizer quickly leaching away. Fertilizer elements such as iron and potassium also remain poorly available to lawn roots under cool soil conditions. Finally, during the transition from winter to spring, the lawn is attempting to grow a new root system, and these young, tender roots are easily burned by early fertilizer applications.
Regardless of the cause, weak, declining areas within lawns are slow to recover in spring. Cool soil temperatures don’t allow rapid root regeneration in spring. Consistently warmer nights allow soil temperatures to warm, which will improve turf root growth, nutrient availability and lawn recovery. But this is a gradual process, not a quick recovery.
If your lawn has not made significant improvement by late spring or early summer, it may be time to consider replanting those dead, declining areas. And, take time to learn how to correctly manage a Florida lawn. But whatever you do, don’t continue to follow lawn maintenance practices that do not work in Florida and that contribute to your lawn’s demise.
Here is a link to a UF/IFAS Extension publication on Best Management Practices for a Florida lawn: edis.ifas.ufl.edu/ep236. Call 850-689-5850 to have a copy mailed to you.
|
<urn:uuid:13e746a6-7a20-48cb-a8b8-20ec42ddd014>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652116.60/warc/CC-MAIN-20230605121635-20230605151635-00573.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9377241134643555,
"pii_count": 0,
"score": 2.53125,
"token_count": 583,
"url": "https://www.nwfdailynews.com/story/lifestyle/home-garden/how-to/2023/04/23/weak-declining-lawns-common-in-spring-gardening/70120277007/"
}
|
About 60 per cent of the food in our supermarket trolleys is “ultraprocessed”. But what exactly is the difference between processed and ultraprocessed?
The level of food processing is a continuum. Unprocessed food has little to no intervention – a bag of potatoes, for instance.
Processed foods use techniques such as freezing and drying to preserve the shelf-life of food – a tin of fish.
Ultraprocessed foods use advanced food manufacturing – extrusion, hydrogenation, hydrolysis and sensory-enhancing processes such as bulking, aerating and foaming. Think Pringles.
There is one crucial difference between processed and ultraprocessed, and it is an economic one. Food is ultraprocessed not only to extend shelf life, but to make it “hyperpalatable”. In essence, you know if you are eating an ultraprocessed food if it delivers a taste bomb in your mouth at the same time your hand is reaching for the next one, always with a vague feeling of never quite getting enough of it.
The era of ultraprocessing has led to a Cambrian explosion of food tastes and textures. The problem is that it works too well. The food tastes too good, and it is too cheap, too available, too efficient and too convenient. We now know that ultraprocessed foods have uniquely harmful effects on the human body, irrespective of their calorie content, and this contributes to obesity, diabetes, cancer and other health problems. This is why ultraprocessed foods are viewed by public-health actors as junk foods.
In response, the Irish Government (as well as almost every other world government) has instituted a policy of reformulation – in essence, reducing the levels of saturated fats, sugar and salt in these foods. On the face of it, this looks like a brilliant win-win: unlike other public-health policies, it has the potential also to benefit the ultraprocessed-food industry, because the focus is on changing the nutrient profile of a product rather than decreasing its overall consumption. Think Diet Coke.
Yet no ultraprocessed food manufacturer has decreased its overall output.
So where does this increasing food pile go?
In one of the most cited works in history, Garrett Hardin asked the reader to imagine a field that was open to all, a commons. Each herder would try to put as many cattle as possible on this commons, because they get a direct benefit from their own animals grazing, and suffer only a postponed cost from when their own and others’ cattle overgraze and the commons deteriorates. In other words, each herder is motivated to add more and more animals because they get a direct advantage from their own animals, but the herder carries only a small and delayed cost that arises from overgrazing.
Hardin sums up the situation of the commons with these words: “Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit – in a world that is limited.”
What is public health if not a biological commons? It is a limited and shared resource. Limited because there are only so many calories our bodies can take in without negative effects. Shared because, while many individuals are not sick, it is easy to see that the population is. We have many population illnesses – heart disease, obesity, alcoholism, tobacco addiction, depression, gambling addiction – and if they don’t affect your individual body, they affect your social body: your family, your hospital, your community, your workplace, your prisons, your next generation.
In our case, one can see how junk-food manufacturers compete for a shared but limited common pasture on which to graze. The population’s body is open to all junk-food companies, which compete intensely for a share of it. Each junk-food manufacturer acts rationally – they know there is only so much ultraprocessed food we can eat. But it is a logic shared by dozens of other competitors, encouraging a limitless production on a commons that is limited.
An interesting aspect of any commons dilemma, from traffic congestion to the nuclear-arms race, is that once individuals become aware that others are exploiting the commons, they are more likely to increase their own exploitation of it. This phenomenon is known as stimulated exploitation. Observing the success of competitors’ ultraprocessed foods in the marketplace is likely to spur competitors’ efforts to create even more hyperpalatable but nutritionally empty products.
Have you seen the bread aisle in the supermarket recently? Count how many bread brands have branched into lines of pancakes, brioches and flapjacks. Look at how every cereal manufacturer now has product extensions in breakfast bars, granolas and sweet spreads. Breads and cereals are biscuitised, dairy is dessertised, vegetables are crispified.
There is growing awareness of corporate practices such as greenwashing, pinkwashing, wokewashing, and leanwashing. This happens where a veneer of corporate social responsibility is applied intentionally to mask business as usual, deflecting attention towards visible but ultimately superficial changes in corporate agendas. “X-washing” is the intent to cynically deceive, manipulate, and aggressively persuade stakeholders.
In spite of this, we have not met a single food producer who wishes for their consumers to be ill. None of them wake up in the morning hoping to make a child overweight.
The junk-food industry ardently believes its efforts to tackle the obesity crisis are genuine, commensurate with the scale of the problem and well-intentioned. This psychology might be better understood as “leanwishing” – the use of corporate responsibility practices by the food industry in the sincere hope that these will reverse the obesity crisis and protect its customers from the harmful effects of its products.
We have been constantly losing the battle with the drivers of obesity in Ireland over the past 30 years. We need a more objective measurement tool – a ratio which compares the scale, scope and speed of food-industry reformulation against the scale, scope and speed of formulation, namely the creation of new products, product-line extensions, flavours, delivery channels, snacking occasions and portion sizes. We need to stop the mass infiltration of healthy food with ultraprocessed-food variations.
There is a world of difference between stabilising the extremely high prevalence of being overweight in Ireland and reversing it. Reversing it means accepting that obesity is as much about overproduction as it is about overconsumption. In order to reverse obesity, we must enclose and protect the biological commons using taxation, advertising legislation, channel restriction, plain packaging and mandatory reformulation, simultaneously.
The crisis of overproduction requires a more empathetic response to ultraprocessed food manufacturers too. They are currently obliged to engage in two contradictory actions: to increase production and to reduce consumption. If a food manufacturer changed its entire product portfolio to protect the biological commons, it would be utterly foolish to do so alone: a dozen other manufacturers would just race in to fill the void.
Effective policies to protect the commons would come as a welcome relief to junk food manufacturers, who currently have inadequate solutions at their disposal to limit their overgrazing.
Dr Norah Campbell is an associate professor of marketing at Trinity Business School; Dr Sarah Browne is an assistant professor of marketing at Trinity Business School; Dr Marius Claudy is associate professor of marketing at UCD School of Business; Kathryn Reilly is policy manager with the Irish Heart Foundation; Prof Francis Finucane is consultant endocrinologist at Galway University Hospitals and honorary full professor in medicine at University of Galway
|
<urn:uuid:7d934092-796c-494f-8d19-389be29652ce>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00575.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9543704986572266,
"pii_count": 0,
"score": 2.984375,
"token_count": 1603,
"url": "https://www.irishtimes.com/health/your-wellness/2023/01/30/it-delivers-a-taste-bomb-of-pure-pleasure-but-ultraprocessed-food-is-killing-us/"
}
|
The holy month of Ramadan begins for Muslims in Delaware
Ramadan, the holy month of Islam, begins Wednesday, March 22, at sundown for Delaware's Muslim community.
Here's what to know about the most important month of one of the largest and fastest-growing religions in the world.
What is Ramadan?
Ramadan is a monthlong observance dedicated to the celebration of the Quran and the strengthening of Muslims' relationship with God. In the Islamic faith, it is believed that during the holy month of Ramadan, the Prophet Muhammad was given the Quran, the holy book of Islam, and was revealed as a prophet to spread the messages and teachings of God.
Ramadan is the ninth month of the Islamic calendar. Since the calendar is connected to the lunar cycles, the beginning of Ramadan is celebrated around 10 to 12 days earlier every year. Ramadan ends with the holy holiday of Eid al-Fitr, which will fall on April 21.
It is predicted that in 2030 there will be two Ramadans, which hasn't occurred since 1997.
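The steady shift, and the two-Ramadans-in-one-year effect, follow from simple calendar arithmetic: twelve lunar months add up to roughly 354 days, about 11 days short of the Gregorian year. The snippet below is only an approximation for illustration; actual start dates are fixed by moon sighting and can differ by a day or two.

```python
# Rough arithmetic behind the ~11-day annual shift and the prediction of two
# Ramadans in 2030. Approximation only: real start dates are set by moon
# sighting, and the 2023 start date below is the one cited in this article.
from datetime import date, timedelta

LUNAR_YEAR = timedelta(days=354)   # 12 lunar months is roughly 354 days
drift = 365.25 - 354.37            # about 10.9 days earlier each year
print(f"Ramadan starts roughly {drift:.1f} days earlier every Gregorian year")

start = date(2023, 3, 22)
for _ in range(9):
    print(start)
    start += LUNAR_YEAR
# The last two dates printed both fall in 2030 (early January and late
# December), which is why one Gregorian year can contain two Ramadans.
```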
How is Ramadan celebrated?
Throughout the month, Muslims fast from sunup to sundown, abstaining from eating, drinking and other indulgences. Two meals frame each day of fasting: one before dawn and one after sunset.
Dr. Naveed Baqir, executive director of the Delaware Council on Global and Muslim Affairs, spoke about what the practice of fasting means for those who celebrate Ramadan. According to Baqir, a major part of fasting is understanding the struggle of those who are less fortunate while also attaining righteousness as a community.
More:'If I'm walking up 10 flights of stairs, it's hard to not take a sip of water': A Ramadan day
"It creates a sense of elevated consciousness that allows us to become more conscious about our actions, how we deal with people and attempt to create an improvement," Baqir said.
Suhoor is the meal eaten before sunrise and iftar is eaten after sundown. Usually, the meals shared during this time are halal, meaning they don't contain certain ingredients that are not permissible under Islamic law such as pork products or alcohol.
It is not just food and drink that are given up during Ramadan. Any action that could be considered indulgent or harmful, such as cursing or smoking, is typically avoided during this time.
Nightly prayers, called Tarawih, are also a part of Ramadan celebrations. A portion of the Quran is recited in prayer each night, with the idea that by the end of the holy month, the practicing individual will have recited the entire holy book.
Meals and prayers are typically held in communal settings. In a post-pandemic world, this means even more to Baqir.
"In this day and age when we have issues with isolation, having daily iftar dinners with a community ... is a welcome break from all of that," Baqir said.
More:The Ramadan Kitchen Diary: Saying goodbye to Ramadan with a special treat
Ramadan ends with the holy day of Eid al-Fitr, which translates to "the feast of the breaking of the fast."
During Eid, Muslims are advised to wear their nicest clothes, attend morning prayer (usually held outdoors), give gifts and spend time with loved ones.
Islam is the world's second-largest and fastest-growing religion, with almost 2 billion followers. According to the World Population Review, the United States has around 3.5 million practicing Muslims. Delaware has one of the highest concentrations of Muslims in the country, with an estimated 793 per 100,000 residents.
School districts around Delaware are finding ways to acknowledge and celebrate Ramadan in their classrooms and curricula. Recently, Christina School District began offering halal meal options and Ramadan accommodations for Muslim students. Eid al-Fitr will also be observed as a holiday for the entire school district starting next year.
Contact Molly McVety at <email-pii>. Follow her on Twitter @mollymcvety.
|
<urn:uuid:bcdcdba8-00ef-44b7-9974-57af6ee69b87>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655244.74/warc/CC-MAIN-20230609000217-20230609030217-00583.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9615811109542847,
"pii_count": 1,
"score": 3.296875,
"token_count": 840,
"url": "https://www.delawareonline.com/story/news/2023/03/22/ramadan-holy-month-of-islam-begins-sundown-muslims-fast-worship/70037212007/"
}
|
Zero-emissions agriculture is still years away, farmers and scientists say, as food systems are set to come under the spotlight at this year's global climate summit.
- Food systems will be a key focus for the first time at global climate conference COP28
- Agriculture is deeply affected by climate change but also produces significant greenhouse gas emissions
- Farmers say food and fibre emissions are hard to cut back
Food production has largely been overlooked in the 30 years since world leaders first agreed to cut greenhouse gas emissions and halt climate change.
But agriculture will be on the agenda like never before when delegates meet at the United Nations climate conference COP28.
Host country the United Arab Emirates is drumming up support for a leaders' declaration on resilient and sustainable food systems, and climate action.
A day dedicated to discussing food, agriculture, and water will also be on the host's agenda for the first time.
Agriculture is deeply affected by climate change, bearing the brunt of increasingly extreme weather.
But it is also a significant producer of greenhouse gas emissions — with methane from livestock and carbon from land clearing the leading sources.
Extreme weather impact
"Agriculture is the sector in the world that is most affected by climate change," CEO of Farmers for Climate Action Natalie Collard said.
"In an era of climate change, repeat fires, floods, droughts, are sending insurance costs through the roof.
"But also, the fine art of farming successfully has got harder for every single Australian farmer."
Most sectors within agriculture have lost about 20 per cent of their productivity over the past 20 years due to climate change.
That's about $30,000 per farm, according to federal Agriculture Minister Murray Watt, who has launched a consultation to produce a net zero plan for agriculture.
"Farmers are some of the frontline victims of the same thing that we're part of causing," Tammi Jonas, a pig producer and the president of the Australian Food Sovereignty Alliance, said.
"That interplay between causing those things and being a victim of those things puts farming in a unique position in climate change."
Cutting farm emissions
About 17 per cent of Australia's greenhouse gas emissions come from food and fibre production — and that share is set to increase as other industries decarbonise.
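That rising share is arithmetic rather than a sign of growing farm emissions: if agriculture's greenhouse gas output stays roughly flat while other sectors cut theirs, agriculture's slice of a shrinking total gets bigger. The toy calculation below uses invented round numbers, not Australian inventory figures, to make the point.

```python
# Invented round numbers showing how a flat emitter's share grows as the rest
# of the economy decarbonises. Not actual Australian inventory data.
total = 100.0        # arbitrary units of annual emissions today
agriculture = 17.0   # roughly 17 per cent share, held constant below

for cut in (0.0, 0.25, 0.50):                 # other sectors cut 0%, 25%, 50%
    others = (total - agriculture) * (1 - cut)
    share = agriculture / (agriculture + others) * 100
    print(f"Others cut {cut:.0%}: agriculture's share is about {share:.0f}%")
# Prints roughly 17%, 21% and 29%.
```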
While alternative technologies are being developed in other sectors, agriculture is struggling to keep up.
Researchers are looking for ways to reduce the methane that ferments in a cow or sheep's stomach — their rumen — and is then belched into the atmosphere.
Methane is a short-lived but powerful greenhouse gas.
"It's quite likely that we won't ever be producing cattle that don't produce any methane," Jared Greenville, the executive director of the federal agriculture and water research body ABARES, said.
"We might have good technology that can help lower that amount, but at this stage with our technology it doesn't seem to be the case."
National Farmers' Federation president David Jochinke is leading a delegation of Australian producers headed for COP28.
He said Australia was leading the world in climate-friendly food production and should not be the target of international policies.
"We want to make sure we can still produce food in a sustainable way, but we're not going to cut our arms off in doing so," Mr Jochinke said.
"There are a lot of technologies and techniques out there that have been adopted already.
"But we also want to acknowledge that there is a limited amount that we can do, when it's dry or when we've got the soil types that we have, to be able to decarbonise."
Time to change what we eat?
The IPCC, the world's peak climate science advisory panel, has said global meat and dairy production needs to be cut back to reduce emissions.
But Richard Eckard, a professor of sustainable agriculture at the University of Melbourne, said the world had only just begun to explore how to curb livestock emissions.
"I'm a big believer in technology," Professor Eckard said.
"Before we go down the track of radical change to diets, I believe we haven't really given technology its full chance to solve the problem.
"The rumen of an animal took 50 million years to evolve to a steady state and we decided 20 years ago this was a problem we needed to change."
He said dedicated research programs were needed to approach the problem "seriously".
"If in 10 years' time [after] giving it 10 years of concentrated funding we still can't eliminate the methane from, say, the extensive cattle industry — well then, we have to think again," he said.
|
<urn:uuid:3be94070-bcc5-4573-bfc5-4571e1227e5f>
|
{
"dump": "CC-MAIN-2024-10",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474795.48/warc/CC-MAIN-20240229071243-20240229101243-00711.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9573395252227783,
"pii_count": 0,
"score": 3.40625,
"token_count": 957,
"url": "https://www.abc.net.au/news/rural/2023-11-30/agriculture-on-the-agenda-cop28-climate-change/103167496"
}
|
One of the most fascinating examples of a large-scale transformation programme is the ‘great leap forward’ introduced by Mao Zedong in communist China in the late 1950s.
Mao was frustrated at the slow pace of China’s development and was determined to accelerate it. One of his key priorities was to increase grain production. To help with this, he introduced a campaign urging everyone in China to kill sparrows.
Sparrows were thought to consume 2kg of grain a year each, so Mao reasoned that each sparrow killed would mean 2kg more grain that he could export to earn valuable foreign currency.
Nests were destroyed, eggs were smashed, chicks were killed. Millions of volunteers formed into groups, banging pots and pans under sparrows’ nests so they couldn’t rest and would eventually drop dead from exhaustion.
The campaign was remarkably effective. Within a few months, sparrows had all but disappeared from the Chinese countryside.
The problem was that, as well as eating grain, the sparrows had also been eating all the locusts and other insects that would otherwise have been attacking the crops.
As the sparrow population dwindled, the insect population surged, wreaking havoc in the grain fields: instead of a surplus, China found itself struggling with a shortage.
And this is where the story gets really dark.
Because it was such a priority to increase grain production, local party officials were under pressure to deliver ever-higher quotas. The rewards for reporting the biggest increases were spectacular – including the chance to meet Mao himself – while the penalties for failure were brutal.
As a result, officials competed with each other to report production figures that were up to ten times higher than reality. Delighted by the apparent success of his policies, Mao struck a series of deals to export grain to other countries.
In order to meet these export commitments, the officials were now required to deliver the ‘surplus’ they had reported. Grain stores all over the country were ransacked, leaving the people who had harvested the grain to starve to death. Only when the stench of rotting corpses became too great to hide did the truth begin to emerge.
Some 30 million people are now estimated to have died in the Great Chinese Famine. Mao remained in power, but was edged aside from economic affairs. The reforms of the ‘great leap forward’ were quietly shut down. And 250,000 sparrows were imported from the Soviet Union to begin rebalancing China’s ecology.
What conclusion should we draw from this? That ambitious transformation projects are doomed to fail? Not necessarily – although KPMG estimates that 70% of major transformational change projects don’t work.
For me, there are two big lessons.
First: simple solutions to complex problems are always attractive, but there’s usually a reason why no-one’s tried them before. So, before you kill all the sparrows, spend a bit of time thinking about what will happen next.
Second: be honest about failure. Most change doesn’t work first time, however much you may want it to. If you incentivise people to pretend it’s working when it isn’t, you won’t be able to fix it until it’s too late.
|
<urn:uuid:680d31a3-b04d-40fd-bb94-3b4395e0feb1>
|
{
"dump": "CC-MAIN-2023-14",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00664.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9811513423919678,
"pii_count": 0,
"score": 3.15625,
"token_count": 690,
"url": "https://matthampshire.wordpress.com/2023/02/17/unintended-consequences/"
}
|
Mosquitoes carrying the West Nile virus (WNV) were detected in Boston for the first time this year.
The Department of Public Health confirmed that the virus was found in a mosquito sample collected July 6 in Brookline.
“We often find the first evidence of WNV in mosquitoes at about this time every year,” said DPH Commissioner Robert Goldstein.
There were 10 human cases of the virus in 2022. No human or animal cases have been detected so far this year.
West Nile virus is usually transmitted to humans through the bite of an infected mosquito. While it can infect people of all ages, people over the age of 50 are at higher risk for severe disease. Most people infected with the virus will have no symptoms. When present, symptoms tend to include fever and a flu-like illness. In rare cases, more severe illness can occur.
“With the recent rain and the warmer weather, mosquito populations will increase and we will start to see more of them carrying WNV,” said state epidemiologist Catherine M. Brown. She suggested several steps that could reduce potential exposure to the virus:
Avoid mosquito bites
Apply insect repellent when outdoors. Use a repellent with an EPA-registered ingredient. Be aware that repellent products should not be used on infants under two months of age and should be used in concentrations of 30 percent or less on older children. Oil of lemon eucalyptus should not be used on children under three years of age.
Be aware of peak mosquito hours
The hours from dusk to dawn are peak biting times for many mosquitoes. Consider rescheduling outdoor activities that occur during evening or early morning in areas of high risk.
Clothing can help reduce mosquito bites
Wearing long sleeves, long pants and socks when outdoors will help keep mosquitoes away from your skin.
Mosquito-proof your home
Drain standing water, as mosquitoes lay their eggs there, and check where water might pool in areas around your home, like unused flowerpots or wading pools. If you have a birdbath, change the water frequently. Keep mosquitoes outside by having tightly fitting screens on all of your windows and doors.
Protect your animals
Make sure you drain any containers or troughs your animals use on a regular basis. Water troughs especially provide excellent mosquito breeding habitats and should be flushed out at least once a week during the summer months to reduce mosquitoes near paddock areas. Horse owners should keep horses in indoor stalls at night to reduce their risk of exposure to mosquitoes. Talk to your vet about animal-approved mosquito repellents, and any vaccinations that are available to protect your pets or livestock.
|
<urn:uuid:93ee6ae2-e352-4f8b-b0c8-3dade932e4bd>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510730.6/warc/CC-MAIN-20230930213821-20231001003821-00456.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9496894478797913,
"pii_count": 0,
"score": 3.34375,
"token_count": 548,
"url": "https://www.wbur.org/news/2023/07/21/west-nile-virus-positive-mosquito-boston"
}
|
Messages in bottle found in Cotuit may be from World War II POWs held on Cape Cod
COTUIT — A day of landscaping turned into a voyage of historical discovery for Shane Adams last week. Adams was working at a property in the Point Isabella neighborhood of Cotuit when he stumbled upon a half-buried bottle on a hillside near the water's edge.
There were messages inside the bottle. Adams peered through the glass, trying to decipher them.
"There were three or four little notes," he said, some of them written on cardboard from a Quaker rice box.
"I saw German names and a date of 1944 and it said something like 'prisoner of war,'" said Adams. He could see the names of Johann Huppertz, Andreas Wollny and Lothar Gernert.
Realizing he had something that might hold historical value, Adams, of Marstons Mills, left the fragile notes in the bottle and brought it to the Historical Society of Santuit & Cotuit.
The society's administrator and archivist, Amy Johnson, was intrigued by the find. She wondered if the note might somehow be associated with the World War II-era training area in Cotuit known as Camp Can Do It, where troops learned techniques of amphibious warfare.
Johnson also thought the bottle might make a dandy historical exhibit, "once we research more of its provenance," she said.
Many Cape Codders might be surprised to learn that there was a sizeable population of German World War II prisoners of war (POWs) housed at Camp Edwards, located on Joint Base Cape Cod on the Upper Cape.
According to a Camp Edwards history published by the Massachusetts National Guard, "shortly after the Allies' North African campaign began in 1944, the U.S. Army built a prisoner of war (POW) camp for captured German soldiers at Camp Edwards. The POW camp, located at the south end of the runway, housed up to 2,000 POWs at a given time, many of whom were from Rommel's famed Afrika Korps."
Military:Soldiers' memories shape National Guard history at Joint Base Cape Cod
And the POWs didn't spend all their time at the base. According to the Massachusetts National Guard, "the prisoners worked around Camp Edwards much of the time, but were also sent to work in the area's farms and cranberry fields. German prisoners also assisted in salvaging millions of board-feet of lumber after the Otis (air base) vicinity was devastated by a hurricane in September 1944 … by the end of the war, the camp had received, processed and repatriated up to 5,000 POWs."
Joseph Yukna, co-founder of the Cape Cod Military Museum, thinks Adams' discovery could be the real deal.
"It all fits the narrative and history that I'm aware of," he said.
Yukna said that German POWs were involved in the demolition of Camp Can Do It after amphibious training ceased at the Cotuit site. "They tore down all the buildings and warehouses."
More:Hope floats: Man who launched a message in a bottle found in France says more are out there
He also wondered if the messages in the bottle might be more of a time capsule, buried in the hillside for posterity rather than launched into the water.
Adams, who found the bottle, hopes it might be possible to get word of the messages to families of the German POWs. He said the discovery has made him reflect on the dangerous and turbulent World War II era.
"It made me think about how tough men had it in the 1940s," he said.
Contact Eric Williams at <email-pii>. Follow him on Twitter: @capecast
|
<urn:uuid:1308c39e-330b-43ad-a128-e271b6f49b48>
|
{
"dump": "CC-MAIN-2023-23",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655446.86/warc/CC-MAIN-20230609064417-20230609094417-00115.warc.gz",
"int_score": 3,
"language": "en",
"language_score": 0.9738990664482117,
"pii_count": 1,
"score": 2.671875,
"token_count": 804,
"url": "https://www.capecodtimes.com/story/news/history/2023/01/31/cape-cod-old-bottle-found-messages-world-war-ii-pows/69847319007/?utm_campaign=The%20Weekly%20Genealogist%20&utm_source=hs_email&utm_medium=email&_hsenc=p2ANqtz--Q8sV-v8Q1jI7ciAB4Elogpf_SNyo0I-ve52m9cgh_eth8eeQ9wO7GU2Y2ZggfJyv0knUv"
}
|
Key dates in the history of Zimbabwe:
- 1980: Independence - Zimbabwe is born on April 18, 1980, after 90 years as the British colony of Southern Rhodesia and a brutal seven-year war between black nationalists and white supremacists trying to prevent majority rule. Guerrilla leader Robert Mugabe becomes prime minister on a promise of reconciliation and democracy.
- 1983-84: Massacres in Matabeleland - Shortly after taking power, Mugabe, seeking to establish a one-party state, deploys an elite North Korean-trained army unit to crack down on dissidents loyal to his rival Joshua Nkomo in the western Matabeleland region, the heartland of the Ndebele minority. At least 20,000 people are killed in operation “Gukurahundi”, a term in the majority Shona language which translates loosely as “the early rain that washes away the chaff”.
- 1987: Mugabe’s grip tightens - Mugabe’s Zimbabwe African National Union (ZANU) and Nkomo’s Zimbabwe African People’s Union (ZAPU) merge to form ZANU-PF. Mugabe changes the constitution to become an executive president.
- 1999: Dawn of the opposition - Trade unionist Morgan Tsvangirai founds the opposition Movement for Democratic Change (MDC), which, alongside civic groups, backs the successful “No” campaign against a constitutional referendum on land redistribution and presidential powers.
- 2000: White-owned farms seized - Smarting from the referendum defeat, Mugabe gives the go-ahead for black veterans of the liberation war and ruling-party activists to seize white-owned farms. Hundreds of white commercial farmers are driven off their land in the first of a series of events that precipitate a severe economic crisis. Western countries impose sanctions on Zimbabwe and donors cut aid.
- 2002: Election violence - Mugabe is re-elected president in a poll marked by widespread violence and intimidation of opposition supporters. The Commonwealth suspends Zimbabwe. Five years later, Tsvangirai suffers a fractured skull after being badly beaten in police detention, having been arrested when officers broke up a rally.
- 2009: Unity government - In March 2008, with the economy in freefall and record hyperinflation, the opposition claims victory in presidential and parliamentary elections. The state withholds the results for a month before announcing a presidential run-off between Mugabe and Tsvangirai, which the latter boycotts. Under pressure from Zimbabwe’s neighbours, Mugabe agrees to a power-sharing government with the MDC, in which Tsvangirai becomes prime minister in February 2009.
- 2017: Mugabe out, Mnangagwa in - After 37 years in power, Mugabe, then aged 93, is deposed in November by the army after trying to position his unpopular wife Grace as his successor. He dies two years later in Singapore. Army-backed vice-president Emmerson Mnangagwa, nicknamed the “Crocodile” for his political cunning, takes over. He wins elections in July 2018 by a wafer-thin majority; the army kills six people after it is called in to quell demonstrations over the disputed vote.
- 2019: New crackdown - Mnangagwa is accused of emulating Mugabe’s tactics in January after a brutal military crackdown on nationwide demonstrations over a doubling of fuel prices. At least 17 people are killed and hundreds injured, many from gunshot wounds.
- 2022: Launch of new opposition - After splits in the largest opposition party, the MDC, Nelson Chamisa launches the Citizens Coalition for Change (CCC) ahead of the August 2023 election. Opposition campaigns are hampered by banned meetings and the jailing of opponents.
|
<urn:uuid:62182bb7-1aa3-4c02-8993-d1f71f85776a>
|
{
"dump": "CC-MAIN-2023-40",
"file_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510707.90/warc/CC-MAIN-20230930181852-20230930211852-00842.warc.gz",
"int_score": 4,
"language": "en",
"language_score": 0.937868058681488,
"pii_count": 0,
"score": 3.734375,
"token_count": 786,
"url": "https://ec2-15-188-152-128.eu-west-3.compute.amazonaws.com/2023/08/21/zimbabwes-post-independence-history-in-10-datesmon-21-aug-2023-041336-gmt/"
}
|